Is Artificial Intelligence (AI) rational? You may (or may not!) be surprised to find out that machines are not as rational as you think they are.
AI has long been regarded as rational and free of human biases, simply because people don't often see it conveying or perceiving emotions. However, research has shown that AI can absorb bias when it is 'fed' materials such as books written in earlier eras (the 1800s or before), which are loaded with the biases of their time.
The word-embedding association test (WEAT) was used to measure bias in machine learning models in a study at the University of Bath (UK). The results showed that names like "Brett" and "Allison" were more often associated with positive words, while names like "Alonzo" and "Shaniqua" were more closely associated with negative words.
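The intuition behind WEAT can be sketched in a few lines of code: for each name, compare its average cosine similarity to a set of pleasant words against its average similarity to a set of unpleasant words. The tiny hand-made 3-dimensional vectors below are invented for illustration only (real studies use embeddings trained on large text corpora, such as GloVe or word2vec), but the association and effect-size arithmetic mirrors the test's structure.

```python
import math
import statistics

# Toy 3-d "embeddings", invented purely for illustration.
# In the actual study these would come from embeddings trained on web text.
vectors = {
    "Brett":    (0.8, 0.2, 0.1),
    "Allison":  (0.7, 0.3, 0.1),
    "Alonzo":   (0.2, 0.8, 0.1),
    "Shaniqua": (0.3, 0.7, 0.1),
    "joy":      (1.0, 0.0, 0.0),
    "love":     (0.9, 0.1, 0.0),
    "agony":    (0.0, 1.0, 0.0),
    "failure":  (0.1, 0.9, 0.0),
}

pleasant = ["joy", "love"]
unpleasant = ["agony", "failure"]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def association(word):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = statistics.mean(cosine(vectors[word], vectors[p]) for p in pleasant)
    neg = statistics.mean(cosine(vectors[word], vectors[n]) for n in unpleasant)
    return pos - neg

def weat_effect_size(group_x, group_y):
    """WEAT-style effect size: difference of mean associations,
    normalized by the standard deviation over all target words."""
    ax = [association(w) for w in group_x]
    ay = [association(w) for w in group_y]
    return (statistics.mean(ax) - statistics.mean(ay)) / statistics.pstdev(ax + ay)

print(association("Brett"))     # positive: leans toward pleasant words
print(association("Alonzo"))    # negative: leans toward unpleasant words
print(weat_effect_size(["Brett", "Allison"], ["Alonzo", "Shaniqua"]))
```

With these made-up vectors, "Brett" and "Allison" score a positive association (closer to the pleasant words) and "Alonzo" and "Shaniqua" a negative one, so the effect size is positive, which is the kind of asymmetry the study reported in real embeddings.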
This study emphasizes the importance of carefully selecting the wording, materials, and data sets used to train machine learning systems. The findings also have important implications for how AI bias could influence cultural and political issues in the future.
One of the great promises of artificial intelligence (AI) is a world free of petty human biases. Hiring by algorithm would give men and women an equal chance at work, the thinking goes, and predicting criminal behavior with big data would sidestep racial prejudice in policing. But a new study shows that computers can be biased as well, especially when they learn from us. When algorithms glean the meaning of words by gobbling up lots of human-written text, they adopt stereotypes very similar to our own.