Artificial Intelligence is big. And it's only getting bigger, as every industry looks for ways to exploit its potential.
But if we're programming computers to think, we shouldn't be including the flaws of human intelligence: racism, sexism, and other forms of human partiality.
Yet AI algorithms are already showing their fair share of bias. In 2016, an AI system used to predict crime was shown to be biased against African Americans, treating them unjustly. And Microsoft's AI chatbot Tay quickly learned to spew racist and misogynistic tweets.
Here at Goat Ventures, we believe AI shouldn't just be a new breed of intelligence; it should be a more neutral and fair form of thought. Now that we've seen its potential power and uses, it's time to consider setting its ethical boundaries and limitations. Should robots have their own civil rights bill?
Aside from letting researchers and computer scientists crunch numbers far faster, AI systems and the algorithms behind them are often held up as neutral adjudicators in decision-making, unbiased by any human. But is that really possible?