Can we eliminate bias in Artificial Intelligence?

The ultimate goal of artificial intelligence is to create technology that allows computers and machines to mimic the human brain and exhibit human-like behaviour. Although there is still a long way to go before current AI becomes similar or equivalent to human intelligence, machine learning has nevertheless already picked up one of our traits: being biased.

What exactly is bias in AI?

Being biased means displaying a tendency to lean in a certain direction, either in favour of or against a particular thing, rather than taking a neutral point of view on it.

In AI, and especially in machine learning, this means that the model makes erroneous assumptions because of limitations in its dataset. More specifically, the data we feed into an ML model can carry human interpretations and cognitive assessments, and these influence the result. For instance, an ML model used in human resources to screen resumes might inappropriately filter candidates based on attributes such as race, colour, or marital status.
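To make this concrete, here is a minimal sketch with entirely hypothetical hiring data. The historical decisions encode a human bias against one group, so any model trained to reproduce those labels will inherit the same disparity, even though the candidates are equally qualified:

```python
# Hypothetical historical hiring data, illustrative only.
# Each record: (years_experience, group, hired) -- 'group' stands in
# for any protected attribute such as race, gender, or marital status.
historical = [
    (5, "A", True),  (5, "B", False),
    (3, "A", True),  (3, "B", False),
    (8, "A", True),  (8, "B", True),
    (1, "A", False), (1, "B", False),
]

def hire_rate(records, group):
    """Fraction of candidates from `group` who were hired."""
    in_group = [hired for (_, g, hired) in records if g == group]
    return sum(in_group) / len(in_group)

# Experience levels are identical across groups, yet the labels
# favour group A -- a model fit to these labels learns that bias.
print(hire_rate(historical, "A"))  # 0.75
print(hire_rate(historical, "B"))  # 0.25
```

The point of the sketch is that the algorithm itself need not be faulty: faithfully learning from biased labels is enough to reproduce the bias.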

Some of the common features that might result in bias include:
– Race
– Gender
– Colour
– Religion
– National origin
– Marital status
– Sexual orientation
– Education background
– Source of income
– Age
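One common rough check for bias across features like those above is the "four-fifths (80%) rule": if any group's selection rate falls below 80% of the highest group's rate, the outcome is flagged for review. The sketch below is a hedged illustration of that check; the data, group names, and threshold are all assumptions for the example:

```python
# Hedged sketch of the four-fifths (80%) rule as a disparate-impact
# screen. Data and threshold are illustrative, not a legal standard.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of booleans (selected?)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact(outcomes, threshold=0.8):
    """Flag each group whose selection rate is below `threshold`
    times the best-performing group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

outcomes = {
    "group_A": [True, True, True, False],    # 75% selected
    "group_B": [True, False, False, False],  # 25% selected
}
print(disparate_impact(outcomes))
# group_B's rate (25%) is only a third of group_A's (75%), so it is flagged
```

A check like this only surfaces a disparity; deciding whether it reflects genuine bias still requires human judgement about the feature and the context.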

How can an algorithm become biased?

Machine learning is the subset of artificial intelligence that allows computers to improve over time through experience and practice. A machine learning algorithm is a set of programming instructions that teaches a computer how to learn to solve a problem. The algorithm and the data it is trained on are the two most important components of an ML model.
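A toy example can show those two components working together. The sketch below "learns" a decision threshold from labelled examples by simply trying each candidate cut-off and keeping the one that classifies the most training data correctly; the data and the approach are illustrative, not a production technique:

```python
# Toy illustration of "algorithm + data": learn a decision threshold
# from labelled examples by exhaustive search. Illustrative only.

def learn_threshold(examples):
    """examples: list of (score, label) pairs.
    Returns the threshold that classifies the most examples correctly,
    predicting True whenever score >= threshold."""
    candidates = sorted({score for score, _ in examples})
    best_t, best_acc = None, -1.0
    for t in candidates:
        acc = sum((score >= t) == label
                  for score, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

data = [(1, False), (2, False), (6, True), (7, True)]
print(learn_threshold(data))  # 6 -- the cut-off separating the classes
```

Change the data and the learned threshold changes with it, which is exactly why biased data produces a biased model even when the algorithm is sound.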

Continue reading on Strongbytes’ blog.