30 March 2023

AI and Dilemmas – Biases

AI is not a living entity with its own thoughts and emotions; it is a system coded by humans, and therefore susceptible to human error and bias. Let’s explore what this means and what some of the resulting dilemmas look like.

What seems to be the problem?
As AI becomes widely used in various fields, many questions come up around how it works and how it is used. AI is an “intelligence”, but not in the way a human is intelligent. Humans can think and form opinions, have emotions and discuss abstract ideas; we have an intrinsic intelligence that makes us capable of performing many tasks and contemplating the world around us. AI is not alive: it is a computer system, a body of code simulating human intelligence. It may be very complex and advanced, but it is a computer system nevertheless. The basic fact that humans created AI is central to any consideration of the ethical and moral dilemmas around how it functions.

When you create something, be it a piece of art, a research paper or a computer program, you always create it from your own life perspective, complete with opinions and ideas about the world and the social and economic state of our society. Even experienced academics trained to take an unbiased perspective and look at everything objectively find it nearly impossible to avoid every subjective thought or prejudice. The same happens in the creation of AI. Skilled programmers working on an AI function or system may believe they are making it objective and fair to all members of society, but there are biases ingrained in our society that we don’t even notice at the surface level. For example, women and minority groups are still underrepresented in tech, and this surfaces in situations where AI systems have been seen to discriminate against these groups when processing images or screening candidates for certain job positions. This is of course not done on purpose, but it is a problem that needs to be acknowledged and worked on.
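To make this concrete, here is a deliberately simplified Python sketch (the groups, data and function names are invented for illustration, not taken from any real hiring system): a “model” that merely learns historical acceptance rates per group will reproduce any imbalance in that history, even though nothing in the code itself singles anyone out.

```python
# Toy illustration: the bias lives in the historical data,
# not in the (seemingly neutral) code that learns from it.

from collections import defaultdict

def train(history):
    """Learn per-group acceptance rates from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for group, accepted in history:
        counts[group][0] += accepted
        counts[group][1] += 1
    return {g: acc / total for g, (acc, total) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Accept a candidate if their group's historical rate clears the threshold."""
    return rates.get(group, 0.0) >= threshold

# Invented history in which group "A" was accepted 80% of the time
# and group "B" only 20% of the time.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
rates = train(history)

print(predict(rates, "A"))  # True  -- the model inherits the old imbalance
print(predict(rates, "B"))  # False
```

The point of the sketch is that no line of this code mentions discrimination; the skewed outcome comes entirely from the data it was trained on, which is why diverse teams and careful data review matter.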

How should we approach this issue?
Having seen that our own bias can easily transfer to our creations, how do we approach this issue, and what can we do to correct it? The tricky part is acknowledging that there are ingrained inequalities and biases even within individuals who consider themselves fair and accepting of all people. This isn’t a personal issue but a societal one, and we need to question ourselves much more deeply to understand which factors are at play, and whether we are actually ready to work on creating an AI that is truly fair and ethical.
Including many different people in this development is crucial, as one person from a certain culture can never fully understand what another culture or country needs, or what its norms of behavior are. The seemingly simple step of involving minorities and different cultures in the design and programming of AI from the start can make a big difference in how it performs later on.

Another issue related to ethical AI is accountability. It concerns every AI developer, whatever project or system they are working on, and it follows from the fact that all AI systems are made by humans. If the AI system in question is, for example, an autonomous vehicle making decisions that affect human safety, the developer ultimately carries at least part of the responsibility for those decisions. Such dilemmas are rarely given much thought in the early stages of programming, yet this original code is what all future learning and decisions will be built on, so it is important to be accountable and to lay a solid ethical foundation.

Why is it important to work towards a more ethical and less biased AI?
AI and similar advanced technologies have enormous potential to increase the quality of life in our world and improve economic and social equality. Used badly, however, they can also deepen the inequalities and problems we already have. AI starts with the “knowledge” and abilities humans program into it, but it will often also have the capacity to learn from its own actions and mistakes. This means that barely noticeable biases or unethical tendencies at the start can grow into something worse if the AI keeps learning from that behavior.
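A toy sketch of how this amplification can happen (the numbers and the `retrain` function are made up for illustration): a system that keeps retraining on its own outputs nudges its internal score toward its own past decisions, so a barely noticeable initial tilt hardens into an extreme over time.

```python
# Minimal feedback-loop sketch: a model retrained on its own
# decisions reinforces whatever small bias it started with.

def retrain(score, rounds, drift=0.1):
    """Each round, nudge the score toward the model's own last decision."""
    for _ in range(rounds):
        decision = 1.0 if score >= 0.5 else 0.0   # the model's own output
        score = score + drift * (decision - score)  # learn from that output
    return score

# Two starting scores that differ only slightly (0.52 vs 0.48)
# end up driven toward opposite extremes.
print(round(retrain(0.52, rounds=30), 2))  # drifts toward 1.0
print(round(retrain(0.48, rounds=30), 2))  # drifts toward 0.0
```

The near-identical starting points diverge completely because each decision feeds the next round of “training”, which is exactly why small biases at the start of an AI system’s life matter so much.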

This is why collaboration and communication between many people and cultures matter during the development of every AI system. Such collaboration is the only way to build AI free of discriminatory tendencies, with an ethical base from which it can learn and develop further. To have a fair world we need to be fair and accountable with new technologies and processes. We still have a long way to go, and we need to embrace the learning process ahead with open minds and a readiness to grow and work on our own errors and biases.

Image: Artificial Intelligence, examples of ethical dilemmas (fairness, transparency, collaboration, trust, accountability & morality)

This blog appears with kind permission from youevolve.net, where our colleague Judith Eberl works on her particular passion: the impact of AI on our world.

Written by the YouEvolve Team
