30 March 2023

Let’s talk about AI and decision making

How much decision making are we willing to let go of?
One common human trait is that we love making our lives easier and more automated, which is why technology and innovation are usually welcomed rather than over-questioned. We enjoy doing less, thinking less, and not having to make hard and frustrating decisions. Some people love having the power and freedom to make their own decisions, while others enjoy delegating and not having to weigh options and consequences. Whether you enjoy decisions or avoid them at all costs, how would you feel if the one helping you make a decision – or even deciding for you – was a machine and not another human? How much would you be willing to let go of?

Let’s talk about AI and decision making!

How does AI make decisions?
When we consider technology, we know that the way it functions is complex and, to many of us, simply incomprehensible. But we also know that it is usually structured, and that someone programmed and created it, giving it the ability to perform all the functions it is meant to perform. Even very complex examples of technology like AI are based on algorithms and principles which are usually perfectly logical to the people who created them and who use them in their business and daily lives. AI learns because it is given the ability to analyze countless examples and adjust to the input it receives. Some AI systems analyze and produce results based on fixed rules and criteria that never change, while more complex systems rely on deep learning technologies and can learn from their own actions and results, allowing them to develop further and further on their own.
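To make that distinction concrete, here is a minimal sketch in Python. The loan-approval scenario, the numbers and the crude threshold update are all invented for illustration – real learning systems adjust millions of parameters rather than a single cut-off – but the contrast between a fixed rule and a rule that adapts to examples is the same.

```python
# A toy contrast between the two kinds of systems described above.

# 1) Fixed rules: the criteria are written once by a human and never change.
def approve_loan_fixed(income: float, debt: float) -> bool:
    # Hard-coded criteria chosen by the programmer.
    return income > 30_000 and debt / income < 0.4

# 2) Learned rules: the system adjusts its own parameters from examples.
def train_threshold(examples: list[tuple[float, bool]]) -> float:
    """Learn an income cut-off from (income, was_repaid) examples by
    nudging it whenever it makes a mistake -- a toy stand-in for the
    parameter updates deep learning systems perform at huge scale."""
    threshold = 0.0
    for income, repaid in examples:
        approved = income > threshold
        if approved and not repaid:
            threshold += 1_000   # too lenient: raise the bar
        elif not approved and repaid:
            threshold -= 1_000   # too strict: lower the bar
    return threshold

history = [(25_000, False), (48_000, True), (31_000, True), (22_000, False)]
print(approve_loan_fixed(48_000, 12_000))  # True -- the rule never adapts
print(train_threshold(history))            # a cut-off learned from the data
```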

Before an AI system is set up and able to function and make decisions on its own, a lot of decisions have already been made – in the setup and training of the algorithm, the acceptable error margins, the data sets it is trained on, how it should weigh values and outcomes, how the system is designed, and so on. When an AI system is trained well, it can provide incredible results and outperform humans in certain kinds of analysis and outcome prediction. But even if we develop a tendency to trust and rely on an AI system that provides us with good results, we need to keep developing and checking the outcomes, because not all situations require the same criteria and there is always the possibility of error, or even bias, in the original programming.
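One small example of such a pre-made decision is where to set the cut-off on a model’s score. The sketch below uses an invented screening scenario with made-up numbers; the point is that the model only produces a number, while a human decided in advance what counts as acceptable.

```python
# Scores a model might output for three cases (numbers invented).
scores = {"case_a": 0.93, "case_b": 0.61, "case_c": 0.35}

# Chosen by people before deployment, not by the model: a lower cut-off
# catches more true cases but raises false alarms; a higher one does the
# opposite. This *is* the "acceptable error margin" decision.
THRESHOLD = 0.5

for case, score in scores.items():
    print(case, "flag for review" if score >= THRESHOLD else "no action")
```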

AI systems have amazing computational power and can go through millions of scenarios in a few seconds. But in order to make sense of and assess those options, they need specific criteria. The issue here is that decision making does not always come with a specific set of criteria.
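The point is easy to demonstrate. In the sketch below, a plain Python loop scores a million random scenarios almost instantly – but only because we hand it an explicit numeric criterion. The weights in that criterion are an invented assumption, and choosing them is exactly the hard, human part.

```python
import random
import time

def criterion(cost: float, risk: float) -> float:
    # Assumed weighting of cost against risk -- a human judgement call.
    return -(0.7 * cost + 0.3 * risk)

# A million made-up scenarios, each a (cost, risk) pair.
scenarios = [(random.random(), random.random()) for _ in range(1_000_000)]

start = time.perf_counter()
best = max(scenarios, key=lambda s: criterion(*s))
elapsed = time.perf_counter() - start
print(f"Scanned {len(scenarios):,} scenarios in {elapsed:.2f}s, best: {best}")
```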

Decision making is an intricate analysis of many factors and their interactions: we have to consider the given situation and the environment of the problem, and be educated about possible outcomes and consequences. For an AI to make decisions, it needs to be given a good description of the factors in play, ideally as specific as possible. AI bases its decisions on the instructions and programming it was given, which define what is “good”, “logical” or “correct” in a given situation. When given examples of good and bad decisions, AI can apply them to various other situations to try to reach an optimal result. This can work extremely well in certain complex environments with well-defined boundaries, while being entirely unsuitable for others. If a given situation does not fit any of the previously programmed examples, or has complex moral consequences for which not enough context is given, AI can struggle a lot, and we cannot be sure that the decision it comes to is one a human would consider optimal.
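A toy example shows both sides of this. The sketch below “learns” from a handful of invented good/bad examples by copying the label of the nearest known case – a stand-in for far richer real systems. Notice that an input unlike anything it has seen still gets a confident-looking answer.

```python
# Invented labeled examples: (features, verdict).
examples = [((1.0, 1.0), "good"), ((1.2, 0.9), "good"),
            ((5.0, 5.0), "bad"),  ((4.8, 5.2), "bad")]

def nearest_label(point):
    """1-nearest-neighbour: copy the label of the closest known example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

print(nearest_label((1.1, 1.0)))    # 'good' -- close to known examples
print(nearest_label((100.0, -50)))  # still answers, far outside anything seen
```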

Understanding AI decisions and the responsibility behind them
We don’t really understand where AI decision making comes from, or whether the way someone programmed a certain algorithm agrees with our principles of right and wrong. AI can optimize decisions based on defined factors, but those criteria can be biased or suboptimal, because they come from a human – someone with their own worldview, whom we would not necessarily agree with or consider a good decision maker.

Along with this, there is also the issue of liability for the decision made. Who is responsible if a decision is left to AI and the result turns out suboptimal, or even catastrophic? Is the person who let the machine decide culpable? The person who originally programmed the AI? Someone else involved in the process? Is the machine itself at fault – and can we really blame someone who was only a small part of the process and would never have made the decision the AI ultimately took?

It is quite difficult to understand how AI weighs facts and decisions, especially complex moral ones where even humans would not agree on the right course of action. In fact, we don’t really understand human decision making either. It is an extremely complex process, and we are often not sure why we come to certain decisions, as they are not always rational or explainable. With humans, though, this is something we can simply accept as feeling or intuition, while with AI we are not at ease with not knowing – we feel a need to understand and explain it. But since our own reasoning is also limited, maybe we need to let go of trying to understand AI decisions completely, and let AI help us in the areas where it outperforms us?

A smart collaboration
The situation is not black and white, and we are not forced to either hand all of our decision making over to machines or not use them at all. As already mentioned, AI has very strong computational power and would beat us at simple, fast analysis every time, but it lacks an understanding of context, of new and unpredictable situations, and of moral and ethical implications. The good news is that we can use AI for the things it excels at, while keeping the final executive decisions for humans.

Decision-making is a hard topic. Humans often struggle with hard decisions, but when things go wrong, it is clear that a wrong decision was made – and usually clear who made it. Involving AI in the process can complicate the chain of liability and responsibility, but it can also help us consider many factors and options in a short time, a task which is usually long and tedious for humans.

So how should we move forward? Do we include technology in our decisions and important processes, and to what extent? We will never all agree on whether AI should be allowed to make decisions for us, or whether those decisions are the right ones. But we can probably agree that AI could take over certain types of decisions, and help us analyze situations and prepare the ground for others. AI can be the processing machine which gives us plenty of facts and data, which in turn lets us humans focus on more intricate and delicate issues such as moral implications.
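In code, that division of labour can be as simple as the sketch below: the machine filters and ranks (here with an invented scoring formula over made-up supplier data), and a person makes the final call with all the context the formula knows nothing about.

```python
# Made-up options a system might sift through for a purchasing decision.
candidates = [
    {"option": "supplier_a", "cost": 120, "delivery_days": 4},
    {"option": "supplier_b", "cost": 95,  "delivery_days": 9},
    {"option": "supplier_c", "cost": 130, "delivery_days": 2},
]

# Machine step: narrow many options down to a short, ranked list.
# The weighting of cost vs. delivery time is an assumed human choice.
shortlist = sorted(candidates,
                   key=lambda c: c["cost"] + 10 * c["delivery_days"])[:2]

# Human step: the executive decision stays with a person, who can weigh
# context and ethics the scoring formula knows nothing about.
for i, c in enumerate(shortlist, 1):
    print(f"{i}. {c['option']} (cost {c['cost']}, {c['delivery_days']} days)")
choice = int(input("Pick 1 or 2 (the final decision stays with you): "))
print("You chose:", shortlist[choice - 1]["option"])
```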

As with every other issue concerning AI, this is a complex combination of factors which we need to think through carefully in order to assume responsibility and ensure good ethical practice. AI systems are not 100% robust, and they are not always easily explainable or transparent, but they do give us the capability to put very complex situations into perspective. AI is already helping experts in many industries with decision making and various other processes. Its vast computational power lets us cover a number of options that a human brain could never consider and process in a limited time. The collaboration of humans and AI can deliver faster, more data-based solutions to many problems, but human experts are still needed for supervision or executive decision-making in many situations.

In this complex environment, and in the absence of perfect systems and technologies, we all need to reflect on which types of decisions we are willing to let go of, which we want to keep for ourselves, and for which a collaboration of humans and AI systems can bring the best results.

Have you thought about AI and decision making before? What kind of decisions would you feel comfortable giving away to AI, and for which decisions would you welcome AI support?

Sources:
This blog appears with kind permission from youevolve.net, where our colleague Judith Eberl works on her particular passion: the impact of AI on our world.

https://www.forbes.com/sites/forbestechcouncil/2022/08/23/how-artificial-intelligence-can-improve-organizational-decision-making/?sh=5240f8572a1c
https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions
https://www.comidor.com/knowledge-base/machine-learning/ai-decision-making/


Written by the YouEvolve Team
