Written by Gabriela Wiktorzak

Don’t ask the robot to volunteer

Mundane tasks are what robots are supposed to free us from: working faster and cheaper, giving more space to human creativity and unleashing the potential that we, people, have. To some folks, though, it sounds creepy. Artificial intelligence fulfilling a colleague’s duties? And what if bots also replace us in typically interpersonal matters? Will you ever be hugged by a compassionate robot?

Thrilled by the idea of having an army of selfless, uncomplaining and never-tired robots for my CSR tasks, I [Kasia Kubat] interviewed my brainy colleagues: Gabriela Wiktorzak, an AI enthusiast and experienced legal practitioner, and Krzysztof Rudnicki, an IT Technical Consultant and software development expert, passionate about all kinds of emerging and important technologies. Both have been all eyes and ears on AI for a long time.

Kasia Kubat, CSR manager at Objectivity: Can a robot decide to volunteer if it does not feel a need to help?

Gabriela Wiktorzak, solicitor at Objectivity: The answer to that question would most likely be no, but it needs some context, as well as establishing what we mean by “robot”. In this context, a “robot” may refer to a cognitive process – especially reasoning. Such a robot makes decisions based on data: once the analysis is complete, the decision follows. In order to change such a decision, we would need to understand why it was made and retrain the robot using a different set of data. Doing something we are not willing to do is still within the scope of human capabilities, not a robot’s, because a robot does not have the same intellectual abilities as humans.

Krzysztof Rudnicki, IT Technical Consultant and software development expert at Objectivity: I am not sure we aren’t mixing up concepts. I would rather say “AI algorithm” than “robot”. If we need to visualise the object, I would call it “an AI-powered bot”. A robot is hardware, a machine, while an AI bot or an AI algorithm is just a decision-support tool. This leads us to a prior question: is the world ready for AI in the service of humanity? Because I am sure there is a big need for that.

KK: What do you mean?

KR: As in other areas, I believe AI can be applied to more or less challenging tasks related to CSR. There are surely activities where people need to collect, review and understand large data sets. For instance, imagine requests for donations (e.g. time – volunteer work – money or materials) flooding into large organisations. How do you assign the best-fitting solution, so that it matches the beneficiary’s needs? AI also excels at tasks where unobvious, hidden patterns affect decision-making and where important details are easily missed by human perception.
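To make the matching idea above concrete, here is a toy sketch that pairs donation offers with beneficiary needs by text similarity. The library choice (scikit-learn) and all of the data are illustrative assumptions, not anything the interviewees built:

```python
# A toy sketch of matching donation offers to beneficiary needs.
# Offers, needs and the TF-IDF approach are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

offers = ["volunteers offering carpentry repairs",
          "money donation for school books",
          "surplus laptops and office furniture"]
needs = ["shelter needs furniture and laptops",
         "community centre needs carpentry repairs from volunteers",
         "orphanage needs money for books"]

# Learn one shared vocabulary, then compare every offer to every need.
vec = TfidfVectorizer().fit(offers + needs)
sim = cosine_similarity(vec.transform(offers), vec.transform(needs))

# For each offer, report the best-matching need.
for offer, row in zip(offers, sim):
    print(f"{offer!r} -> {needs[row.argmax()]!r}")
```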

KK: Speaking of human perception… Guys, how do the intellectual abilities of bots and humans differ?

GW: If we look at the predictive analytics process, it basically boils down to extracting the data, refining and preparing it, sussing out what needs to be predicted, creating that prediction and finally applying it. This can be done using data mining or machine learning, which are driven by a huge number of algorithms, e.g. a decision tree, which is based on the notion that you reach a conclusion by asking a series of “yes or no” questions. You can therefore imagine that there is not much room for manoeuvre – the decision-making process flows in accordance with a certain logical pattern. The human decision-making process is far more complex: whilst we are able to process information in a logical, rational way, sometimes – depending on internal or external circumstances – this process can be disrupted, as we often take into consideration other factors, pulled out from the subconscious sets of data we store within us.
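As a minimal sketch of the “yes or no” questions Gabriela describes, the snippet below fits a tiny decision tree and prints the questions it learned. The scikit-learn usage and the sample data are purely illustrative assumptions:

```python
# A minimal decision-tree sketch: every internal node is literally
# a yes/no question about one feature. Data is made up for the example.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features for volunteering requests:
# [hours_needed, people_needed, is_remote]
X = [
    [2, 1, 1],
    [8, 5, 0],
    [4, 2, 1],
    [16, 10, 0],
]
# Hypothetical labels: 1 = accept the request, 0 = decline
y = [1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned yes/no questions as readable rules.
print(export_text(tree, feature_names=["hours", "people", "remote"]))
```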

KR: It’s worth emphasising that AI algorithms are still mathematical calculations that analyse a provided set of training data, so their conclusions and decisions will only be as good and as fair as the training data delivered to them. When the training data is insufficient, the AI bot’s results will be unpredictable. A human who receives a weak data set additionally draws on past experience, knowledge and empathy. On the other hand, AI cannot get tired or be in a bad or good mood – those factors of human nature are not in AI’s dictionary. Once AI is trained well enough, with finely prepared training data, its decisions will not change for unknown reasons. Yet those missing human features may be perceived as cold-hearted or even unfair by some of us, as we have something AI is far from: that special kind of social intelligence called empathy. That is why we are held responsible for our creations, and it is up to us to decide when our ethics allow AI-supported decisions to influence reality.

KK: How do we get this fair, good and right data? If humans provide the data – and we are all unconsciously biased – how can we deliver something unbiased, neutral and fair to bots?

KR: The technicalities of building a training data set that is unbiased and of good quality are the subject of studies across many different domains of our very existence. There are many areas where the impact of AI-powered solutions on the environment or on lives is not significant. However, when it comes to complex areas of application, where a person may suffer one way or another, the question is much harder. It should probably be answered by a team with a deep understanding of many disciplines of science and the humanities. Diversity is always good, and this is no exception.

GW: Yes, I think diversity is the key, and this concerns not only the data sets used, but also the teams working with the data and algorithms. We need to remember that AI bias can have a crucial impact on decision-making. Whilst there are no dedicated legal frameworks to minimize such risks, many companies have already recognized this issue and created their own policies to ensure that their algorithmic solutions are free of bias. There is also an interesting concept of providing algorithms with a sense of uncertainty, where an algorithm computes multiple solutions and then comes up with a list of possible options. Recently, an expert group appointed by the European Commission published draft ethics guidelines for trustworthy artificial intelligence. These guidelines set out how developers and users could ensure that fairness, safety, transparency, accessibility, the future of work and democracy are not put at risk by AI. Regardless of what rules we put in place, companies will need to be prepared to make their own individual assessments. I like the ideas formally introduced by the GDPR, which created a framework within which we are meant to be proactive, not reactive; preventative, not remedial. I think similar principles could be applied in AI development, with ethical issues and privacy in mind at every step.
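The “sense of uncertainty” concept Gabriela mentions can be sketched roughly as follows: instead of returning one answer, train several models on resampled data and report the spread of their predictions. The approach (a simple bootstrap ensemble) and all data here are assumptions for illustration:

```python
# A rough illustration of giving an algorithm a "sense of uncertainty":
# many models vote, and a wide spread flags the case for human review.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))             # made-up features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # made-up labels

candidate = np.array([[0.1, -0.2, 0.5]])  # a new case to decide on
votes = []
for _ in range(25):
    # Train each model on a bootstrap resample of the data.
    Xb, yb = resample(X, y, random_state=int(rng.integers(1 << 30)))
    model = LogisticRegression().fit(Xb, yb)
    votes.append(model.predict_proba(candidate)[0, 1])

# A large spread means the algorithm "isn't sure" about this case.
print(f"mean p(yes) = {np.mean(votes):.2f}, spread = {np.std(votes):.2f}")
```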

KK: Can AI unlearn biased – e.g. homophobic or racist – behavior?

GW: Yes. The fundamental software principle of “garbage in, garbage out” applies to AI too. Machine learning and deep learning are about taking large sets of data, learning from them and delivering recommendations. However, when the input datasets are biased, the result will most likely be biased too. The algorithms themselves may also be biased. Prof. Deirdre Mulligan from the UC Berkeley School of Information has said that machine-learning algorithms haven’t been optimized for any definition of fairness; they are developed to perform tasks. AI is a tool that should be monitored and retrained when necessary. But in order to create objective data sets, those who create them would also need some training. Hence, maintaining diversity in such teams is very important.

KR: Technically speaking, it’s about preparing large and unbiased data sets again, as boring as that may sound – working out a way to collect, filter and prepare datasets that cover all the diversity that may be important. The other part is to test or cross-validate the trained model against additional datasets that weren’t used for training. Such techniques, along with many more methods still being developed, are the key to building models with no ‘poisonous prejudice’ translated into bias. Whatever the method, the goal is usually the same – the more equally distributed and diverse the training data is, the less error-prone the final model will be.
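A minimal sketch of the hold-out testing and cross-validation Krzysztof describes, assuming scikit-learn and synthetic data (both illustrative choices):

```python
# Validate a model on data it never saw during training.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=42)

# Keep a slice of data the model never sees while training...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42)

# ...cross-validate on the training portion to catch overfitting early...
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"cross-validation accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# ...and confirm on the untouched held-out set.
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```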

KK: But why is this diversity so important?

GW: Because when a decision is being made, we would like to make sure that all possible circumstances are taken into consideration. This usually results in the most objective outcome, and it applies to us as well as to robots. If we placed ten people who share the same background, education, interests, political views, etc. in one room and asked them to describe what they had seen inside, their reports would probably not differ. However, if you place ten people who come from different backgrounds and have had different experiences in their lives, their stories will differ, because the prism through which they perceive the world is quite different.

KR: Exactly – and because, with a single training data set, you will end up with a model of very similar qualities no matter how many times you train it. Since the end product – the AI model – may be treated as one person, you’d like that person to be objective and unbiased. That’s why taking different viewpoints into account is important for achieving a model capable of providing fair answers when asked a question.

KK: How about replacing volunteers with robots or AI bots? Could I use them to solve community problems instead of asking my colleagues to volunteer? Would the result be faster and better? Would I have to ask the bots to agree to do it – since the essence of volunteering is that it is a free-will activity?

GW: At this stage robots do not have an “electronic personality”, and that is unlikely to happen any time soon. It is a concept which appeared in one of the EU draft reports. The idea was that, at some point, robots would need to acquire certain rights and obligations, which was heavily criticized. At the moment, robots can be programmed and asked to do anything, which might obviously turn out to be a very dangerous and, most likely, unethical exercise. We might be facing the next era of automated slaves. I think that every responsible company which plans to deploy robots to do corporate volunteering should first come up with an ethical framework and guidance for such ventures. There aren’t any rules in place to cover it, and we might need to wait for a legitimate study showing the link between robot abuse or cruelty and, for instance, violence, before legislators decide it is something that needs to be regulated.

KR: That’s all true, and maybe it’s good, to some extent, that we are not in a place where such robots exist. In my opinion, as a society we are still not ready. However, to a certain degree, AI bots – algorithms, to be more specific – can be used to support the activities in question. Since such AI algorithms have no personality of any kind – they are just unconscious programs with no rights, obeying no rules – they will never understand the effects of their output, and that output will only be as good as the training data. When it comes to speed or quality, that is a technical question – it depends on the given resources and the type of problem to be solved. AI’s willingness to do things is a completely different matter – right now AI has no personality, so it won’t say “no”.

KK: Let’s narrow it down. What do you think is the main difference between an employee doing voluntary service and an AI doing the same voluntary service?

GW: I think an employee who is a dedicated volunteer cannot be replaced even by the most sophisticated AI. Whilst a robot will carry out all the necessary tasks, helping others requires empathy, and that is something you cannot teach an AI. I think AI would be very helpful in carrying out repetitive tasks and could therefore provide great support to any organization; however, as it lacks emotions, I don’t think it will be able to come up with creative ideas on how to tackle social issues, understand those in need or relate to their problems.

KR: AI with today’s capabilities is definitely not to be thought of as a replacement for people in those activities – that’s the main difference: AI is not an employee. However, a well-thought-out, smart AI mechanism would definitely help with activities that require some kind of assessment or are repeatable but still involve some decision-making.

KK: Then how do you think AI might transform or influence the processes of volunteering?

GW: There are companies which are already using AI for recruitment – it is used for screening CVs, which is a very time-consuming activity. AI could similarly be used to look for the best fit when it comes to activists. If a prospective volunteer is an active participant, AI will be able to spot this, as it has access to social media websites where details of any social actions are posted. There are people who were offered a job following a telephone or video interview in which AI was used for biometric and psychometric analysis. AI could also be used for evaluating volunteers and selecting the most appropriate training plan. All it needs is data. When it comes to targeting potential employees to join a good cause, this could be done the same way personalisation in advertising works, although there would be some GDPR-related issues, so let’s not go there for now.

KR: The upcoming years will burst with new ideas for AI applications. Please remember that nowadays AI is frequently associated with the discipline that is best developed to date – machine learning. There are also other AI technologies, well known yet still improving thanks to technological advancement, such as genetic algorithms and evolutionary strategies – especially good at solving optimization problems – and neural networks – great when it comes to pattern matching and recognition. Each of these technologies is powerful and has been known for a long time, but it’s today’s technology that allows us to combine them in ways not seen before. To answer the question: I’m not sure – and nobody is – how quickly it will happen, but in the end there will be a network of AI-assisted robots “who” will take care of all of us who need any kind of help. The future seems very bright in that light.
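For readers unfamiliar with the genetic algorithms Krzysztof mentions, here is a bare-bones, self-contained sketch of the idea: candidate solutions are selected, crossed over and mutated generation after generation. The toy target and all parameters are invented for illustration:

```python
# A bare-bones genetic algorithm: evolve bit-strings toward a toy target.
import random

TARGET = [1] * 20                          # toy goal: a string of all ones

def fitness(ind):
    # Count positions where the individual matches the target.
    return sum(a == b for a, b in zip(ind, TARGET))

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):     # stop once the target is reached
        break
    parents = pop[:10]                     # keep the fittest individuals
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 19)      # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:          # occasional random mutation
            child[random.randrange(20)] ^= 1
        children.append(child)
    pop = parents + children               # next generation

best = max(pop, key=fitness)
print(f"after {generation + 1} generations, best fitness: {fitness(best)}/20")
```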

KK: Thank you.
