Henrik

How to ethically deploy Artificial Intelligence

Updated: Nov 24, 2020

Some of the most common phrases used to promote an innovative company these days are “artificial intelligence” and “machine learning”. In this post we want to explore these topics and explain our non-technical approach to the use of such technologies.

While a lot of companies claim to work with such technology, a look under the hood often reveals far less use of it than claimed. Much like the overuse of “blockchain” by a vast number of start-up projects, this is driven more by the desire to sound appealing than by actual adoption. So what are these terms, how do they relate to Yoba Smart Money, and how can we ensure an ethical usage of new technology?

First, artificial intelligence is a broad term used to describe a group of technologies, of which machine learning is a sub-group. The EU has the following definition: artificial intelligence (AI) commonly refers to a combination of machine learning techniques used for searching and analysing large volumes of data; robotics, dealing with the conception, design, manufacture and operation of programmable machines; and algorithms and automated decision-making systems (ADMS) able to predict human and machine behaviour and to make autonomous decisions.

[Figure: the wide range of tools used within AI]

Artificial intelligence therefore consists of a variety of components that collaborate to achieve an end. What that end is determines whether the technology is used for nefarious purposes, such as what we now refer to as the Facebook-Cambridge Analytica scandal, or for beneficial ones, such as analysing the spread of viruses like COVID-19.

Even if the term AI is sometimes over-hyped, its usage has increased rapidly over the past couple of years and is likely to continue to do so. This has naturally drawn a lot of attention, from both governments and the media, to ensuring that the technology is used for good purposes within an ethical framework. What constitutes “good purposes” is unclear and changes rapidly with how much consumers are willing to let technology replace human decision-making, which is why most of the focus has been on securing an ethical framework for its usage: ethics are easier to define and change less frequently.

So what are these ethical principles? First and foremost, the EU Parliament has issued a document that seeks to ensure that any usage of AI is developed with a human-centric approach, is respectful of European values and principles, and is deployed in accordance with EU law. It has also developed a number of key requirements for achieving a safe and sound usage of AI, which are listed below:

  • human agency and oversight, which means that there needs to be a human in charge somewhere in the process. It also means that users should be able to understand and interact with AI systems to a satisfactory degree, and that they have the right not to be subject to a decision based solely on automated processing, especially when it produces a legal effect on them or significantly affects them (see the sketch after this list).

  • robustness and safety, which covers cybersecurity and requires that all vulnerabilities be considered when building an algorithm.

  • privacy and data governance, which is about safeguarding the rights of individuals, ensuring, for instance, that data can be erased or amended.

  • transparency, which includes making sure humans are aware that they are dealing with an AI, and ensuring that the processes used are documented and traceable.

  • diversity, non-discrimination and fairness, which is closely connected to the transparency principle: no discrimination shall be built into the AI, and it therefore needs continuous monitoring to ensure its objective fairness.

  • accountability, which means that the functionality of the AI must be auditable and that someone is accountable if something goes wrong, especially in deployments that may impact users negatively.
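To make a few of these principles concrete, here is a minimal, hypothetical sketch in Python of how human agency, transparency and accountability can be combined in an automated decision flow. All names, thresholds and labels are illustrative assumptions, not an actual implementation: only clearly positive scores are automated, borderline and adverse outcomes are routed to a human reviewer, and every decision leaves an audit trail.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative thresholds only -- real values would come from a risk policy.
AUTO_APPROVE_SCORE = 0.90
DECLINE_FLAG_SCORE = 0.40

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("decision_audit")

@dataclass
class Decision:
    applicant_id: str
    score: float     # model output in [0, 1]; higher means lower risk
    outcome: str     # "approved", "human_review" or "decline_recommended"
    decided_by: str  # "model" or "pending_reviewer"
    timestamp: str

def decide(applicant_id: str, score: float) -> Decision:
    """Route a model score so that only clear positive cases are automated;
    borderline and adverse outcomes always go to a human reviewer."""
    if score >= AUTO_APPROVE_SCORE:
        outcome, decided_by = "approved", "model"
    elif score < DECLINE_FLAG_SCORE:
        # The model only *recommends* a decline; a human must confirm it,
        # so no adverse decision rests solely on automated processing.
        outcome, decided_by = "decline_recommended", "pending_reviewer"
    else:
        outcome, decided_by = "human_review", "pending_reviewer"
    decision = Decision(applicant_id, score, outcome, decided_by,
                        datetime.now(timezone.utc).isoformat())
    # Traceability: every decision, automated or not, is logged for audit.
    audit_log.info("applicant=%s score=%.2f outcome=%s by=%s",
                   decision.applicant_id, decision.score,
                   decision.outcome, decision.decided_by)
    return decision

# Example: a borderline score is routed to a human rather than auto-decided.
print(decide("applicant-123", 0.72).outcome)  # -> "human_review"
```

The design choice worth noting is that the automated path is the narrow one: the default, whenever the model is not clearly confident, is to hand the case back to a person.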

Yoba is seeking to deploy certain variations of AI in our risk management processes, including credit risk, AML and fraud prevention, with the overarching goal of making these processes faster, more efficient and more effective for us and, more importantly, for our customers.

As stated above, this comes with a number of responsibilities, which Yoba will seek to build into the usage and deployment of all of its technology. We will develop our products with a transparent, human-centric outlook, bearing in mind not only our regulatory obligations but also the ethical aspects and the benefits to wider society. Importantly, our usage of AI must also bring benefits to our customer base, something we believe is possible in areas where a large number of data points can provide wider insights over time.
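As one illustration of the continuous monitoring mentioned under the non-discrimination principle, the following Python sketch computes a simple disparate-impact ratio over decision outcomes, a common first check that approval rates do not differ materially between groups. The data, group labels and threshold are hypothetical assumptions for illustration, not actual tooling or policy.

```python
from collections import defaultdict

# Hypothetical records: (group_label, approved) pairs. In practice the
# group attribute is used only for monitoring, never as a model input.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest approval rate; a value well below 1.0
    warrants investigation."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)
ratio = disparate_impact(rates)
print(rates, f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold, echoing the "four-fifths" rule of thumb
    print("Potential disparity detected -- escalate for human review.")
```

A check like this runs on outcomes rather than on the model internals, and any flagged disparity feeds back into the human oversight loop sketched earlier.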

If you want to read more about these topics, please follow one of the links below:

EU Guidelines on ethics in AI

Stanford University: Ethics of AI and Robotics

