Who should own the responsibility for AI services?

Jayanti prasad Ph.D
3 min read · Mar 5, 2024


In more than 5,000 years of recorded human history, this may be the first time that human beings are thinking of disowning responsibility for the consequences of the tools they use. Humans have always used tools, mechanical (machines) as well as cognitive (science), to achieve goals aligned with their well-being. Science can help make weapons of mass destruction as well as medicines that save billions of people. Machines and methods (science) have neither values nor goals (although many Hollywood movies show machines with goals or purpose!). It is human beings who develop goals and values, within ecosystems of biological and cultural evolution.

The debate over 'free will' is not over: it is still unclear whether human beings can make decisions that are not determined by their biological blueprint (DNA), cultural upbringing and immediate environmental stimuli. It has been argued many times in many courts that if forces beyond an individual's control compelled that individual to commit a crime, why should the individual be punished? This is another chicken-and-egg problem, and no solution is yet in sight.

No one can list all the types of documents one can type in Microsoft Word. From love letters to ransom notes, medical prescriptions and poems, anyone can type anything and send it. There will be consequences, but should we hold Microsoft accountable for them? The same is true for all tools and technologies: in the end, it is the users who should be held accountable for the consequences. Technology can shorten the path to a goal, but it cannot create a goal by itself.

Apart from AI, there is no other technology built out of the way people think and behave. Yes, there are tools and technologies that work from data taken from nature (for example, weather forecasting). But in AI, particularly in Natural Language Processing (NLP) and Large Language Models (LLMs), tools are being developed that use data representing the way people write, speak and think. This makes these tools (such as ChatGPT) mimic the goals, values, styles, concerns, biases, and smartness or dumbness of people! The existence of these AI tools rests entirely on the hope that one day they will think and behave completely like humans. That is possible only if they are developed in a completely neutral environment. We must keep in mind that a good bias is still a bias! This brings us to the question of bias. What is a bias?

In the Bayesian framework we can identify bias with the prior we use: we learn something from the data (the likelihood) and update our knowledge or belief. How much should we let the data change our belief? Obviously not entirely, which is why we use a 'prior', which is itself a kind of bias. If we do not let the data speak the truth, then the question is: who will? There have been many uncomfortable findings in science, and some top scientists have lost their jobs for revealing them to the public!
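The role of the prior as a bias can be made concrete with a minimal sketch of a Beta-Binomial update (the numbers here are purely illustrative, not from any real experiment): a strong prior resists the data, while a weak one lets the data dominate.

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) prior on a coin's
    heads probability, after observing `heads` and `tails` outcomes.
    Beta-Binomial conjugacy: posterior is Beta(a + heads, b + tails)."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Illustrative data: 70 heads in 100 tosses (empirical rate 0.70).
heads, tails = 70, 30

# Weak prior Beta(1, 1): the posterior tracks the data closely.
weak = posterior_mean(1, 1, heads, tails)        # (1 + 70) / (2 + 100) ~ 0.696

# Strong prior Beta(500, 500) centred on 0.5: the posterior barely moves.
strong = posterior_mean(500, 500, heads, tails)  # 570 / 1100 ~ 0.518

print(f"weak prior -> {weak:.3f}, strong prior -> {strong:.3f}")
```

The same observations lead to very different beliefs depending on the prior, which is exactly the sense in which a prior is a built-in bias.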

If we expect AI to be aligned with human values, then the question is: whose values? Universal human values that come from the West are not accepted in many traditional societies. There is a chance that, in the name of making AI systems fair and unbiased, someone (most probably the powerful) may bend these tools in their own favour. Here are some serious points to think about.

  • Can we really pass the responsibility for the actions we take on to the tools we use?
  • Should the guard rails not be the responsibility of the end user rather than the creator?
  • Will too much centralisation and control not hurt creativity and progress?

Some of the directions which can be followed may look like:

  • Give AI tools more time to mature, and develop a common consensus about their use.
  • Make sure that if tech giants use public data for free, then they release their models as open source.
  • Make sure that intellectual property rights and copyrights are respected and permission is taken from the respective owners.
  • Make commercial entities responsible for any data privacy and security issues.
  • Rather than emphasising fairness and bias too much, focus more on transparency and accountability.

This is an evolving article and will remain a work in progress. Please like, comment & share if you find this useful.


Written by Jayanti prasad Ph.D

Physicist, Data Scientist and Blogger.
