The Problem with Artificial Intelligence: It's Only Human

Science · Law · Featured

Samarth Jain, O.P. Jindal Global Law School

6/18/2024 · 10 min read


Natural selection forced us to adapt to an ever-changing environment, making us the most intelligent species on this planet. But that process of evolution took millions of years. The “Singularity” describes a hypothetical future event in which Artificial Intelligence, though created by humans, exceeds human knowledge and then increases its intelligence at an exponential pace, with each machine class creating the next, reaching unfathomable levels of intellect. The argument can be understood as follows:

------------------------------------------------------------------------------------------------------------

“Premise 1: There will be AI (created by HI and such that AI = HI).

Premise 2: If there is AI, there will be AI+ (created by AI).

Premise 3: If there is AI+, there will be AI++ (created by AI+).

Conclusion: There will be AI++ (= S will occur).”

------------------------------------------------------------------------------------------------------------
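Spelled out, the quoted argument is simply a chain of two modus ponens steps. A compact formal restatement (notation ours, with S standing for the singularity event as in the quote):

```latex
% A compact restatement of the quoted argument (notation ours).
\begin{align*}
P_1 &:\ \exists\,\mathrm{AI} \\
P_2 &:\ \mathrm{AI} \rightarrow \mathrm{AI}^{+} \\
P_3 &:\ \mathrm{AI}^{+} \rightarrow \mathrm{AI}^{++} \\
\therefore\;& \mathrm{AI}^{++} \quad \text{(i.e., } S \text{ occurs), by two applications of modus ponens.}
\end{align*}
```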

Humans have a tendency to act first without considering the potential consequences, whether because of internal impulses or because of our inability to predict those consequences. Technological innovation has never been successfully halted, even when detrimental in nature. With the explosion of Artificial Intelligence over the past decade, it has seeped into every aspect of human society, redefining how we work, live, and look at the world. This paper attempts to showcase and explain the intrinsically “human” problems of Artificial Intelligence and its confrontation with law and policing.

While definitions are diverse, AI is generally an attempt to mimic human intellect using algorithms that perform tasks through pattern recognition. Given enough data and computing power, it can be thought of as a probability calculator, and in that sense the closest invention we have to a time machine. The more detailed the data and inputs it is fed, the more detailed its outcomes will be. AI is also supposed to learn from its own actions. Unsupervised learning is a process in which AI is given abundant raw data with no specific task; it is expected to surface interesting hidden patterns that we might otherwise miss. Amazon recommends products to its customers using this very technique. Supervised learning, or classification, on the other hand uses precise, well-labelled input-output pairs to teach the model to predict the correct output for unseen inputs.
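To make the distinction concrete, here is a minimal sketch of the two paradigms using scikit-learn and synthetic toy data (the essay names no particular library, so everything below is purely illustrative):

```python
# A minimal sketch of the two learning paradigms using scikit-learn and
# synthetic toy data (illustrative only; no particular system is implied
# by the text above).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Unsupervised learning: abundant raw data, no labels. The model must
# surface hidden structure (here, two customer segments) on its own.
centres = np.array([[0.0, 0.0], [5.0, 5.0]])
purchases = rng.normal(loc=centres[rng.integers(0, 2, size=200)], scale=0.5)
segments = KMeans(n_clusters=2, n_init=10).fit_predict(purchases)
print("discovered segment sizes:", np.bincount(segments))

# Supervised learning: well-labelled input-output pairs teach the model
# to predict the correct output for unseen inputs.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the known, "correct" labels
model = LogisticRegression().fit(X, y)
print("prediction for an unseen input:", model.predict([[1.0, 1.0]])[0])
```

The first model is never told what the groups are; the second is graded against known answers. That difference in supervision is the whole distinction.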

The growth of AI depends on the amount and quality of data. Larger amounts of data allow it to draw more connections and patterns between seemingly unrelated and distorted information. Artificial intelligence has the whole internet open as a data set: every piece of knowledge, thought, and opinion floating on the web is training data for it to feed on and learn from. With so much information, AI grows at a speed incomparable to anything ever known; every query typed into ChatGPT also becomes part of its inventory. But since its capabilities depend entirely on human-generated data, what it learns also depends on us. It essentially learns from our behaviours, our workings, and our flaws. Since AI is not sentient as of now, there is a glass ceiling to its advancement: while learning from us, it is forced to find patterns based on our understanding of the world, and hence replicates outputs filled with our biases, prejudices, limitations, and errors.

Our biases are an evolutionary mechanism based on our experience of the world, saving us from being overwhelmed by information overload. But bias in data sets leads to discriminatory outcomes, amplifying those prejudices into harmful decisions. An AI-based Amazon résumé sorter was found to discriminate against women because the data it was trained on carried inherent gender biases. The intersection of the biases AI unknowingly absorbs and our own human biases creates a hollow perspective on information. “Algorithm appreciation” and “automation bias” describe the human tendency to trust results generated by algorithms and automated systems over those generated by humans. Not only do we like to outsource our decisions, we also believe that AI, being a logical entity, is unbiased and immune to such failings. The average person forms an opinion from the first search result the internet offers, without looking for supporting evidence; this “anchoring bias” heavily affects our decision-making and increases the spread of false information, a problem exacerbated by AI. “Confirmation bias” makes us prioritize and seek out AI outputs that feed into our preconceived opinions.
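A schematic sketch can show how such bias propagates mechanically. The example below is hypothetical and loosely inspired by the résumé-sorter story; all data is synthetic, and nothing here reflects Amazon's actual system:

```python
# A schematic, hypothetical sketch of how a model inherits bias from its
# training labels. All data is synthetic; this is not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

skill = rng.normal(size=n)           # the feature that *should* matter
gender = rng.integers(0, 2, size=n)  # 0 or 1: a protected attribute

# Historical decisions: driven by skill, but penalising gender == 1.
hired = (skill - 1.5 * gender + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two equally skilled candidates, differing only in the protected attribute:
probs = model.predict_proba([[1.0, 0.0], [1.0, 1.0]])[:, 1]
print(f"P(hired | group 0) = {probs[0]:.2f}")
print(f"P(hired | group 1) = {probs[1]:.2f}")  # lower: the bias was learned
```

The model is never told to discriminate; it simply reproduces the pattern baked into its labels.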

The fundamental reason for such biases, and for the haphazard development of Artificial Intelligence, is the problem of “self-ignorance”, captured by the Polanyi Paradox: “We can know more than we can tell.” Because humans rely on tacit knowledge, we are capable of many functions whose processes we cannot fully explain. We can immediately recognize a familiar face, but we cannot explain in detail how we did it. Since such knowledge is subconscious and instinctive, beyond the reach of language, we lack the ability to teach it to someone else. This gap in knowledge transfer may be one of the few things that differentiate us from programmable beings. Artificial Intelligence also lacks our morality: since our reasoning is driven by emotions and passions rather than logical statements, our judgements often differ from its outputs. While utilitarianism might be possible to program, Kant’s deontological argument is comfortably human in nature. Amid worldwide fears over job security, occupations requiring such tacit knowledge were thought to be the last to be automated. Yet it did not take long for AI to enter the legal system, national security, and other fields once thought intrinsically human: occupations requiring the conveyance of “not just information, but a particular interpretation of information.”

Still in a nascent stage of acquaintance, Artificial Intelligence and law have not fully met yet. Our need for conformity and objectivity hits a wall when deciding whether the definition of a legal person requires expansion, and whether artificial intelligence should be included in it. Will it be given legal rights and duties like other imagined orders, or will its rights be intertwined with those of its maker or user? What happens if an AI makes independent decisions and takes actions that cause harm its owner could not have foreseen? Does liability still rest with the owner? Is there any way to punish an AI? These questions are still being debated. Law and technology move at contrasting paces. Law-making is a deliberate and delicate process, and existing legal frameworks are questioned constantly as society advances. This almost endless consultation over multiple moving concerns creates a time vacuum in which unregulated technologies are open to misuse. Regardless, the use of AI in law and policing is fascinating precisely because it acts as both acquaintance and adversary.

The right to privacy is considered a fundamental human right. Since AI works on human data, it takes in everything available on the internet, including the personal data of billions of people, growing every second and free for use by corporations and governments alike. Since infringement has become so convenient, as machine intelligence grows our privacy risks becoming obsolete, with the possibility of AI leveraging every morsel of information against us. The haphazard use of AI erodes our human dignity along with our right to privacy, which implicitly includes our freedom of expression and opinion. Often we voluntarily give corporations consent over our personal data and location in exchange for extra features and advantages, and almost none of us read the terms and conditions explaining what our data will be used for. This legal loophole is a simple route to the acquisition of information. Property is often thought of as a bundle of rights. Can our privacy be mortgaged in the same way, and can ownership of our identity be given away? Do we lose our meaning as independent human beings if we can trade our privacy for external advantages? Every purchase one makes and every opinion one shares with the world, while individually harmless, collectively reveals a psychological pattern of who we really are. As some would argue, the state cannot allow us to forgo our fundamental rights: they are not ours to give away, but a function of the state to protect equally for all citizens. Instead of independent humans with hopes and aspirations, we become data and statistics for monetary gain.

Our beliefs can not only be used against us; they can also be modified. Our attention, along with every other resource, is constantly being capitalized on. Social media built on propaganda shows us pattern-based advertisements and videos that slowly build particular beliefs in us, reducing our own decision-making and leaving only an illusion of choice. Our free will, which allowed some to question even God’s existence, has succumbed to a creation of our own making. Instead of a Leviathan, we have become eager to supervise our own existence. Many governments have begun using assisted biometric tracking and other facial recognition systems. While the idea behind them is increased security and better checks on illegal activity, they also curtail human rights. One study showed the systems recognized Caucasian males accurately but misidentified other ethnicities at far higher rates, owing to flawed training data. This legitimization of technology cannot be allowed to violate human rights, yet there is a sheer lack of transparency and consent. Citizens in Hong Kong have started covering their faces to fight back against facial recognition systems. Who is to secure our safety if the State itself indulges in criminal activity? Huawei, which uses facial recognition technology to build “Smart Cities” organized around a central municipal computing structure, has explicitly stated that such technology can counter “extremism” in the Middle East and the United States. What counts as “extremism” is left to state discretion.

Another questionable use of such technology is predictive policing. Because these sensors collect large amounts of data, that data, fed into an algorithm, can reveal patterns used to predict the probability of future illegal or criminal activity: where it might occur, and by whom. Such algorithms are not free from biases and prejudices, though. A prediction based on past events can lead to the over-policing of sensitive or ethnically marginalized areas, which becomes counterproductive to the original objective: it raises tensions and distrust between citizens and authorities and can lead to riots. Without freedom, both animals and humans act irrationally. Such systems presume individuals guilty rather than innocent, and in application these extreme models do not follow the doctrine of legal guilt.
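The feedback loop can be seen in a toy simulation. The sketch below is purely illustrative (two invented districts, invented rates): recorded crime depends on where officers already are, so the "prediction" chases its own shadow:

```python
# A toy simulation of the predictive-policing feedback loop described
# above (districts, rates, and patrol counts are all hypothetical).
import numpy as np

rng = np.random.default_rng(2)
true_rate = np.array([0.10, 0.10])  # two districts, IDENTICAL true crime rates
patrols = np.array([30, 10])        # but district 0 starts more heavily policed
flagged = np.zeros(2, dtype=int)

for year in range(20):
    # Recorded crime depends on patrol presence, not only on true crime:
    # each patrol observes incidents at the (identical) underlying rate.
    recorded = rng.binomial(patrols * 10, true_rate)
    hotspot = int(recorded.argmax())  # the algorithm flags the busier record
    flagged[hotspot] += 1
    patrols = np.array([10, 10])
    patrols[hotspot] += 20            # next year's surge goes to the "hotspot"

print("years flagged as hotspot per district:", flagged)
# District 0 is flagged almost every year, purely because it was watched more.
```

Despite identical underlying crime rates, the initially over-policed district keeps attracting patrols, which is exactly the self-fulfilling dynamic the paragraph describes.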

Infringement notice systems now exist across the world, automating fines for minor traffic offences such as running red lights or exceeding speed limits. In the absence of human interaction, cameras are used to retrieve your registration and other details from a database, and a fine is levied accordingly. In Australia, AI-equipped cameras can even detect whether you are using your phone while driving. While the system seems perfectly functional, it feels unsettling and inappropriate. First, it is extremely invasive of our private lives. Second, while the system treats everyone equally, it is not entirely fair: if one speeds over the limit because of an emergency, one still receives the fine, notwithstanding the mitigating factors. One would want to make a case for oneself before being penalized. Even if an officer and an AI were to hand down the same punishment, a person would still choose the officer. Punishment is highly personal to us; it yearns for human consideration rather than the will of an algorithm. And since traffic offences are not monetarily heavy, citizens rarely bother to contest the decision or prove their innocence.
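Structurally, the objection is visible in code. The sketch below is a hypothetical pipeline (names, thresholds, and fine amounts are invented, not any real jurisdiction's system); what matters is what the flow has no place for:

```python
# A hypothetical sketch of an automated infringement pipeline. Names,
# thresholds, and amounts are invented; the point is structural: nowhere
# in the flow can mitigating factors or an appeal enter.
from dataclasses import dataclass
from typing import Optional

SPEED_LIMIT_KMH = 60
FINE_PER_KMH_OVER = 15  # hypothetical tariff

@dataclass
class CameraReading:
    plate: str
    speed_kmh: float
    phone_detected: bool  # the AI classifier's verdict, taken as ground truth

def issue_fine(reading: CameraReading, registry: dict) -> Optional[str]:
    """Look up the owner and levy a fine, with no human in the loop."""
    owner = registry.get(reading.plate)
    if owner is None:
        return None
    fine = 0
    if reading.speed_kmh > SPEED_LIMIT_KMH:
        fine += int((reading.speed_kmh - SPEED_LIMIT_KMH) * FINE_PER_KMH_OVER)
    if reading.phone_detected:
        fine += 500
    # Note what is absent: no field for "medical emergency", no appeal
    # step, no confidence threshold on the phone-detection classifier.
    return f"Notice to {owner}: ${fine}" if fine else None

registry = {"ABC123": "J. Doe"}
print(issue_fine(CameraReading("ABC123", 75.0, phone_detected=True), registry))
```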

The use of AI within the judiciary is a fiercely debated subject. Generally, one would prefer a human judge to decide on one’s life, unless, of course, one is found guilty. Estonia has already begun using AI to decide disputes contesting less than seven thousand euros. The country has seen multiple benefits from this system: many cases can be finalized faster and in parallel, applicable precedents can be retrieved more quickly, there are no inconsistencies between judgments, and AI is, at least in theory, unbiased, if not fairer. AI can also be used as a mediator in arbitration. Before going to court, it is even possible for people to calculate their probability of winning.
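What such a "winning probability" calculation might look like, in its most naive form, is a classifier fit to past outcomes. The sketch below invents all of its features and data; real litigation-prediction systems are far more complex:

```python
# A deliberately naive sketch of a "winning probability" estimator of
# the kind described above. Features, data, and coefficients are all
# invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500

# Hypothetical case features: claim size (thousands of EUR), evidence
# strength (0 to 1), and number of favourable precedents retrieved.
X = np.column_stack([
    rng.uniform(0.1, 7.0, n),
    rng.uniform(0.0, 1.0, n),
    rng.integers(0, 10, n),
])
# Synthetic past outcomes: wins driven mainly by evidence and precedent.
won = (2.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.8, size=n)) > 2.0

model = LogisticRegression(max_iter=1000).fit(X, won)
my_case = [[3.5, 0.7, 4]]  # a 3,500 EUR claim, decent evidence, 4 precedents
print(f"estimated probability of winning: {model.predict_proba(my_case)[0, 1]:.0%}")
```

Such an estimator can only ever echo the patterns, and therefore the biases, of the past judgments it was trained on, which is exactly the concern the next paragraph raises.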

These systems require a moral framework to function, and such morality will be based on that of the algorithm’s human programmer. If all morality is relative, AI may face the same dilemmas in decision-making as its programmer, which implies that its decisions may be equally questionable. If AI exceeds human intelligence, it might bypass moral patterns and biases to which we are blind. But if AI is superior to us, and it is not humanly possible to inspect the computational process behind its judgments (the “black box” dilemma), we would have to attribute any faults in its decisions to our restricted comprehension of its workings, and would be unable to question it. Is it better to strip away our responsibilities and hand them to an automaton? The biases discussed earlier, in a medley with corruption by governments and corporations, might leave us crippled and exposed. There is also the danger of data poisoning, of the repetition of past mistakes, and of the gap between its predetermined logic and our societies’ evolution over time, which varies enormously across cultures. With every country in a race against time to draft functioning legislation to keep AI in check, there is a possibility that it will end up drafting the legislation for us. A compromise must be found between law and AI without torturing the values the system tries to uphold for human betterment.

Endnotes

1) Cataleta, M. S. (2020). Humane Artificial Intelligence: The Fragility of Human Rights Facing AI. East-West Center. http://www.jstor.org/stable/resrep25514

2) Feldstein, S. (2019). Types of AI Surveillance. In The Global Expansion of AI Surveillance (pp. 16–21). Carnegie Endowment for International Peace. http://www.jstor.org/stable/resrep20995.8

3) Antebi, L. (2021). Challenges in Using AI. In Artificial Intelligence and National Security in Israel (pp. 97–112). Institute for National Security Studies. http://www.jstor.org/stable/resrep30590.17

4) Susskind, R. E. (1986). Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning. The Modern Law Review, 49(2), 168–194. http://www.jstor.org/stable/1096291

5) Bringsjord, S., & Govindarajulu, N. S. (2018, July 12). Artificial Intelligence. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/artificial-intelligence/

6) Park, J. (2020). Your Honor, AI. Harvard International Review, 41(2), 46–48. https://www.jstor.org/stable/26917302

7) Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005

8) Rubin, C. T. (2011). Machine Morality and Human Responsibility. The New Atlantis, 32, 58–79. http://www.jstor.org/stable/43152657

9) Forte, T. (2022). Building a Second Brain. Archived copy (2023, August 19), Internet Archive. https://archive.org/details/building-a-second-brain-by-tiago-forte-pdfread.net

10) Celaya, A., & Yeung, N. (2019). Human psychology and intelligent machines. In A. Gilli (Ed.), The Brain and the Processor: Unpacking the Challenges of Human-Machine Interaction (pp. 17–26). NATO Defense College. http://www.jstor.org/stable/resrep19966.9