Adding to my store of past ethical traumas, I remember one of the most ethically disturbing moments of my life. In early 2018, I came across a news article about a Silicon Valley health startup, Theranos, and its founder, Elizabeth Holmes. Holmes had misled thousands of customers and investors by claiming that a small device the company invented needed only a few drops of blood to run a full blood analysis, and that a single test would cost no more than $1.99. This, of course, was perceived as a major disruption to the disproportionately expensive and arguably broken American healthcare system. Holmes always refused to explain how the device actually worked, playing the 'trade secret' card. It turned out that there was no such device; there was, in fact, a massive fraud. Remarkably, Holmes paid only a $500,000 fine, with criminal charges still pending.

Rewinding a little further: in 2016, ProPublica, an investigative journalism non-profit, published a report on COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment tool used to estimate a defendant's likelihood of recidivism. ProPublica found that while the instrument did not use race as a variable, the factors its algorithm relied on correlated strongly with race and reflected racial bias. In State v. Loomis, the defendant suspected that the software used gender as a (discriminatory) factor in the determination process. The court, however, bluntly barred him from accessing that information because the methodology behind COMPAS is considered a trade secret, and thereby denied him equal protection and due process in a way that slaughters social justice. Beyond trade secret protection being used as an excuse to avoid transparency and accountability, this approach also makes me question the contestability of such technology.
Traditional text-driven law allows individuals to contest decisions, and even the norms themselves, while the data-driven approach subtly encourages black-boxing.
Nevertheless, one can never ignore the benefits that technology, particularly AI, has brought and will bring to society. As Peter-Paul Verbeek would argue, we do not exercise sovereignty over technology, nor does technology wretchedly capture us as its victims. Our behaviors, beliefs, and ideas influence technology, and technology in turn shapes and changes us. According to Don Ihde and Verbeek, we are technologically mediated beings, and we should establish a hermeneutical structure between humans and technology, in part by bringing technology into the public domain. And since a Kantian approach is unavoidable here, we should also treat people as ends in themselves, not as means to an end, by developing technologies that respect the autonomy of the humans they target.
I hold the somewhat unusual belief that AI is akin to the human subconscious mind. In his bestselling book, Psycho-Cybernetics, Dr. Maxwell Maltz argues that most of our beliefs come from early childhood experiences. The distorted beliefs and unconscious biases we accrue from a very early age determine our daily actions, our reasoning, how we interact with others, and how we perceive ourselves and the people around us. Even as we learn new things, we always see through the lens of our subconscious 'data'; we feed that data with new experiences to establish a pattern. Inherently flawed and biased AI works in much the same way: it is fed the limited beliefs and conscious or unconscious biases of its creators (data). It starts its 'life' from this place of limited beliefs (bad data), and it continues to operate without changing its core distortion. This is also known as 'garbage in, garbage out': what can be derived from data is determined by what is in the data.

However, two things make human intelligence entirely unique: emotional intelligence (i.e., self-awareness, empathy, self-reflection, and emotional self-control) and consciousness. Though we as human beings can hold distorted core beliefs, a judge rendering a judgment does not base the decision only on those beliefs. He or she interprets the relevant legislation, case law, legal loopholes (e.g., intentionally open-textured provisions), and secondary sources (e.g., doctrine, custom) using both kinds of intelligence, by experiencing emotions, giving meaning to things, and even relying on intuition. This explains the predictable and argumentative nature of legal reasoning. As technology law professors note, data-driven analysis employs quantitative and statistical methods to identify patterns, trends, and associations among variables. But there is a difference between cause and effect (causation) and mere relationships (correlation), and data is not capable of understanding the assumptions, intent, and perspective underlying a given inference.
Therefore, such a data-driven approach deviates from the contestable and predictable nature of legal judgment in order to achieve a statistical, dynamic, and computational simulation of it.
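The correlation-versus-causation point above can be made concrete with a toy simulation (a hypothetical sketch; the variable names, thresholds, and noise levels are invented purely for illustration): a model that never sees the true cause of an outcome can still predict that outcome well from a merely correlated proxy, without any grasp of why, much as COMPAS's inputs correlated with race without race being a variable.

```python
import random

random.seed(0)

# Hypothetical synthetic data: the outcome is caused only by `signal`,
# but the model never sees `signal` -- it sees `proxy`, a variable
# that merely correlates with it.
data = []
for _ in range(10_000):
    signal = random.random()                # the true cause
    proxy = signal + random.gauss(0, 0.1)   # correlated, not causal
    outcome = signal > 0.5                  # determined by signal alone
    data.append((proxy, outcome))

# A naive "risk score" that simply thresholds the proxy predicts the
# outcome well -- correlation masquerading as understanding. The model
# has not learned why anything happens; it has only inherited the
# pattern baked into its data.
correct = sum((proxy > 0.5) == outcome for proxy, outcome in data)
accuracy = correct / len(data)
print(f"accuracy from the non-causal proxy alone: {accuracy:.0%}")
```

The score looks impressive, yet it encodes no assumptions, intent, or perspective; if the proxy happened to track a protected attribute, the "accurate" model would simply reproduce that bias.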
Lastly, working as a policy professional for a little while has shown me that there is almost never a single interpretation of any legislation, before or after its enactment. For example, in the U.S., there is a law that shields providers of interactive computer services from tort liability for third-party content: §230 of the Communications Decency Act, which Congress recently amended. During the enactment process, both houses drafted their own versions of the bill, and both were utterly vague. Consequently, we policy professionals all came up with our own interpretations of these bills, and needless to say, the spectrum of interpretation was quite broad. The bill became law, but everything now hinges on case law, as it is still not clear how courts will interpret the new amendment.
Evidently, even we human beings struggle to interpret text. Can we really expect machines to understand the intent behind the text and make meaningful decisions from it?