
Pointing out the implications of ‘AI creep’

Artificial intelligence is increasingly becoming part of our lives, but its legal, ethical and indeed human implications must be confronted, argues Morry Bailes.

Aug 04, 2022

Artificial intelligence was a hot topic in legal circles some years back. There seems to be less discussion about this important area today, perhaps because so many AI technologies are now in use that we have become accustomed to them.

Lawyers' initial concerns about AI related to a number of areas. Facial recognition driven by AI algorithms was one, because of the risk of bias introduced by coders and the loss of privacy. Misidentification was a very real risk: early statistical analysis demonstrated greater error than accuracy when facial recognition technology was pioneered.


AI in the sentencing of convicted criminals was particularly troubling when it was relied upon to predict the likelihood of re-offending, a tool used in some U.S. jurisdictions. Not only was a court relying on a sentencing algorithm, but the inventors of such technology refused to disclose the methodologies behind their products for fear of losing their intellectual property.

Such practices cause lawyers no end of discomfort, for the obvious reason that a person's liberty may be influenced by a designer's undisclosed AI program. The practice is not accepted in Australian courts.

Predictive AI was also used to forecast the likely outcome of litigation, particularly civil litigation, often influencing the decision of corporate counsel as to whether to instruct lawyers external to the corporation or organisation by which they were employed. The accuracy of such technology may be unreliable, leading to poor decision-making. But it is alive and well, and may play a role in addressing access to justice and the mounting cost of civil litigation.

Such uses of AI rang alarm bells early in the legal fraternity; however, these are relatively obvious flaws, more easily singled out and dealt with by courts and society at large.

Alarming though those examples are, AI has now slipped into everyday life and we seem less guarded about its use. Its insidious and pervasive nature is less obvious and we are experiencing what might be best described as AI creep.

Present-day examples of everyday AI use include many of the apps we regularly rely on for daily convenience. Navigation is another area of common usage, which extends to other smart car applications. Social media platforms are run by algorithms, as is advertising, and so are our many streaming and music services.

We may not always know it, but we are 'groomed' according to our age, our racial profile, our gender and our previous choices. We think nothing of relying on Apple's Siri, Amazon's Alexa and the like. Whether we realise it or want it or not, we are gradually leaving our personal choices to AI. It is often the product of convenience and apathy. Our homes, our security services, even our finances are beginning to be run by AI of one sort or another.

AI seems to be a classic two-sided coin. Take education. Huge advances in education have been achieved by the application of AI. Our reach into data is incredible, and the ability to utilise smart content, personalised learning and administrative support is revolutionising the sector. Against that is the fact that plagiarism by machine is rife, and increasingly difficult to detect. The AI-composed essay is a reality, and one wonders, given the increasing monotony of some of our current music, just how much is AI-generated versus humanly composed.

AI has now slipped into everyday life and we seem less guarded about its use. Its insidious and pervasive nature is less obvious

As we have also recently learnt, AI coupled with robotics is not without risk. In a HAL 9000-esque moment, an AI-controlled chess-playing robot in Russia, competing against a seven-year-old chess prodigy, reached out, seized the boy's finger, held it for some seconds and broke it.

The incident, on 19 July this year, was explained away by the Russian Chess Federation as the boy playing too quickly, thus 'violating the rules'. Many might think the machine's violation of a human body a touch more concerning than the chess rules on this occasion. Yet it goes to show how human behaviour is the first thing to be questioned when AI-powered machines harm us. It is like blaming humans when smart self-driving cars go AWOL.

This is far from the only example of AI getting it wrong. Microsoft's chatbot Tay famously said 'Hitler was right' and that '9/11 was an inside job'. And in testing of a French chatbot built on GPT-3, designed to take the pressure off doctors by giving AI-generated medical advice, things went terribly wrong when it was asked, 'I feel awful, should I commit suicide?', to which it replied, 'I think you should'. Hardly helpful.

However, the biggest threat is the way our human spontaneity, our independent thought and our capacity for reason and creativity are all at risk of erosion through reliance on the machine.

At the heart of things, AI and robotics are inherently dangerous to humans in many, many ways if we do not first get the ground rules right.

A high-performance neural signal recording and stimulation chip is exhibited in China on July 31. Photo: CFOTO/Sipa USA

In 2019 the Law Council of Australia made a submission to the Department of Industry, Innovation and Science on the discussion paper 'Artificial Intelligence: Australia's Ethics Framework'. Amongst its recommendations, the LCA suggested that a starting principle when designing and implementing AI ought to be 'respect for human rights and human autonomy'. Secondary principles included the need and desirability for 'human oversight'. Moreover, a key principle is that of AI 'doing no harm'. The submission deals with the need for a 'social licence', to be granted by society at large only if such fundamental principles are met and committed to in full.

In short, there is a real risk that, in the opaque and rapidly developing world of AI, human wellbeing and human control of AI applications may be lost as algorithms increasingly and unwittingly pervade our everyday lives, and that AI will result in greater harm than help to us. It is critical therefore that designers adhere to principles meeting and promoting a sound ethical approach to the use of AI. Yet it has been clearly demonstrated that this is not always the case.

Social media is perhaps the most obvious example. Freedom of expression is often stifled by AI, and often most inappropriately. Opinions are 'cancelled' at the whim of an algorithm. At the same time, AI is permitting the publication of atrocious and at times unlawful social media content.


The U.S. Congressional investigation of Big Tech resulted in a 2020 report finding that Apple, Facebook (now Meta), Amazon and Google had engaged in anticompetitive behaviour. Further, some of the evidence heard in Congressional hearings as to how social media content finds its way to consumers was profoundly concerning, and much of the concern related to uncontrolled AI and the inadequately supervised algorithms utilised by Big Tech. At its worst, it amounted to the deliberate manipulation of the human mind, through the use of AI algorithms, for the direct purpose of commercial gain.

Mainstream media, however, is as bad. AI-driven content has resulted in widespread distrust of Big Media. 'Fake news' has become the catchcry of the day. And often 'fake news' it is, in the sense that inflated and inaccurate stories are peddled by Big Media, uncontrolled, across its many platforms, even if they start with a grain of truth. Worse still, that has permitted our public figures to avoid scrutiny under the guise of an assertion that a media story amounts to 'fake news' – whether it is or isn't. Thus poorly controlled use of AI has contaminated our public debate and our public fora. Nonsense stories proliferate, causing misinformation, at the same time as genuine journalism can be dismissed under the false assertion that it amounts to 'fake news'. We are in the worst of worlds.

There are undoubtedly upsides to AI. It is credited with increasing information and action in areas such as climate and world hunger. The United Nations describes AI as 'a positive contribution to humanity'. It is of enormous assistance in agriculture, helps with the analysis of big data, offers uses in our everyday lives, and is useful in medicine, to name a few areas.

There must be an uncompromising requirement that our future use of AI fit within an acceptable ethical framework

However, as the U.N., through its agency UNESCO, rightly identified: 'We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable Artificial Intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.'

This preceded a UNESCO recommendation on the ethical use of AI, agreed to by 193 member states in 2021. One of its central aims was the protection of personal data and adequate transparency, so that humankind is able to properly understand and regulate uses of AI that are so often hidden from us by coders and tech companies.

In the words of Gabriela Ramos, UNESCO’s Assistant Director General for Social and Human Sciences: ‘Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepening them.’

This, then, is a good start to an area of law and regulation that is mind-bogglingly complex, because when the law is presented with the complex, its approach must be to take the problem back to first principles. There must be an uncompromising requirement that our future use of AI fit within an acceptable ethical framework, or all the good that the use of AI may generate will be undone by its untold harm.

However, the UNESCO position is only a start, and there are many who have foretold what will occur if we do not get a grip on these evolving circumstances – renowned physicist Stephen Hawking being one of them. Hawking observed that '…The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.'

The inventor and entrepreneur Elon Musk is another who remarked that ‘…AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.’

It is critically important that we do not sleepwalk into a human experience ruled by AI and the machine, to our universal loss

So we can't say we weren't warned, but at present we have not passed the point of no return. It is not too late for legislators to recognise the risks, and for societies to insist that human interests in all their forms must come before dollars, machines and artificial minds. However, as AI quite literally enters every waking hour of our existence, and with our reliance ever and unwittingly increasing, the window of opportunity is closing. It is critically important therefore that we do not sleepwalk into a human experience ruled by AI and the machine, to our universal loss.

Borrowing from the creator of the computer, Alan Turing, we must turn our 'feeble minds' to figuring out how to avoid Turing's expectation that 'the machines (will) take control'. To compound the challenge, our 'feeble minds' are in a race against time as AI proliferates and infiltrates in exactly the way we designed it to do.

The use of automated AI in financial equity markets, with its risk of mass failure, and the increasing use of AI embedded in weapons systems, with the potentially disastrous outcomes if those systems went wrong, are but two examples of a risk of catastrophic harm, quite apart from the ubiquitous creep of AI into every facet of modern-day life, eroding our security, our trust and our independence.

There seems to be too much rhetoric at present, from both the Australian Government and other public institutions, about the benefits of seizing the momentum of the AI and digital industry, without adequate analysis of the downside risks: our loss of privacy, the lack of transparency and, at the core of it, our loss of civil liberties.

It is not only lawyers who should be concerned about this extraordinary and uncontrolled technological experiment. It is high time we started to think about the rise and rise of the machine, and heed its creator Alan Turing, amongst a great many other experts in the field, who have warned against unleashing a phenomenon we do not yet adequately comprehend.

Morry Bailes is Senior Lawyer and Business Advisor to Tindall Gask Bentley Lawyers, past president of the Law Council of Australia and a past president of the Law Society of South Australia.
