Ghost in the machine: AI, law, ethics – what does it mean for you?

Artificial Intelligence is the engine of the fourth industrial revolution and is set to shake up society as we know it: from economics and the law through to work and income, no sector is immune. GC spoke to thought leaders from the worlds of politics, law, business and academia for their views on the artificial revolution, and whether we’re heading for utopia or dystopia.

There’s change afoot in the world, a world in which any remaining Luddites can no longer sit with their hands over their ears in a state of denial. We are in a period of digital technology so disruptive that the only thing in human history that comes close to it is the first industrial revolution. That movement – through contemporaneous developments in mechanical engineering, chemistry, metallurgy and other disciplines – overturned the trajectory humans had been on and altered the landscape of human history. Our digital revolution will effectively lay waste to that landscape created by the first industrial revolution, and something entirely new will have to take its place.

‘One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.’

Stephen Hawking

Welcome to Industry 4.0 – an exciting new world where developments in artificial intelligence will enable computers to assess our health better than doctors can; where automation will revolutionise production and labour; and where networked technologies will make for driverless cars and interactive GPS systems, layering current technologies and platforms to create better, faster, cheaper, more productive and more accurate ways of doing traditional things.

But the flipside is much more disturbing. Alongside this undeniable incoming tide of ‘bounty’, we face enormous challenges. Many of us will face the prospect of losing our jobs because machines can do them better than we can. Industries will die. In the midst of the bounty, there will be unavoidable collateral damage.

Alarmists should look away now, because the digital revolution hasn’t yet properly begun. We are still at the bottom of a steep curve, and the ascent will be rapid and unprecedented, changing humanity in ways we cannot imagine. Work, labour, education, the economy, the law – the framework of things we have set in place will be recalibrated, and we will have to find out what will prop everything up when the dust settles.

Recalibrating the regulatory environment

Fundamentally, balancing the desire for innovation to progress unhindered with the need to ensure new technologies are sufficiently regulated is a challenge that policy experts and lawmakers alike are grappling with.

Margot Kaminski, assistant professor of law at Ohio State University and an expert in the intersection of technology and public policy, explained that clear frameworks are a necessity, though a huge challenge to formulate.

‘If we put it in legislation then there is clarity. The problem occurs when a legislator gets it wrong, because we don’t know what all the problems will look like and you can end up with very bad laws if you don’t know what the problems will look like first’, she said.

Kate Klonick, a resident fellow at the Information Society Project – a Yale Law School initiative and intellectual centre exploring the implications of the Internet and new information technologies for law and society – adds:

‘The legal structures that we created to deal with technology became very outdated very quickly and it became messy. The laws were ineffective and didn’t actually achieve what we wanted them to achieve.’

Take the example of unmanned aerial vehicles, or ‘drones’ – innovation moved at a speed far surpassing what regulators could reasonably envision or keep pace with. As a result, there were clashes while a more robust system regulating their use was developed.

But Kaminski points out that there remain a significant number of conceptual uncertainties even now, which are going to create some difficult questions for lawmakers down the line.

‘I can picture a number of areas where AI will cause issues – an action that can’t trace its origin to a human originator, or a creation that can’t trace its origin to a human creator. Trying to determine who the responsible actor is for something will be a challenge’, says Kaminski.

‘It comes down to questions of agency: who is the author or inventor, which is similar to trying to trace who originated a particular action. We are not well equipped in the legal system to deal with it yet, but that doesn’t mean we can’t figure it out.’

Klonick explains that it could require a fundamental rethink of how liability structures within the law operate:

‘What it may come down to is creating a whole new liability system, or finding a way to use the existing structures of liability, like supervisory liability or employer/employee liability, in new ways. Self-driving cars, for example – where will the line be drawn? Do you fault the programmers who created the algorithm which makes the car run, or do you fault the manufacturer who made the car?’

With self-driving cars in particular, issues of ethics and morality also come into play. The famous Trolley Problem, a thought experiment that has perplexed philosophers for decades, moves with self-driving cars from the realm of concept to reality. If a self-driving car is faced with the choice of colliding with a single person in the road, or swerving to miss that person but hitting two people on the side of the road as a result, how should it act?

When a study asked people what they would do if they were behind the wheel, most agreed they would likely swerve to miss the single person, even if it meant potentially killing two people as a result. But when asked what they would want the program to dictate, most said they would prefer the car to make the utilitarian decision, saving two lives rather than one.
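To make the distinction concrete, the utilitarian rule that respondents said they wanted programmed into the car can be written down in a few lines of code. The sketch below is purely illustrative – the option names and casualty figures are hypothetical, and no real autonomous-vehicle system reduces the decision to a lookup like this:

```python
# Illustrative only: a toy 'utilitarian' policy for the trolley-style
# dilemma described above. Options and numbers are hypothetical; real
# autonomous-vehicle software does not work this way.

def utilitarian_choice(options):
    """Pick the option that minimises expected casualties."""
    return min(options, key=lambda option: option["expected_casualties"])

options = [
    {"action": "stay_in_lane", "expected_casualties": 1},  # hit the one person ahead
    {"action": "swerve", "expected_casualties": 2},        # hit the two bystanders
]

print(utilitarian_choice(options)["action"])  # -> stay_in_lane
```

Most drivers in the study said they would instinctively swerve – the opposite of what this function returns – and that gap between human behaviour and machine policy is precisely what makes the liability question so hard.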

‘It becomes a question of rationality and what we expect from our devices. It’s easy to make a rational decision around something like what costs the least or minimising transaction costs, even if people themselves are inherently irrational. But machines are held to a higher standard and issues like the Trolley Problem highlight that. Do we want the machine to make the rational decision and if so, how do we structure liability around that within the law?’ says Klonick.

But, Kaminski explained, the Trolley Problem is merely a microcosm of a bigger obstacle: regulating the physical embodiment of AI, where it has the potential to do real harm, as opposed to when it operates within the bounds of cyberspace.

‘When you have AI that moves in real physical space, where it can run into humans and cause physical injury, you might be pro-innovation – which is an important value – but it is less of a value when you put it against the idea that people might get killed. So the physical safety element of putting AI in the world embodied (in this case as a car) does guide more regulatory co-operation’, says Kaminski.

Steven Joyce, New Zealand’s fourth-ranking minister, responsible for the Regulatory Reform and Science and Innovation portfolios (among others), explained his government’s approach: ‘We have been trying to move ahead and design regulatory systems for emerging technologies, but the technology will always evolve anyway. Because this stuff is so new, all we can do is put our best foot forward with our regulatory regimes – and thus far, we have been mostly getting it right.’

He points to the New Zealand government’s response to the rise to prominence of ride-sharing app Uber as an example where innovative thinking from lawmakers can find a way to create a fair and competitive regulatory landscape.

‘Our digital revolution will effectively lay waste to that landscape created by the first industrial revolution.’

‘With transport rules, Uber wants to operate in a deregulated environment, whereas the taxi industry is much more regulated. The reaction globally has generally been to regulate Uber, but instead of going in that direction, we said “how about deregulating the taxis?”

‘Now, as a result, Uber hasn’t been particularly happy with that, because its deregulated edge might be lost.’

Working in an artificial world

The risk to employment presented by automation and subsequently AI is hardly a new idea. Back in 1930, John Maynard Keynes – the father of the Keynesian school of economics – popularised the term ‘technological unemployment’ and predicted the risk it posed to labour markets, saying:

‘We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come – namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.’

The World Economic Forum has said it expects economies and jobs to begin broadly feeling the impact of AI by 2018, above all through the automation of knowledge-based tasks previously performed by medium- to highly-skilled individuals. Previously, automating such jobs was either technologically impossible or financially infeasible. But developments in processing power have not only significantly reduced the costs of implementing such systems, but broadened what they are able to achieve. The upshot is that jobs involving minimal amounts of creativity, social interaction, mobility or dexterity are most at risk, shifting the impetus for human workers from processing to personality.

‘There is an abundance of opportunities for business to find ways to attract capital, but it’s becoming more about EQ and emotional intelligence, as opposed to areas that have recently been automated.’

At a recent event, the head of audit at a Big Four accounting firm revealed that, due to the progression of automation and artificial intelligence within the firm, it was planning to slash its graduate intake of auditors by half.

Joyce comments: ‘Things go through these phases of development. Once, most accountants were literally doing the accounts, but there is precious little of this work now, so accounting has become about big audit and consulting firms. Automation is the nature of the business and it leads to consultancies.’

He credits New Zealand’s pragmatic, forward-facing approach to new developments with allowing the country to withstand structural changes to the economy – the most important element of which is an efficient and adaptable labour market.

‘You have to maintain a flexible labour market. Lots of countries still have a rigid labour market, like France and a number of countries in Southern Europe. There, people are still debating the age of retirement and whether you should have a job for life. We had this discussion in the late 80s and early 90s. We don’t try to stop change and this has served New Zealand well’, explains Joyce, adding:

‘You have to be flexible to allow jobs to fade away if they are no longer relevant. It’s really about if they are required for your economy to work. It isn’t exactly the end of work, but rather, the industry moving on and as a result, some jobs are disappearing.’

While Joyce acknowledges that there are bound to be employment casualties as a result of AI and automation, his opinion is that the current estimates and projections are overly alarmist. He expects an upcoming report by the OECD to ‘apply a much more conservative measure’ and better account for the prospect of new job creation.

But what about legal? Robots replacing robots

‘The next generation of attorneys? We won’t have as many and they won’t have the valuable training required’, declared Kaminski when speculating on the future of legal work in an artificial world.

Already, AI is having an impact on the legal sector. ROSS, billed as the world’s first digital lawyer, was ‘hired’ by BakerHostetler, a major US law firm. Built on IBM’s cognitive computing system Watson, ROSS lets users ask questions in plain English; it reads through the entire body of law and returns a cited answer, drawing relevant information from legislation, case law and secondary sources.
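ROSS’s internals are proprietary, but the general pattern it embodies – take a plain-English question, score a corpus of legal texts against it, and return the best-matching passage with its citation – can be sketched in a few lines. The snippet below is a deliberately naive keyword-overlap version with an invented three-document corpus; it stands in for what, in ROSS and Watson, is far more sophisticated natural-language processing:

```python
# A deliberately naive sketch of the retrieve-and-cite pattern described
# above. The corpus and scoring are invented for illustration; the real
# systems use far more sophisticated natural-language processing.

corpus = [
    {"citation": "Hypothetical v. Example (2015)",
     "text": "a debtor may assume an executory contract with court approval"},
    {"citation": "Illustrative Insolvency Act, s 123",
     "text": "a company is deemed unable to pay its debts if a statutory demand goes unmet"},
    {"citation": "Imaginary Agency Restatement, s 1",
     "text": "an agent acts on behalf of a principal subject to the principal's control"},
]

def answer(question):
    """Return the passage whose words overlap most with the question, plus its citation."""
    q_words = set(question.lower().strip("?").split())
    best = max(corpus, key=lambda doc: len(q_words & set(doc["text"].split())))
    return f"{best['text']} ({best['citation']})"

print(answer("When is a company unable to pay its debts?"))
```

Even this crude version hints at the leverage on offer: the lookup is instant, where the human equivalent took hours.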

‘It’s about freeing up the lawyer to allow the lawyer to do what they’re trained to do – which is intellectual output, spending more time thinking about the problem. What this technology really does is allow firms and lawyers to get to the nub of the issue, without the hours and hours of research’, explains Linda NiChualladh, regulatory and competition counsel at An Post in Ireland.

The implementation of ROSS is the first step in a process with transformative potential, one that looks set to shake up the industry from the ground up – but is that a positive for the whole sector?

‘The physical safety element of putting AI in the world does guide more regulatory co-operation.’

‘The people who will survive the uptake of AI in legal will be the senior attorneys and experts at the top who have honed their skills. But the junior associates will be hurt, because they will be outperformed by a machine’, explains Kaminski, who says the effect on junior lawyers will have a knock-on effect in the upper echelons of the profession.

‘Junior lawyers learn to be good because they put hours into drudgery work. They might practise this kind of work for ten thousand hours, and they get expertise by being put through the hoops’, says Kaminski.

‘Essentially, all the time that is spent by juniors in their zero-to-four years of post-qualification experience – all that time spent researching, looking for proper precedents, looking for cases, looking for templates for agreements – that won’t have to be done by them anymore’, adds NiChualladh.

Great performance used to mean being good at being machine-like – now, you have to be good at being a person. The question needs to shift away from ‘what can robots not do?’, because Moore’s Law – named after Intel co-founder Gordon Moore, who observed that computing power doubles roughly every two years – suggests it is dangerous territory to assume anything about the limits of tech growth. A better question is what activities humans will insist that other humans continue to do, even if computers could do a better job.
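For a sense of why betting against tech growth is dangerous, it helps to put Moore’s observation into numbers. A minimal sketch, assuming the classic idealised formulation of a doubling every two years:

```python
# Moore's Law as compound growth: capability doubles roughly every two
# years, i.e. N(t) = N0 * 2**(t / 2). The two-year doubling period is
# the classic idealisation, not an exact physical law.

def moores_law(n0, years, doubling_period=2.0):
    """Project capability after `years`, starting from level `n0`."""
    return n0 * 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"after {years:2d} years: {moores_law(1.0, years):,.0f}x")
# after  2 years: 2x
# after 10 years: 32x
# after 20 years: 1,024x
```

A thousand-fold improvement inside two decades is why ‘what can robots not do?’ has such a short shelf life as a question.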

‘Thirty years ago, when you had to look something up, you’d have paralegals spending hours going through actual casebooks. Now I go online, I type in the name of the case and I see every word highlighted which I think would be related to what I’m doing, and it takes me all of five seconds’, explains Klonick – but with a caveat.

‘What the program can’t do for you, yet, is think for you about how those cases are going to be applied, or tell you what was more convincing to a judge, or find some of those unique linkages connecting cases. For high levels of cognition, we’re still very far away.’

Moving forward, that means lawyers will need to find new ways to distinguish themselves and add value while harnessing the benefits of an increasingly technological legal sector.

‘Everyone’s moving away from IQ testing to EQ testing, and that is because of the ability of AI and big data and all of these things to free up your time. It gives people time to start working on the relationship with the client. It’s not that “we are the best law firm because we have all of this data” – that’s not what people want anymore. They’re looking at the changing needs of clients and the changing needs of potential clients. So the law firms are looking to promote the characteristics which they feel are now required of lawyers’, explains NiChualladh.

And for wider society?

Most of our experts saw the development of both soft and tangible skills, as well as adaptability, as the key to success in an increasingly competitive market.

‘AI moves up the ladder of the economy in different ways. People who are more highly educated will move to other jobs, as opposed to others, whose education equips them for a different kind of job’, says Kaminski.

‘There is this idea that people are studying for too long, but if you take the longer view over 40 years, the greatest skill to learn is how to assimilate change. We need to keep upskilling people, because we don’t see a sudden boom in low-to-medium-skilled jobs. Looking out over 40 years of a young person’s life, they should be aiming to train and go as far as they can go’, says Joyce.

Others are not so optimistic about the outlook for a future dominated by AI. Some experts have suggested that a Universal Basic Income could become a necessary measure to level the playing field and ensure that a basic standard of living is both attainable and sustainable, regardless of the future of work.

‘The risk to employment presented by automation and subsequently AI is hardly a new idea.’

‘With how our economy is structured at present, people either need to have money or have the ability to work to earn money. If AI was to significantly alter the employment landscape, then how we deal with that as a society may require a rethink’, says Klonick.

Switzerland will be the first country to hold a national referendum on whether a basic income should be introduced, while the Netherlands, Canada, Finland and New Zealand are all publicly considering the concept with varying degrees of gravitas.

Joyce says that while people who are affected by structural unemployment ‘absolutely should be supported through a period of change’, he remains steadfastly against any notion of implementing a universal basic income.

‘I think that is the wrong direction, because it just reduces the incentives for adaptation and change. A universal income – one in which you are paid whether you are working or not – has to be paid for with high tax rates. And paying people irrespective of whether they work leaves no obligation to look for work’, he argues.

‘This is the wrong formulation; innovation and entrepreneurship come from a different type of motivation, and a working wage is not going to improve the adaptability of the community to change. The answer is flexibility, skills training and retraining, and more than anything else, literacy and confidence.’

Concluding thoughts

There is no doubt that AI is going to have a broad and meaningful impact on society, but the breadth and depth of that effect is still uncertain. What is certain, however, is that the development of the technology places the spotlight back on us – humans. Ultimately, if the implementation of AI is positive, it will allow humans to focus on being more human – at work, at home and in life. Concentrating on what makes humans unique, and where we can impart value that is still irreplaceable, will become not merely desirable but a necessity. While data-driven algorithms may one day do a better job than, say, human juries in deciding a legal case, the social constructs of society still require an inherent degree of humanity – something machines just can’t replace. Yet.

So is it a utopia or dystopia ahead? We’ll defer to the second half of Hawking’s quote for that:

‘Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.’