People tend to see technology as something that will allow them to get rid of the parts of their job that they don’t like – that they find repetitive, cumbersome or boring. Equally, it’s seen as something that will make them more efficient and free them up to concentrate on the interesting parts of their job. In the past, this might have meant something as simple as word processors or automation. But now, all of a sudden, a computer can not only automate, it can learn. That means you can start giving the computer a number of cases, and it starts building rules and logic on its own. Not exactly on its own – of course you need an algorithm that allows it to learn – but after a while, it learns. It can recognise different things, something in a picture or certain clauses in a contract – as soon as you provide it with enough data for the computer to learn, there are seemingly few limits on what could be achieved.
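The idea of "giving the computer a number of cases" so it builds its own rules can be made concrete with a toy example. The sketch below is a minimal nearest-neighbour classifier in pure Python; the fruit data and feature names are invented purely for illustration, and real systems use far richer models.

```python
# A minimal sketch of "learning from cases": the program is never told
# the rule, it just stores labelled examples and classifies a new case
# by finding the most similar stored one.

def nearest_neighbour(train, query):
    """Return the label of the training case closest to the query.

    train: list of (features, label) pairs, features as tuples of numbers.
    query: a tuple of numbers with the same layout as the features.
    """
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, label = min(train, key=lambda case: distance(case[0], query))
    return label

# Toy "cases": (weight in grams, roundness 0..1) labelled by fruit.
cases = [
    ((150, 0.9), "apple"),
    ((160, 0.8), "apple"),
    ((120, 0.2), "banana"),
    ((130, 0.3), "banana"),
]

print(nearest_neighbour(cases, (155, 0.85)))  # apple
print(nearest_neighbour(cases, (125, 0.25)))  # banana
```

Nothing here encodes what an apple or banana is – the distinction emerges from the examples, which is the shift the passage above describes.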
A new paradigm
This is a totally different paradigm. We are not talking about automation anymore, we are talking about something that can learn and, in many cases, can learn in a much more efficient way than a human. That changes the question, and this is something that most people have not yet realised.
Google, right now, is using machine learning and artificial intelligence to manage all the electricity consumption at its data centres. You might say, ‘This is Google, Google is not a regular company, they are on the cutting edge,’ but this is happening in more and more places.
It’s being used right now to identify tumours in medical imaging. Consider this: when a physician is looking for tumours manually, they can be given hundreds of images to search through – even though it’s a relatively routine task, the volume means that things can be inadvertently missed. So instead, an algorithm is being used to do it, and it’s already learnt to spot tumours better than physicians.
I think a difficulty with technology is that when people don’t understand how it works, it can be disconcerting. But if people can develop a sense of what’s going on and understand the basics of how these things work, then it’s going to help with uptake and use. You don’t need to be a mathematician. If you follow the concepts of what it is – even in a nutshell, without getting into the nitty gritty details – you realise that machines don’t make mistakes. They don’t miss things. And things that are absolutely normal in humans don’t happen in machines: their perception is better, their senses are not relative, and they don’t get distracted.
The first time you jump into an autonomous vehicle, for instance, you keep thinking, ‘Well I’m not sure if the camera is going to see that, or the light, the reflection, the speed of reaction.’ And then you realise: no. As soon as you put the machine in charge, it is able to see 360°, it has immediate reactions, it doesn’t get distracted, it doesn’t look at a cell phone, it doesn’t get road rage.
Analytics is really exciting. I’m still amazed by what you can do already – understanding what your customers do, or think, or express. We have a crazy amount of customer interaction points – it’s not just that someone goes to a store and buys something or receives a service or a product. Now customers speak up on social networks, they provide opinions, they complain, they sometimes praise our products or what we do. So we have a crazy amount of moments that can be analysed. Right now, an algorithm can understand irony – when you tweet something that’s ironic, there are algorithms that can start to approximate and understand that it’s not really a compliment, it’s more like a complaint. Right now this is being deployed primarily in marketing – but there are a host of potential applications for this type of technology.
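Real irony detection relies on trained language models, but the flavour of the idea – flagging a mismatch between positive-sounding words and complaint signals – can be sketched naively. Everything in this toy version (the word lists, the example tweet) is invented for illustration and would be far too crude for production use.

```python
# A deliberately naive sketch of sentiment scoring on customer messages.
# It counts hand-picked positive/negative words and flags a possible
# ironic mismatch when praise words co-occur with complaint words.

POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"broken", "late", "refund", "terrible"}

def score_message(text):
    # Crude tokenisation: lowercase, strip basic punctuation, split.
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos and neg:
        return "possibly ironic complaint"
    if pos:
        return "praise"
    if neg:
        return "complaint"
    return "neutral"

print(score_message("Great, my package arrived broken. Wonderful."))
# possibly ironic complaint
print(score_message("I love this product"))
# praise
```

A real system would learn these signals from labelled data rather than from hand-written word lists, but the input and output are the same shape: raw customer text in, an interpretation out.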
In my role, I’m extremely interested in how lawyers adopt innovation, because lawyers have traditionally been very conservative about technology – kind of laggards.
In some places, we’re seeing lawyers use machine learning to examine contracts. Instead of a lawyer going through every contract clause by clause, an algorithm now does it – and it works well because legal terms and legal language tend to be very well defined, since they are specifically written to avoid ambiguity.
As a lawyer, you used to trust your paralegal to go through the databases and find the relevant cases related to the one at hand. Now, lawyers have started to trust an algorithm to review old cases and precedents and so on. They trust the algorithms more than they trust human paralegals, because they realise that the algorithms can compare the text word by word, meaning that they don’t miss details. They can even review all the cases heard by a particular judge and get to know the biases of that judge in similar cases – which is something that a human assistant might not be able to do, or might only be able to do by intuition.
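The word-by-word comparison described above can be illustrated with the simplest possible text-similarity measure: Jaccard similarity over word sets. The clause texts below are invented, and a real legal-research tool would use much richer matching, but the principle – exhaustive, tireless comparison of every word – is the same.

```python
# A sketch of word-level text comparison: score how similar two clauses
# are by the overlap of their vocabularies (Jaccard similarity).

def jaccard_similarity(text_a, text_b):
    """Fraction of distinct words shared between two texts (0..1)."""
    a = set(text_a.lower().split())
    b = set(text_b.lower().split())
    return len(a & b) / len(a | b)

clause = "the tenant shall pay rent on the first day of each month"
precedent = "the tenant shall pay rent on the fifth day of each month"

# The two clauses differ in a single word, so the score is high.
print(round(jaccard_similarity(clause, precedent), 2))  # 0.83
```

Unlike a tired human reader, this comparison never skims: the one changed word ("first" vs "fifth") is guaranteed to lower the score, which is exactly the kind of detail the passage says humans can miss.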
I’ve been working with the Spanish Ministry of Justice in connection with the modernisation of justice in Spain. I also teach law students at IE. These two roles expose me a little bit to the realities of justice. Work is being done on ‘algorithmic justice’ for petty thefts and other repetitive matters – the possibility of coming up with a first verdict, of course allowing the two parties to appeal if they don’t agree with it, but releasing many human hours that could add value in more complex cases. Everything related to an insurance claim, traffic incidents when there are no victims, and so on – all of these could be examined very well right now, with the current state of technology, by algorithms. You could ask one insurance company to negotiate with the algorithm of the other and reach an agreement, only bringing the human lawyers in if they are really required. That could take away a significant part of the burden on lawyers right now.
Behind the magic curtain
I think the key aspect here is to get people to see this as a natural thing, as something that should not scare them. Google understood a while ago that machine learning was going to be fundamental. Three years ago, their CEO, Sundar Pichai, talked about machine learning as something that will change the world with the same magnitude and intensity as electricity or fire changed the world. That is a bold claim. So, how do you go from understanding that to getting your organisation to understand it? He put together a very ambitious programme to teach it to all the staff. And all the staff means all the staff – the clerical staff, the people that work in search, the people that work in every single part of the value chain in the company. What happened? Innovation flows much better when people are able to see machine learning not as magic, not as something that someone with a magic wand goes ‘Ping!’ and it starts working, but as something that flows from the data.
You start to work in a different way when you understand the importance of data in what we do on a regular basis. In our day-to-day work, we don’t pay much attention to storing data properly or being careful about how it is stored. But as soon as you start creating an awareness about data and storing it the proper way – because later on it will provide a lot of added value, because you’re going to analyse it with algorithms and be able to find patterns – things change in a corporation.
But most corporations are not doing that. Most companies are applying this idea of innovation – machine learning, coming up with algorithms and so on – in just one part of the corporation: the innovation department, the machine learning department, or something like that. When it is confined like that, it is much more difficult to change an organisation.
I think it’s better when a law professional is attuned to the environment. The environment is being influenced by technology all the time now – we do many things with technology that we didn’t do three years ago, and it’s changing amazingly fast. The legal profession has to relate to that and adapt to it. It brings up new opportunities: what happens if there’s a misunderstanding when I talk to my voice-enabled home assistant – what type of legal issues are there? And besides understanding these new cases, lawyers need to understand how this is influencing the way they work, and the way they provide their services.
All of us will be able to talk to machines – most likely, kids will start programming in kindergarten. So what happens when these kids grow up and some of them become lawyers? They will be examining an algorithm and understanding what it is doing, even creating their own algorithms. Right now, if you think about the latest version of iOS, it lets you tell Siri: ‘When I use this word or this command I want you to do this, this, this and that. I want you to open my GPS, and give me the directions to my office because I want to see where the traffic jam is, and then play this on Spotify…’ – and you can activate all that with a single voice command. What are we doing? We are creating algorithms.
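That closing point – a voice shortcut is just an algorithm – can be sketched in a few lines. The command name and the actions below are invented stand-ins for the real device integrations; the point is only that the shortcut is an ordered list of steps bound to a trigger phrase.

```python
# The shortcut idea, sketched: a named command bound to an ordered
# list of steps. Each step here just records what it would do.

def open_directions(log):
    log.append("directions to office")

def play_playlist(log):
    log.append("play commute playlist")

# One trigger phrase mapped to a sequence of actions.
shortcuts = {"heading to work": [open_directions, play_playlist]}

def run_shortcut(command):
    """Execute every step bound to the command, in order."""
    log = []
    for step in shortcuts[command]:
        step(log)
    return log

print(run_shortcut("heading to work"))
# ['directions to office', 'play commute playlist']
```

Defining the sequence is the programming; saying the trigger phrase is just invoking it – which is exactly the sense in which everyday users are now writing algorithms.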