Life on the cutting edge
GC magazine partnered with Winston & Strawn to host a roundtable discussion with general counsel at the frontiers of technological innovation in the heart of Silicon Valley. From AI to regulation, their insights paint a picture of uncertainty – but equally one of cautious optimism about the future of business.
Frontiers of disruptive tech
To say that disruptive technologies will be a catalyst of great change – not just in the legal profession, but in business more widely – is to state the obvious. If that claim still sounds somewhat controversial anywhere, it is not in Silicon Valley – the heart of technological innovation – where it's perhaps more accurately an understatement.
But how do the in-house legal departments working closest to the action – those employed by emerging, new-era tech companies – manage to stay ahead of the curve in a way that allows them to credibly advise on legal issues at the cutting edge? How do traditional law firms ensure they are keeping in lockstep with industries undergoing technology-inspired change?
Discussion point: Is there adequate regulatory infrastructure in place for AI and disruptive technology more generally?
Often, government regulation is reactive rather than proactive. This is nearly universally true when it comes to technology at the cutting edge. The implications of new, disruptive technology are unpredictable; only once it has been released into the market can the wider implications even begin to be understood, not just by the users, but oftentimes the creators.
This can put in-house legal teams and their external firms alike in a purgatory of sorts, where they must advise on legal positions that are likely to be made obsolete by inevitable regulation. Uber, and the effect it has had on the employee/contractor demarcation, is a good example of this.
With several high-profile technological revolutions on the horizon (AI and blockchain in particular), do regulations exist into which they might fit? Or must legal advisers adopt a ‘wait and see’ attitude, knowing that overzealous future regulators might leave a company’s product dead in the water?
THE EXTERNAL PERSPECTIVE
Kathi Vidal, managing partner – Silicon Valley, Winston & Strawn: ‘I gave a talk a few months ago at Berkeley with people who had worked for the previous White House administration and they were concerned not only that we lack the infrastructure, but that we lack an infrastructure to build the infrastructure. The current power regime is all about deregulating as opposed to regulating, so we don’t really have adequate structures in place.
We have infrastructure around unpredictable technologies. In IP we tend to divide technology into predictable and unpredictable. Unpredictable covers things like pharmaceuticals – and on that side you have the FDA; you know you have to get certain things approved. On the other hand, for the predictable sciences, we don’t have that.
With AI there is increasingly a merging of the two sides. With software entering into all sorts of “things”, you move away from those things being predictable. I wonder whether we need something like the FDA, where you have to get these algorithms approved or whether something needs to be done before they can go to market. I certainly think that the blurring of the lines between predictable and unpredictable technology is a big change.’
Basil Godellas, partner and co-chair, financial services practice, Winston & Strawn: ‘With distributed ledger technology in particular, there are interesting developments at the state level that could impact the practice of corporate law. In the United States, essentially we are seeing individual states looking at how to position themselves favourably with disruptive technology. About a year ago, Delaware and Wyoming both began exploring the use of distributed ledger technology for storing corporate records, but there’s a number of large players in the industry that already provide corporate franchise and similar services that could be impacted by this technology. Will there be a bit of a push back or will the technology be embraced?
Two key examples have been cited over the last six or eight months as problems or mistakes that distributed ledger technology could have avoided. In the case of Dole Foods, the company’s corporate records did not accurately reflect the total number of shares of its outstanding stock. The judge in that case made a statement to the effect that distributed ledger technology could have prevented this mistake. The other example involved the bankruptcy of a major company. In that case, someone made a mistake and terminated the financing statement securing a billion-dollar-plus loan right before the company went into bankruptcy. So there’s been a lot of talk about using smart contracts to prevent mistakes like this. These are just some examples of where state governments are looking to use blockchain technology, specifically to facilitate corporate record keeping and secured financings.’
THE IN-HOUSE PERSPECTIVE
Bruce Byrd, chief legal officer, AT&T Communications: ‘When it comes to the question of whether we need new regulations, I should qualify any statement I make by pointing out I work in the second-most regulated industry in America after banking. Our lawyers don’t wake up in the morning thinking we need more regulations. On the other hand, you do wonder what direction some of this might go and whether it could be a wise decision to get ahead of it, or suggest that policymakers should do something. I spend a fair bit of my time talking to the intelligence community about the security of our network, and most of that focuses on big threats to its core elements.
There’s this little thing called IoT – the Internet of Things – too. Whereas I can name on one hand the primary manufacturers of big network gear, I can’t do that with the manufacturers of IoT devices. The standards are disaggregated or nascent, and the security protocols are more questionable than they are in our core network or radio access network. So it’s hard to know which direction to go, but I’ve thought about it less in terms of what recommendations we need (although we have made some recommendations to the White House about things needing to be considered in the IoT space, those recommendations are not around needing more regulation), and more in terms of emphasising that we need to take advantage of AI to address the problem. In other words, whatever regulation you may come up with will pale in effectiveness in comparison to AI capabilities that can do faster threat analytics, while proving to be more malleable and flexible. That is a tough thing for policymakers to get their heads around. The real challenge is education. They don’t quite get that it’s not going to work in the usual way – they will have to allow us to use a lot of this stuff before it’s perfected if we as a nation are going to take advantage of its benefits.’
Jordan Newmark, litigation and IP counsel, Miami International Holdings: ‘The regulatory framework we have in the US isn’t technology-focused, it is fear-of-disruption-of-the-general-marketplace-focused. The result is you have areas of the world that are way ahead of us in terms of implementing things. In Bermuda, for instance, the stock exchange has promulgated draft rules with respect to trading tokenised products that are way ahead of where the SEC is today. We can watch experiments happen in other countries around the world.’
Scott Weber, general counsel, Lumina Networks: ‘What’s interesting when it comes to trust is there has been a shift away from traditional sources of authority and credibility. I trust Google and AT&T far more than I do the government at this point, especially right now, because they move faster – and besides, corporate social responsibility is taking hold in the economics; it’s to a business’s advantage to be a good citizen. It makes sense to be a good corporate citizen and make products that don’t hurt people and protect against these liabilities, with or without regulations. Of course we still need regulations, but they are always going to be three steps behind, and even then it will be a case of putting a sticking plaster over something very large. It will be interesting to see the ways in which corporations and government can work together.’
Kathleen Jason-Moreau, general counsel, Vim: ‘I just feel like I spend all my time chasing after the sales team. Bruce, you talk about AT&T being regulated, but I’m in healthcare – $1bn to get a product to market will be considered cheap. Often sales people come to the company from a different sector and they’re just not used to the way this market works. It’s so heavily regulated. It’s an education, and you don’t want to be the lawyer who is always saying no. I’m not comfortable just being a rubber stamper, but I also don’t want to be the person who stops the deals. The most successful sales executives don’t take a no from anyone – not from a customer and certainly not from a GC. We’ve just got to figure everything out; it creates all sorts of challenges because the law changes too quickly. I study every weekend. You never want to let the CEO down.’
Discussion point: Does disruptive technology change how you expect external advisers to act?
Increasingly, firms are marketing their services on the basis of value. This is particularly so when it comes to in-house clients, for whom tightening budgets have made value a priority. With in-house teams looking to technology to streamline processes, and as businesses from every sector embrace technological innovation, questions are being asked of external partners: are you using technology to deliver more efficient services to my team? Do your lawyers have the technical knowledge required to advise my business?
THE EXTERNAL PERSPECTIVE
Kathi Vidal: ‘I had a client who invited their four top firms in and they explained what their goals were, so that the four law firms could align with that. It helped tremendously; the energy you get is great. Just to hear from the CTO about the open source issues, the cloud issues, what they’re struggling with – it energised me to think outside of the box and about which lawyers I could pull into the spectrum in terms of solving their problem. You can bring them all in at once, which is extremely empowering and makes me able to serve them better.’
THE IN-HOUSE PERSPECTIVE
Chris Ghazarian, general counsel, DreamHost: ‘I think you see a lot of firms playing catch up. You expect them to understand some of the tech behind it, but you also have firms who don’t understand the difference between blockchain and distributed ledger technology. Last year, we had a case against the Department of Justice. Part of the reason we got into that was the automation around some of our subpoena compliance and warrant compliance. That led to discussions with GCs of big web hosting companies. It was interesting to hear about the way they incorporate these technologies into what they do in terms of legal compliance. Frankly, a lot of them don’t know what they’re talking about. Their external counsel may also not be fully aware of what’s going on, and then something bad happens. Then, and only then, they realise it is a big deal – after which, it’s often too late.’
Bruce Byrd: ‘My lawyers cover every specialty, and I expect them to be better versed than anyone we hire. So what I’m looking for in external counsel is a level of curiosity. I meet lawyers who aren’t curious about technology. Occasionally, I run into my own lawyers who joke with me about how they don’t understand tech. I don’t find it funny – if you don’t know the tech then that’s a problem. I’m not asking my lawyers to be technologists, that’s not our training, but they should know the essential elements of what we do. I have a lot of confidence in the firms we hire, but when it comes to individual lawyers, I’m seeing a slight laziness about the issue – that’s what annoys me. My outside firms need to understand this at least as well as I do. That’s my biggest challenge – making sure my outside firms are diving into the technology.’
Discussion point: What are the ethical and practical concerns with increasing reliance on AI and algorithms in business?
THE EXTERNAL PERSPECTIVE
Kathi Vidal: ‘We, as a society, don’t trust technology. Autonomous vehicles should save 20,000 lives a year, but if one person dies that’s going to make people think it isn’t safe. Then there are questions of what happens if someone hacks the software. These are very serious issues which we are going to be dealing with all the time. With software entering into all sorts of applications, you move away from those things being predictable. I certainly think that the blurring of lines between predictable and unpredictable technology is a big change.’
THE IN-HOUSE PERSPECTIVE
Jordan Newmark: ‘It’s not so much the legal dollar liability, as much as the brand liability, that’s the issue. If you’re a financial services provider and you have a robo-adviser for which the algorithm is off in a particular way, and it recommends buying something that you should have shorted, for example, that may cause a loss of a certain dollar amount for your customers – but the bigger loss is likely to come from other customers fleeing or deciding not to invest any more money with you. I tend to think that the limitations of liability for using software probably will apply the same ways they traditionally have, but I think brand liability is what will take the biggest hit.’
Robert Shives, general counsel, Shinko Electric: ‘In my view, understanding the risk to retirement plans and savings is just as important. If the AI recommends something that ruins millions of people’s lives, it is a big problem. That is not just a brand problem. It is people who have lost their life savings.’
Mary Fuller, former head of legal and chief policy officer, The Kudelski Group: ‘It won’t be long before AI invents – there is already an AI popstar in Japan. I presume AI cannot get a copyright. Likewise, unless you’re human, you cannot become an inventor on a patent. That means a lot of the regulatory and IP frameworks will ultimately be deemed inadequate. I assume that as soon as there is a clear economic interest in companies being able to own the output of AI, the laws will change, but until then we have the question of what we do. There will be no incentive to develop something that is fundamentally not patentable.’