fivehundred magazine > M&A Yearbook 2024 > AI spy: avoiding bad AI investments

AI spy: avoiding bad AI investments

In the wake of recent advances in generative artificial intelligence (AI), AI has shot to the top of the board agenda. No one wants to miss out on this transformative technology – but businesses need to ensure FOMO doesn’t lead to bad investments. Jonny Bethell and Jo Joyce of Taylor Wessing explore the potential pitfalls to avoid in AI acquisitions

The launch of OpenAI’s ChatGPT in November 2022 took a flamethrower to the last AI winter. As users explored the chatbot’s wide-ranging generative AI capabilities to answer questions, draft copy, write code and more, it dawned on many businesses that this technology could be a game changer for how they operated. The challenge lies in pinpointing how – and, on such an unprecedented frontier, in embarking on the journey with eyes open to the risks associated with this new technology.

ChatGPT also lit a fire under leadership teams. After years of pitching AI as the future, suddenly they were under pressure to demonstrate they had been preparing for that future. A flurry of activity followed as companies raced to reassure investors they were on top of the issue and had their own revolutionary AI tech ready for deployment.

A wave of product launches, investment announcements and strategic alliance arrangements swept the globe, accompanied by a never-ending stream of media coverage on what AI could mean for the way we live and work.

After the NFT hype of 2021 and the metaverse dominating 2022, 2023 was certainly AI’s year. There were too many notable developments to mention exhaustively here, but to name just a few:

  • Microsoft announced a $10bn investment in OpenAI to accelerate breakthroughs in AI technology1.
  • Google launched its own AI chatbot, Bard, and announced a $2bn investment in large language model provider Anthropic2.
  • Amazon launched Amazon Bedrock, a suite of tools to help users build generative AI applications under its AWS brand3.
  • Demand for Nvidia’s AI chips saw a surge in its stock price, and in May it reached a $1tn valuation4.
  • Professional services firms PwC and Accenture announced investments of $1bn and $3bn respectively into generative AI5.
  • US President Joe Biden signed an executive order on AI to foster innovation while avoiding AI risks6.
  • The EU agreed the EU Artificial Intelligence Act, the first-ever legal framework on AI that will come into force in 20257.

It did feel like, at one point or another, everyone got a little swept up in the AI wave.

Big money, low volume

Interestingly though, AI’s resurgence in popularity didn’t translate into M&A activity. After a huge upswing in AI M&A between 2019 and 2021, dealmaking in the sector declined 31% last year, with only 190 deals compared to 276 in 2022, according to Crunchbase8.

There were a handful of big-ticket acquisitions last year. These included:

  • cloud data platform Databricks purchasing MosaicML, an infrastructure company for training models, for $1.3bn in June;
  • Thomson Reuters acquiring AI legal research tool Casetext for $650m in August; and
  • Travelers Insurance announcing the acquisition of Corvus Insurance, a company which uses AI to predict and prevent cyber risks, for $435m in November (which completed in January 2024).

That said, overall M&A activity in 2023 was significantly lower than in previous years. Given the various macroeconomic factors at play – most economies were suffering from rising inflation and increasing interest rates, as well as geopolitical uncertainty and conflicts – perhaps this is understandable. All of these factors suppressed market confidence and valuations.

Now that the economic outlook is stabilising and becoming more positive, we’re expecting an upswing in M&A activity later this year. As companies look to make strategic acquisitions to future-proof their business models and enhance their product offerings, AI is set to be an increasingly active vertical within the tech sector, particularly as investors chase the elusive unicorn – both pure AI businesses, of which there are a limited number, and companies that create AI-enabled products and services.

Targets and risks

We expect AI-driven businesses focused on the following areas to be the most sought-after acquisition targets:

  • productivity and the automation of software development;
  • drug discovery and personalised medicine;
  • predictive analytics and data analysis;
  • content creation, recommendation algorithms and consumer engagement analysis; and
  • agriculture and sustainable farming.

Many acquirers will be operating in unfamiliar territory though. AI and machine learning are not new, but the rapid pace of their development is. The explosion of chatter around AI technologies, and the shift in development focus from a few big players to a much broader range of participants, means it can be hard to assess which products and companies will actually have a positive impact and which are well-polished (and expensive) vaporware.

There is a risk that, in racing to make an acquisition to avoid being left behind, or to beat others to an in-demand target, businesses unfamiliar with AI technologies either don’t take the time to do proper due diligence or don’t know what constitutes ‘proper’ diligence, and so end up with a bad investment. Given the complexity of the technology, diligence providers are still learning what to look out for and what constitutes a ‘red flag’ issue. Demonstrating a track record may not be possible, and identifying the threats to the future of the business being acquired (whether commercial, legal or regulatory) is harder to achieve.

Buyers are also having to navigate the fact that many companies are being less than honest about their capabilities. There has been a surge in businesses positioning themselves and their products as ‘AI-powered’ to ride the AI wave when they don’t actually incorporate AI technology. This ‘AI washing’ has reached such a level that regulators have promised greater scrutiny to ensure companies aren’t misleading consumers and investors.

In December, US SEC chair Gary Gensler warned businesses against making false AI-related claims, and the SEC hasn’t hesitated to take action. In March it charged two investment advisers with making false and misleading statements about their use of AI, resulting in $400,000 in fines9. The US Justice Department’s top prosecutor in San Francisco, Ismail Ramsey, has also indicated he’ll be on the lookout for AI and other start-ups that defraud investors before they go public.

So… a transformative technology, but one with many potential pitfalls to avoid in taking advantage of it. If you are planning to make an AI-related acquisition later this year, there are a number of areas you should pay particular attention to in due diligence to ensure you avoid a bad investment and can enjoy the benefits of innovation.

AI acquisition due diligence

Verify the technology

This may sound obvious – but then again, Theranos was able to raise $700m in funding after weaving a web of secrecy around its supposedly revolutionary tech, which didn’t actually work.

If you are planning an acquisition to gain technology that you think will give you the edge, make sure you are given the opportunity to test it thoroughly. A hands-on demonstration, with testing conducted by your own or independent third-party specialists, is a very simple way to check whether flashy promises are grounded in reality and to avoid a potentially disastrous investment.

Assess the team

When making an acquisition, you are not just investing in the business but also its people. You should assess the quality of the target company’s leadership team to ensure they have a background in AI or relevant qualifications, and to make sure there are no red flags or past instances of overpromising and underdelivering. Look for diversity wherever possible: diverse development teams are less likely to develop AI-driven tools that produce unintentionally biased or discriminatory results.

You should also identify the key personnel you want to retain for business continuity and their knowledge of the business, and ensure all employment contracts are up to standard, contain appropriate confirmatory assignments of intellectual property (IP) and offer sufficiently attractive related benefits. It’s also important to understand what options have been granted to employees and to ensure all related tax considerations are in order.

If independent contractors or other third parties have been instrumental in the development of the technology, check that they have assigned their IP and waived other rights.

Review accountability

To ensure AI transparency and explainability are upheld, and that regulatory concerns around bias and discrimination don’t arise, make sure any company you are interested in acquiring has robust governance frameworks in place. These frameworks should include ethical guidelines, auditing procedures for algorithms, and mechanisms to track decision-making processes.

The reporting structure and responsibility for ensuring AI accountability within an organisation will depend upon its size and operational focus. But whether the individual responsible for accountability is the chief ethics officer, chief privacy officer or general counsel, they should have a direct reporting line into the board and ideally should be independent of the development or engineering team.

Monitor regulatory risk

As we’ve mentioned, AI is an area that regulators are paying increasing attention to. You need to stay informed on the evolving regulatory landscape and ensure you are prepared for regulations that might impact you, like the EU’s incoming Artificial Intelligence Act.

Although general (as opposed to sector-specific) AI regulation is relatively new on the scene, UK regulators are already investigating AI-driven businesses (and those claiming to be so) and have been for some time. In line with its innovation-first approach, the UK government has instructed a number of regulators, including the Information Commissioner’s Office (ICO) and the Competition and Markets Authority (CMA), to direct their focus towards AI risks falling within their competence. As regulators consult and engage on the use of AI in areas within their purview, there may be advantages for businesses in seeking positive regulatory engagement at an early stage.

As AI is one of the sectors classified as ‘high risk’ under the National Security and Investment Act, you may also need to inform the UK government of your acquisition to ensure compliance if it falls within the Act’s scope.

Consider competition law

It’s worth considering how antitrust authorities might view strategic acquisitions and consolidation in the AI sector. The European Commission has raised concerns10 about both AI facilitating collusion between algorithms and the AI sector itself raising competition concerns, as companies with cloud services facilities and vast amounts of data and unique data sets may be incentivised to favour their own AI systems.

In the UK, after launching a review of AI large language models last year, the CMA has published a set of guiding principles11 relating to the development of AI technology which you’ll need to consider.

Understand intellectual property and patentability

Understanding the scope and protectability of AI-related patents requires a nuanced approach given the abstract nature of some AI concepts. You should work closely with intellectual property lawyers familiar with current trends in patent protection law as they pertain to software and algorithms to ensure you understand what you are acquiring the rights to. Patents, if sought in one jurisdiction, are likely to come at the expense of preserving trade secrets everywhere, since the trade-off for patent protection is publication of the details of the underlying invention. Careful consideration of the expected geographical scope of the business’s proposed operations should be undertaken before applications are filed.

Whether the business wishes to rely upon trade secrets or patent protection, it must first have maintained the confidentiality of its IP. If the company has not been careful in protecting its developments, its only real advantage may come from being first to market, making it a less appealing investment as a result.

Review data protection

Data is one of the essential components of any AI system, fed in so it can learn and deliver useful outputs. In many cases that data will be personal data. You should engage with legal counsel who specialise in data privacy to ensure that any acquisition target has adhered to data protection laws including the European GDPR/UK GDPR and that the acquisition doesn’t introduce risks concerning personal data, data protection governance and accountability.

Privacy compliance is as much about record keeping as it is about handling personal data properly. Any target company using personal data in its AI operations should be able to provide copies of data privacy impact assessments, records of data processing, policies and procedures, and evidence of privacy training for staff. The company should have a clear privacy notice which explicitly addresses its use of personal data for AI development.

Not all small businesses need to have appointed a data protection officer but if a company is processing personal data for AI training or development purposes, it is imperative that a senior member of the team takes the lead on privacy compliance.

Helping you make the right choice

We have a long history of advising clients on how to introduce AI into their business while steering clear of risk. We’ve helped developers create the technology in accordance with regulation, advised companies how to introduce it in their business, and helped investors sort the hype from the real opportunities.

If you are planning to make an AI acquisition or investment and need legal advice, please let us know if we can help.