AI through the lens of Competition Law

AP & Partners

AI-related technologies and products are evolving more rapidly than one can imagine. The hype around generative AI (GenAI), which was quite short-lived, is now giving way to agentic AI. These developments have attracted the interest and intrigue of antitrust regulators, a few of whom have initiated (and, in some cases, completed) market studies to identify potential competition concerns in AI markets. This article analyses some of these concerns, including algorithmic collusion, access to compute, and AI partnerships.

One of the earliest concerns raised about AI was 'algorithmic collusion': in markets where prices change frequently (perhaps even multiple times a day), competitors could use the same software to engage in price-fixing. This hypothesis may oversimplify the agentic nature of AI and how external-facing pricing mechanisms work. More importantly, without an 'agreement' or 'understanding' not to compete, competitors have a strong commercial incentive to lower prices to complete a sale rather than maintain price parity. For instance, an online retailer may employ an AI-based tool to track its competitors' prices in real time and match them, but it will always have the commercial incentive to offer lower prices to achieve higher sales.
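The incentive to deviate from price parity can be illustrated with a toy model. The sketch below is purely hypothetical: the demand shares, unit cost, and prices are assumed figures, not drawn from any real market, and the function names are invented for illustration. It shows why a retailer running a price-tracking tool profits more by undercutting than by matching.

```python
# Toy illustration (all numbers are assumptions): a retailer's pricing tool
# observes a rival's price and can either match it or undercut it.
# Even without any agreement, undercutting captures more demand and more
# profit -- the commercial incentive that works against tacit price parity.

UNIT_COST = 60.0  # assumed cost per unit


def expected_profit(price: float, rival_price: float) -> float:
    """Stylised demand model: the cheaper seller captures most buyers."""
    if price < rival_price:
        share = 0.8   # undercutting wins most of the market (assumed)
    elif price == rival_price:
        share = 0.5   # parity splits the market evenly (assumed)
    else:
        share = 0.2   # pricing above the rival loses most buyers (assumed)
    demand = 100 * share  # assumed total market of 100 units
    return (price - UNIT_COST) * demand


rival = 100.0
match_profit = expected_profit(rival, rival)         # track and match: 2000.0
undercut_profit = expected_profit(rival - 1, rival)  # track and undercut: 3120.0

print(match_profit, undercut_profit)  # prints "2000.0 3120.0"
```

Under these assumed numbers, undercutting by even one unit of currency yields a higher profit than matching, which is the intuition behind the scepticism about collusion arising from mere use of common pricing software.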

There are also concerns around entry barriers in relation to inputs, such as data and compute, needed for building large language models (LLMs) and foundational models (FMs). Before delving into the specifics of key inputs, it is important to acknowledge that we are still on the cusp of the AI revolution. Capital allocation (internal and external) towards AI has been significant. Not only are the Big Tech players investing seriously in the AI space, but start-ups working in different areas of the AI stack also have fairly easy access to capital.

Data: Datasets used to train LLMs and FMs can include public data without copyright protection, public data with copyright protection, non-public copyrighted content, government data, synthetic data, proprietary datasets, and specialized datasets.

As the use of publicly available data without copyright protection becomes saturated, demand for other categories of datasets, such as synthetic data and public data with copyright protection, will increase. We are already beginning to see this: Amazon has entered into a copyright licensing agreement with The New York Times, and OpenAI has struck a similar deal with Condé Nast. Licensing deals are seeing an uptick because of the legal ambiguity around the use of copyrighted materials to train AI models. Aside from copyright infringement claims in various jurisdictions, we are also seeing the launch of tools such as Cloudflare's 'pay-per-crawl', designed to prevent free scraping of copyrighted content. At this stage, where the use of different categories of data to train AI models is being contested or restricted, and use cases are still being explored, it would be premature to conclude that access to data obtained through specific apps or services, such as a social media app or a messaging service, can act as an entry barrier.

Compute: The demand for chips, particularly GPUs, has increased dramatically given their suitability for training and fine-tuning GenAI models. While Nvidia has been a major GPU supplier, Big Tech firms, driven by the pace of AI advancement, have begun investing heavily in developing their own chips. Meta is developing its own AI training chips, the Meta Training and Inference Accelerator (MTIA). Google has deployed tensor processing units (TPUs), which are used to run Google's AI services. Amazon Web Services uses custom Trainium, Graviton and Inferentia chips for AI workloads. Microsoft has deployed Maia chips and is developing Braga chips to meet its AI infrastructure needs. New-age semiconductor startups in the USA, such as Groq and Cerebras Systems, are developing their own AI chip architectures. Other countries are also witnessing immense innovation in this space: Huawei Technologies (China) has launched its Ascend series of AI chips, Rebellions (South Korea) is developing AI chips that use high-bandwidth memory, and FuriosaAI (South Korea) is designing its 'RNGD' AI chips. Given the extent of innovation in the AI infrastructure stack and the evolving nature of AI use cases, competition concerns in markets for AI compute inputs seem unlikely.

In new, evolving markets, firms may enter into agreements to supply or purchase components and services in order to generate efficiencies. While we are seeing quite a few partnerships in the AI space (e.g., Perplexity offering Airtel users free access to Perplexity Pro for one year), given how dynamic these markets are, it appears unlikely that any single partnership could impact the competitiveness of any AI market in India. Having said that, it will be interesting to see how the Competition Commission of India views AI-related partnerships from a merger control standpoint, given that the threshold for 'control' is set at 'material influence'.

AI continues to rapidly evolve and its impact across industries and sectors is expected to be nothing short of ground-breaking. While conducting market studies can be a very productive exercise to gauge and understand how market dynamics are shaping up, any regulatory intervention at an early stage can lead to unintended consequences and do more harm than good. For the time being, regulators may take a wait-and-watch approach and let the chips fall where they may.
