Tech Law Talks podcast

AI explained: AI and antitrust

24.07.2024

AI offers new tools to help competition enforcers detect market-distorting behavior that was previously impossible to see. Paris Managing Partner Natasha Tardif explains how AI tools are beginning to help prevent anticompetitive behaviors such as collusion among competitors and abuse of dominance, and how AI is reshaping merger control.

Transcript:

Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. 

Natasha: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, our focus is going to be on AI and antitrust. AI is currently at the core of antitrust authorities' efforts and strategic thinking. It brings a number of very interesting challenges and great opportunities, really. In what ways? Well, let's have a look at how AI is envisaged from the perspective of each core competition concept, i.e. anti-competitive agreements, abuse of a dominant position, and merger control.

First of all, anti-competitive agreements. Several types of anti-competitive practices, such as collusion amongst competitors to align on market behavior or on prices, have been assessed by competition authorities. Looking at them in relation to algorithms and AI is a very interesting exercise, because a number of questions have been raised: some of them have answers, and some still don't. The French competition authority and the German Bundeskartellamt issued a joint report sharing their thoughts in this regard in 2019, and the French competition authority went as far as creating a specific department focusing on digital economy questions.

At least three different behaviors have been envisaged from an algorithm perspective. First, algorithms used as a supporting tool for an anti-competitive agreement between market players, where the market players use the technology to coordinate their behavior. This one is pretty easy to apprehend from a competition perspective, because it is clearly a way of implementing an anti-competitive and illegal agreement. A second scenario is when one and the same algorithm is sold to several market players by the same supplier, thereby creating involuntary parallel behavior or enhanced transparency on the market, and we all know how wary competition authorities are of enhanced transparency between competitors, right? A third scenario is several competing algorithms "talking" to each other and creating involuntary common decision-making on the market (a simple illustrative sketch follows at the end of Natasha's remarks). The latter two categories are more difficult to assess from a competition perspective because, obviously, we lack one essential element of an anti-competitive agreement, which is, well, the agreement: the voluntary element of the qualification is missing. In a way, this could be said to be the perfect crime, really, as collusion occurs without any formal agreement having been made between the competitors.

Now, let's look at the way AI impacts competition law from an abuse of dominance perspective. In March 2024, the French Competition Authority issued its full decision against Google in the Publishers' Related Rights case, fining Google again, this time €250 million, for failing to comply with some of the commitments that had been made binding by its decision of 21 June 2022. The FCA considered that Bard, the artificial intelligence service launched by Google in July 2023, raises several issues. First, it said that Google should have informed publishers and press agencies of the use of their content by its service Bard, in application of the transparency obligation it had committed to in the previous French Competition Authority decision.
The FCA also considered that Google breached another commitment by linking the use of press agencies' and publishers' content by its artificial intelligence service to the display of protected content within its services such as Search, Discover, and News.

Now, what is this telling us about how the competition authorities look at abuse of dominance from an AI perspective? Interestingly, it is telling us something they have been telling us for a while when it comes to abuse of dominance, particularly in the digital world. These behaviors have been so much at the core of the competition authorities' concerns that they have become part of the new Digital Markets Act. The DMA now imposes obligations regarding the use of data collected by gatekeepers across their different services, as well as interoperability obligations. So in the future, we probably won't see these Google decisions taken under abuse of dominance rules, but rather under DMA rules, because that is now the tool competition authorities have been given to regulate the digital market, and in particular the AI tools used in delivering the various services offered by what we now call gatekeepers, the big online platforms.

Thirdly, the last competition law concept I wanted to touch upon today is merger control. What impact does AI have on merger control, and how do competition authorities use merger control to review the AI and digital worlds and make sure they function properly from a competition perspective? In this regard, the generative AI sector is attracting increasing interest from investors and, obviously, from competition authorities, as evidenced by the discussions around the investments made by Microsoft in OpenAI and by Amazon and Google in Anthropic, a startup rival to OpenAI. The European Commission considered that there were no grounds for investigating Microsoft's $13 billion investment in OpenAI because it did not fall within the classic conception of merger control. But equally, the Commission wants to ensure that this does not become a way for gatekeepers to bypass merger control rules. So, interestingly, there is a concern that this new way of investing in these tools would not be treated as a merger under the strict definition used in merger control, yet once a company has invested so much money in another, it is difficult to believe it will have no form of control over that company's behavior in the future. The authorities are therefore thinking of different ways of apprehending those kinds of investments. The French Competition Authority, for instance, has announced that it will examine these types of investments in its advisory role and, if necessary, make recommendations to better address the potentially harmful effects of such operations. A number of acquisitions of minority stakes in the digital sector are also under close scrutiny by several competition authorities. So, again, we are thinking of situations which would not confer control within the meaning of current merger control rules, but which will still be regarded as affecting the behavior and structure of those companies on the market in the future. Interestingly, the DMA, the Digital Markets Act, also has a part to play in the regulation of AI-related transactions on the market.
For instance, the control of acquisitions of tech companies by gatekeepers is reinforced: gatekeepers must inform the authorities of these operations, no matter the value or size of the acquired company. And we know that, normally, information reaches the competition authorities through the notification system, which applies only where certain thresholds are met. So we are seeing increasing attempts by competition authorities to look at the digital sector, and particularly AI, through different kinds of lenses, being innovative in the way they approach it, because the companies themselves and the market are being innovative. Competition authorities want to make sure that they remain consistent with their conceptions and concepts of competition law while not missing out on what is really happening on the market.

So what does the future hold for us now? Well, the European Union is adopting its Artificial Intelligence Act, the first ever comprehensive, risk-based legislative framework on AI worldwide, which will apply to the development, deployment and use of AI. It aims to address the risks to health, safety and fundamental rights posed by AI systems while promoting innovation and the uptake of trustworthy AI systems, including generative AI. The general idea, on the market and from a regulatory perspective, is that whether you are scrutinizing AI through competition law or more generally as a society, even though there may be abusive behavior carried out through AI, the reality is that AI is a wonderful source of innovation, competition, excellence on the market and added value for consumers. So authorities and legislators should try to find the best way to encourage, develop and nurture it for the benefit of each and every one of us, for the benefit of the market and for the benefit of everybody's rights, really. Therefore, any piece of legislation, case law or regulation implemented in the AI sector should really focus on the positive impact of what AI brings to the market. Thank you very much.
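
Illustrative sketch: to make the idea of involuntary algorithmic parallelism a little more concrete, here is a minimal, purely hypothetical Python example. It is not taken from the episode or from any authority's analysis; the sellers, the prices and the "match the highest visible price" rule are all invented for illustration. It simply shows how two sellers that independently run the same vendor-supplied pricing rule can end up charging identical, stable prices without ever communicating with each other.

# Hypothetical illustration only: two sellers licence the same off-the-shelf
# pricing rule ("match the highest price currently visible on the market,
# never below your own floor"). They never communicate, yet prices lock in
# at the higher starting level instead of being competed down.

from dataclasses import dataclass


@dataclass
class Seller:
    name: str
    floor: float   # lowest price this seller is willing to charge
    price: float   # current public price

    def reprice(self, observed_prices: list[float]) -> None:
        """Apply the shared vendor rule: follow the highest visible price."""
        self.price = max(self.floor, max(observed_prices))


def simulate(rounds: int = 5) -> None:
    a = Seller("A", floor=10.0, price=30.0)
    b = Seller("B", floor=10.0, price=50.0)
    for t in range(rounds):
        market = [a.price, b.price]       # prices are public (market transparency)
        a.reprice(market)                 # A reacts to the observed market
        b.reprice([a.price, b.price])     # B reacts to the updated market
        print(f"round {t + 1}: A = {a.price:.2f}, B = {b.price:.2f}")


if __name__ == "__main__":
    simulate()

Run as written, both prices settle at 50.00 in the first round and never move again, even though neither seller has agreed anything with the other. With an undercutting rule, the same setup would drive prices down toward the floor, which is why the logic embedded in widely licensed pricing tools attracts the kind of scrutiny described above.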

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith’s Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. 

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. 

All rights reserved.

Transcript is auto-generated.
