Health care companies are increasingly using artificial intelligence (AI) to create innovations, set prices, and compete with rivals. At the same time, federal and state antitrust enforcers are finding new ways to apply antitrust law to the modern, data-driven economy. Amid these myriad changes in technology and the law, the time is ripe to consider what the growth of AI in health care means for antitrust compliance.
AI and Competition
Reduced to a word, the aim of the antitrust laws is “competition.” Business practices that promote competition are generally permissible under the antitrust laws, while business practices that unreasonably restrain competition are generally forbidden. Accordingly, as AI becomes increasingly important to competition in health care, antitrust enforcement is evolving to account for novel issues posed by AI.

In the context of antitrust merger reviews, for instance, antitrust enforcers are increasingly taking a broad look at how mergers might combine powerful repositories of data or market intelligence, or whether mergers might deprive customers or competitors of the key tools or insights they need to compete in the modern economy. In this respect, antitrust regulators are broadening their traditional regulatory playbook to scrutinize not only “horizontal” mergers (mergers between competitors), but also “vertical” mergers (mergers between a supplier and its customer). Antitrust regulators are also focused on acquisitions of “nascent” competitors, such as disrupters poised to stir the competitive pot in a given industry. As an example, if a large, established health care software provider agrees to acquire a small but promising AI startup, an antitrust regulator might review the transaction to consider whether the buyer may use the transaction to scrap a disruptive technology and protect the incumbent’s existing market position or, instead, whether the transaction might be a springboard for the combined company to deploy new, innovative technology more broadly.
Similarly, antitrust enforcers are becoming increasingly sensitive to whether AI tools might create information asymmetries or power imbalances that could create unfair competitive advantages, whether horizontally (between competitors) or vertically (between a supplier and its customer). In these respects, companies should remember that the ultimate goal of antitrust law is for businesses to engage in full and vigorous competition on the merits. Companies therefore should not be shy about deploying AI to improve the value, quality, or accessibility of their services. But companies should also be mindful that the antitrust laws can unintentionally be violated by uses of AI that do not represent bona fide competition on the merits, such as using AI predatorily to disadvantage a rival.
AI can also implicate Section 5 of the Federal Trade Commission Act, which broadly prohibits “unfair methods of competition” and “unfair or deceptive acts or practices in or affecting commerce.” For instance, the Federal Trade Commission (FTC) has recently warned businesses to “keep your AI claims in check” – that is, to avoid making false or misleading statements in marketing materials about their AI technology. The FTC has also warned businesses to “be careful not to mislead consumers” with AI, such as by using AI to generate the appearance of “fake users” or so-called “doppelgängers” to interact with consumers. Unfortunately, the line between legitimate and unfair business activity is often fact- and context-dependent, as the FTC recently made clear in an open-ended policy statement. These determinations can therefore require difficult judgments by experienced antitrust counsel.
AI and Collusion
Another issue is that in certain circumstances, antitrust enforcers or private plaintiffs might claim that AI is a tool for facilitating collusion. As more and more companies adopt “algorithmic pricing” models that delegate to an AI the power to make real-time, dynamic pricing decisions, the risk arises that the companies could be alleged to be using the technology to collude. As a simple example, imagine there are four manufacturers of a particular medical device and that all four manufacturers use the same AI software to set their prices. In this situation, an antitrust enforcer could allege that the agreement to use a single pricing software amounts to price-fixing – a felony under the antitrust laws.
However, claims of algorithmic collusion could also take different forms. In a more nuanced example, imagine that the four manufacturers of a particular medical device each independently decide to use AI to set prices algorithmically. One week, the AIs engage in robust price competition, with each AI discounting aggressively to win more and more share. The next week, however, the AIs start to raise prices independently but in parallel with one another. The third week, one AI cuts prices drastically to try to win more business, to which the three other AIs respond in kind. The fourth week, the AIs return to raising prices in parallel with one another. Finally, the fifth week, all four AIs stabilize their prices at a level that is conspicuously higher than where they began, and prices then remain at that above-market level for the next six months. In this example, an antitrust enforcer – or even a class-action plaintiffs’ attorney – might allege that the AIs are engaging in “algorithmic collusion.”
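The pattern in this hypothetical can emerge without any human agreement at all. The toy simulation below (illustrative numbers and an invented pricing rule, not a model of any real market or product) gives four sellers the same simple, unilaterally adopted repricing rule: match the highest price observed last week and, when already at the top, nudge upward toward a ceiling. Prices climb in parallel and settle at a stable, elevated level – precisely the kind of outcome that could attract an enforcer’s attention even though no seller ever communicated with another.

```python
# Toy simulation (illustrative numbers only): four sellers independently
# adopt the same naive rule -- "match the highest price seen last week;
# if already the highest, nudge upward toward a ceiling." Nobody agrees
# to anything, yet prices rise in parallel to an elevated level.

def next_price(own, rivals, ceiling=130.0, step=1.02):
    """One seller's unilateral weekly repricing rule."""
    top = max([own] + rivals)
    if own >= top:
        return min(own * step, ceiling)  # lead the market up, capped
    return top                           # follow the current leader

prices = [100.0, 104.0, 98.0, 110.0]     # hypothetical starting prices
for week in range(26):                   # six months of weekly repricing
    # All sellers reprice simultaneously off last week's observed prices.
    prices = [next_price(p, prices[:i] + prices[i + 1:])
              for i, p in enumerate(prices)]

print([round(p, 2) for p in prices])     # → [130.0, 130.0, 130.0, 130.0]
```

Whether a pattern like this reflects unlawful coordination or merely lawful, conscious parallelism is exactly the kind of fact-intensive question enforcers and courts will confront.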
It remains to be seen how antitrust enforcers and courts will determine whether criminal or even civil liability can apply to algorithmic collusion. Until then, companies should become attuned to these risks and consider monitoring their algorithmic pricing tools on an ongoing basis to detect and guard against situations that could create even the appearance of improper coordination.
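What such monitoring might look like in practice is an open question; the sketch below is one hypothetical approach (the function names, thresholds, and price figures are all invented for illustration, and none of this is legal advice). It flags a product for human review when the firm’s weekly price moves track a rival’s almost perfectly and its price sits well above the firm’s own historical baseline.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_parallel_pricing(own, rival, baseline,
                          corr_threshold=0.9, lift_threshold=0.10):
    """Flag for human review when (a) our weekly price moves track a
    rival's almost perfectly AND (b) our recent prices sit well above
    our own historical baseline. Thresholds are purely illustrative."""
    own_moves = [b - a for a, b in zip(own, own[1:])]
    rival_moves = [b - a for a, b in zip(rival, rival[1:])]
    moves_track = pearson(own_moves, rival_moves) > corr_threshold
    elevated = mean(own[-4:]) > baseline * (1 + lift_threshold)
    return moves_track and elevated

# Hypothetical weekly list prices for our product and one rival's
ours  = [100, 98, 103, 107, 112, 118, 118, 119]
rival = [101, 99, 104, 108, 113, 119, 119, 120]
print(flag_parallel_pricing(ours, rival, baseline=100))  # → True
```

A flag from a check like this would not itself establish wrongdoing – parallel movement can have innocent explanations – but it marks the sort of pattern that warrants a closer look by counsel before an enforcer takes one.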
AI and Standards-Setting
One last area of antitrust sensitivity for AI in health care is the role of standards-setting. Standards-setting is a frequent area of antitrust sensitivity because it brings together separate companies, potentially including competitors, to agree upon a single set of technological or operational practices for the industry to adopt as the single governing “standard.” To be clear, standards-setting is not inherently problematic under the antitrust laws. To the contrary, when done correctly, standards-setting can be highly pro-competitive by creating efficiencies that lessen costs and create a more level playing field for all players to compete. When done improperly, however, standards-setting can not only raise issues of collusion between competitors but also give the standard-bearer an effective monopoly over an entire industry.
Antitrust will play a central role in AI-related standards-setting. For instance, industry coalitions might emerge proposing standards, rules of ethics, or informal “best practices” on important issues like AI data security, the disclosures made to patients about the use of AI, or the mitigation of bias in AI. The adoption of these sorts of standards may create important, potentially life-saving, benefits for the industry and society at large. But in doing so, companies will need to keep antitrust compliance top-of-mind. Antitrust enforcers will look skeptically at any standards, rules of ethics, or best practices that have the effect of foreclosing rivals or potential disruptors from fully competing on the merits. Similarly, antitrust enforcers may take issue with any standards that set unreasonably low bars for competitors to follow. For instance, if a group of hospital systems adopted an industry “best practice” that has the effect of denying patients meaningful choices about their doctors’ use of AI, then the adoption of such a practice could be deemed an improper restraint of trade in violation of the antitrust laws.
AI is poised to unlock tremendous insights, innovation, and value in the health care industry. As these changes unfold, companies will need to ensure that their AI practices comply with the antitrust laws. In particular, companies should ensure that their AI practices do not unreasonably foreclose rivals, create unfair or coercive power asymmetries, facilitate collusion, or lead to unreasonably low standards of competition. Instead, companies should use AI to sharpen their competitive edge, enhance the value of their services, and better respond to supply and demand conditions. By using AI to compete on the merits, companies will stay compliant with the antitrust laws.
AI in Health Care Series
For additional thinking on how artificial intelligence will change the world of health care, click here to read the other articles in our series.