Expectation Maximization in Patent Application Disclosures: Considerations with AI-Related Inventions

05 June 2020 Privacy, Cybersecurity & Technology Law Perspectives Blog
Authors: Austin J. Kim, Matthew Horton

A patent application must describe an invention with adequate detail so that a person skilled in the art can recognize the boundaries of the invention (called the “written description” requirement) and can be enabled to practice the invention (called the “enablement” requirement).  Historically, these requirements were relatively straightforward and easily met when describing most inventions.  The advent of computing technology, however, brought new legal complexities and introduced challenges to patent application drafting.  

Inventions related to artificial intelligence (AI), a species of software patents, have extended and complicated an already murky landscape.  The USPTO’s recent Request for Comments on AI patenting policy included a call for comments on this issue.  While recognizing the challenges surrounding patenting of AI inventions, most submitters indicated that the current framework is still workable and need not be changed in light of advances in AI.  

The Difficulties Describing High-Tech & AI Inventions

There are two disclosure requirements of primary concern under US patent law.  The written description requirement specifies that a claimed invention must be described in the patent specification in sufficient detail such that one skilled in the art can reasonably conclude that the inventors had possession of the claimed invention.  The enablement requirement calls for a specification that describes the invention in such terms that one skilled in the art can make and use the claimed invention without undue experimentation.  

The start of the digital age and the accompanying efforts to patent software inventions have created friction with these longstanding standards of patent law.  Under the modular programming paradigm, a computer program is often explained as a set of independent, interchangeable modules (depicted simply as boxes), each described as performing a particular task in service of the overall functionality.  Carrying this framework over to patent applications, computer-implemented inventions are likewise often described in terms of their functionality or the end result to be achieved.  In addition, software patents often attempt to claim not just one particular implementation but the penumbra of the overall functionality.  Both of these practices create tension with the disclosure requirements under US patent law.  
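The modular framing described above can be made concrete with a minimal sketch (a hypothetical example, not drawn from any actual patent): each function below is a "box" defined by the task it performs, while its internal implementation remains interchangeable.

```python
# Hypothetical illustration: a program described as interchangeable
# modules, each defined by the function it performs rather than by
# its internal implementation.

def normalize(data):
    # Module: "scales the input values into the range [0, 1]."
    lo, hi = min(data), max(data)
    return [(x - lo) / (hi - lo) for x in data]

def score(features):
    # Module: "computes a score from the features" (here, their mean).
    return sum(features) / len(features)

def pipeline(data):
    # Overall claimed functionality: data in, score out.  A claim drafted
    # at this level covers the end result, not any one implementation.
    return score(normalize(data))

print(pipeline([2, 4, 6, 8]))   # a value close to 0.5
```

Described only at the `pipeline` level, the disclosure says what is achieved but not how, which is precisely the tension with the written description and enablement requirements.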

In light of these issues, the USPTO has issued additional guidelines on computer-implemented inventions.  These directives stress that, to satisfy written description, the patent application should describe how the inventor intends to achieve a computer function (e.g., by using detailed flowcharts), rather than simply declaring the desired end result of the invention.  The USPTO also notes that the mere disclosure of “functional block diagrams” without any explanation of the functions or interconnections may be insufficient to satisfy enablement.

AI-related inventions, which are likewise implemented on computers, are a subset of computer-implemented inventions.  What sets AI apart from other computer-implemented technologies is that AI attempts to mimic some aspect of human behavior or intelligence.  To achieve this objective, AI technologies generally involve a training dataset, a model that performs automated inference from the dataset, optimization of the parameters of the model, intake of new data, and application of the trained model to that new data.
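The stages listed above can be sketched in a few lines.  This is an assumed, illustrative toy (a one-variable linear model fit by gradient descent), not a representation of any particular claimed invention:

```python
# Illustrative sketch of the typical AI stages: training dataset,
# model, parameter optimization, intake of new data, application.

# 1. Training dataset (toy data following the rule y = 2x + 1)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# 2. Model: y = w*x + b, with parameters w and b
w, b = 0.0, 0.0

# 3. Optimization: batch gradient descent on mean squared error
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= 0.05 * grad_w
    b -= 0.05 * grad_b

# 4. Intake of new data, and 5. application of the trained model
new_x = 10.0
prediction = w * new_x + b       # close to 21.0, i.e., 2*10 + 1
```

Note that each numbered stage maps to a disclosure topic the article discusses: the dataset, the model architecture, the training methodology, and the application of the model to new data.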

As with other software, many of the aspects of AI systems can be framed in terms of black boxes and thus face many of the same challenges as other software patent applications – plus more.  There is a particular concern over describing AI functionality in this manner.  In both AI and non-AI systems, the input and output of the black box are known a priori.  In non-AI systems, the contents of the black box are typically a fixed set of logic and rules.  In AI, however, what is in the black box is a model, and the innards of the model (e.g., the weights, parameters, layers, and connections) may not be known in advance or may be inherently random.  For instance, because the weights and connections of a model depend on the contents of the training dataset and the optimization technique used, these aspects of the model may not be known until the model is actually trained and used.  
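A small thought experiment, sketched below under assumption (this over-parameterized toy model is hypothetical), shows why the innards may be unknowable in advance: the black box's input/output behavior is fixed, yet the individual weights depend on the random initialization and the optimizer.

```python
import random

# Hypothetical toy: an over-parameterized model y = (a + b) * x.
# It reliably learns the observable behavior y = 2x, but the split
# between the weights a and b depends on the random initialization,
# so the individual weights are unknown until training completes.

def train(seed):
    rng = random.Random(seed)
    a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)   # random initialization
    data = [(x, 2.0 * x) for x in range(1, 6)]      # training set: y = 2x
    for _ in range(500):
        for x, y in data:
            err = (a + b) * x - y
            a -= 0.01 * err * x                     # gradient step for a
            b -= 0.01 * err * x                     # gradient step for b
    return a, b

(a1, b1), (a2, b2) = train(0), train(1)
# Different seeds yield different individual weights (a1 != a2),
# yet a1 + b1 and a2 + b2 both converge to 2: same black-box behavior,
# different innards.
```

This mirrors the article's point: the input and output are known a priori, but the weights inside the box are artifacts of training.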

The USPTO RFCs & Submissions

The USPTO requested stakeholder comments on various aspects of AI-related patents, including what constitutes adequate detail for the written description and enablement requirements, such as:

Are there any disclosure-related considerations unique to AI inventions? … 

How can patent applications for AI inventions best comply with the enablement requirement, particularly given the degree of unpredictability of certain AI systems?

The submissions to the USPTO generally reflect the view that the existing standards for written description and enablement are adequate and need not be adjusted for AI inventions.  

For example, one commenter noted that a simple diagram representing the network structure may be sufficient for inventions in which that structure is claimed.  Some submissions commented that disclosing the exact number of layers or neurons in a neural network may not always be necessary to satisfy the written description requirement.  Others noted that the unpredictability or unknowability of certain AI inventions does not defeat enablement and thus does not preclude patentability, observing that other technological fields exhibit innate unpredictability without raising such concerns.

Drafting Considerations

Practitioners should continue to draft patent applications for AI inventions as they have for other computer-implemented inventions.  As with any invention, patent practitioners should have a firm comprehension of the technical concepts underlying the AI-related invention as well as of the invention itself.  Adequate detail should be provided for all relevant aspects of the invention, including the training dataset, the architecture of the model itself, the precise methodology of training the model (including the specific inputs and outputs), the pre-processing steps for new data, and the application of the model.  

While AI inventions can be described in terms of a black box system, the degree to which the contents of the black box should be described depends on what is being claimed.  If the claim is to the specific model architecture, then the black box should be cracked open and the individual components and the interconnections between them (e.g., weights, layers, and connections) should be described in detail.  On the other hand, if the claims are to the application of the model, then the black box can remain relatively opaque.  Although the models that can be used to implement the system should still be adequately described, the focus of the disclosure in such scenarios should be on the training data, data intake, and output of the model.  

Overall, AI inventions do not necessitate any changes to the disclosure requirements.  The amount of detail, and the attention given to specific aspects of the AI invention, depend on the primary focus of the claimed invention.  The description should be adequate, using model architecture diagrams and detailed flowcharts to describe the individual structures and steps for achieving the claimed solution.  

This blog is made available by Foley & Lardner LLP (“Foley” or “the Firm”) for informational purposes only. It is not meant to convey the Firm’s legal position on behalf of any client, nor is it intended to convey specific legal advice.
