When Does Use of AI Set Off an Alarm in the Invention Process?

As generative AI is increasingly used to process information and generate new content, one possible application is creating an alternative embodiment in a patent application.  This could happen when an inventor creates an original embodiment and then instructs an AI system to create a variant of it to achieve broader coverage.  Conceivably, the AI system creates the alternative embodiment based on the data used to train it or on additional information that can introduce changes to the original embodiment, such as prior art in the field.  Would such use of AI be an innocent act, or should it trigger an alarm like certain other uses of AI?

The Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office (USPTO), published on April 11, 2024 (“April Guidance”), discusses several issues related to the use of AI that have ethical considerations.  One of them concerns the duty to disclose to the USPTO all known information that is material to patentability, as part of the duty of candor and good faith.  As stated, the duty of disclosure is triggered when AI is used during the application drafting process to create an alternative embodiment that “the inventor(s) did not conceive and applicant seeks to patent”.  The given rationale is that if there is any doubt as to whether at least one named inventor significantly contributed to a claimed invention developed with the assistance of AI, information regarding the interaction with the AI system could be material and, if so, should be submitted to the USPTO.

A key question here is when the inventor would be considered not to have conceived the alternative embodiment.  The Inventorship Guidance for AI-Assisted Inventions, published on February 13, 2024 (“February Guidance”), offers a list of Guiding Principles for determining whether a person made a significant contribution to an AI-assisted invention and specifically to the conception thereof.  However, these Guiding Principles do not directly address the above-described scenario of an inventor using an AI system to create an alternative embodiment.  In this scenario, a claim based on the alternative embodiment could have multiple contributors: the inventor who developed the original embodiment and provided instructions to the AI system, the creator of the AI system, the creator of the training data for the AI system, and the creator of the additional information outside the embodiment.  How should the significance of the inventor’s contribution to the claim be evaluated in this case?

According to Guiding Principle #2 of the February Guidance, a person who “constructs the prompt in view of a specific problem to elicit a particular solution from an AI system” could be making a significant contribution to an AI-assisted invention.  This guiding principle could apply to the above scenario, as one way for the inventor to instruct the AI system to create an alternative embodiment is to provide a written summary of the original embodiment and a request to construct the alternative embodiment (of a particular solution to a specific problem) as the prompt to a large language model of the AI system.  Since the summary of the original embodiment forms a significant part of the prompt, would the inventor necessarily be making a significant contribution in this case?  If so, this would mean that claiming the alternative embodiment would not trigger a duty to disclose use of the AI system.

General definitions of an embodiment include a physical form of an invention, a representative example, or a particular implementation or method of carrying out the invention.  By definition, an alternative embodiment of an invention is not a different invention and therefore typically differs from the original embodiment by changes that are obvious or that are narrower than the inventive points.  Not only would an AI system be unable to create an alternative embodiment without the original embodiment as an input, but any output of the AI system that can properly be considered an alternative embodiment would still correspond to the same broadest invention.  Therefore, even if multiple parties might have contributed to the alternative embodiment, the contributions of the parties besides the inventor would at most be sub-inventions that are subsumed by the main invention.

It is worth noting that according to Guiding Principle #4 of the February Guidance, a person “who develops an essential building block from which the claimed invention is derived may be considered to have provided a significant contribution to the conception of the claimed invention.” So even if an AI-created alternative embodiment did amount to a sub-invention, one might be hard-pressed to deny that the inventor has provided an essential building block from which the sub-invention is derived.

In conclusion, while the potential of AI can be remarkable, humans continue to invent and to control the use of AI.  An AI system used to create an alternative embodiment does not break free from this control.  There are situations where it can be appropriate to notify the USPTO regarding AI-created embodiments, such as when AI output that does not work or has no useful application is improperly included as an alternative embodiment.  Otherwise, using an AI system to create a legitimate alternative embodiment is nothing out of the ordinary and may even be encouraged to increase the efficiency of drafting patent applications.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.