Artificial . . . evidence?

Earlier this spring, a Washington state court judge issued what is widely believed to be the first evidentiary decision regarding artificial intelligence. In Washington v. Puloka, following a Frye hearing, the judge excluded AI-enhanced video from evidence. The video originated from Snapchat and was enhanced using Topaz Labs AI Video, a commercially available software program widely used in the cinematography community. The judge was not persuaded by this widespread commercial adoption, holding that the relevant community for purposes of Frye was the forensic video analysis community, which had not accepted the use of Topaz AI.

The opinion reflects careful consideration of an issue of first impression. Notably, it mattered to the judge's analysis that another version of the video (the original) remained available and usable, even though it was low resolution with motion blur. Further, the expert who edited the video did not know the details of how the Topaz Labs AI program worked: he was not sure whether it used generative AI, could not testify to the program's reliability, and did not know what datasets it was trained on. A different result may follow where no alternative version exists, or where more testimony is offered regarding the operation of the AI system at issue.

These issues will continue to arise in courts across the country and may need to be addressed systematically to ensure greater consistency. For example, the Advisory Committee on Evidence Rules has been considering proposed amendments to Rules 901 and 702 that would directly address AI-generated evidence.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
