On February 14, 2025, the Suffolk Academy of Law's annual Elder Law Update will feature a discussion of the intersection of artificial intelligence (AI) and legal practice. The presentation will cover recent developments in AI technologies relevant to trusts and estates practice, drawing on a recent court case in Saratoga County, New York, that has implications for the use of AI-generated evidence in legal proceedings.
The case in question, Matter of Weber, involved a Surrogate's Court determination on the responsibility of legal counsel to disclose when evidence presented in court has been generated using AI technologies. The ruling highlights the evolving relationship between AI capabilities and the legal profession amid growing reliance on automation across many sectors.
The court defined AI broadly, encompassing technologies that employ machine learning and natural language processing to emulate human intelligence. This includes applications that generate documents, create evidence, or perform legal research. Within the ruling, two categories of AI were highlighted: generative AI, which can create new content based on input prompts, and assistive AI, which aids in document preparation without solely being responsible for content creation.
In the Weber case, the court examined an accounting hearing in which an expert witness had used AI to validate his calculations of the damages sought by a party. During his testimony, however, the expert could not recall the specific prompts he had entered into the AI application, nor could he describe the sources or processes that the AI relied on. This lack of transparency led the court to question the credibility of his testimony.
Consequently, the court established a foundational rule for the admissibility of AI-generated evidence in Surrogate's Court proceedings. It asserted that, due to the rapid evolution of AI technology and the associated reliability challenges, legal counsel must proactively disclose the use of AI-generated evidence before it is introduced in court. Furthermore, the evidence should be subjected to a Frye hearing—a legal procedure used to determine the general acceptance of scientific evidence within its relevant field—prior to its admission in court.
The ruling from the Weber case indicates that legal representatives must be forthcoming about the nature of AI's involvement in the evidence they present, ensuring that the court can assess the reliability and acceptance of such evidence within the legal framework. As AI technology continues to advance, it is projected that this case will set a precedent for other Surrogate's Courts addressing similar situations, thereby shaping future practices within this legal domain.
Source: Noah Wire Services