New York Department of Financial Services Issues Cybersecurity Threat Alert as Malicious Activity Rises

The New York Department of Financial Services (DFS) published an alert directed to all DFS-regulated entities warning of a widespread cybersecurity threat involving social engineering of regulated institutions’ IT help desk and call center personnel.

According to the alert, DFS has detected a trend in which threat actors target IT personnel as part of schemes to gain system access through password resets and the diversion of multi-factor authentication (MFA) to new devices. DFS reports that threat actors have employed tactics including voice-altering technology and leveraging publicly available information about individuals’ identities in attempts to convince IT help desk and call center personnel to comply with fraudulent access requests.

DFS cautions all regulated entities to be on “high alert for suspicious communications” based on the observed threat actors’ recent activity. Entities are encouraged by DFS to:

  • implement secure controls for password changes and MFA device configuration;
  • exercise caution in authenticating the identity of anyone who tries to change a password or MFA device; and
  • remain vigilant when receiving requests from individuals and vendors regarding system access. 

DFS included a link to guidelines published by the U.S. Department of Homeland Security’s Cybersecurity & Infrastructure Security Agency (CISA). The guidelines from CISA (CISA: Avoiding Social Engineering and Phishing Attacks) identify best practices to protect against these cyber threats, including:

  • Distinctions between common methods of social engineering employed by threat actors
  • Common indicators of malicious activity disguised as a legitimate communication
  • Proactive measures to minimize the risk of disclosing information and/or permitting access to threat actors
  • Guidance and resources on handling a cybersecurity compromise

In addition to the CISA guidelines, NYDFS has a publicly available Cybersecurity Resource Center with more information and guidance for DFS-regulated individuals and entities.

For More Information

If you have any questions about this blog post, please contact Michelle Hon Donovan, Ariel Seidner, Milagros Astesiano, any of the attorneys in the Privacy and Data Protection Group, or the attorney in the firm with whom you are regularly in contact.

Disclaimer: This blog post has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm’s full disclaimer.

FTC Announces Enforcement Actions Against AI Use

As part of its ongoing enforcement efforts against allegedly deceptive and misleading uses of artificial intelligence, the Federal Trade Commission (FTC) disclosed five new enforcement actions on September 25, 2024, against companies across various industries that either allegedly made fraudulent claims about their AI resources or offered AI services that could be used in misleading or deceptive ways. Read the full Alert on the Duane Morris website.

How Copyright Law Regards Artificial Intelligence

Duane Morris partner Agatha Liu is quoted in the Bloomberg Law article, “AI Art Appeal’s Procedural Flaws Put Broader Ruling in Doubt.”

An appeals court panel’s focus on procedural issues in a case involving efforts to copyright AI-generated work left attorneys concerned the judges may sidestep larger questions about how copyright law regards the emerging technology. […]

“The point of copyright protection is it should reward creativity. It should be associated with a human being, not a machine,” said Liu. “But there’s merit in claiming the creator of the machine being an author.”

Read the full article on the Bloomberg Law website.

Artificial Intelligence Updates – 09.18.24

#HelloWorld. It’s been a long summer hiatus. Please bear with us as we play our way back into shape. In this issue, we recap the summer highlights of AI legal and regulatory developments. Of course, the EU AI Act, but not just the EU AI Act. California continues to enact headline-grabbing AI legislation, content owners continue to ink deals with AI model developers, and what would a month in the federal courts be without another AI lawsuit or two. Let’s stay smart together.

Read more on The Artificial Intelligence Blog.

Employment Legislation in Illinois Regulates BIPA and AI

In the span of 10 days in August 2024, Illinois Governor J.B. Pritzker signed into law a series of significant employment legislation, paving the way for a new employment landscape beginning in 2025 and 2026. The new legislation includes:

    • Adding new requirements for employers utilizing artificial intelligence in their decision-making processes, and imposing liability under the Illinois Human Rights Act if those AI systems create a discriminatory effect; and
    • Passing long-awaited reforms to the Biometric Information Privacy Act that limit the number of violations an individual may accumulate under the law.

Read the full Alert on the Duane Morris website.

Next up for Medtech: Being Generative in Domain-Specific Languages

Given the vast amounts of data available, including raw measurements, diagnostic information, treatment plans, and regulatory guidelines, the biomedical technologies sector stands to gain immensely from artificial intelligence (AI), particularly machine learning (ML).

ML, at its core, learns from training datasets to identify patterns, which can then be applied to new input data to make direct inferences. For instance, if specific body scans frequently result in a particular diagnosis, ML can be used to quickly provide that diagnosis when similar scans are encountered, thus aiding in disease diagnosis.

Read the full article by Duane Morris partner Agatha H. Liu, PhD, on the MD+DI website.

Artificial Intelligence Employment Law Enacted in Illinois

On August 9, 2024, Illinois enacted its landmark artificial intelligence employment law, HB 3773. This legislation, which amends the Illinois Human Rights Act, endeavors to prevent discriminatory consequences of using AI in employment decision-making processes. This law goes into effect on January 1, 2026. Illinois is one of 34 states that have either enacted or proposed laws regulating the use of artificial intelligence. Read the full Alert on the Duane Morris website.

Changes to Illinois Biometric Data Law Lower Liability, but the Stakes Remain High

In recent years, a heavy question mark has weighed on companies that process biometric information as part of their standard operating procedures: What is our risk exposure? On August 2, 2024, Illinois Governor J.B. Pritzker signed into law a bill passed by the Illinois Legislature in May to amend BIPA in a way that is expected to limit the risk exposure associated with violations. The amended text of BIPA now indicates that violations essentially occur on a per-person basis, not a per-scan basis. This is expected to yield a marked decrease in the number of violations for which a company may be liable, though penalties of up to $5,000 may still add up quickly where thousands of individuals or more are implicated. Read the full Alert on the Duane Morris website.

Embracing Artificial Intelligence in the Energy Industry

Last year, President Joe Biden signed Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Since the issuance of the executive order, a lot of attention has been focused on the provision requiring “the head of each agency with relevant regulatory authority over critical infrastructure … to assess potential risks related to the use of AI in critical infrastructure sectors involved, … and to consider ways to mitigate these vulnerabilities.” Naturally, government agencies generated numerous reports cataloging the well-documented risks of AI. At the same time, nearly every company has implemented risk-mitigation guidelines governing the use of artificial intelligence. To be sure, the risks of AI are real, from privacy and cybersecurity concerns, to potential copyright infringements, to broader societal risks posed by automated decision-making tools. Perhaps because of these risks, less attention has been focused on the offensive applications of AI, and relatedly, fewer companies have implemented guidelines promoting the use of artificial intelligence. Those companies may be missing out on opportunities to reduce legal risks, as a recent report by the Department of Energy highlights.

Read The Legal Intelligencer article by Duane Morris partners Phil Cha and Brian H. Pandya.

© 2009- Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.
