On Tuesday, June 20, 2023, President Joe Biden met with industry experts at the intersection of technology and society to discuss the opportunities and challenges of artificial intelligence (AI) development. President Biden emphasized the “need to manage the risks to our society, to our economy and our national security” and referenced the Blueprint for an AI Bill of Rights, issued in October 2022, as a grounding for federal principles in an AI-powered world. The AI Bill of Rights includes a technical companion document that calls for additional technical documentation around algorithmic discrimination testing, IP protection, and privacy measures, and it may provide a foundation for legislative and regulatory efforts in other state, federal, and global AI frameworks. For example, Connecticut Senate Bill No. 1103, which becomes effective July 1, 2023, already expresses an intent to adopt a Connecticut state AI bill of rights based on the federal blueprint. California’s draft AB 331 (“Automated Decision Tools”) would also broadly require additional technical safeguards around AI tools, beyond the CPRA’s regulations on automated decision-making. As these regulatory developments continue to be promulgated across 84 state bills, 59 federal bills, and similar measures in 37 other countries on six continents, businesses will need to consider how to integrate technical accountability documentation and logging data into their AI governance and compliance programs.
The UK Government published its AI White Paper on 29 March 2023, setting out its proposals for regulating the use of artificial intelligence (AI) in the United Kingdom. The White Paper is a continuation of the AI Regulation Policy Paper, which introduced the UK Government’s vision for a future “pro-innovation” and “context-specific” AI regulatory regime in the United Kingdom.
The White Paper proposes a different approach to AI regulation compared to the EU’s AI Act. Instead of introducing new, far-reaching legislation to regulate AI in the United Kingdom, the UK Government is focusing on setting expectations for the development and use of AI while empowering existing regulators, such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA), to issue guidance and regulate the use of AI within their remits.
On June 28, 2023, the New York Department of Financial Services (“NYDFS”) published updated proposed amendments to its cybersecurity regulation (the “2023 Proposal”) applicable to “covered entities.”1 Covered entities are any person operating under, or required to operate under, a license, registration, charter, certificate, permit, accreditation or similar authorization under the New York Banking Law, Insurance Law or Financial Services Law. These updated amendments come after comments from industry groups and other stakeholders to the NYDFS’s proposed revisions that were published on November 9, 2022 (the “2022 Proposal”).2 Comments on the 2023 Proposal may be submitted until August 14, 2023.
In this Legal Update, we provide a section-by-section analysis of new requirements in the 2023 Proposal. The 2023 Proposal is extensive and would significantly expand requirements for covered entities. Key new and expanded requirements include: (1) new requirements for larger companies (Class A Companies, as defined below); (2) expanded governance requirements, such as board approval for cybersecurity policies; (3) expanded cyber incident notice and compliance certification requirements; (4) expanded requirements for asset inventory; and (5) a revised multi-factor authentication requirement for user access to a company’s network.
On June 18, 2023, Governor Greg Abbott signed into law the Texas Data Privacy and Security Act (the “Texas Privacy Law”), which goes into effect July 1, 2024. With this law, Texas joins 10 other states that have passed comprehensive privacy laws: California, Virginia, Colorado, Connecticut, Utah, Florida, Montana, Iowa, Tennessee, and Indiana.
Overall, while the Texas Privacy Law closely resembles the privacy model established by the non-California privacy laws, it contains certain nuances. Nevertheless, any company that has already implemented a compliance program for other privacy laws should be able to leverage that existing program to address the Texas Privacy Law.
The European Parliament adopted a Resolution on 11 May 2023 against the adoption of an EU adequacy decision for the US based on the EU-US Data Privacy Framework (DPF). The Resolution comes after an analysis by the European Parliament of the Executive Order on Enhancing Safeguards For United States Signals Intelligence Activities (EO 14086), which was adopted in the US in order to implement the DPF (for more details, see our previous Legal Update).
The European Parliament took the view that the EU-US DPF fails to create essential equivalence in the level of protection as compared to the European framework. In the European Parliament’s view, EO 14086 does not provide sufficient safeguards for the transfer of personal data from the EU to the US, for the following reasons:
- Signals intelligence practices in the US are still considered too broad, as they allow the collection of personal data in bulk, including the content of communications. EO 14086 contains safeguards with regard to bulk collection of data, but such collection is not subject to independent prior authorization, which is required in order to limit US intelligence activities, as pointed out by the European Data Protection Board in its opinion on the DPF. The European Parliament expressed concern that US authorities would thereby gain access to data they would otherwise have been prohibited from accessing;
- In the European Parliament’s view, European citizens are unable to seek an effective legal remedy. Although a redress mechanism has been created for European citizens under EO 14086, the decision of the competent authority is not intended to be made public, leaving a data subject who files a complaint without the ability to appeal the decision or claim damages.
On May 18, 2023, the Federal Trade Commission (“FTC”) issued a policy statement warning companies of the ways that collecting and using biometric information, or using biometric information technologies, might amount to an unfair or deceptive practice in violation of Section 5 of the FTC Act.
The policy statement identified several potentially deceptive practices: false or unsubstantiated marketing claims relating to the validity, reliability, accuracy, performance, fairness, or efficacy of technologies using biometric information; and deceptive statements about the collection and use of biometric information. The policy statement also identified several unfair practices that, while not deceptive, might nonetheless violate Section 5, including failing to assess foreseeable harms prior to collection; not addressing known or foreseeable risks; surreptitious collection or use; failing to conduct due diligence on third-party partners; and failing to provide appropriate training.
Although the policy statement purports to represent merely the FTC’s views under existing law, companies that collect or use biometric information should take this enforcement statement—alongside the agency’s other privacy-related endeavors—as a sign that the FTC is laser-focused on this area.
On Tuesday, May 16, 2023, the US Senate Judiciary Subcommittee on Privacy & Technology held its first hearing on artificial intelligence.1 The hearing, titled “Oversight of AI: Rules for Artificial Intelligence,” featured witness testimony from Sam Altman, CEO of OpenAI; Christina Montgomery, chief privacy officer at IBM; and Gary Marcus, professor emeritus at New York University. The hearing provided an opportunity for business and industry leaders to address trends, implications, and risks associated with artificial intelligence (AI), with a view to assessing the nature and scope of potential regulatory and oversight frameworks.
The hearings come amid growing legislative and industry concern about AI and discussion of how best to inform and protect the public in light of the proliferation and evolving shape of AI technology, while recognizing its potential benefits and practical uses. The hearings are also part of growing federal legislative efforts around AI, indicating an increasing drive for Washington to assume a significant position in AI regulation, aiming to strike a balance between fostering innovation and ensuring accountability and transparency.
On 25 April 2023, the UK Government announced the introduction of the Digital Markets, Competition and Consumers Bill (the “Bill”) into Parliament.1 The Bill, which reflects a key government priority of increasing consumer choice and competition, provides for:
- Far-reaching new consumer protection powers to be conferred on the Competition & Markets Authority (the “CMA”);
- The introduction of updated merger control and strengthened competition law enforcement powers, including investigation tools better adapted to a digital age; and
- The creation of a new competition law regime aimed at the digital sector, which will impose onerous new obligations on the largest technology companies and enable the CMA to make more rapid and tailored interventions in digital markets than has been possible to date.
On April 25, 2023, the Federal Trade Commission (FTC), the Department of Justice Civil Rights Division (DOJ), the Equal Employment Opportunity Commission (EEOC), and the Consumer Financial Protection Bureau (CFPB) issued a joint statement (the “Joint Statement”) announcing that each of them is examining, and will continue to examine, possible discrimination involving AI systems and other automated processes.
The Joint Statement summarizes each department’s and agency’s work on artificial intelligence (AI) and discrimination to date and flags their concerns regarding potential discrimination arising from (a) data sets that train AI systems, (b) opaque “black box” models that make anti-bias diligence difficult, and (c) the risk that third parties may use models in unforeseen ways. It notes that existing legal authorities apply to the use of AI tools just as they do to other conduct. Like the Biden Administration’s Blueprint for an AI Bill of Rights, the Joint Statement does not itself impose any new legal obligations on companies but, rather, helps to clarify the priorities of multiple agencies.
With an effective date of February 17, 2024, the Digital Services Act (“DSA”) will start applying to most online platform providers in less than a year. The DSA, which introduces due diligence and transparency obligations regarding algorithmic decision-making by online platforms, such as social media, video sharing, or e-commerce, entered into force on November 16, 2022. For providers of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs)—providers reaching 45 million EU users monthly—the DSA may apply four months after the provider concerned is notified of its designation, which may be earlier than February 17, 2024.
In this Legal Update, we provide further detail on these obligations and note the steps businesses can take to comply with the DSA.