On Wednesday, March 13, 2024, Members of the European Parliament endorsed the Artificial Intelligence Act (“AI Act”), with 523 votes in favor, 46 against, and 49 abstentions. This is the world’s first comprehensive AI law, and it is likely to have significant influence on the rapid development of AI regulation in other jurisdictions, including the United States.

Article 1 of the AI Act explains its purpose:

to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection against harmful effects of artificial intelligence systems in the Union and supporting innovation

More specifically, in addition to harmonizing rules for developing, implementing, and using AI, the AI Act aims to (1) protect EU citizens’ fundamental rights, including from certain “high risk” AI; and (2) foster, rather than hinder, technological innovation and Europe’s AI leadership.

The Act categorizes AI into four levels of risk: unacceptable risk, high risk, limited risk, and low risk. Based on the risk level, individuals and entities within the scope of the Act, such as providers, deployers, importers, and distributors of AI systems (see “Definitions” in Article 3), are required to meet specific requirements. For example, AI posing unacceptable risk is simply banned because it violates basic human and civil rights, manipulates human behavior, or exploits human vulnerabilities.

The use of AI in employment is considered high-risk, categorized as such due to the significant threat it poses to civil rights. For employers, compliance when utilizing high-risk AI will require a number of steps, including keeping accurate use logs, being transparent about the AI use, maintaining “human oversight,” and making other efforts to reduce risks.

Individuals are able to submit complaints about high-risk AI systems and are entitled to explanations of decisions, such as employment decisions, made based on a high-risk AI system.

As stated by Brando Benifei, the Internal Market Committee co-rapporteur from Italy, a goal of the AI Act is “to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development.”

What’s next? The AI Act remains subject to lawyer-linguist verification and must be endorsed by the Council. However, it is expected to be adopted before the end of the legislature and will enter into force 20 days after its publication in the Official Journal. It will be fully applicable two years after its entry into force, with some exceptions.

Jackson Lewis attorneys are closely monitoring the EU AI Act as well as U.S. AI regulation. As the effective date nears, attorneys from both Jackson Lewis and the firm-led L&E Global alliance will post additional targeted updates regarding the EU AI Act.

The explosion of generative AI has spawned a wide range of personal and professional tools and applications. One noteworthy (no pun intended) example is notetakers that can capture, transcribe, and organize the content discussed at meetings (virtual or otherwise), enabling participants to engage more meaningfully in the discussion. They can even allow an individual to skip a meeting entirely without missing out! Of course, like any new AI or other technology, it is important to consider the risks along with the benefits.

There are already many AI notetakers on the market. Summaries like this can help potential users evaluate the different features, options, ratings, etc. In addition, potential users might consider the following questions when selecting and implementing an AI notetaker for their organization.

  • Does the tool record the conversation/meeting from which it develops the notes or transcript? If so, you will need to think about several issues, a few of which are discussed here.
    • One is whether you have complied with the applicable consent requirements. For example, some states, known as all-party or two-party consent states, require consent of all persons to a call before it can be recorded. Some AI notetakers can attend and record a meeting on behalf of the user. In some cases, the default settings may not alert others on a call that the AI notetaker is dialed in and recording the call. Organizations should alert employees of this possibility and address it accordingly. The organization also will need to consider whether it has provided appropriate notice of the collection of personal information from persons participating in the meeting. Businesses subject to the California Consumer Privacy Act (CCPA), for example, generally are required to provide a notice at collection to California residents concerning, among other things, the categories of personal information the business collects from them. This includes the business’s employees. Accordingly, such businesses will need to evaluate notetakers along with other means for collecting personal information from such individuals.
    • Another issue is how a recording is handled once created – should it be encrypted, who is permitted to access it, how long should it be maintained, etc. Such recordings could become the subject of a litigation hold, or a data subject access request. For example, an individual whose personal information is covered by the CCPA or a similar law, might request access to that information or deletion of it.
  • Is your data used to train the notetaking tool? Some notetaking tools will use the transcriptions generated by customers to help improve the accuracy of the product. Of course, the organization using the tool will need to consider the confidentiality, privacy, and security of the information it permits its notetaking vendor to acquire for this purpose, and whether this practice raises regulatory or contractual issues. The tool might provide an opt out from this use and the organization will want to make sure to train employees to opt out, as needed.
  • What kind of confidential and personal information do you anticipate will be captured by the tool? As with many AI applications, it is critical to understand the use cases that you anticipate being served by the technology. The use cases can be wide-ranging and will be shaped by, among other things, the type of business and activities engaged in, which departments/employees in the organization are using the tool, and other factors. For example, in a law firm environment, using a notetaker likely will raise attorney-client privilege issues. In a healthcare environment, it is likely that a notetaker could capture protected health information (PHI) of patients. However, if a health system’s marketing department is using a notetaker, capturing PHI might be less likely, but still possible. So, when thinking about how your organization will use a notetaker, it is important to consider not only your organization’s regulatory environment, but also who in the organization will be permitted to use the technology and for what purpose(s), what representations have been made about disclosures of confidential and personal information, etc. See policy development below.
  • If the product promotes deidentification, what standard for deidentification applies? Depending on the use cases that an organization anticipates when using notetakers, deidentification may not be a critical issue. Businesses in the construction industry, for example, might find it unlikely that the organization’s use of a notetaker would involve individually identifiable personal information. But where that is the case, and where the organization desires or needs to protect that information and/or minimize the creation of it, some notetakers offer deidentification functionality. In those cases, however, it will be important to understand the product’s deidentification process (a simplified sketch follows this list). Healthcare entities subject to HIPAA, for example, must satisfy a specific regulatory standard for deidentification. See 45 CFR 164.514.
  • How do we address others outside the organization who are using these tools? Customers, applicants, business partners, vendors, and other third parties also may be using these tools during meetings with persons at the organization. In the process, they may be creating a recording or transcript of the discussion, perhaps capturing confidential business or privileged information. The organization will need to evaluate how it will approach different situations, e.g., a vendor versus a job applicant. However, making the organization’s employees sensitive to this possibility is a starting point.
  • Do we need a policy? New technologies like generative AI and their various iterations often raise many questions concerning use in organizations. Indeed, many organizations have adopted policies to guide employees when using another popular application of generative AI technology – ChatGPT and similar tools. Policies can be helpful to establish guiding principles and requirements for employees, such as:
    • which notetaker(s) have been vetted by the organization and are approved for use in the course of employment,
    • which employees are permitted to use the notetaker and for what purposes,
    • guidelines for providing notice, consent, etc.,
    • what safeguards should be followed for securing transcriptions with confidential and personal information,
    • guidelines for limiting access to transcriptions,
    • record retention and litigation hold requirements, and
    • how to handle meetings intended to be privileged.
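
To make the deidentification point above concrete, the following is a minimal sketch of the kind of pattern-based redaction a notetaking product might apply to a transcript. The patterns and function names are hypothetical, and a few regular expressions fall far short of HIPAA’s Safe Harbor method, which requires removal of 18 categories of identifiers (45 CFR 164.514(b)(2)); treat this as an illustration of the concept, not a compliant process.

```typescript
// Hypothetical, illustrative redaction pass over a meeting transcript.
// A real deidentification pipeline would need to handle names, addresses,
// and many other identifier categories that simple patterns miss.
const REDACTION_PATTERNS: [string, RegExp][] = [
  ["SSN", /\b\d{3}-\d{2}-\d{4}\b/g],
  ["PHONE", /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g],
  ["EMAIL", /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g],
  ["DATE", /\b\d{1,2}\/\d{1,2}\/\d{2,4}\b/g],
];

function redactTranscript(transcript: string): string {
  let redacted = transcript;
  for (const [label, pattern] of REDACTION_PATTERNS) {
    redacted = redacted.replace(pattern, `[${label} REDACTED]`);
  }
  return redacted;
}

// redactTranscript("Call Jane at 555-123-4567 about the 3/14/2024 visit")
//   -> "Call Jane at [PHONE REDACTED] about the [DATE REDACTED] visit"
// Note the name "Jane" survives, illustrating why it matters to understand
// what a product's deidentification process actually removes.
```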

Policies will help the organization take into account regulatory concerns and client preferences, among other things. For what it is worth, we asked ChatGPT about whether to have a policy, and it responded, “Implementing a policy to govern how your organization’s employees use a generative AI note-taker is a prudent decision.”

Even if your organization has not formally adopted an AI notetaker, some of your employees may already be using the technology. As noted above, there are several considerations that should prompt additional analysis concerning the nature and scope of the use of such tools.

On March 6, 2024, New Hampshire’s Governor signed Senate Bill 255, which establishes a consumer data privacy law for the state. The Granite State joins the growing patchwork of states with consumer data privacy laws and is the second state in 2024 to pass such a law, following New Jersey. The law takes effect January 1, 2025.

To whom does the law apply?

The law applies to persons who conduct business in the state, or who produce products or services targeted to residents of the state, and that during a one-year period:

  • Controlled or processed the personal data of not less than 35,000 unique consumers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction; or,
  • Controlled or processed the personal data of not less than 10,000 unique consumers and derived more than 25 percent of their gross revenue from the sale of personal data.

The law excludes certain entities such as non-profit organizations, entities subject to the Gramm-Leach-Bliley Act, and covered entities and business associates under HIPAA.

Who is protected by the law?

The law protects consumers, defined as residents of New Hampshire. However, the definition does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, defined as any information linked or reasonably linkable to an identified or identifiable individual. Personal data does not include de-identified data or publicly available information. Other exempt categories of data include, without limitation, personal data collected under the Family Educational Rights and Privacy Act (FERPA), protected health information under HIPAA, and several other categories of health information.

What are the rights of consumers?

Consumers have the right under the law to:

  • Confirm whether or not a controller is processing the consumer’s personal data and access such personal data
  • Correct inaccuracies in the consumer’s personal data
  • Delete personal data provided by, or obtained about, the consumer
  • Obtain a copy of the consumer’s personal data processed by the controller
  • Opt out of the processing of the personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of solely automated decisions that produce legal or similarly significant effects. Although subject to some exceptions, a “sale” of personal data under the New Hampshire law includes the exchange of personal data for monetary or other valuable consideration by the controller to a third party, language similar to the California Consumer Privacy Act (CCPA).

When consumers seek to exercise these rights, controllers shall respond without undue delay, but no later than 45 days after receipt of the request. The controller may extend the response period by 45 additional days when reasonably necessary. A controller must establish a process for a consumer to appeal the controller’s refusal to take action on a request within a reasonable period of the decision. As with the CCPA, controllers generally may authenticate a request to exercise these rights and are not required to comply with the request if they cannot authenticate, provided they notify the requesting party.
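
For illustration only, here is a minimal sketch of how a controller might track the response windows described above. The function and field names are hypothetical, and real compliance tracking would also need to capture the notice obligations that accompany an extension.

```typescript
// Hypothetical deadline tracker for New Hampshire consumer requests:
// respond within 45 days of receipt, with one optional 45-day extension.
const RESPONSE_WINDOW_DAYS = 45;
const EXTENSION_DAYS = 45;

function addDays(date: Date, days: number): Date {
  const result = new Date(date);
  result.setDate(result.getDate() + days);
  return result;
}

interface RequestDeadlines {
  initialDeadline: Date; // latest response date absent an extension
  extendedDeadline: Date; // latest response date if extended
}

function deadlinesFor(received: Date): RequestDeadlines {
  return {
    initialDeadline: addDays(received, RESPONSE_WINDOW_DAYS),
    extendedDeadline: addDays(received, RESPONSE_WINDOW_DAYS + EXTENSION_DAYS),
  };
}

// deadlinesFor(new Date(2025, 0, 15)) // request received January 15, 2025
//   -> initialDeadline: March 1, 2025; extendedDeadline: April 15, 2025
```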

What obligations do controllers have?

Controllers have several obligations under the New Hampshire law. A significant obligation is the requirement to provide a “reasonably accessible, clear and meaningful privacy notice” that meets standards established by the secretary of state and that includes the following content:

  • The categories of personal data processed by the controller;
  • The purpose for processing personal data;
  • How consumers may exercise their consumer rights, including how a consumer may appeal a controller’s decision with regard to the consumer’s request;
  • The categories of personal data that the controller shares with third parties, if any;
  • The categories of third parties, if any, with which the controller shares personal data; and
  • An active electronic mail address or other online mechanism that the consumer may use to contact the controller.

This means that the controller needs to do some due diligence in advance of preparing the notice to understand the nature of the personal information it collects, processes, and maintains.

Controllers also must:

  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. As with other state data privacy laws, this means that controllers must give some thought to what they are collecting and whether they need to collect it;
  • Not process personal data for purposes that are neither reasonably necessary to, nor compatible with, the disclosed purposes for which such personal data is processed, as disclosed to the consumer unless the controller obtains the consumer’s consent;
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data appropriate to the volume and nature of the personal data at issue. What is interesting about this requirement, which exists in several other privacy laws, is that it applies beyond more sensitive personal information, such as Social Security numbers, financial account numbers, health information, etc.;
  • Not process sensitive data concerning a consumer without obtaining the consumer’s consent, or, in the case of the processing of sensitive data concerning a known child, without processing such data in accordance with COPPA. Sensitive data means personal data that includes data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, or citizenship or immigration status; the processing of genetic or biometric data for the purpose of uniquely identifying an individual; personal data collected from a known child; or, precise geolocation data;
  • Not process personal data in violation of the laws of this state and federal laws that prohibit unlawful discrimination against consumers;
  • Provide an effective mechanism for a consumer to revoke the consumer’s consent that is at least as easy as the mechanism by which the consumer provided the consumer’s consent and, upon revocation of such consent, cease to process the data as soon as practicable, but not later than fifteen days after the receipt of such request;
  • Not process the personal data of a consumer for purposes of targeted advertising, or sell the consumer’s personal data without the consumer’s consent, under circumstances where a controller has actual knowledge, and willfully disregards, that the consumer is at least thirteen years of age but younger than sixteen years of age; and
  • Not discriminate against a consumer for exercising any of the consumer rights contained in the New Hampshire law, including denying goods or services, charging different prices or rates for goods or services, or providing a different level of quality of goods or services to the consumer.

In some cases, such as when a controller processes sensitive personal information as discussed above or for purposes of profiling, it must conduct and document a data protection assessment for those activities. Such assessments are required for the processing of data that presents a heightened risk of harm to a consumer.  

Are controllers required to have agreements with processors?

As with the CCPA and other comprehensive data privacy laws, the law appears to require that a contract between a controller and a processor govern the processor’s data processing procedures with respect to processing performed on behalf of the controller. 

Among other things, the contract must require that the processor:

  • Ensure that each person processing personal data is subject to a duty of confidentiality with respect to the data;
  • At the controller’s direction, delete or return all personal data to the controller as requested at the end of the provision of services, unless retention of the personal data is required by law;
  • Upon the reasonable request of the controller, make available to the controller all information in its possession necessary to demonstrate the processor’s compliance with the obligations in this chapter;
  • After providing the controller an opportunity to object, engage any subcontractor pursuant to a written contract that requires the subcontractor to meet the obligations of the processor with respect to the personal data; and
  • Allow, and cooperate with, reasonable assessments by the controller or the controller’s designated assessor, or the processor may arrange for a qualified and independent assessor to conduct an assessment of the processor’s policies and technical and organizational measures in support of the obligations under the law, using an appropriate and accepted control standard or framework and assessment procedure for such assessments.  The processor shall provide a report of such assessment to the controller upon request.

Other provisions might be appropriate in an agreement between a controller and a processor, such as terms addressing responsibility in the event of a data breach and specific record retention obligations.

How is the law enforced?

The attorney general shall have sole and exclusive authority to enforce a violation of the statute.

If you have questions about New Hampshire’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

The California Invasion of Privacy Act (CIPA) has become a focal point in recent legal battles, particularly within the retail industry. As retailers increasingly adopt technologies like session replay and chatbots to enhance customer experiences, they inadvertently tread into murky legal waters. These technologies, while valuable for optimizing websites and addressing customer inquiries, have faced a barrage of lawsuits and threats. Claimants argue that using these tools without obtaining customer consent amounts to wiretapping or using a “pen register.”

Session-replay software records specific customer interactions on websites, aiding in bug fixes, issue investigation, and market optimization. However, these tools may fall under so-called “two-party consent” statutes. For instance, California Penal Code § 631(a) requires consent from all parties involved in a communication. Retailers across various industries—clothing, finance, jewelry, and more—have found themselves in the crosshairs of these lawsuits.

At least 40 lawsuits involving CIPA have been filed in California since May 31, 2022, when the U.S. Court of Appeals for the Ninth Circuit ruled in Javier v. Assurance IQ that, under CIPA, all parties to a “communication” must consent to that communication. The court essentially found that if a website does not request consent before a consumer engages with it, any recording occurs without valid consent.

As such, retailers with an online presence need to review their use of technologies such as session replay and chatbots and implement a mechanism for obtaining consumer consent prior to interaction, in order to comply with CIPA and other statutes that require two-party consent when recording communications.
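
As a rough illustration of what consent prior to interaction might look like in practice, the sketch below gates hypothetical session-replay instrumentation behind an explicit visitor choice. The banner and loader functions are placeholders, not any particular vendor’s API.

```typescript
// Hypothetical consent gate: nothing is recorded until the visitor
// makes an explicit choice, and a denial leaves the page uninstrumented.
type ConsentState = "granted" | "denied" | "unset";

declare function showConsentBanner(): Promise<"granted" | "denied">; // placeholder UI
declare function loadSessionReplay(): void; // placeholder: inject vendor script here

function getStoredConsent(): ConsentState {
  const value = localStorage.getItem("recording-consent");
  return value === "granted" || value === "denied" ? value : "unset";
}

async function initRecordingTools(): Promise<void> {
  let consent = getStoredConsent();
  if (consent === "unset") {
    // Block until the visitor chooses; no replay or chat transcript
    // capture runs while the banner is pending.
    consent = await showConsentBanner();
    localStorage.setItem("recording-consent", consent);
  }
  if (consent === "granted") {
    loadSessionReplay();
  }
}
```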

If you have questions about CIPA compliance or related issues, contact a Jackson Lewis attorney to discuss.

On February 28, 2024, President Biden issued an Executive Order (EO) seeking to protect the sensitive personal data of Americans from potential exploitation by particular countries. The EO acknowledges that access to Americans’ “bulk sensitive personal data” and United States Government-related data by countries of concern can, among other things:

…fuel the creation and refinement of AI and other advanced technologies, thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats.  In addition, access to some categories of sensitive personal data linked to populations and locations associated with the Federal Government — including the military — regardless of volume, can be used to reveal insights about those populations and locations that threaten national security.  The growing exploitation of Americans’ sensitive personal data threatens the development of an international technology ecosystem that protects our security, privacy, and human rights.

The EO also acknowledges that, due to advances in technology combined with access by countries of concern to large data sets, data that is anonymized, pseudonymized, or de-identified is increasingly able to be re-identified or de-anonymized. This prospect is especially concerning for health information, warranting additional steps to protect health data and human genomic data from threats.

The EO does not specifically define “bulk sensitive personal data” or “countries of concern”; it leaves those definitions to the Attorney General and forthcoming regulations. However, under the EO, “sensitive personal data” generally refers to elements of data such as covered personal identifiers, geolocation and related sensor data, biometric identifiers, personal health data, personal financial data, or any combination thereof.

Significantly, the EO does not broadly prohibit:

United States persons from conducting commercial transactions, including exchanging financial and other data as part of the sale of commercial goods and services, with entities and individuals located in or subject to the control, direction, or jurisdiction of countries of concern, or impose measures aimed at a broader decoupling of the substantial consumer, economic, scientific, and trade relationships that the United States has with other countries. 

Instead, building on previous executive actions, such as Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), the EO intends to establish “specific, carefully calibrated actions to minimize the risks associated with access to bulk sensitive personal data and United States Government-related data by countries of concern while minimizing disruption to commercial activity.”

In short, some of what the EO does includes the following:

  • Directs the Attorney General, in coordination with the Department of Homeland Security (DHS), to issue regulations that prohibit or otherwise restrict United States persons from engaging in certain transactions involving bulk sensitive personal data or United States Government-related data, including transactions that pose an unacceptable risk to the national security. Such proposed regulations, to be issued within 180 days of the EO, would identify the prohibited transactions, countries of concern, and covered persons.  
  • Directs the Secretary of Defense, the Secretary of Health and Human Services, the Secretary of Veterans Affairs, and the Director of the National Science Foundation to consider steps, including issuing regulations, guidance, etc. to prohibit the provision of assistance that enables access by countries of concern or covered persons to United States persons’ bulk sensitive personal data, including personal health data and human genomic data.  

At this point, it remains to be seen how this EO might impact certain sensitive personal information or transactions involving the same.

Jackson Lewis will continue to track developments regarding the EO and related issues in data privacy. If you have questions about the Executive Order or related issues, contact a Jackson Lewis attorney to discuss.

Artificial intelligence tools are fundamentally changing how people work. Tasks that used to be painstaking and time-consuming can now be completed in real time with the assistance of AI.

Many organizations have sought to leverage the benefits of AI in various ways. An organization, for instance, can use AI to screen resumes and identify which candidates are likely to be the most qualified. The organization can also use AI to predict which employees are likely to leave the organization so retention efforts can be implemented.

One AI use that is quickly gaining popularity is performance management of employees. An organization could use AI to summarize internal data and feedback on employees to create performance summaries for managers to review. By constantly collecting this data, the AI tool can help ensure that work achievements or issues are captured in real-time and presented effectively on demand. This can also help facilitate more frequent touchpoints for employee feedback—with less administrative burden—so that organizations can focus more on having meaningful conversations with employees about the feedback they receive and recommended areas of improvement.

While the benefits of using AI have been well publicized, its potential pitfalls have attracted just as much publicity. The use of AI tools in performance management can expose organizations to significant privacy and security risks, which need to be managed through comprehensive policies and procedures.

Potential Risks

  1. Accuracy of information. AI tools have been known to create outputs that are nonsensical or simply inaccurate, commonly referred to as “AI hallucinations.” Rather than solely relying on the outputs provided by an AI tool, an organization should ensure it independently verifies the accuracy of the outputs provided by the AI tool. Inaccurate statements in an employee’s performance evaluation, for instance, could expose the organization to significant liability.
  2. Bias and discrimination. AI tools are trained using historical data from various sources, which can inadvertently perpetuate biases existing in that data. In a joint statement issued by several federal agencies, the agencies highlighted that the datasets used to train AI tools could be unrepresentative, incorporate historical bias, or correlate data with protected classes, which could lead to a discriminatory outcome. A recent experiment conducted with ChatGPT illustrated how these embedded biases can manifest in the performance management context.
  3. Compliance with legal obligations. In recent years, legislatures at the federal, state, and local levels have prioritized AI regulation in order to protect individuals’ privacy and secure data. Last year, New York City’s AI law took effect, requiring employers to conduct bias audits before using AI tools in employment decisions. Other jurisdictions—including California, New Jersey, New York, and Washington D.C.—have proposed similar bias audit legislation. In addition, Vermont introduced legislation that would prohibit employers from relying solely on information from AI tools when making employment-related decisions. As more jurisdictions become active with AI regulation, organizations should remain mindful of their obligations under applicable laws.

Mitigation Strategies

  1. Conduct employee training. Organizations should ensure all employees are trained on the use of AI tools in accordance with organization policy. This training should include information on the potential benefits and risks associated with AI tools, organization policies concerning these tools, and the operation and use of approved AI tools.
  2. Examine issues related to bias. To help minimize risks related to bias in AI tools, organizations should carefully review the data and algorithms used in their performance management platforms. Organizations should also explore what steps, if any, the AI-tool vendor took to successfully reduce bias in employment decisions.  
  3. Develop policies and procedures to govern AI use. To comply with applicable data privacy and security laws, an organization should ensure that it has policies and procedures in place to regulate how AI is used in the organization, who has access to the outputs, to whom the outputs are shared, where the outputs are stored, and how long the outputs are kept. Each of these important considerations will vary across organizations, so it is critical that the organization develops a deeper understanding of the AI tools sought to be implemented.

For organizations seeking to use AI for performance management of employees, it is important to be mindful of the risks associated with AI use. Most of these risks can be mitigated, but it will require organizations to be proactive in managing their data privacy and security risks.  

On February 13, 2024, Nebraska’s Governor signed Legislative Bill 308, which enacts additional genetic privacy protections for consumers in the state. It is similar to the genetic information privacy law Montana passed last year.

The law takes effect July 17, 2024 (90 days after the legislature adjourns on April 18, 2024).  

Covered Businesses

The law applies to direct-to-consumer genetic testing companies, defined as entities that:

  • Offers consumer genetic testing products or services directly to a consumer; or,
  • Collects, uses, or analyzes genetic data that resulted from a direct-to-consumer genetic testing product or service and was provided to the company by the consumer.

The law does not cover entities that are solely engaged in collecting, using, or analyzing genetic data or biological samples in the context of research under federal law.

Covered Consumers

The law applies to an individual who is a resident of the State of Nebraska.

Obligations Under the Law

Under the new law, covered businesses are required to:

  • Provide clear and complete information regarding the company policies and procedures for the collection, use, or disclosure of genetic data
  • Obtain a consumer’s consent for the collection, use, or disclosure of the consumer’s genetic data
  • Require a valid legal process before disclosing genetic data to any government agency, including law enforcement, without the consumer’s express written consent
  • Develop, implement, and maintain a comprehensive security program to protect a consumer’s genetic data from unauthorized access, use, or disclosure

Similar to several comprehensive consumer privacy laws, the company must provide a consumer with:

  • Access to their genetic data
  • A process to delete an account and genetic data
  • A process to request and obtain written documentation verifying the destruction of the consumer’s biological sample

Enforcement

Under the new law, the Nebraska Attorney General may bring an action on behalf of a consumer to enforce rights under the law. There is no private right of action specified within the statute.

A violation of the act is subject to a civil penalty of $2,500 per violation, in addition to actual damages, costs, and reasonable attorney’s fees.

If you have questions about Nebraska’s genetic privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

In 2023, a California superior court halted enforcement of any final California Privacy Protection Agency regulation until 12 months after the date the individual regulation became final. Under that ruling, enforcement of the initial regulations passed in March 2023 could not commence until March 2024.

The California Privacy Protection Agency (CPPA) appealed the decision, and on February 9, 2024, the California Court of Appeal reversed the superior court. With the reversal, the regulations enacted last year became enforceable immediately, ahead of March 2024.

The ruling also enables the CPPA to immediately begin enforcing other future regulations as soon as they are finalized, rather than having to wait a year as previously ruled by the superior court.

The regulations passed in March 2023 were intended to:

  1. Update existing regulations to fit with amendments made by the California Privacy Rights Act (CPRA).
  2. Put into operation new rights and concepts introduced by the CPRA.
  3. Make the regulations more streamlined and easier to understand.

The revised regulations address data processing agreements, consumer opt-out mechanisms, mandatory requirements for recognition of opt-out preference signals, and consumer request handling.
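
On the opt-out preference signal point, the most widely discussed signal is the Global Privacy Control (GPC), which the GPC proposal exposes as a Sec-GPC: 1 request header and, in supporting browsers, as navigator.globalPrivacyControl. Treating GPC as the operative signal is an assumption here, and the sketch below is illustrative only:

```typescript
// Client-side check in browsers that implement the GPC proposal.
function visitorHasOptedOut(): boolean {
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

// Server-side check against request headers (header names assumed
// lowercased, as many server frameworks normalize them).
function requestHasOptedOut(headers: Record<string, string>): boolean {
  return headers["sec-gpc"] === "1";
}

// A true result would be treated as a valid request to opt out of the
// sale or sharing of that consumer's personal information, without
// requiring any additional click from the consumer.
```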

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

For healthcare providers and health systems covered by the privacy and security regulations under the Health Insurance Portability and Accountability Act (HIPAA), a breach of unsecured protected health information (PHI) likely triggers obligations to notify affected individuals, the federal Office for Civil Rights (OCR), and potentially the media and other entities. The breach also may require notification to one or more state Attorneys General, an obligation that depends on state law. Currently, the state data breach notification law in Michigan does not provide for Attorney General notification, something Michigan Attorney General Dana Nessel wants to change, according to reporting earlier this month from the HIPAA Journal.

Spurring the Michigan AG are concerns about the timing of notification to patients following recent breaches that involved health systems but were actually experienced by downstream vendors. These are important concerns considering the increase in identity crimes and the overall data risk individuals face, which can be mitigated to some degree with timely notification. However, health systems and entities in other industries can find themselves caught in a tough spot from a notification perspective when dealing with a breach experienced by a vendor.

On the one hand, quickly putting notification in the hands of individuals about a compromise of their personal data is critical to helping those individuals take measures to protect themselves from ID theft and other harms. Notification may prompt individuals to be more vigilant about their personal information, review credit reports, set up a fraud alert, check their bank statements, and take other measures to protect themselves from cyber criminals. On the other hand, as a practical matter, the time between the date the breach occurred (as experienced by a downstream vendor) and the date of notification to patients can be affected by many factors, several of which may be outside the control, and sometimes the knowledge, of the covered entity. Looking solely to that metric in some cases may not be the most appropriate measure of timeliness to assess a covered entity’s performance and compliance when responding to a breach. If it is a metric upon which enforcement can be based, covered entities may need to revisit their incident response plans and vendor relationships to shorten that timeframe as much as possible.

Let’s unpack this a little.

  • Recall that under HIPAA, a breach must be reported “without unreasonable delay and in no case later than 60 calendar days after discovery.” 45 CFR 164.404(b) (emphasis added).
  • A downstream vendor experiencing a breach of PHI likely is (but not always) a business associate of the covered healthcare provider. Of course, the relationship may not be that close. The vendor may be the subcontractor of the subcontractor of the business associate of the covered entity.
  • The general rule under the HIPAA Breach Notification rule is that business associates are obligated to notify the covered entity of a breach, not the affected individuals. See 45 CFR 164.410(a)(1). When there are multiple layers of business associates, a chain of notification commences where one business associate notifies the next business associate upstream and so on until getting to the covered entity. In many cases, the business associate experiencing a breach may not know what entity or entities are the ultimate covered entity(ies). See more on that below.
  • Under the HIPAA Breach Notification rule, business associates are not obligated to notify affected individuals. That obligation, unless delegated, remains with the covered entity. 45 CFR 164.404(a)(1).
  • The HIPAA Breach Notification rule also provides that when a business associate has a breach it must report “the identification of each individual whose unsecured protected health information has been, or is reasonably believed by the business associate to have been, accessed, acquired, used, or disclosed during the breach.” 45 CFR 164.410(c)(1).
  • In some cases, vendors effectively have no access to the PHI that they maintain or store for the ultimate covered entities, but still may be considered business associates. Other similar vendors may fall under a “conduit exception” and not be considered business associates under HIPAA. In either case, they may nonetheless have obligations other than HIPAA (statutory or contractual) to notify their customers of a breach. In these cases, however, the vendors simply may not be in a position to provide critical information upstream, such as identity of the affected individuals.
  • As the reporting of the data breach travels upstream, the covered entity may be completely unaware of the breach. It could be weeks or even months after the breach actually occurred before news of the breach reaches the covered entity. Consider that the vendor that experienced the breach may not have discovered it for some time after the attack happened, further expanding the time between the breach occurring and ultimate notification to patients.
  • Upon discovery of a security incident from a business associate, which already could be long after the breach actually occurred and several layers downstream, the covered entity must initiate its incident response plan. One of the first tasks will be to understand what happened and what data was affected. This news often does not come with a spreadsheet from which the affected individuals could easily be identified. It may instead arrive in the form of a long list of files and folders that contain thousands and thousands of documents, images, records, etc. Many of these items may have no PHI whatsoever. The challenge is to find those documents, images, records, etc. that do, and to pull from those items the individuals affected and the kind of information involved. This process, sometimes referred to as data mining and document review, often is complex, time-consuming, and costly; a simplified sketch of the initial triage step appears after this list.
  • On completion of the data mining and document review process, the covered entity will begin to have a better sense of the individuals affected, the type of information compromised, the state(s) in which those individuals reside, etc. At this point, covered entities will work quickly to arrange for notification to individuals, the OCR, and, if applicable, the media, state agencies, and others.
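
As referenced above, the sketch below illustrates only the initial winnowing step of data mining: flagging which extracted documents appear to contain identifiers so that human reviewers can prioritize them. The file structure and patterns are hypothetical; real breach data mining involves OCR, entity resolution, and specialized review platforms.

```typescript
// Toy triage pass: flag documents whose text appears to contain
// identifiers, so reviewers can focus on those first. Patterns here
// (e.g., the MRN format) are assumptions for illustration only.
const IDENTIFIER_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
  mrn: /\bMRN[:\s]*\d{6,}\b/i,
  dob: /\bDOB[:\s]*\d{1,2}\/\d{1,2}\/\d{2,4}\b/i,
};

interface TriageResult {
  file: string;
  hits: string[]; // which identifier patterns matched
}

function triage(files: Map<string, string>): TriageResult[] {
  const flagged: TriageResult[] = [];
  for (const [file, text] of files) {
    const hits = Object.entries(IDENTIFIER_PATTERNS)
      .filter(([, pattern]) => pattern.test(text))
      .map(([name]) => name);
    if (hits.length > 0) {
      flagged.push({ file, hits });
    }
  }
  // Reviewers then extract affected individuals and data types from
  // the flagged items; unflagged items still warrant a lighter check.
  return flagged;
}
```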

There is no doubt that breach notification laws serve an important purpose, namely, to alert affected individuals about a compromise to their sensitive data so that they can take steps to protect against ID theft and other risks. However, the promptness of notice can be and often is hampered by factors outside of the covered entity’s control, particularly if the measure of promptness is the time between the date the breach occurred (regardless of what entity experienced the breach) and the date of notification to individuals.

All that being said, there may be some ways that covered entities might tighten up this process. One consideration, of course, is to adopt, regularly assess, and practice an incident response plan. Another is to have a more granular understanding of the data certain vendors are handling for the covered entity. Still another consideration is to revisit the entity’s vendor management program. Looking more closely at downstream service providers beyond direct business associates might be helpful in assessing the notification process and timing should a breach take place downstream. Having more information about downstream vendors, their roles, and the data they process may serve to shorten the notification timeline. Ultimately, even if there is a delay downstream before the covered entity discovered the breach, a well-executed incident response plan, one that results in a shortened timeframe between discovery and notification, could help to improve the covered entity’s defensible position whether facing litigation or a government agency enforcement action.

To celebrate Data Privacy Day (January 28), we present our top ten data privacy and cybersecurity predictions for 2024.

  1. AI regulations to protect data privacy.

Automated decision-making tools, smart cameras, wearables, and similar applications, powered by technology commonly referred to as “artificial intelligence” or “AI,” will continue to expand in 2024, as will the regulations to protect individuals’ privacy and secure data when deploying those technologies. Last year, we saw a comprehensive Executive Order from the Biden Administration, the New York City AI law take effect, and states like Connecticut pass laws regarding state use of AI. Already in 2024, several states have introduced proposed AI regulation, such as New York’s proposed AI Bill of Rights.

The use of “generative AI” also exploded, as several industries sought to leverage its benefits while trying to manage risks. In healthcare, for example, AI and HIPAA do not always mix when it comes to maintaining the confidentiality of protected health information. Additionally, generative AI is not only used for good, as criminal threat actors have enhanced their phishing attacks against the healthcare industry.

  2. The continued expansion of the patchwork of state privacy laws.

In 2023, seven states added comprehensive consumer privacy laws. And several other states enacted more limited privacy laws dealing with social media or health-related data. It looks like 2024 will continue the expansion. Already in 2024, New Jersey has passed its own consumer privacy law, which takes effect in 2025. And New Hampshire is not far behind in potentially passing a statute.

  3. Children’s data protections will expand.

In 2023, several states passed or considered data protection legislation for minors, amid growing concerns that the Children’s Online Privacy Protection Act (COPPA) was not sufficient to protect children’s data. Connecticut added additional protections for minors’ data in 2023.

In 2024, the Federal Trade Commission (FTC) issued a notice of proposed rulemaking pertaining to COPPA, in addition to several states proposing legislation to protect children’s online privacy.

  4. Cybersecurity audits will become even more of a necessity to protect data.

As privacy protection legislation increases, businesses must start working to protect the data they are collecting and maintaining. Conducting cybersecurity audits to confirm that policies and procedures are in place will be increasingly important.

In 2023, the California Privacy Protection Agency considered regulations pertaining to cybersecurity audits. The SEC and FTC expanded obligations for reporting security breaches, making audits, incident response planning, and tabletop exercises to avoid such incidents all the more important.

It is anticipated there will be further regulations and legislation forcing companies to consider their cybersecurity in order to protect individuals’ privacy.

  5. Genetic and health data protection will continue to rise.

In 2023, Nevada and Washington passed health data privacy laws to protect data not subject to HIPAA. Montana passed a genetic information privacy law. Already this year, Nebraska is advancing its own genetic information privacy law. It is likely that concerns about health and genetic data will grow along with other privacy concerns, and so too will the legislation and regulations. We also have seen a significant uptick in class action litigation in Illinois under the state’s Genetic Information Privacy Act (GIPA). A close relative of the state’s Biometric Information Privacy Act (BIPA), GIPA carries nearly identical remedy provisions, except that the amounts of statutory damages are higher than under BIPA.

  6. Continued enforcement actions for data security.

As legislation and regulations grow, so too will enforcement actions. Many of the state statutes and city regulations allow only for governmental enforcement; however, those entities are going to start enforcing requirements to ensure there is an incentive for businesses to comply. In 2023, we saw the New York Attorney General continue its active enforcement of data security requirements.

  7. HIPAA compliance will continue to be difficult as it overlaps with cybersecurity.

In 2023, the Office for Civil Rights (OCR), which enforces HIPAA, discussed issues with driving cybersecurity and HIPAA compliance as well as other compliance concerns. In 2024, entities required to comply with HIPAA will be challenged to determine how to use new and useful technologies and data sharing while maintaining privacy and protecting HIPAA-covered information as cybersecurity threats continue to flourish.

  8. Website tracking technologies will continue to be in the hot seat.

In 2023, both the FTC and the Department of Health and Human Services (HHS) took issue with website tracking technologies such as “pixels.” By the time that guidance was issued, litigation concerning these technologies and the data privacy and data sharing concerns they raise had already been expanding. To help clients identify and address these risks, Jackson Lewis and SecondSight joined forces to offer organizations a website compliance assessment tool that has been well received.

In 2024, it is anticipated that there will be further website-tracking litigation as well as enforcement actions from governmental agencies that see the technology as infringing on consumers’ privacy rights.

  9. Expect biometric information to increasingly be leveraged to address privacy and security concerns.

As we move toward a “passwordless” society, technologies using biometric identifiers and information continue to be the “go-to” method for authentication. However, also increasing are the regulations on the collection and use of biometric information. While the Illinois Biometric Information Privacy Act (BIPA) is the most prolific in its protection of biometric information, many of the new comprehensive privacy laws include protections for biometric information. See our biometric law map for developments.

  10. Privacy class actions will continue to increase.

Whether it is BIPA, GIPA, CIPA, TCPA, DPPA, pixel litigation, or data breach class actions, 2024 will likely see an increase in privacy-related class actions. As such, it becomes more important than ever for businesses to understand and ensure the protection of the data they collect and control.

For these reasons and others, we believe data privacy will continue to be at the forefront of many industries in 2024, and Jackson Lewis will continue to track relevant developments. Happy Privacy Day!