Eight U.S. states passed data privacy legislation in 2023, and in 2024 laws will take effect in four of them: Oregon, Montana, and Texas, each with a comprehensive state privacy law, and Florida, with its more limited Digital Bill of Rights. Notably, these laws share many similarities and highlight a national trend toward unified data protection standards in the otherwise fragmented U.S. privacy landscape.
While these laws are aligned in many respects – such as the exemption of employee information and the lack of a private right of action – they also exhibit state-specific nuances. For example, Montana’s lower applicability threshold, Texas’ unique approach to defining small businesses, and Oregon’s detailed categorization of personal information illustrate this diversity.
Because of its small population of roughly one million people, Montana has set its applicability threshold much lower than other states, meaning more businesses fall within the law’s scope than would otherwise be the case. Montana’s privacy law requires companies to conduct data protection assessments to identify high-risk areas where sensitive data is acquired and stored, and to put processes in place that hold organizations accountable.
Texas’ privacy law stands out as one of the first in the United States to avoid financial thresholds for compliance, instead basing its criteria on the Small Business Administration’s definition of a small business. This approach broadens the law’s applicability, ensuring a wider range of companies are held accountable for data privacy.
Oregon’s law expands the definition of personal information to include data from connected devices, illustrating the state’s commitment to comprehensive data protection. It covers various digital footprints, from fitness watches to online health records. Oregon also includes specific references to gender and transgender individuals in its definition of sensitive information, showing a nuanced approach to privacy.
These laws demonstrate the urgent need for companies to evaluate their processes and integrate data protection into them. Accountability is a key aspect of these laws, reflecting the increased rights and awareness of data subjects. Organizations must establish procedures that enable individuals to exercise their privacy rights effectively, which means investing in platforms to manage and monitor processing activities and ensure compliance.
Generative AI and its uses are receiving considerable attention and scrutiny
The rise of generative artificial intelligence (GenAI) presents unique challenges in the privacy space. As AI technologies become an integral part of businesses, the need for structured policies and processes to manage AI implementation is critical. The National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework for managing AI risks, focusing on design and implementation strategies.
In terms of governance, AI oversight often lands with privacy teams rather than security teams because the two disciplines overlap heavily, but the tactical impacts span both. Large language models (LLMs) and other AI technologies often ingest extensive unstructured data, raising critical concerns about data categorization, labeling, and security. The possibility of AI inadvertently leaking sensitive information is an urgent issue, requiring vigilant monitoring and robust governance.
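To make the leak risk concrete, here is a minimal, illustrative sketch of a pre-ingestion PII screen in Python. It is a sketch under stated assumptions, not a production approach: the regex patterns, the redact() helper, and the placeholder format are invented for illustration, and real pipelines typically layer trained PII classifiers (which can also catch names and free-text identifiers) on top of simple pattern matching.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage
# and typically combine regexes with trained PII/NER classifiers.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> tuple[str, dict[str, int]]:
    """Replace matched PII with type-tagged placeholders and count hits."""
    counts: dict[str, int] = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED:{label.upper()}]", text)
        if n:
            counts[label] = n
    return text, counts

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
    cleaned, hits = redact(sample)
    print(cleaned)  # placeholders instead of raw identifiers
    print(hits)     # e.g. {'email': 1, 'ssn': 1, 'phone': 1}
```

Screening every document before it reaches a training corpus gives governance teams both redacted text and per-category counts they can monitor over time, which is one way to operationalize the categorization and labeling concerns above.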
It is also important to remember that these AI systems need training data, and what they are often trained on is your personal information. The recent controversy over Zoom’s plan to use customer data for AI training highlights the fine line between legal compliance and public perception.
This year is also critical for privacy laws as they intersect with GenAI’s burgeoning dominance. The rapid adoption of AI technologies poses new challenges for data privacy, especially in the absence of specific legislation or standardized frameworks. The privacy implications of AI vary, from biases in decision-making algorithms to the use of personal information in AI training. As AI reshapes the landscape, companies must remain vigilant, ensuring compliance with emerging AI guidelines and evolving state privacy laws.
Four key emerging data privacy trends that companies should expect to see this year
Businesses should expect to see many emerging data privacy trends this year, including:
- One trend is for states to continue adopting comprehensive privacy laws. If you look at a map of the United States, the Northeast in particular is lighting up like a Christmas tree because of the privacy laws already in place. We don’t know how many more will pass this year, but there will certainly be a very active debate.
- AI will be a significant trend: businesses will see unintended consequences from its use, resulting in violations and fines, because AI has been adopted rapidly without effective legislation or standardized frameworks. On the enforcement front, expect increased activity from the Federal Trade Commission (FTC), which has been clear that it intends to pursue privacy violations aggressively.
- 2024 is a presidential election year in the United States, which will raise awareness of and focus on data privacy. People are still a little mystified by the last election cycle’s privacy concerns around mail-in and online voting, and that scrutiny could extend to business practices. Children’s privacy is also gaining importance, with states like Connecticut introducing additional requirements.
- Companies should also expect data sovereignty to be a trend in 2024. While there has always been discussion about data localization, it breaks down further into data sovereignty, i.e., who controls the data, and data residency, i.e., where it resides. Multinationals need to spend more time understanding where their data resides and what these international obligations require, so they can meet residency and sovereignty requirements and comply with international laws.
Overall, this is the time for companies to sit down and take a deep look at what they are processing, what types of risks they face, and how they plan to manage and mitigate the risks they identify. The first step is to identify the risk; the next is to define a strategy for complying with all the new regulations arriving as artificial intelligence takes hold. Organizations should consider whether they use AI internally, whether employees use AI, and how to track and stay aware of that use.