
Progress in Privacy Thinking: 2023 Predictions

Last year, I wrote about four predictions I had for the privacy space in 2022. These predictions were based on shifts in privacy thinking and how those shifts were affecting regulations, and vice versa. Ultimately, these fundamental changes in privacy thinking affect the way organizations view and address their privacy and data protection challenges.

My 2022 predictions, as well as their results, were:

1. Fewer attempts to “build your own” privacy tech. Across the hundreds of conversations I had with organizations in 2021 versus 2022, with teams trying to figure out whether to build or buy when addressing numerous Privacy by Design problems, I encountered significantly fewer questions about “how can I build?” and many more about “where can I buy?” Organizations are starting to understand that, just as you don’t build your own cryptography, you shouldn’t build your own privacy tooling; you should rely on expertly built systems instead.

2. More developer tools enabling this shift. Certain platforms now allow developers to integrate privacy checks into their Continuous Integration (CI) pipelines, and homomorphic encryption libraries are steadily maturing. Fundamental pieces of the privacy problem, such as highly accurate, multilingual detection of personal data in unstructured text, can now be addressed with off-the-shelf tools (a minimal sketch of what a CI privacy check can look like follows this list).

3. The beginning of the edge. Privacy on edge devices (e.g., cell phones, VR systems, browsers) started being discussed more often. With Facebook rebranding as Meta in 2021, 2022 brought deep dives into what privacy in the metaverse would mean. It was a hard-to-miss topic, given that Apple’s privacy policy change, which limited Meta’s ability to do targeted advertising, contributed to Meta suffering the largest single-day loss of market value in U.S. stock market history. Not only did Apple demonstrate just how much power device manufacturers have to dictate privacy rules, but it also demonstrated the key role edge device manufacturers have to play in enforcing Privacy by Design.

4. The change in conversation around AI. The conversation went from asking “how can we ask consumers for their positive consent?” to asking “what can we ask consumers to consent to in the first place?” While we did see more nations taking first steps toward banning ethically unacceptable uses of AI and of their citizens’ personal information in 2022, this prediction was still a bit premature. It will come, but for now organizations are still grappling with what they can use AI for in the first place, and only then asking how privacy relates to those uses. With seemingly limitless possibilities, and with organizations depending on digital transformation for survival, there is still a lot of uncertainty around how value will be unlocked from the 80% to 90% of the data that is collected.
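To make prediction 2 more concrete, here is a minimal, hypothetical sketch of the kind of check such platforms run: a script that a CI pipeline can execute on every commit and that fails the build when obvious personal data patterns show up in source files. The regexes, file defaults, and labels below are illustrative assumptions only; real tools rely on far more accurate, multilingual detection models.

```python
#!/usr/bin/env python3
"""Minimal, illustrative CI gate: fail the build when obvious personal data
patterns appear in source files. Real platforms use far more accurate,
multilingual ML-based detection; these regexes are a toy stand-in."""
import re
import sys
from pathlib import Path

# Toy patterns for two common direct identifiers (illustrative only).
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "U.S. SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(paths):
    """Return (path, label, match) triples for every hit in the given files."""
    findings = []
    for path in paths:
        text = path.read_text(errors="ignore")
        for label, pattern in PII_PATTERNS.items():
            findings.extend((path, label, m.group()) for m in pattern.finditer(text))
    return findings

if __name__ == "__main__":
    # Scan the files passed in by the CI job, or every .py file by default.
    files = [Path(p) for p in sys.argv[1:]] or list(Path(".").rglob("*.py"))
    findings = scan(files)
    for path, label, value in findings:
        print(f"{path}: possible {label}: {value}")
    # A nonzero exit code fails the CI job, blocking the merge until reviewed.
    sys.exit(1 if findings else 0)
```

Wired into a pipeline as an ordinary job, alongside linting and security scanning, a check like this makes privacy review part of the same feedback loop developers already use.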

What Was Not in the 2022 Predictions?

Last year saw the beginning of a global understanding that anonymization doesn’t mean removing just names, Social Security numbers, and a few other direct identifiers. Legislation currently being drafted and discussed, like Canada’s Bill C-27, is likely to provide guidance on what anonymization means. That is a huge step toward informing organizations about expectations around anonymization and preventing them from defining anonymization for themselves without thorough empirical analysis.

Top Predictions for Privacy Thinking in 2023

With the economic downturn in tech, many things are becoming harder to predict for the next year. Not quite so for the privacy space, which continues to be a headline topic keeping companies and their data practices under scrutiny. This year will be the year where:

1. Governments will begin to reconsider employee privacy rights. With the increasing use of at-home employee surveillance tech, and with the Twitter acquisition showcasing employees’ inability to consent to having their conversations viewed by new management, this will be the year when governments take notice and start drafting employee privacy legislation that catches up to modern technological capabilities and employment practices.

2. A rise in companies linking their privacy policies to their actual code reviews. The tools are now available to make this happen, and there is a growing understanding that developers themselves have to be in charge of integrating and analyzing privacy practices, much as they already are for software security practices.

3. Governments worldwide will have more serious discussions about providing data protection, privacy, and cybersecurity subsidies or incentives. Integrating data protection, privacy, and cybersecurity within an organization and creating or paying for employee training programs is not cheap, but it is crucial: a matter of protecting citizens as well as tax dollars (both corporate and personal income). Just as governments incentivize green energy for the sake of citizens’ well-being, so will they incentivize privacy and cybersecurity. As an example, the U.S. Federal Energy Regulatory Commission (FERC) has proposed incentives to encourage utilities to improve their cybersecurity posture.

There Is Still a Long Way to Go

While the fast evolution of privacy thinking is promising, there is still quite a bit lacking in the understanding and implementation of privacy. What is lacking most is a collective understanding of Privacy Enhancing Technologies (PETs), including when each technology is useful and what its limitations are. While there is now more information available about the trade-offs of each PET, I still hear many claims that one solution or another (e.g., homomorphic encryption, differential privacy, anonymization) is the one and only answer to privacy problems, showing an utter misunderstanding of the privacy-enhancing technologies themselves and the use cases they address. Until there is more widespread education about PETs in universities, this will continue to be a problem.

Another crucial limitation to the success of privacy programs is that few companies properly integrate their privacy teams with their tech teams and encourage every employee to take responsibility for user privacy. It is more common to see privacy teams restricted to approving or declining requests from tech teams, without a proper understanding on either side of what the other group does. Until privacy and tech teams genuinely start communicating and collaborating, we will not have technology that is truly Private by Design, and privacy will continue to be seen as a blocker rather than the enabler it truly is.

Lastly, there are some problems that are not even close to being solved, biometric identification and authentication in particular. The biometric devices left behind by American troops in Afghanistan in 2021 reportedly put Afghans in danger of being targeted by the Taliban. Biometric privacy is a fundamental necessity, yet one that is not even well understood.

Homomorphic encryption (which allows computations to be performed on encrypted data) and secure enclaves (which allow computations to be performed in hardware that even the operating system cannot access) are used to protect the actual representation of a person’s biometric information, allowing biometric authentication or identification in a way that preserves the person’s privacy from the organization that built the system (a minimal sketch of the homomorphic approach follows below). However, there is still no reliable, widespread system that takes into account the consent of the individual being identified or authenticated at processing time.
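To make the homomorphic half of this concrete, here is a minimal sketch using the open-source python-paillier library (phe). Everything in it is an illustrative assumption: the four-element feature vectors stand in for real biometric templates, and because Paillier is only additively homomorphic (ciphertext addition and plaintext-scalar multiplication), the sketch computes an encrypted dot product as a stand-in for a real matcher. Production systems use fuller schemes and protocols.

```python
# Illustrative sketch using python-paillier (`pip install phe`).
# Paillier ciphertexts can be added together and multiplied by plaintext
# scalars, which is enough to compute a dot-product similarity score.
from phe import paillier

# The user generates the keypair; the service never sees the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical enrolled biometric template (a small feature vector).
# The service stores only the ciphertexts, never the raw features.
template = [0.12, -0.48, 0.33, 0.91]
encrypted_template = [public_key.encrypt(x) for x in template]

# At authentication time, the service has a fresh plaintext probe vector
# and computes the similarity score entirely on ciphertexts.
probe = [0.10, -0.50, 0.30, 0.88]
encrypted_score = encrypted_template[0] * probe[0]
for enc_t, p in zip(encrypted_template[1:], probe[1:]):
    encrypted_score += enc_t * p

# Only the key holder can decrypt the score and check it against a threshold.
score = private_key.decrypt(encrypted_score)
print(f"similarity score: {score:.4f}")
```

Even in this privacy-preserving exchange, note that nothing records the consent of the person being matched at processing time, which is exactly the gap described above.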

This is a massive problem, and where there are massive problems, there are massive opportunities.

Patricia Thaine is the Co-Founder and CEO of Private AI. Private AI can identify the personal information within your systems and help you manage, protect, and use it in a safe and effective way.