People often say that privacy is dead. I’m not sure that’s quite right, but the traditional protection mechanisms are certainly becoming irrelevant.
Every click, post, or photo feeds an inference engine somewhere. Even if you’re careful, others share enough about you that privacy becomes a collective, not individual, responsibility. Between data brokerage, social transparency, and generative AI, practicing secrecy is no longer a reliable defensive mechanism.
So what can replace it?
In cybersecurity, we’ve already adapted—we pivoted to zero trust (never trust, always verify) and least privilege (only give the access needed for a task). These ideas are increasingly applicable to people too.
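As a loose analogy, the least-privilege idea can be sketched in a few lines of code: grant each audience only the data scopes a task requires, and deny everything else by default. The names here (`Scope`, `LeastPrivilegeProfile`, `grant`, `can`) are illustrative inventions, not any real library's API.

```python
from enum import Enum, auto

class Scope(Enum):
    # Illustrative categories of personal data someone might expose
    NAME = auto()
    EMAIL = auto()
    LOCATION = auto()
    HEALTH = auto()

class LeastPrivilegeProfile:
    """Grant each audience only the scopes a task requires; deny the rest."""

    def __init__(self):
        self._grants = {}  # audience -> set of granted scopes

    def grant(self, audience, *scopes):
        self._grants.setdefault(audience, set()).update(scopes)

    def can(self, audience, scope):
        # Default deny: no explicit grant means no access (the zero trust posture)
        return scope in self._grants.get(audience, set())

profile = LeastPrivilegeProfile()
profile.grant("employer", Scope.NAME, Scope.EMAIL)   # what the job actually needs
profile.grant("doctor", Scope.NAME, Scope.HEALTH)    # what treatment actually needs

print(profile.can("employer", Scope.HEALTH))  # False: not needed for the task
print(profile.can("doctor", Scope.HEALTH))    # True: explicitly granted
```

The point of the sketch is the default: access is denied unless someone has a concrete, task-based reason to have it, which is the same posture the rest of this piece argues people should adopt socially.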
Applied societally, zero trust means assuming that anything you share can be copied, stored, and analyzed; verifying claims and identities rather than taking them on faith; and treating every platform as a potential leak.
And least privilege becomes behavioral: share only the information a given interaction actually requires, with the audience that actually needs it, for only as long as it’s needed.
If you accept that absolute privacy is gone, the goal changes. The objective is no longer to hide everything but to curate what matters.
Being selectively authentic, deliberately deciding which truths to expose, to whom, and when, becomes a new form of self-defense. It’s how humans can apply zero trust logic to social life.
When machines can infer everything, what does this mean for privacy? Maybe it means being intentional—knowing what’s public, what’s personal, and what’s just noise and then acting accordingly.
We can’t control every data leak, but we can control our digital posture. Zero trust started as a network principle; maybe it should become a human one.