In a press release in February 2026, Discord announced that it would be introducing “teen safety features” to create a “safer and more inclusive experience for users over the age of 13.” In practice, this means that users will be required to verify their age to access certain content, either by uploading a piece of ID or a picture that is then scanned using “facial age estimation” technology.
Discord is one of many apps beginning to require age checks to use their platforms in different parts of the world, especially in the UK and Australia. It is also not alone in receiving backlash for its decision: a common complaint among users is that age-verification tools risk violating their privacy, since tech companies do not have the best track record of keeping personal data safe from bad actors. Indeed, Discord itself delayed the implementation of its age-checking tools in response to concerns over one of its vendors being hacked.
It’s perfectly reasonable to be wary of how tech companies handle personal data. But when it comes to needing to prove one’s identity online, privacy concerns can also represent something more than just worries about hacks. To see why, we should ask: what do we mean when we say we’re concerned about the privacy of our information?
We should have realistic expectations about the privacy of our information online. People ought to work under the assumption that any information they post publicly will be available to everyone and in perpetuity, and that companies gather information about them whenever they interact with anything online. Discord (and other apps) don’t typically need to count the number of wrinkles on your face to determine your age: your online habits paint a picture, enough to estimate whether you are or aren’t a teenager.
While users are often content to trade access to some of their information for free services online, few would be willing to make their lives an open book for tech companies. We value control over the information communicated to others about us. When we worry about companies violating our privacy we worry about losing this kind of control: even if we have a sense of how a company might use our information – e.g., to try to sell us things or present us with content that will keep us on the platform – we might worry that information could be used in ways that we don’t endorse.
Discord’s latest policies attempt to address these concerns by promising that not all users will need to verify their age, that facial scans are not uploaded to Discord and “IDs are used to get your age only and then deleted,” and that “your identity is never associated with your account.” If your personal information is never actually received by Discord (or if it is, it is deleted immediately), then our worries about losing control seem to have less pull.
But even if we are guaranteed no data breaches, being asked to provide personal information can still feel like a violation of privacy. This is because privacy isn’t only about keeping secrets; it also involves choosing which information we reveal and how it is interpreted. Philosopher Daniel Susser, for instance, argues that one of the reasons we value online privacy is because the way that we conceal and reveal personal information helps to “shape the way others perceive and understand who we are,” what he calls “social self-authorship.”
Say, for example, that in my professional life I like to maintain an air of seriousness and competence. At work, I value my privacy insofar as I am able to both conceal and reveal information about myself, and do so to create the persona of someone who gets down to business. In my personal life, however, I might be quite different: around my friends, I take myself much less seriously and am forthcoming about my beliefs and feelings. I still choose to conceal some information and reveal other information, but it is different from what I reveal and conceal at work.
This likely sounds familiar: we take on different personas depending on the context we’re in and the people we’re around. One way your privacy can be violated is if information about you from one sphere of your life is made known to those in another in a way that threatens your ability to determine how others see you. If, for example, I like singing the works of Celine Dion at karaoke in my personal life but do not want my work persona to be that of an appreciator of a French-Canadian chanteuse, then that information becoming known to my colleagues can constitute a violation of my privacy insofar as it undermines my ability to be the author of my work-self.
Why is the notion of privacy as social self-authorship important when it comes to providing information about ourselves to tech companies? It reminds us that concerns about the privacy of information are not simply about keeping things secret, but about choosing how we present ourselves – which information to present to whom and when. Even if a tech company ensures that our information is kept secret, the act of sharing that information can constitute crossing a boundary that we don’t want to cross. For instance, if part of the appeal of using certain apps or websites is the ability to do so anonymously, or to explore niche interests, or to express a part of your personality that you couldn’t express in other areas of your life, then attaching your identity to that activity can feel like an intrusion: it forces you to carry a self you have authored in one sphere of your life into another.
Consider the difference between privately believing in a cause and signing your name to a petition in its favor: the act of attaching your identity to something incorporates that thing into the person you are. Likewise, being required to provide information about your identity to use a product can feel like a violation of privacy insofar as it does not respect your ability to separate your online persona from the ones you author in other spheres of your life.
One reason we might not want to give tech companies our information is that we do not trust them and want to take a stand against overreach. But once verification is mandatory, continuing to use their products brands us as people who capitulate to those companies’ demands: using Discord says that we are willing to provide information about our identity and are at least somewhat trusting of the company behind it. Whether this is something we want to be part of our identities is something users must now grapple with.