
Digital Security Increasingly Relies on AI. But That Tech Isn’t as Secure as We Think


COMMENTARY

Apr 23, 2024

Digital image of a fingerprint, photo by Olemedia/Getty Images


This commentary originally appeared in the San Francisco Chronicle on April 23, 2024.

Imagine trying to convince someone that you are, in fact, you. Do you offer to compare your face to your driver’s license photo? Or, tell them something only you would know, like the name of your first crush?

Handing over identifying information to get something of value is a trade most of us have made throughout our lives. Even before the internet, this trade might have included placing our inked index finger onto a notary’s notepad or reciting our mother’s maiden name to a government official or a bank teller.

But powerful technologies like artificial intelligence could make such trades too lopsided, forcing us to give up too much of who we are to get the things we need.

In the digital age, trade-offs—like displaying our faces and fingerprints—are all but required simply to function in society. Whether it’s unlocking our smartphone, paying for coffee, or boarding an airplane, these AI-powered trades grant us access to the things we want. But the technology charged with securing our information and protecting what we have given up in these trades—proof of our very selfhood—against theft, fraud, and other potential harms (PDF) doesn’t always work.


A September 2023 report (PDF) from the Center for Democracy and Technology, for example, found that 19 percent of students whose schools use AI software reported that they or someone they know had been inadvertently outed as LGBTQ+ by the technology, a 6-percentage-point increase over the previous school year. Similarly, in March 2023, OpenAI revealed that a bug in its technology allowed some users to see the titles of another active user's chat history, and in some cases, even the first message of a newly created conversation if both users were active at the same time.

In a world rapidly integrating AI into everything security-related, what if we reach a point where a chatbot interview is a required verification step and its underlying large language model infers something sensitive about you—like your sexual orientation or risk for depression—and then asks you to confirm this trait as proof of self? Or, what if government programs that use risk-prediction algorithms and facial recognition to safeguard travel try employing AI that forces travelers or migrants to disclose something deeply personal or risk being turned away from somewhere they want, or even need, to go?

These are not far-fetched future scenarios.

And just as worrisome as the technology's failure to secure our private information is its repeated inability to correctly identify people. One face-recognition tool, for example, recently and incorrectly matched the faces of several members of Congress to faces in a mugshot database. The technology has also exhibited problematic behavior: Uber's facial-recognition verification system drew multiple complaints and, in the United Kingdom, was deemed discriminatory after it repeatedly failed to recognize a Black delivery driver and even locked him out of the platform.

Even more disturbing, newer technologies like generative AI—which has attracted billions of dollars in investment—continue to mischaracterize us. AI image generators like Midjourney, when asked to depict people and places from around the world, reduce them to caricatures.

When people behave like this, we call it stereotyping or even discrimination.

But AI, of course, is not human. Its skewed understanding of who we are can be traced back to its reliance on the data we provide—our posts on social media or our conversations with a chatbot—all of which occur online, where we're not always who we appear to be. It's on developers and researchers, then, to ensure people's data remains private and to keep working to improve AI's accuracy, minimizing technical errors such as when AI makes up answers, known as hallucinations.

AI developers could also do a better job acknowledging that their products shape not just our attitudes and behaviors, but our very sense of self. Developers could partner with social scientists to marshal research about identity development, for example, and research why it’s important for youth and others to be able to define—and redefine—themselves.

The insights gleaned from such research might then help AI capture a fuller picture of who we are: not just the factual knowledge we have traditionally traded for security access, but our emotions and personalities, our culture and creativity, our capacity for cruelty and compassion. Future workers—AI and human—need better training to relate to and communicate with people.


Policymakers have a role to play, too: They can encourage these actions by updating existing frameworks for the responsible use of AI or by developing new guidance for integrating AI in digital security and identity-verification practices.

Establishing who we are in society is fundamental to being human. And digitally securing our identities is crucial to safeguard the selves we have built—and are continuously building. By putting AI in charge of deciding who counts, or what traits define a human, we risk becoming the people the machines say we are and not who we might want to be.


Douglas Yeung is a senior behavioral scientist at RAND and a Pardee RAND Graduate School faculty member.

