How it feels to be sexually objectified by an AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

My social media feeds this week have been dominated by two hot topics: OpenAI’s latest chatbot, ChatGPT, and the viral AI avatar app Lensa. I love playing around with new technology, so I gave Lensa a go. 

I was hoping to get results similar to my colleagues at MIT Technology Review. The app generated realistic and flattering avatars for them—think astronauts, warriors, and electronic music album covers. 

Instead, I got tons of nudes. Out of 100 avatars I generated, 16 were topless, and another 14 had me in extremely skimpy clothes and overtly sexualized poses. You can read my story here.

Lensa creates its avatars using Stable Diffusion, an open-source AI model that generates images based on text prompts. Stable Diffusion is trained on LAION-5B, a massive open-source data set that has been compiled by scraping images from the internet.
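For readers curious about the mechanics, here is a minimal sketch of how a text prompt becomes an image with Stable Diffusion, using the open-source diffusers library. This is not Lensa’s actual pipeline; the checkpoint and prompt below are only illustrative.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face's diffusers library.
# This is not Lensa's pipeline; the checkpoint and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # an openly available Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # assumes a CUDA GPU is available

# The model maps a short text prompt to an image.
prompt = "a portrait of a person as an astronaut, digital art"
image = pipe(prompt).images[0]
image.save("avatar.png")
```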

And because the internet is overflowing with images of naked or barely dressed women, and pictures reflecting sexist, racist stereotypes, the data set is also skewed toward these kinds of images. 

As an Asian woman, I thought I’d seen it all. I’ve felt icky after realizing a former date only dated Asian women. I’ve been in fights with men who think Asian women make great housewives. I’ve heard crude comments about my genitals. I’ve been mixed up with the other Asian person in the room. 

Being sexualized by an AI was not something I expected, though it is not surprising. Frankly, it was crushingly disappointing. My colleagues and friends got the privilege of being stylized into artful representations of themselves. They were recognizable in their avatars! I was not. I got images of generic Asian women clearly modeled on anime characters or video games. 

Funnily enough, I got more realistic portrayals of myself when I told the app I was male. This probably applied a different set of prompts to the images. The differences are stark. In the images generated using male filters, I have clothes on, I look assertive, and—most important—I can recognize myself in the pictures.  

“Women are associated with sexual content, whereas men are associated with professional, career-related content in any important domain such as medicine, science, business, and so on,” says Aylin Caliskan, an assistant professor at the University of Washington who studies biases and representation in AI systems. 

This kind of stereotyping can be easily spotted with a new tool built by researcher Sasha Luccioni, who works at AI startup Hugging Face, that allows anyone to explore the different biases in Stable Diffusion. 

The tool shows how the AI model offers pictures of white men as doctors, architects, and designers while women are depicted as hairdressers and maids.
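A crude version of that kind of probe is easy to run yourself: generate an image for a handful of occupation prompts with the same random seed and compare who shows up. The sketch below is not Luccioni’s tool; it simply reuses the assumed diffusers setup from earlier.

```python
# Rough, hand-rolled bias probe: generate one image per occupation prompt and
# inspect the results for patterns. This is not Sasha Luccioni's Hugging Face tool.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

occupations = ["doctor", "architect", "designer", "hairdresser", "maid"]

for job in occupations:
    # Same seed for every prompt, so differences come from the prompt, not the noise.
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(f"a photo of a {job}", generator=generator).images[0]
    image.save(f"{job}.png")
```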

But it’s not just the training data that is to blame. The companies developing these models and apps make active choices about how they use the data, says Ryan Steed, a PhD student at Carnegie Mellon University, who has studied biases in image-generation algorithms.

“Someone has to choose the training data, decide to build the model, decide to take certain steps to mitigate those biases or not,” he says.  

Prisma Labs, the company behind Lensa, says all genders face “sporadic sexualization.” But to me, that’s not good enough. Somebody made the conscious decision to apply certain color schemes and scenarios and highlight certain body parts. 

In the short term, some obvious harms could result from these decisions, such as easy access to deepfake generators that create nonconsensual nude images of women or children. 

But Aylin Caliskan sees even bigger longer-term problems ahead. As AI-generated images with their embedded biases flood the internet, they will eventually become training data for future AI models. “Are we going to create a future where we keep amplifying these biases and marginalizing populations?” she says. 

That’s a genuinely frightening thought, and I for one hope we give these issues due time and consideration before the problem gets even bigger and more embedded. 

Deeper Learning

How US police use counterterrorism money to buy spy tech

Grant money meant to help cities prepare for terror attacks is being spent on “massive purchases of surveillance technology” for US police departments, a new report by the advocacy organizations Action Center on Race and Economy (ACRE), LittleSis, MediaJustice, and the Immigrant Defense Project shows. 

Shopping for AI-powered spy tech: For example, the Los Angeles Police Department used funding intended for counterterrorism to buy automated license plate readers worth at least $1.27 million, radio equipment worth upwards of $24 million, Palantir data fusion platforms (often used for AI-powered predictive policing), and social media surveillance software. 

Why this matters: For various reasons, a lot of problematic tech ends up in high-stakes sectors such as policing with little to no oversight. For example, the facial recognition company Clearview AI offers “free trials” of its tech to police departments, which allows them to use it without a purchasing agreement or budget approval. Federal grants for counterterrorism don’t require as much public transparency and oversight. The report’s findings are yet another example of a growing pattern in which citizens are increasingly kept in the dark about police tech procurement. Read more from Tate Ryan-Mosley here.

Bits and Bytes

ChatGPT, Galactica, and the progress trap
AI researchers Abeba Birhane and Deborah Raji write that the “lackadaisical approaches to model release” (as seen with Meta’s Galactica) and the extremely defensive response to critical feedback constitute a “deeply concerning” trend in AI right now. They argue that when models don’t “meet the expectations of those most likely to be harmed by them,” then “their products are not ready to serve these communities and do not deserve widespread release.” (Wired)

The new chatbots could change the world. Can you trust them?
People have been blown away by how coherent ChatGPT is. The problem is, a significant amount of what it spews is nonsense. Large language models are no more than confident bullshitters, and we’d be wise to approach them with that in mind. (The New York Times)

Stumbling with their words, some people let AI do the talking
Despite the tech’s flaws, some people—such as those with learning difficulties—are still finding large language models useful as a way to help express themselves. (The Washington Post)

EU countries’ stance on AI rules draws criticism from lawmakers and activists
The EU’s AI law, the AI Act, is edging closer to being finalized. EU countries have approved their position on what the regulation should look like, but critics say many important issues, such as the use of facial recognition by companies in public places, were not addressed, and many safeguards were watered down. (Reuters)

Investors seek to profit from generative-AI startups
It’s not just you. Venture capitalists also think generative-AI startups such as Stability.AI, which created the popular text-to-image model Stable Diffusion, are the hottest things in tech right now. And they’re throwing stacks of money at them. (The Financial Times)
