Women in AI: Anika Collier Navaroli is working to shift the power imbalance

Image Credits: Anika Collier Navaroli / Bryce Durbin / TechCrunch

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution.

Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.

She is known for her research and advocacy work within technology. Previously, she worked as a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society. Before this, she led Trust & Safety at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, where she spoke about the ignored warnings of impending violence on social media that prefaced what would become the January 6 Capitol attack.

Briefly, how did you get your start in AI? What attracted you to the field? 

About 20 years ago, I was working as a copy clerk in the newsroom of my hometown paper during the summer it went digital. Back then, I was an undergrad studying journalism. Social media sites like Facebook were sweeping over my campus, and I became obsessed with trying to understand how laws built on the printing press would evolve with emerging technologies. That curiosity led me through law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements play out. I put it all together and wrote my master’s thesis about how new technology was transforming the way information flowed and how society exercised freedom of expression.

I worked at a couple of law firms after graduation and then found my way to the Data & Society Research Institute, leading the new think tank’s research on what was then called “big data,” civil rights, and fairness. My work there looked at how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and creating unintended consequences that impacted marginalized communities. I then went on to work at Color of Change, where I led the first civil rights audit of a tech company, developed the organization’s playbook for tech accountability campaigns, and advocated for tech policy changes to governments and regulators. From there, I became a senior policy official inside Trust & Safety teams at Twitter and Twitch.

What work are you most proud of in the AI field?

I am most proud of my work inside of technology companies using policy to practically shift the balance of power and correct bias within culture and knowledge-producing algorithmic systems. At Twitter, I ran a couple of campaigns to verify individuals who, shockingly, had previously been excluded from the exclusive verification process, including Black women, people of color, and queer folks. This also included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. Back then, verification meant that your name and content became a part of Twitter’s core algorithm because tweets from verified accounts were injected into recommendations, search results, and home timelines, and contributed toward the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some really critical moments.

I’m also very proud of the research I conducted at Stanford that came together as Black in Moderation. When I was working inside of tech companies, I also noticed that no one was really writing or talking about the experiences that I was having every day as a Black person working in Trust & Safety. So when I left the industry and went back into academia, I decided to speak with Black tech workers and bring to light their stories. The research ended up being the first of its kind and has spurred so many new and important conversations about the experiences of tech employees with marginalized identities. 

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

As a Black queer woman, navigating male-dominated spaces and spaces where I am othered has been a part of my entire life journey. Within tech and AI, I think the most challenging aspect has been what I call in my research “compelled identity labor.” I coined the term to describe frequent situations where employees with marginalized identities are treated as the voices and/or representatives of entire communities who share their identities. 

Because of the high stakes that come with developing new technology like AI, that labor can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about what issues I was willing to engage with and when. 

What are some of the most pressing issues facing AI as it evolves?

According to investigative reporting, current generative AI models have gobbled up all the data on the internet and will soon run out of available data to devour. So the largest AI companies in the world are turning to synthetic data, or information generated by AI itself, rather than humans, to continue to train their systems. 

The idea took me down a rabbit hole. So, I recently wrote an op-ed arguing that this use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So training new systems with synthetic data would mean constantly feeding biased and inaccurate outputs back into the system as new training data. I described this as potentially devolving into a feedback loop to hell.

Since I wrote the piece, Mark Zuckerberg has lauded Meta’s updated Llama 3 chatbot as partially powered by synthetic data and the “most intelligent” generative AI product on the market.

What are some issues AI users should be aware of?

AI is such an omnipresent part of our lives today, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for experiments with this new, untested technology. But AI users shouldn’t feel powerless.

I’ve been arguing that technology advocates should come together and organize AI users to call for a People Pause on AI. I think that the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulation, AI doesn’t have to become an existential threat to our futures. 

What is the best way to responsibly build AI?

My experience working inside of tech companies showed me how much it matters who is in the room writing policies, presenting arguments, and making decisions. My pathway also showed me that I developed the skills I needed to succeed within the technology industry by starting in journalism school. I’m now back working at Columbia Journalism School and I am interested in training up the next generation of people who will do the work of technology accountability and responsibly developing AI both inside of tech companies and as external watchdogs. 

I think [journalism] school gives people such unique training in interrogating information, seeking truth, considering multiple viewpoints, creating logical arguments, and distilling facts and reality from opinion and misinformation. I believe that’s a solid foundation for the people who will be responsible for writing the rules for what the next iterations of AI can and cannot do. And I’m looking forward to creating a more paved pathway for those who come next. 

I also believe that in addition to skilled Trust & Safety workers, the AI industry needs external regulation. In the U.S., I argue that this should come in the form of a new agency to regulate American technology companies with the power to establish and enforce baseline safety and privacy standards. I’d also like to continue to work to connect current and future regulators with former tech workers who can help those in power ask the right questions and create new nuanced and practical solutions. 
