Hey readers, how's your Friday going? Thanks to everyone who has emailed and engaged with our work over the last few weeks! In case you missed it, I recently wrote about why effective altruism is struggling with sexual misconduct. You can check that out here.
Don't be afraid to shoot us an email at futureperfect@vox.com to tell us your thoughts, what you've been enjoying, and what you'd like to see more of.
—Kelsey Piper, senior writer
Q&A: AI's "whiteness problem" |
Photo by Brittany Hosea-Small

A year ago, Alex Hanna, a senior ethical AI research scientist at Google, resigned from the technology giant, saying the company was a toxic environment for workers of color. In her "resignation letter," published on Medium, Hanna wrote that her appreciation for the people she worked with at Google existed "in spite" of the company's allegedly discriminatory culture. "Tech has a whiteness problem," Hanna wrote. "Google is not just a tech organization. Google is a white tech organization."

This "whiteness problem" is just one of the issues Hanna seeks to address as director of research at the Distributed AI Research Institute (DAIR), a nonprofit research institute founded by Timnit Gebru, another ex-Google employee and computer scientist. The institute seeks to include the communities that could be harmed by AI in its development and use.

"We're always finding ourselves responding to the negative things that are coming out of Silicon Valley," Hanna told Vox. "We want to really understand if there are other kinds of ways of using AI that can be more helpful to the communities that we are from and that we focus on."

I spoke with Hanna about recent AI developments and the work DAIR is doing to promote the creation of community-based AI. —Rachel DuRose, Future Perfect Fellow

This interview has been condensed and edited for clarity.

What are some of the biggest harms and risks happening with AI right now?

There are existing things that are harmful, persistent, and not necessarily going away without massive community resistance, like surveillance, predictive policing technologies, surveillance at borders [and] refugee camps. There are also the kinds of harms that are coming out of content moderation, psychological harms that are being done to content moderators, and to the people who really make AI run.
And then more recently, we're seeing all these things around generative AI. We see ChatGPT, but we've also talked a lot about other large language models, and also other types of generative AI like image generators, which have kind of fewer direct ties to things like predictive policing, but have been built upon them. We don't even really know the full range of harms that are going to come about through that.
"We also know that there are harms that come from the training and the inference of these technologies." |
That was something I actually wanted to touch on: ChatGPT. It's some people's first interaction with AI, and I wanted to get your thoughts. Do you have concerns about how it's being deployed?

There's a lot to worry about with how it's being deployed. We really haven't seen OpenAI and other organizations do their due diligence and understand the range of harms that are coming out of this. They've had certain things that they knew were going to be pretty obvious [harms] around things like race and gender. But even those weren't adequately filtered against.

We also know that there are harms that come from the training and the inference of these technologies. These things take huge amounts of carbon to train. So all those people that are hitting the ChatGPT API are generating a huge amount of carbon.

Flipping a bit to DAIR's work — I saw on the website that the mission is to include diverse perspectives and deliberate processes. I was hoping you could elaborate on what that looks like in practice.

For instance, Meron Estefanos is a refugee advocate on our staff who knows mostly about people who are fleeing from the Horn of Africa or Sudan. These people get enmeshed in this massive web of surveillance; many of them are kidnapped from refugee camps, and then they are sold to traffickers. And so she has done massive amounts of work [for us] literally freeing people who are held hostage in these camps, or in these particular places.

So we're trying to reduce the distance between who's a researcher and who is personally experiencing this. Those are examples of what we're trying to do at the institute, which makes it a bit different from how academics typically do this type of research, and also very different from how corporations do research.
10 years ago, we were turning nuclear bombs into nuclear energy. We can do it again.
Just a decade ago, one in 10 American lightbulbs was powered by dismantled Russian nuclear weapons. National security incentives for the US to maintain its nuclear stockpile consistently outweigh its incentives to disarm, but we could tip the scale toward disarmament by linking it to climate mitigation and energy security. We can turn our own nuclear bombs into energy and simultaneously address nuclear threat and climate change as twinned existential risks, argues contributor Irina Wang in her latest piece.
The East Palestine, Ohio, train wreck didn't have to be this bad
Angelo Merendino/Getty Images
In the two weeks since 38 train cars carrying hazardous chemicals, including vinyl chloride, derailed in East Palestine, Ohio, there remain frustratingly few answers about exactly why it happened or what the long-term environmental impact will be. The critical questions now are why this type of spill happened again and what can be done to prevent the next one. There are plenty of technologies and strategies known to improve rail safety, but rail operators say they're costly to implement. Another worry is whether residents face any long-term danger after the chemical clouds drift away, writes correspondent Umair Irfan.
I highly recommend this concise, on-the-nose Farhad Manjoo column on a recent wave of animal rights activists facing criminal prosecution for removing suffering animals from factory farms, and why what they do is important for the movement against factory farming. The next trial is in a few weeks and will be an important test case for activists trying to establish a legal "right to rescue" — look out for updates here. —Marina Bolotnikova

Every once in a while, you read a story you know will haunt you for a very long time. Wired's Lauren Smiley just wrote this gripping feature about one of the first high-profile deaths caused by self-driving cars. (Well, whether it was the car or the "operator" is up for debate.) It wrangles with company responsibility, human error and complicity, and the new terrain of legal hurdles as tech companies like Uber forge a new gray area, all in beautifully heartbreaking prose. —Izzie Ramirez

The US economy in 2023 is so goddamn weird. We have the worst inflation problem we've had since the early 1980s — but at the same time, we're enjoying full employment of a kind we haven't had since the late '90s, complete with super-low unemployment and fast wage hikes. A new paper by David Autor, Annie McGrew, and Arindrajit Dube lays out the second part of this story more compellingly than anything else I've read. They look at what's happened to wages since Covid-19 and find that wages for low earners without college degrees have grown very fast — so fast they've reversed a striking share of recent decades' surge in wage inequality. In fact, the wage benefits of going to college have shrunk substantially because non-college workers are doing so well. —Dylan Matthews

Generative AI is suddenly everywhere, and to understand where it is and where it's going, you need a guide. I can't recommend a better one than Azeem Azhar's weekly newsletter Exponential View. Azhar, a London-based tech entrepreneur and observer, treads the frontier of technology, from synthetic biology to the latest in climate tech. But he really shines in explaining the scientific and business forces behind generative AI, and he provides his readers a map to a fast-shifting future. —Bryan Walsh
Questions? Comments? Have a recommendation on who we should interview or feature next? Tell us what you think! We recently changed the format of this newsletter and would love to know your thoughts. Email us at futureperfect@vox.com. And if you want to recommend this newsletter to your friends or colleagues, tell them to sign up at vox.com/future-perfect-newsletter.