Hey readers,
Happy Friday! How are you? In case you missed it, I recently wrote about how OpenAI, the company responsible for ChatGPT, wants to accelerate AI development, and what that means for us. Have any burning questions about AI? Don't be afraid to shoot us an email at futureperfect@vox.com to tell us your thoughts, what you've been enjoying, and what you'd like to see more of. —Kelsey Piper, senior writer
Q&A: Would you trust news written by an AI?
If you think about artificial intelligence in black-and-white terms, then you probably hold one of two beliefs: AI is going to destroy the world, or it's the magic key to solving all of life's problems. Chiara Longoni, a social scientist at Boston University who researches psychological responses to automation, technology, innovation, and artificial intelligence, sees both the potential and the risk, which is why she's a self-described "AI optimist."

"If I had to choose a camp, I would choose the camp of viewing these as incredible tools and somewhat inevitable," says Longoni. "I don't think that we can hope it's going to go away because it's not. I do think that they have great potential."

One of Longoni's recent research projects was on the public perception of AI-generated news. Longoni and her co-researchers wanted to determine how accurate people believe AI-generated news to be. To do this, they gathered news articles published by various outlets and sorted them into "true" and "fake" news based on fact-checks from Snopes. The team presented these articles to study participants, telling them each article was either written by a human or generated by an AI, and then asked whether they believed the article was true. (Spoiler alert: People don't trust AI.)

I spoke with Longoni about her work, how public perception of and trust in AI may evolve as people interact with the technology more regularly, and how trust, or the lack of it, can influence the proliferation and regulation of AI. —Rachel DuRose, Future Perfect fellow
This interview has been edited and condensed for clarity.

What did you find when you researched people's trust in AI-generated news?

We were focusing on perceptions of accuracy — so to what extent do people perceive news generated by AI as accurate, compared to news written by a human reporter? What we found was that irrespective of whether the news was true or fake, people were systematically discounting the news when it was written by AI.

We replicated this a bunch of times, and we were able to measure other variables that could have potentially correlated with some of these main findings, such as the age of the news, or characteristics of the sample population like their religiosity, socioeconomic status, or political affiliation. None of these other variables actually mattered. This seems to point to people viewing AI negatively, at least when it comes to the perception of how accurate it is.

This is a finding that was published a year or so ago at this point, so there's the question of: Would things change now that these tools that rely on various generative AI technologies are available to people?
"I do think that these tools have tremendous potential to benefit societies, but also they really carry so much risk." |
How do you think people's trust in and perception of AI change? Do they change frequently?

At the very, very beginning of research on automation, when people were starting to look at computers, the original phenomenon was something called "automation bias," which is actually an overall favorable view of automation. Then algorithms came along, and all the findings showed that people view algorithms as rigid, inflexible ways to make predictions or decisions, and that they tend to not account for outliers or particular circumstances. That goes under the umbrella term of "algorithm aversion."

We might be at a point where there could be some polarization of people's perceptions, with some looking at these AI systems in a very dystopian way … and so maybe being overly fearful of these tools. The opposite effect might also be happening, where these tools are seen in a very utopian kind of way, capable of fixing all of the problems that we have.

I do think that these tools have tremendous potential to benefit societies, but also they really carry so much risk. And I don't think that the average person is well calibrated on either one — in really understanding the benefits — nor do I think they necessarily understand the risks. That's very understandable, because development has been so quick, and the way in which [AI] has been deployed and made accessible to people has been very, very sudden.

That's interesting, and it makes me wonder: If your research were done right now, when more people have interacted with these AI chatbots online, would the results change?

It is possible that this ability to have a direct experience with AI tools would actually improve the perception that we have overall. But it's really, really hard not to humanize the AI. I think this anthropomorphization of these tools could worsen people's perception of AI because of fear of replacement.

I wonder to what extent these shifts in the way we perceive alternatives to human intelligence — from automation bias, then to algorithm aversion, and now to miracle savior or terrible superintelligence that's gonna replace us all — are a function of the way in which these tools are often presented to people. More and more often these tools are — either because of their potential and capabilities or the way in which they are marketed — presented as replacing humans. There's now this fear of being replaced in our jobs or fear of being replaced in our own societies. The things that used to be quintessentially human, now AI can just do them and do them better.

So it is possible that perceptions will change a lot. And it is also possible they will change quite quickly. On a societal level, we are going to have to be really, really careful about how we decide to deploy and regulate these tools.
Joe Biden is pretty good at being president. He should run again |
So why is "Joe Biden should run for president" even worth saying? For one thing, because less than two years before 2024's Election Day, he's not running yet. Politico reported last week that Biden has yet to make a final decision on whether to seek reelection. Taking the good with the bad, Biden looks like a fairly successful president, overseeing an unusually good economy without US troops in danger. That's not normally someone you want stepping aside, writes senior correspondent Dylan Matthews.
"We were talking in Slack about a recent Politico piece speculating about him not running, and I half-jokingly said, 'Maybe we should run an extremely cold take that the incumbent president is pretty okay and should run again,'" Matthews said. "That joke turned into an assignment, and a nice opportunity to step back and take stock of Biden two years in." More on this topic from Vox: |
The FBI and Energy Department think Covid-19 came from a lab. Now what? |
The discussions around the origins of Covid-19 remain acrimonious among politicians and scientists. Even within the US government, different agencies can't agree. Nonetheless, figuring out where the virus came from is still a high priority for President Joe Biden. Even without an answer, though, there are things we can and should begin doing now to mitigate the risks of future pandemics, argues correspondent Umair Irfan.

"Where Covid-19 came from is an important scientific and political question, but we're unlikely to get a definitive answer," Irfan said. "The hints we get aren't going to change anyone's mind. I used the Department of Energy's findings to get back to the point that we should treat the lab leak and 'natural' spillover as true because the odds of both are rising."
I'm trying to make my blurbs shorter, so I'll just say I finished Joe Pera Talks With You and it's made me happier than any media in a long time. Start here. The rest is on HBO Max. —Dylan Matthews

This Aeon article does a good job explaining why it'll be damn hard to figure out if/when AI becomes sentient: "We need better tests for AI sentience, tests that are not wrecked by the gaming problem." (The "gaming problem" is that machines can use human-generated training data to mimic human behaviors that trick us into thinking they're sentient.) The authors argue that we need gaming-proof markers of sentience and that studying animal minds is the best way to suss those out. —Sigal Samuel

I've always been a little creeped out by bugs. But after writing more about them at Vox (and reading and listening), I've slowly developed some awe and appreciation — and, yes, some sympathy — for the creepy crawlers. So I was excited to see the launch of the Insect Institute, a new organization that will work to influence the quickly growing industry that seeks to raise them in massive numbers for food and livestock feed. —Kenny Torrella

I was trying to write a blurb on an unrelated subject, but I kept getting distracted by Tammy Baldwin (D-WI), one of my state's senators, having a meltdown over plant-based milk for a week straight. Here's the best write-up I've seen on why the FDA's new draft guidelines for labeling plant milks have managed to anger both dairy boosters and the non-dairy milk industry. —Marina Bolotnikova

This isn't a new show, but my wife and I have been watching Gomorrah, the 2010s Italian crime drama about gangs in Naples, on HBO Max. We love a good foreign drama, especially if it has multiple existing seasons that can fill our evenings for weeks. (Look, we're boring.) But Gomorrah, which is based on a nonfiction book by the journalist Roberto Saviano, is more than just a time filler. It demonstrates what happens when the state loses its hold and crime fills the vacuum. Plus, you can work on your angriest Italian! —Bryan Walsh
Questions? Comments? Have a recommendation on who we should interview or feature next? Tell us what you think! We recently changed the format of this newsletter and would love to know your thoughts. Email us at futureperfect@vox.com. And if you want to recommend this newsletter to your friends or colleagues, tell them to sign up at vox.com/future-perfect-newsletter.
|
|
|