Most tech founders are not household names. Those who are are usually either incredibly successful, think Jeff Bezos or Mark Zuckerberg, or infamous, like Elizabeth Holmes of Theranos. Well, over the weekend, another name you may not know may have found himself in that latter category.
Fredricka Whitfield
00:00:21
The founder of one of the world's most widely used messaging apps is now in custody in France. Police arrested Pavel Durov Saturday at an airport outside Paris.
Durov is the founder of the encrypted messaging app Telegram, which is known for its unfiltered content.
Fredricka Whitfield
00:00:39
Investigators accused Telegram of being used for money laundering, drug trafficking and sharing explicit content of children.
And experts say, household name or not, this is a huge moment for the entire internet. My guest is CNN's Clare Duffy. She covers tech. We're going to talk about what the arrest means for the future of content moderation online, and why a lack of it is already fueling election misinformation in the U.S. From CNN, this is One Thing. I'm David Rind.
Okay, Clare. So I want to talk Telegram. But I realized that some of our listeners may not have ever heard of it, let alone used it. So, like, what exactly are we talking about here?
Yeah. So Telegram is a messaging service that you can use in the same way that you text or use WhatsApp. It was launched in 2013, and it has grown to be one of the most important and most used messaging apps in the world, with 950 million users. And it's really used for everything from regular day-to-day conversations, you can send photos and documents, but they also have these enormous group channels, with as many as 200,000 users on each, that act more like broadcast channels that anybody can participate in. And people can also forward messages from those big channels into smaller private conversations, which is where people start to have some concerns about disinformation. Because in those big channels, things can move so quickly, and then people can move conversations that are happening there into their private chats, where there's less oversight of what's happening.
This is the man at the center of the global encryption debate. Russian exile Pavel Durov says he prefers to remain in the shadows. A self-described introvert, normally he doesn't give television interviews. Durov says he wants to explain the company he co-founded, Telegram.
So it was launched in 2013 by this guy, Pavel Durov, who was born in Russia. He is sometimes referred to as Russia's Mark Zuckerberg.
When I was living in Russia a few years ago, all of these activities were used as a pretext to monitor the communications of Russian citizens and then, in many cases, used to suppress dissidents.
And he really created Telegram with privacy in mind, as a place that would be free from oversight by the government.
I had a group of armed policemen trying to get into my home. Then I started to think about ways to, like, defend myself, get in touch with my brother. And I realized that there are very few options for us to communicate securely.
It is encrypted, which means that even the company has limited oversight of what kinds of conversations are happening there, and it means it's become a really important tool in countries like Russia and Iran, where free speech is restricted. It's also become an important tool for citizens in Ukraine.
That's where I've heard it being used the most.
Exactly. People warning each other about air raids. But that same privacy also means that it has attracted less savory figures: drug traffickers, money launderers. It was a tool used by the ISIS militants who attacked Paris in 2015. Election deniers caught on to this platform in 2021 in the U.S. So, you know, it has this sort of good and bad. On one hand, it's got privacy for regular folks who want it, but it's also, you know, a tool that's really useful for people who are trying to hide their conversations.
You cannot make messaging technology secure for everybody except for terrorists. It's either secure or not secure.
You know, it's interesting, though, because the platform does have some control. It removed channels associated with Hamas after October 7th. More recently, it removed channels that were being used to organize those violent UK riots we saw just a few weeks ago. So it does have some control, but the app has really prioritized user privacy over pretty much everything else. I mean, Telegram specifically says in the frequently asked questions on its site, there's a question that reads: there's illegal content on Telegram, how do I take it down? And the response: all Telegram chats and group chats are private amongst their participants, and we do not process any requests related to them. So the reading between the lines there is that if you see child pornography on Telegram in a private conversation, you can't report it.
They're not going to do anything about it. They do say that these more public channels that are publicly available, you know, they will process requests related to those. But in those private conversations, where you might have bad actors operating, they say there's nothing they can or will do about it.
Well, the Kremlin is denying President Vladimir Putin met with the founder of Telegram during a state visit to Azerbaijan last week.
So scrutiny of this app has been growing as the app grows, as it gains more users, and also as we see it start to become involved in some of these major conflicts and world events. People keep hearing Telegram popping up over and over. And that scrutiny culminated with the arrest of Pavel Durov over the weekend outside of Paris.
Fredricka Whitfield
00:06:34
If tried and convicted, he could face 20 years in prison.
Right. And so why was he detained? Do we know?
What we first heard was that he was arrested on a warrant related to a lack of moderation. And then we got a bit more information. On Monday, a French prosecutor said that this is part of a wider investigation, dating back to last month, into alleged crimes conducted by an unnamed person on the app. Durov is accused of being complicit in aiding fraudsters, money launderers, drug traffickers and people spreading child sexual abuse material on the app. And he's also accused of failing to communicate information and documents related to that investigation. So Telegram issued a statement defending Durov. They say they abide by EU law and that Durov has nothing to hide. He travels frequently in Europe. He's a French citizen. They also said that it is, and I'm quoting, using their words, absurd that a tech company or the leader of a tech company could be held accountable for what gets published on their platform.
And this seems like the big crux of the issue here. We've kind of talked about this before on the show: are these social media spaces just platforms, open town squares for people to post whatever they want? Or do these platforms, and by extension, I guess, the people that run them, have a responsibility to actually rein in that conversation?
Exactly. This is the debate that goes well beyond Telegram, that really the entire social media ecosystem is grappling with: how do you balance people's need for, and right to, free speech, and the speech opportunities that are afforded when people have privacy, especially in places where free speech is restricted? You know, dissidents having the opportunity to speak freely and anonymously is in many ways very important. But how do you balance that free speech right with safety, and with ensuring, for example, that your platform is not a major vector for spreading child sexual abuse material, which is an allegation that has dogged Telegram again and again? Europe in particular has really been cracking down on this and is especially concerned about election misinformation spreading online, and has imposed, with the Digital Services Act, new requirements for these tech companies to really limit the spread of harmful material on their platforms. But this detention really does raise more questions about who can be held responsible for that. It's pretty striking to see the leader of a tech company... We've seen tech companies be fined. We've seen them be forced to change the way that they're operating. But to see the CEO be arrested.
Or be plucked out of his private plane at an airport.
Is really striking, and potentially gets us to this new phase in the conversation. Already, in just a few days, you are seeing other tech leaders speaking out, Elon Musk in particular, posting "free Pavel Durov" on X. And I think this goes beyond what's happening with Pavel Durov in France. There really is a question more broadly, you know, across Europe, in terms of how they're cracking down on harmful content and forcing these companies to do more. And we're seeing kind of a splintering globally in terms of what the different regions think is harmful content and how they expect these companies to behave, which does make it challenging for the companies to know how to operate. In the U.S., we've seen, you know, legislators and regulators really lag behind Europe in terms of their responsiveness to harmful content online. And we're already starting to see the ramifications of that play out ahead of this fall's presidential election.
Okay. So let's talk about the US election then, because you said we've already started to see ramifications of the lack of moderation around election content online. What does that look like in real life?
I mean, it looks like a lot of different things. We're seeing kind of the stuff that you would expect: false claims about the different candidates. We're seeing, you know, conspiracy theories start to spread. Certainly this sort of divisiveness that we've seen over the past few years. But all of this has really been supercharged by artificial intelligence. It is now so much easier to create really convincing fake text and fake images. We saw former President Donald Trump posting on his platform, Truth Social, AI-generated images of Taylor Swift fans.
Fredricka Whitfield
00:11:27
Calling themselves Swifties for Trump, which implied an endorsement from the singer, despite the fact that there was no such endorsement.
Most of those photos were AI generated and not real. But there was no label. There was no way for people to know that.
There also is a photoshopped image of the actor Ryan Reynolds wearing a T-shirt that supposedly had a pro-Kamala Harris message.
We've also seen Elon Musk's chatbot Grok, which is part of his company xAI, spreading fake information about Harris's eligibility as a candidate for the 2024 election. It has started generating fake images of Trump and Harris and Biden. We tried it out, and it's actually incredible and disturbing how realistic these images are.
This is a tool built into X. This isn't some random person creating it. It's the product itself.
Exactly. The people who can access it at this point are subscribers to X Premium. There were actually concerns raised by election officials asking Elon Musk, asking the company, to do more to make sure that people have correct information when it comes to something as important as when to vote or who they're eligible to vote for. And so the company has started to make some efforts. They're directing people to vote.org, to these third-party resources. But even when we were playing with the Grok image generator, you could say, give me a fake image of one of the candidates in a hospital bed, and it would produce a really lifelike image. And then it would say, for more information, go to vote.org. But that's why the...
Exactly. The image is still there. And if I copy and share that image to another platform, the fact that it showed me third-party resources isn't going to do the people who see that image very much good. And then, of course, you have Musk himself spreading conspiracy theories about Biden's immigration policies, raising questions about the security of voting systems. And so, clearly, he does not care about ensuring that misinformation is not spreading on his platform.
Is this just an X problem, or do we see this kind of stuff on other platforms, too?
It's not just an X problem. I mean, again, I think as long as we have these AI generators that can so easily pump out this fake content, and they're not doing enough to restrict people from creating it, we're going to see it spreading on social media. And it is tricky for these platforms. It requires a lot of people, it requires a lot of technology, to identify these AI fake images. At this point, we've also seen a lot of the mainstream social media companies cut back on their trust and safety teams. Over the last few years, we saw this big wave of tech layoffs, this "year of efficiency" across the tech industry. And what that meant was a lot of these companies pulling back on their trust and safety teams. You know, a lot of industry watchers refer to this as the great backslide. Of course, the companies themselves say, you know, we've become more efficient, we move people around, we're still invested in this. But I do think it's possible that we'll see some of the ripple effects of them not having as many people working on these issues.
Yeah. Like, didn't we see what happened in 2016? Like, is this just a simple business decision? They just don't want to pay these people.
I think it is, you know, a mix, but partly a business decision. These companies will say that they have technology that can do this, that they have AI that can catch this fake information. But there are a lot of ways that bad actors can work around technology. They learn how these systems work, and then they find the workarounds. A real human might be able to look at something and say, no, this is fake, whereas the technology still needs to catch up to being able to identify that, right?
Smart, but not quite that smart.
Exactly. And there's this effort, you know, across a number of industry players, to implement essentially metadata when something is AI generated, so that other third parties could identify it. So what that would look like is: you have an AI image generator, like DALL-E from OpenAI, that will include metadata in any image that's created that would let Facebook know that it's AI generated, and then Facebook is going to start labeling those things so people know.
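To make that metadata idea concrete, here is a minimal editorial sketch, not from the interview, of how a provenance tag can be embedded in, and read back out of, a PNG file's text chunks using only Python's standard library. The `ai_generated` key is a made-up example for illustration; the real industry effort alluded to here, such as the C2PA "Content Credentials" standard, goes much further, attaching cryptographically signed manifests rather than a plain, easily stripped tag like this.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, 4-byte type, data,
    # then a CRC computed over the type and data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def tiny_png_with_tag(key: str, value: str) -> bytes:
    # Build a valid 1x1 grayscale PNG carrying a tEXt chunk with our tag.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # width, height, depth...
    idat = zlib.compress(b"\x00\x00")  # one filter byte + one pixel
    text = key.encode("latin-1") + b"\x00" + value.encode("latin-1")
    return (PNG_SIG + chunk(b"IHDR", ihdr) + chunk(b"tEXt", text)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

def read_tags(png: bytes) -> dict:
    # Walk the chunk list and collect every tEXt key/value pair,
    # the way a platform could check an upload for a provenance tag.
    tags, pos = {}, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            tags[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return tags

png = tiny_png_with_tag("ai_generated", "true")
print(read_tags(png))  # {'ai_generated': 'true'}
```

The weakness Clare goes on to describe is visible here too: anything that rewrites the file, a screenshot, a re-encode, a malicious script, drops or strips this kind of tag, which is why labeling ultimately depends on platforms checking and preserving it end to end.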
Well, gotcha. That was kind of my last question. As we see more of this stuff kind of spread around, what can the average person do to spot these images? You know, is it the metadata or are there other signs that people should be looking out for?
Yeah. I mean, I think a lot of it is going to be up to these companies to make sure that they're labeling, that they're implementing these metadata industry standards. But for regular folks who are scrolling social media, I think the best piece of advice is just to slow down and look at this stuff with a critical eye. A lot of times these AI images especially will have little weird telltale signs. You know, somebody will have a sixth finger, or somebody's facial features will be a little bit blurry. But these things are improving rapidly. So I think the best advice is just to slow down and think a minute, especially before you hit share on something. And hopefully, sooner rather than later, we'll start to have more industry players on board with labeling these things, which will make that easier, because there really is only so much a regular person can do when you're looking at these things. It really is up to the companies to do more, to be transparent, you know, about this AI-generated content.
It's good advice, Clare.
One thing is a production of CNN audio. This episode was produced by Paola Ortiz and me, David Rind. Our senior producers are Felicia Patinkin and Faiz Jamil. Matt Dempsey is our production manager. Dan Dzula is our technical director. And Steve Lickteig is the executive producer of CNN Audio. We get support from Haley Thomas, Alex Manasseri, Robert Mathers, John Dianora, Leni Steinhardt, Jamus Andrest, Nichole Pesaru, and Lisa Namerow. Special thanks to Wendy Brundidge and Katie Hinman. We'll be back on Sunday. I'll talk to you then.