Mark Zuckerberg and Jack Dorsey face Senate grilling over moderation practices


What we're covering here

What’s happening: Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey are testifying before the Senate Judiciary Committee.

At issue: The social platforms’ content moderation policies. Republican lawmakers specifically want to know why Facebook and Twitter limited the circulation of a New York Post story about Hunter Biden.

Why it matters: Conservatives allege that social networks are censoring their right to free speech. Critics say the platforms must do more to limit the spread of false information and hate speech online.


Trump's social media accounts will lose special treatment once he leaves office

President Donald Trump’s social media accounts could face even greater enforcement when he leaves office, Dorsey and Zuckerberg both said.

Facebook and Twitter have policies that permit elected officials and world leaders, respectively, to make claims on their platforms that would otherwise violate the companies’ policies. But once Trump leaves office, he would no longer be eligible for that special treatment.

“If an account is not a world leader anymore, that particular policy goes away,” Dorsey said.

“By and large, the vast majority of our policies have no newsworthiness or political exception,” Zuckerberg said. “If the president is spreading hate speech or promoting violence … those will receive the same treatment as anyone else saying those things, and that will continue to be the case.”

Sen. Ernst challenges Zuckerberg on encryption plans

Encryption came up during Tuesday’s hearing on content moderation — highlighting the wide range of topics lawmakers have covered over the past four hours.

Sen. Joni Ernst asked Mark Zuckerberg how the company’s plans to add end-to-end encryption to its Messenger platform would impact its ability to crack down on illegal content, such as images of child sexual abuse. Such encryption would prevent both the company and law enforcement from accessing private messages between users.

“The reason why we’re moving to encryption is because people want greater privacy and security in their messaging systems,” Zuckerberg said.

The Facebook CEO acknowledged that the company would need to “find and develop some new tactics” to identify problematic content despite encryption, such as looking at patterns of activity or relying on individual users to flag such content for review.

“We’ve grown increasingly sophisticated at that,” he added. “Overall I would say that this is something we are very focused on, and I agree with your concern.”

Zuckerberg: Remote work could promote ideological diversity at Facebook

Mark Zuckerberg suggested that Facebook’s expanding remote work policy in light of the pandemic could help address perceptions of a lack of ideological diversity among Facebook workers. The Facebook CEO was responding to a question from Sen. Joni Ernst about the political background of Facebook employees.

Zuckerberg said it is difficult to know how many of Facebook’s employees are left- or right-leaning, because “I don’t think it would be appropriate to ask people on the way in, as they were interviewing, what their political affiliation is.”

Still, he said, “we’re going to see more people working remotely around the country and also around the world,” which will mean fewer people needing to live and work in the San Francisco Bay Area.

Zuckerberg has said that within the next five to 10 years, up to half of Facebook’s employees could be remote.

Facebook pressed on handling of Kenosha counter-protester group

Sen. Chris Coons challenged Facebook on its handling of the protests in Kenosha, Wisc., particularly pushing Zuckerberg on a Facebook page that urged armed counter-protesters to gather.

Coons pointed to Facebook’s existing policy banning calls to arms, asking why the page was not removed sooner in light of that policy.

Zuckerberg, who has described Facebook’s handling of the page as a mistake, said his understanding was that the call in question did not violate the policy at the time.

Coons responded that “facially, it seems to me this was a violation of your own call to arms policy.”

Hawley tries to paint common practice as nefarious

Republican Sen. Josh Hawley targeted Facebook CEO Mark Zuckerberg with a misleading line of questioning.

Hawley repeatedly asked Zuckerberg if Facebook coordinates with YouTube and Twitter for “censorship” and “to control information,” citing information he had obtained from a “whistleblower.”

It is well-known that social media companies do communicate with each other on issues related to foreign meddling, terrorism, and other topics.

But Hawley tried to deceptively characterize the practice of Facebook communicating with its peers as nefarious and implied that the coordination was a major revelation.

Zuckerberg pushed back against Hawley’s characterization, explaining to him first that the major social media companies “do coordinate and share signals on security related topics.”

“For example,” Zuckerberg explained, “there is signal around a terrorist attack or around child exploitation imagery or around a foreign government creating an influence operation, that is an area where the companies do share signals about what they see.”

But Zuckerberg stressed that the communication “is distinct from the content moderation policies” Facebook has.

Zuckerberg noted that the companies might share information on what they are each seeing occur on their respective platforms, but that each company makes their own decisions on how they will enforce their policies.

Sen. Klobuchar's questions show Big Tech has an even bigger problem than content: antitrust

Though Tuesday’s hearing is ostensibly about content moderation and the election, Sen. Amy Klobuchar asked a number of questions about the dominance of tech platforms. It’s a good reminder that for Facebook, antitrust, not debates over moderation, may be the biggest forthcoming challenge in Washington.

Sen. Klobuchar pressed Zuckerberg about the company’s allegedly anti-competitive tactics. She pointed to Facebook’s purchase of Instagram in 2012, which documents show it saw as an emerging rival, as an example.

“At the time, I don’t think we or anyone else viewed Instagram as a competitor as a large multi-purpose social platform,” Zuckerberg said. “In fact, people at the time kind of mocked our acquisition because they thought that we dramatically spent more money than we should have to acquire something that was viewed primarily as a camera and photo sharing app.”

Klobuchar responded: “We don’t know how [Instagram] would’ve done.”

“When we look at your emails it kind of leads us down this road as well with WhatsApp that part of the purchase of these nascent competitors is to – I’ll use the words of FTC Chairman Joe Simons who just said last week: ‘A monopolist can squash a nascent competitor by buying it, not just by targeting it with anti-competitive activity.’”

Zuckerberg and Dorsey are both Big Tech, but that doesn't mean they agree

Facebook and Twitter appear to be increasingly diverging on what they would like to see from federal content moderation laws.

Zuckerberg has consistently emphasized how well Facebook’s systems work for detecting and blocking terrorist and child-exploitation content, categories of material that are explicitly illegal. Zuckerberg also said he would be open to a regulatory regime that imposes liability on tech platforms for some forms of content.

Twitter takes a very different tack. Dorsey has argued for rules promoting transparency in process and outcomes, and importantly, the ability for consumers to opt out of tech company algorithms that rank and determine what you should see.

Each executive’s position reflects the strengths of his respective product — Facebook’s entire business rests on its algorithm, so it makes sense that Zuckerberg would want rules that lean more heavily on a company’s ability to deploy its algorithms to manage content.

By contrast, Twitter has only gotten into ranking content in users’ feeds relatively recently in its history. For years its main approach was to serve users tweets in reverse chronological order, without ranking or sorting.

This distinction may also be the source of another point of friction between Dorsey and Zuckerberg. Dorsey has warned against policy changes that would “entrench” large, established social media companies — a veiled shot at Facebook — and his arguments today have all basically sought to deny Facebook the ability to benefit disproportionately from federal rules based on its business model.

Zuckerberg and Dorsey say ideological makeup of workforces doesn't lead to bias

Sen. Ben Sasse pressed Facebook and Twitter on the ideological makeup of their workforces, focusing particularly on their employees in Silicon Valley, and questioning how content moderation policies can be applied in a non-partisan manner.

Zuckerberg acknowledged that many of Facebook’s employees are left-leaning. But, he said, the company takes care not to allow political bias to seep into decisions. In addition, he said, Facebook’s content moderators, many of whom are contractors, are based worldwide and “the geographic diversity of that is more representative of the community that we serve than just the full-time employee base in our headquarters in the Bay Area.”

Dorsey said political affiliation is “not something we interview for” at Twitter and said that while Twitter’s decisionmaking and outcomes can sometimes seem opaque, the company is trying to be more transparent.

“If people are questioning that,” he said, “[then] it’s a failure.”

Sen. Cruz pushes for Twitter to admit it has taken sides on voter fraud allegations

Sen. Ted Cruz repeatedly pressured Dorsey to admit that Twitter has taken a corporate position on whether voter fraud exists, in a wider campaign to discredit the company’s labeling of tweets questioning the election results.

“Does voter fraud exist?” Cruz asked Dorsey.

“I don’t know for certain,” Dorsey replied.

“Why, then, is Twitter right now putting purported warnings on any statement about voter fraud?” Cruz shot back.

Dorsey said the company is simply connecting users to more context about claims of voter fraud. But Cruz rejected that defense, accusing Twitter of taking a “disputed” policy position despite repeated statements by the Trump administration’s own Department of Homeland Security that the election was conducted securely.

Zuckerberg’s rhetorical crutch

Throughout this hearing, Zuckerberg has cited Facebook’s handling of terrorist or child-exploitation content as an example of how well the company’s systems work. But this is a deflection; federal law makes this type of content illegal, so Facebook’s obligations in this area, as well as its practices, are clear and straightforward.

The entire point of the hearing is how tech platforms should handle content where the lines are not so clear-cut, and what responsibilities lie with the tech companies for addressing that.

Sen. Leahy raises Facebook's Myanmar missteps

While questioning Zuckerberg, Sen. Patrick Leahy of Vermont cited one of the most infamous examples of where Facebook’s policies failed.

The company was accused of helping fuel “genocide” in Myanmar, where its platforms were used to spread hate speech and promote violence against the country’s Rohingya Muslim minority. Facebook acknowledged in 2018 that it had not done enough to prevent the violence.

Leahy said he was “deeply concerned” about Facebook’s role in Myanmar, and acknowledged that Facebook had taken down dozens of accounts linked to Myanmar’s military that promote anti-Rohingya content.

“I compliment you for doing that but the Myanmar military just turned around and created new accounts that promote the same content,” Leahy said. “So…in some way you’ve got a whack-a-mole problem here.”

Zuckerberg noted that Facebook prohibited certain members of the military and other dangerous individuals from making new accounts, but he compared Facebook’s effort to crack down on hate speech to a city’s efforts to crack down on crime.

“These kinds of integrity problems are not ones that there’s a silver bullet,” he said. “You will always be working to help minimize the prevalence of harm in the same way that a city will never eliminate all crime, you try to reduce it…and have it be as little as possible, and that’s what we’re trying to do.”

The oft-cited Myanmar example highlights how the battles that Facebook and other social networks wage against misinformation often play out in far more sinister ways outside the United States, particularly in developing countries. In India — home to Facebook’s biggest user base — the company now faces similar accusations of hate speech and political bias, with government committees examining its role in religious riots in New Delhi earlier this year.

Sen. Lee cites content moderation mistakes as evidence of bias

Sen. Mike Lee asked a series of questions regarding content decisions that Facebook and Twitter later reversed. Listening to him, one could easily come away convinced that the companies were simply biased against the right.

But as with many of these claims, the truth is often simpler: The companies are just bad at this. If you cherry-pick examples of them being bad at moderation, it’s easy to make it look like they’re biased, but the fact remains: They’re bad at this.

Both Zuckerberg and Dorsey acknowledged the errors and made little effort to defend themselves except to say that they made mistakes and corrected them. As Dorsey said of content moderation early in the hearing, “We are facing something that feels impossible.”

Under questioning by Lee, Dorsey acknowledged that Twitter made a “mistake” when it acted against a tweet by Mark Morgan, the acting commissioner of US Customs and Border Protection.

Dorsey attributed the mistake to a “heightened awareness of government accounts.” Lee slammed social media platforms for repeated “mistakes” that he said appear to disproportionately affect conservative accounts.

Sen. Lee claims Facebook's content labels are 'editorializing'

Sen. Mike Lee used his question time to spread the narrative about voter fraud, accusing Facebook of trying to manipulate its users through “editorializing” and content labels.

Lee said Facebook’s decision to label his own post about suspected election fraud “sounds more like state run media announcing the party line, rather than a neutral company as it purports to be.”

“This kind of editorializing insulates people from the truth and it insinuates that anyone concerned about voter fraud must be crazy,” Lee said. “These concerns may be out of the mainstream in Palo Alto, but they’re not out of the mainstream in the rest of America.”

Facebook’s labeling is based on public statements by election authorities and the Bipartisan Policy Center.

Zuckerberg defends inaction on Steve Bannon's Facebook page

Zuckerberg flatly rejected pressure to ban Steve Bannon’s Facebook page.

“How many times is Steve Bannon allowed to call for the murder of public officials before Facebook suspends his account?” Sen. Blumenthal asked Zuckerberg. He also asked whether Zuckerberg would commit to taking down Bannon’s Facebook page.

Zuckerberg said “no” and “that’s not what our policies suggest that we should do.”

The Facebook CEO added that the company does take down accounts that post content related to terrorism or child exploitation the first time they do so. However, for other categories, Facebook requires multiple violations before an account or page is removed.

Bannon was permanently suspended from Twitter last week after saying in a video that Dr. Anthony Fauci and FBI Director Christopher Wray should be beheaded. The video was live on Bannon’s Facebook page for about 10 hours and was viewed almost 200,000 times before the company removed it, citing its violence and incitement policies.

Last week, Zuckerberg told employees at a company meeting that the video was not enough of a violation of Facebook’s rules to permanently suspend the former White House chief strategist from the platform, an employee told CNN.

Google gets a pass

As Facebook and Twitter face questions around censorship and suppression, one other Big Tech rival — Google — is conspicuously absent.

“Google has been given a pass from today’s hearing,” Senator Richard Blumenthal of Connecticut said in his opening remarks. “It’s been rewarded by this committee for its timidity, doing even less than [Facebook and Twitter] have to live up to its responsibilities.”

Google CEO Sundar Pichai appeared at last month’s Senate Commerce Committee hearing alongside his Facebook and Twitter counterparts, Mark Zuckerberg and Jack Dorsey, but Google has often escaped (or avoided) the same level of scrutiny from lawmakers.

In fact, back in 2018, the Senate Intelligence Committee set up an empty chair next to Dorsey and Facebook’s COO Sheryl Sandberg with a placard for Google, in a swipe at the company’s refusal to offer up Pichai or another high-level executive to testify.

Blumenthal argued that the tech companies have only taken “baby steps” to combat harmful misinformation on their platforms.

Google-owned YouTube was criticized for not doing enough to deal with misinformation during the election, applying a far less aggressive strategy than Facebook or Twitter did.

The video platform placed an information panel at the top of search results related to the election, as well as below videos that talked about the election, but allowed some videos containing misinformation to stay online without labeling or fact-checking them.

Zuckerberg claims Facebook isn't addictive

Zuckerberg dismissed claims that social media platforms have an addictive quality as “memes and misinformation.”

Sen. Lindsey Graham said he is increasingly concerned by research suggesting that there may be a public health problem associated with social media, drawing parallels with the tobacco industry.

Zuckerberg pushed back on the research, saying, “I don’t think the research has been conclusive,” but added it’s an area the company “cares about.” For example, he said the Facebook team running the news feed isn’t provided with data on how much time people are spending on its products.

A challenging task

When asked what he heard during the hearing’s opening statements, Jack Dorsey admitted that content moderation isn’t easy.

Both executives agreed with Graham that the government should not be involved in setting out what content should be removed from tech platforms. “I think it would be very challenging,” Dorsey said.

Zuckerberg said: “For certain types of illegal content, it may be appropriate for there to be clear rules around that. But outside of clear harms including for things like child exploitation, terrorism, I would agree with your sentiment that’s not something government should be deciding on a piece of content by piece of content basis.”

Dorsey and Zuckerberg defend content decisions

Dorsey and Zuckerberg opened their testimony by acknowledging questions about the way their companies handled political content, particularly surrounding the election.

Dorsey said he recognized that Twitter made a mistake in the way it handled the New York Post story about Hunter Biden, saying the company quickly moved to update its policies on hacked materials.

Zuckerberg said of the election, “from what we’ve seen so far, our systems worked well.”

Graham gives accurate reading of controversial Section 230

Here’s something worth noting: In his opening remarks, Graham described Section 230 in largely accurate terms. Normally, a senator simply describing a provision of a law accurately wouldn’t be notable, but when it comes to Section 230 it is, because the debate over updating the law is so often based on misconceptions of its text and purpose.

Graham’s opening remarks may show that he hopes the hearing will be perceived as a push for productive dialogue.

Sen. Lindsey Graham kicks hearing off: "Something has to give"

Sen. Lindsey Graham began the hearing with a relatively impartial description of Section 230, correctly characterizing the law as having protected tech companies from liability for their content decision-making. But he quickly pivoted to claims of anti-conservative censorship regarding an article by the New York Post about Hunter Biden.

Under the current law, “you can sue the person who gave the tweet, but you can’t sue Twitter that gave that person access to the world,” Graham said. He added, accurately, that exposing tech companies early on in their lifetimes to content lawsuits could have prevented great companies from coming into existence.

But then he said that tech companies enjoy enormous power rivaling governments and legacy media companies, referring to Facebook and Twitter’s decisions to suppress a viral but baseless story by the New York Post containing unfounded allegations about Joe Biden’s son, Hunter Biden.

How to watch today's hearing

The hearing is underway. There are a few ways to watch:

  • Tune into the Senate Judiciary Committee’s site.
  • C-Span is livestreaming it here.
  • CNN is also streaming the hearing in the top corner of this page.

Jack Dorsey touts Twitter's content moderation around election

Twitter CEO Jack Dorsey will tell senators that the company applied contextual warning labels to 300,000 tweets between Oct. 27 and Nov. 11, according to his prepared testimony.

Of those, 456 tweets also had their content covered by a warning message, Dorsey is expected to say. Those numbers were shared by Twitter in a blog post last week.

Twitter has witnessed a wave of misinformation as users including President Donald Trump and his allies have spread false and misleading claims about the election and its outcome. At one point, the social network applied warning labels to more than a third of Trump’s tweets after polls closed. Twitter also affixed fact check labels to more than 30 of his election-related tweets and retweets between Friday and Monday morning.

Dorsey will repeat several familiar themes from his testimony last month, including expressing a commitment to greater transparency around content moderation.

In response to proposals concerning Section 230, a federal law that grants websites legal immunity for curating the content on their platforms, Dorsey will warn that a repeal could lead to increased content removals and frivolous litigation while making it harder to address truly harmful material online.

In something of a jab at Facebook, Dorsey is also expected to oppose updates to technology laws that are done via “carve-outs” that he claims will “inevitably favor large incumbents” and “entrench” the most powerful tech companies.

“For innovation to thrive, we must not entrench the largest companies further,” Dorsey is expected to say.

A refresher on Section 230, the law everyone's arguing about

Once the hearing gets underway, we’re likely to hear a great deal about Section 230 of the Communications Act of 1934, and why it should be changed. Here’s what you need to know about this pivotal US law.

Originally passed in 1996, Section 230 grants tech platforms and websites legal immunity for many of the decisions they make about user-generated content. It holds that platforms — such as Facebook and Twitter — can’t be sued for material created by their users and posted to their services. And, it says, platforms can’t be sued just for suppressing or removing content they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The purpose of the law, according to its authors, Sen. Ron Wyden and former Rep. Chris Cox, was to allow a then-nascent industry to grow up free from the fetters of federal regulation, and to give companies the freedom to moderate their platforms as they saw fit.

You may hear some lawmakers claim that Section 230 requires platforms to be politically neutral. That is false and misleading. There is nothing in the text of Section 230 that calls for neutrality; in fact, the text of the law, and subsequent court decisions, have protected the decision-making freedom of private companies, up to and including decisions about political speech.

Proposed updates to Section 230 could change that. But Congress will need to be careful to avoid drafting a law that compels tech companies to carry specific kinds of speech, as it could raise questions about the companies’ First Amendment rights.

Zuckerberg: Facebook labeled 150 million pieces of content during the election

Mark Zuckerberg will say that Facebook applied contextual warning labels to more than 150 million pieces of content during the election, according to his prepared testimony – though he did not provide a specific timeframe.

As many as 140 million people viewed the company’s voting information center, a hub for accurate information about the election process, across Facebook and Instagram, according to the testimony.

Zuckerberg is also expected to claim credit for Facebook helping to register 4.5 million voters and for signing up 100,000 volunteers to be poll workers.

In his prepared remarks, Zuckerberg will again call for updates to Section 230, a federal law that grants websites legal immunity for curating the content on their platforms, “to make sure it’s working as intended.”

“We support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals,” Zuckerberg will say.

Democrats call on Facebook to enforce policies against anti-Muslim bigotry


More than a dozen Democratic senators sent a letter to Mark Zuckerberg on Monday calling on Facebook to “take immediate action” to enforce its own policies on incitement and discrimination against minorities, highlighting the company’s role in fueling hate and violence worldwide.

The letter, led by Sen. Chris Coons and co-signed by Sens. Mark Warner, Amy Klobuchar, Richard Blumenthal, Dick Durbin and other lawmakers, highlighted Facebook’s role in fomenting political violence in Kenosha, Wisc. and in Myanmar.

Citing the company’s independent civil rights audit, the senators expressed “deep concern regarding anti-Muslim bigotry on Facebook.” And they said despite the company’s claims of progress, greater transparency is needed from the company to evaluate those claims.

The letter pressed Facebook CEO Mark Zuckerberg to provide detailed country- and language-specific data about its enforcement efforts.

“While pointing to its increases in country-specific staff and language-specific content moderators in certain areas, Facebook has declined repeated requests from advocates to provide detailed information about its country specific staff or language-specific content moderators across the world,” the senators wrote.

Yes, Jack Dorsey and Mark Zuckerberg just testified on a similar topic


If this all sounds familiar, that’s because it is: Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey both appeared before the Senate Commerce Committee at the end of October, just before Election Day.

In the contentious hearing on October 28, the executives, along with Google CEO Sundar Pichai, were questioned over their content moderation policies. Some lawmakers demanded more transparency while others sought explanations on a few specific cases in which content was removed or labeled by platforms.

Though the hearing was meant to focus on a crucial law, known as Section 230, that protects the companies’ ability to moderate content as they see fit, senators strayed from the central topic and confronted the executives on other topics, including antitrust, misinformation about voting and election interference.

Read more on the hearing here.

Facebook and Twitter CEOs to testify about content moderation and the 2020 election

Here we go again

On Tuesday, the CEOs of Facebook and Twitter will testify before the Senate Judiciary Committee, in what will be the first congressional hearing involving Mark Zuckerberg and Jack Dorsey since Election Day.

In the weeks following the election, social media companies have faced a wave of misinformation on their platforms as President Donald Trump and his allies have sown doubt about the integrity of the race, a notion that state and federal election officials — and increasingly, the courts — have rejected.

What to expect

Much of the questioning from lawmakers will probably focus on Facebook and Twitter’s handling of election-related content. The committee has titled the hearing “Breaking the News: Censorship, Suppression, and the 2020 Election.”

It will be the second congressional hearing in less than a month zeroing in on Big Tech and its handling of speech. As before, Republicans on the committee are widely expected to air allegations of anti-conservative censorship. Tech companies have strongly denied that their technology is biased against conservatives, and conservative content ranks among the most engaging on their platforms. But that hasn’t stopped Republicans such as Sens. Ted Cruz and Josh Hawley from going after companies for enforcing their platform policies against misleading claims about the coronavirus or about the democratic process.

Last month, the Judiciary Committee authorized subpoenas to compel testimony from Dorsey and Zuckerberg, though the two executives ultimately volunteered to testify. In calling for the subpoenas, Hawley and Cruz accused social media platforms of engaging in “election interference” over their decision to limit the spread of a New York Post article containing discredited allegations about Joe Biden’s son, Hunter Biden.

As we’ve seen in other hearings, Tuesday’s session may lead to a great deal of heat and light but little progress on policy. While Democrats and Republicans have both taken aim at Section 230 of the Communications Act of 1934, a signature federal law that grants websites legal immunity for curating the content on their platforms, the two parties take radically different views about what’s wrong with the law — and about what needs to be changed.

For their part, the CEOs may squirm uncomfortably in the hot seat for several hours but will likely stick to their talking points emphasizing the progress they’ve made in curbing problematic content while adhering to their free-speech principles.

What time does the hearing start?

The proceedings are slated to begin at 10 am ET, and CNN Business will have complete coverage right here.