The Honest Ads Act would be a “valuable step in normalizing the status of political ads online”

by Yochai Benkler

[Editor’s Note: Law professor Yochai Benkler, who sits on Sunlight’s advisory board, recently analyzed how the Honest Ads Act would affect disclosure of election advertising, and why it would not address botnets, sockpuppets, or intentional falsehoods, in the Harvard Law Review. We’ve syndicated his article below, with permission and attribution.]

The 2016 U.S. presidential election was a watershed moment for how we understand the Internet and democracy. In the 1990s and 2000s, the Net enjoyed wide acclaim as a democratizing technology. In 1997, the Supreme Court hailed it as a platform on which “[t]hrough the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer.” That image captured what many of us writing about the Net at the time, myself included, thought: for the first time since the emergence of mass-circulation print and then broadcast, citizens could have a voice, could participate in setting the public agenda, and could mobilize for action around our intense political concerns, rather than follow the agenda set by media owners, the advertisers who paid them, and the political elites to whom they paid attention.

It was this egalitarian democratic image that informed the Federal Election Commission’s first major ruling on Internet communications in 2006. Citing the experience of bloggers, particularly collaborative blogs that collated posts from thousands of contributors, the FEC emphasized the important public policy consideration of preserving the “Internet as a unique forum for free or low-cost speech and open information exchange.” In service of this commitment, the FEC treated only paid advertising on the Internet, including email campaigns targeted at more than 500 recipients, as “political communication,” the core category of communication under its regulation. The baby of decentralized citizen participation was precious, while the bathwater of online electioneering and advertising was still relatively clean and shallow. This basic reluctance to make Internet political communications unnecessarily expensive later guided the FEC when Google and Facebook asked for advisory opinions exempting their advertising services from the Federal Election Campaign Act’s disclaimer requirements: the requirement that political advertising explicitly identify itself as such and identify who authorized and paid for it. In neither case was the Commission able to come to a definitive answer, but in both cases, by default, the FEC accepted that political advertising in the flow of the newsfeed or search results was too brief to require a full disclaimer and, at most, required a link to another site where the disclaimer would be made.

Fast forward to 2016, and propaganda, disinformation, misinformation, and manipulation of public opinion have become the watchwords. Mounting concerns over a sustained Russian information campaign lent a bipartisan framing to what might otherwise have remained a purely partisan dispute. While the bipartisanship is welcome, it would be a mistake to imagine that Russian propaganda or Macedonian teenagers’ “fake news” alone were the heart of the crisis. False and materially misleading claims of fact, and manipulative and misleading framing, were legion from American partisan sources as well. Ten years ago, I was concerned primarily about the re-centralization of online political communication, which might replicate either the censorship structures of authoritarian mass media or the models more typical of democratic mass media: extremely concentrated power in the hands of a few owners (“the Berlusconi effect”) and the sheer distraction of entertainment that rendered mass-mediated polities inert (“the Baywatch effect”). But reality has delivered a very different kind of threat: the leveraging of the decentralized, seemingly authority-free structure of the Internet to facilitate propaganda campaigns on the model Evgeny Morozov long emphasized, and the harnessing of rich, data-informed targeting to manipulate beliefs on a population-level scale, as Zeynep Tufekci has underscored. The “baby” of citizen participation remains real, and we repeatedly see genuine grassroots campaigns on both the left and the right of the American political spectrum. But the bathwater has become dangerously toxic as sophisticated political actors, foreign and domestic, have developed new ways of using data, targeting, and masking to manipulate public opinion in new forms and on a new scale.

The Honest Ads Act

The Honest Ads Act, introduced by Senators Klobuchar, Warner, and McCain, is the first significant legislative effort to address the new challenges of network propaganda. The bill seeks to do three things. First, it separates paid Internet communications from unpaid communications, incorporating paid communications into the normal model adopted for political communication generally and leaving volunteer or unpaid communications alone. Second, it requires disclaimers on online advertising, so that people exposed to political advertising can see that it is political advertising rather than part of the organic flow of communications, and can see who is paying for it. Third, and perhaps most important, it requires the creation of a fine-grained public database of online political advertising that reaches beyond elections to issue campaigns as well.

Paid Internet as Political Communication

First, the bill includes “paid Internet, or paid digital communications” in the general definition of “political communication” in the Federal Election Campaign Act. This brings paid Internet communications into the normal framework of contributions and expenditures, correcting what had become an anachronistic exclusion given the dramatic increase in the significance of the Internet and social media as core modes of political communication. The bill also expands electioneering (express advocacy for or against a candidate by anyone just before an election) to include placement or promotion on an online platform for a fee. The use of “paid” and “for a fee” is clearly intended to exclude genuine grassroots campaigns. This latter provision is the only one that might be interpreted to apply not only to payments made to the platforms themselves, as with advertising, but also to payments made to behavioral social-media marketing firms that specialize in simulating social attention to a topic or concern by deploying paid human confederates or automated and semi-automated accounts (botnets and sockpuppets). I’ll return to this question in a future post.

Disclaimers on Facebook and Google Ads

Second, the bill requires online advertising to include the kinds of disclaimers television viewers have come to expect: “paid for by” or “I am so-and-so and I approve this message.” These provisions of the bill address the anomaly that inconclusive FEC advisory opinions have enabled Google and Facebook to market to political advertisers not only reach and focus, but also the ability to remain masked. In 2010, Google persuaded a divided FEC that its ads were too short to include a full disclaimer, and the Commission split between those who wanted simply to exclude Google’s ads from the disclaimer requirements altogether and those who wanted to condition the exclusion on the ad carrying a link to the advertiser’s site, where the disclaimer would appear prominently. In 2011, Facebook tried to piggyback on Google’s effort and argued that its own advertising was not only too brief to allow the disclaimer to appear on its face, but that because much of that advertising directed users not to a campaign site but to news stories supportive of a campaign, the FEC should adopt the more complete exclusion supported by some of its members in the 2010 opinion. In other words, because Facebook’s ads were designed to be short to fit users’ usage patterns, and because the ads often sent users somewhere other than a campaign site where a disclaimer could be displayed, imposing the disclaimer requirement on Facebook advertising was “impractical,” a recognized exception to the disclaimer requirement in the Act.

The Klobuchar-Warner-McCain bill explicitly rejects the possibility that advertising on social media and search would be covered by this “impractical” exception. The idea that the biggest and most sophisticated technology companies in the world can build driverless cars and optimize messaging and interfaces by running thousands of experiments a day, but cannot figure out how to include an economical marking that a communication is a political ad, together with a popup or other mechanism that lets users who want to know who is behind the ad find out, is laughable. The bill simply states a clear minimal requirement: users have to know the name of the sponsor and have to be given the means to get all the legally required information about the sponsor without being exposed to any other information.
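To make concrete why the “impractical” claim is hard to credit, here is a minimal sketch, in TypeScript, of the kind of mechanism the bill contemplates: a brief “Paid for by” marker on the ad itself, plus a link that takes interested users to the full legally required sponsor information. Everything in it, from the function and field names to the markup, is my own illustrative assumption, not language from the bill or a description of any platform’s actual ad system.

```typescript
// Illustrative only: a short in-feed ad that carries an inline sponsor marker
// and links out to the full disclaimer. All names and markup are hypothetical.
interface AdCreative {
  headline: string;       // the brief ad text shown in the feed
  sponsorName: string;    // the sponsor name users must be able to see
  disclosureUrl: string;  // page carrying the full legally required information
}

// Render the ad with a "Paid for by" marker and a disclosure link appended.
function renderAdHtml(ad: AdCreative): string {
  return [
    `<div class="sponsored-post">`,
    `  <p>${ad.headline}</p>`,
    `  <p class="political-ad-disclaimer">`,
    `    Paid for by ${ad.sponsorName}.`,
    `    <a href="${ad.disclosureUrl}">Why am I seeing this ad?</a>`,
    `  </p>`,
    `</div>`,
  ].join("\n");
}

console.log(renderAdHtml({
  headline: "Make your voice heard this November.",
  sponsorName: "Example Committee for Better Government",
  disclosureUrl: "https://example.org/ad-disclosures/123",
}));
```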

The necessity of this kind of provision is clear. We assess the credibility of any statement in the context of what we think the speaker’s agenda is. That’s why we require political advertising to disclose its sponsor to begin with. If the Clinton campaign had targeted evangelical voters with communications emphasizing her opponent’s comments on the Access Hollywood video, those voters would have treated the communications with more of a grain of salt even if their contents were true. The same would be true if the Trump campaign had targeted African American voters with narrowly tailored ads quoting Hillary Clinton’s use of the term “super predators” in the context of a 1996 criminal law reform debate. There is nothing wrong with trying to persuade your opponents’ base that their candidate is unworthy of their support. But doing so behind a mask undermines those voters’ ability to judge your statements fairly, including by properly discounting for the reliability or honest intentions of the speaker.

The disclaimer requirement is particularly critical because Facebook and Google can deliver advertisements that are finely tuned to very narrowly targeted populations. In the television or newspaper era, if a campaign wanted to appeal to neo-Nazis, it could only do so in the public eye, suffering with other audiences whatever consequences that association entails. That constraint on how narrow, incendiary, or outright false a campaign that candidates and their supporters can run disappears in an era when Facebook can identify and target advertising to audiences of a few thousand people, down to the level of American followers of a German far-right ultranationalist party. Identifying the source of the campaign will give users a baseline defense against messaging that is highly tailored to push their buttons, at least when the source is a party supporting the candidate they likely oppose. This hyper-targeting, and its capacity to help design highly manipulative communications precisely calibrated to push the buttons of a narrowly targeted population, is what makes the third provision of S. 1989 so critical.

A Public, Machine-Readable Open Database of Political Advertising

The major innovation of the bill is to leverage the technological capabilities of online advertising to create a timely, publicly open record of online advertising that would be available “as soon as possible” and open to public inspection in machine-readable form. This is perhaps the most important of the bill’s provisions because, if executed faithfully, it should allow public watchdog organizations to offer near real-time accountability for lies and manipulation. Moreover, this is the only provision of the bill that applies to issue campaigns as well as electoral campaigns, and so it is the only one through which the American public will get some visibility into the campaign dynamics on any “national legislative issue of public importance.”

The bill requires the very biggest online platforms (those with over 50 million unique monthly U.S. visitors) to place in an open, publicly accessible database all ads purchased by anyone who spends more than $500 a year on political advertising. The data would include a copy of the ad, the audience targeted, the number of views, and the times of first and last display, as well as the name and contact information of the purchaser. The online platforms already collect all of this data as a function of their basic service to advertisers and their ability to price their ads and bill their clients. The additional requirement of formatting this data in an open, publicly known format and placing it in a public database is trivial compared with the investments these companies have made in developing their advertising base and their capacity to deliver viewers to advertisers. Having such a database would allow campaigns to be each other’s watchdogs, keeping each other somewhat more honest and constrained, and, perhaps more importantly, would allow users anywhere on the Net, from professional journalists and nonprofits to concerned citizens with a knack for data, to see what the campaigns and others are doing and to report on these practices in near real time, offering us, as a society, at least a measure of transparency about how our elections are conducted. This public database could also allow the many and diverse organizations with significant expertise in machine learning and pattern recognition to deploy their considerable capabilities to identify manipulative campaigns by foreign governments and to help Americans understand who, more generally, is trying to manipulate public opinion and how.
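The bill does not prescribe a file format, but the fields it enumerates map naturally onto a simple machine-readable record. The TypeScript sketch below is purely illustrative: the interface name, field names, and sample values are my own assumptions about what such a record could look like, not anything specified in S. 1989.

```typescript
// Hypothetical shape of one entry in the public ad database described by the bill.
// Field names, types, and values are illustrative assumptions, not the bill's text.
interface PoliticalAdRecord {
  adCopyUrl: string;        // archived copy of the ad creative (text, image, or video)
  purchaserName: string;    // name of the person or committee that bought the ad
  purchaserContact: string; // contact information for the purchaser
  targetAudience: string;   // description of the audience targeted
  impressions: number;      // number of views the ad received
  firstShown: string;       // ISO 8601 timestamp of first display
  lastShown: string;        // ISO 8601 timestamp of last display
  spendUsd: number;         // amount paid for the placement
}

// Example entry with entirely invented values, of the kind a journalist's or
// watchdog's script could pull from the database and analyze in near real time.
const example: PoliticalAdRecord = {
  adCopyUrl: "https://example.org/archive/ad-0001.png",
  purchaserName: "Example Advocacy Group",
  purchaserContact: "compliance@example.org",
  targetAudience: "Adults in three Midwestern media markets interested in tax policy",
  impressions: 84210,
  firstShown: "2018-10-01T00:00:00Z",
  lastShown: "2018-10-07T23:59:59Z",
  spendUsd: 2500,
};

// The bill's $500-a-year threshold determines which purchasers must appear at all.
const coveredByBill = example.spendUsd > 500;
console.log(`${example.purchaserName} exceeds the $500 threshold: ${coveredByBill}`);
```

Because the format would be open and publicly known, watchdog scripts along these lines could compare, in near real time, what the same purchaser is telling different narrowly targeted audiences.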

Bottom Line

The Klobuchar-Warner-McCain bill is a narrowly tailored effort to remove the anachronistic treatment of social media and search advertising and to recognize that online advertising has come of age. By focusing only on paid communication, it avoids sweeping in genuinely open, citizen-driven mobilization and expression. Nothing in this bill would require a grassroots campaign made up of actual citizens expressing their actual views to mark its members or messages any differently than has been the case before. That leaves the question of astroturf campaigns, or botnets and sockpuppets; I’ll return to that in Part 2 of this discussion. The bill won’t, by any stretch of the imagination, close all the pathways through which a foreign government or other propagandist could seek to influence beliefs online. But it will help moderate the extent to which propagandists who seek to manipulate public opinion can leverage the enormous data and behavioral marketing tools that Facebook and Google have developed without disclosing who they are and without exposing what they are doing to public scrutiny. That would be a significant step forward relative to where we find ourselves today.