Frequently Asked Questions

Who is Public Editor for?

Newsreaders – Everyone who loves keeping up with the news has seen the degradation of news quality over the past few years. Internet-based advertising rewards outrageous falsehoods, and has driven even the most reputable news organizations to prioritize the scandals, clickbait, and galling misinformation that attracts attention. If we want to share reality again, we need tools like Public Editor that hold content publishers and platforms accountable to higher standards of credibility.

Critical Thinkers – We all know the frustration of someone else being wrong on the Internet. Instead of shouting into the void, Public Editor gives us a productive way to clean up bogus information. And for everyone who wants to sharpen their discernment skills, Public Editor provides both passive and active learning opportunities.

Classrooms – Teachers and professors from high school through graduate school can now inoculate their students against misinformation using our simple 90-minute ‘News Vaccine’ curriculum. Contact us to get started.

Newsrooms – Newsrooms spend a lot of expensive time editing articles. And lately, they spend a lot of time and money trying to engage their readers more deeply to ensure they maintain a healthy community of subscribers. With Public Editor, newsrooms can free two birds with one key by encouraging their readers to take part in their pre-publication editorial process. For instance, readers who perform 20 Public Editor tasks per month could earn member benefits. Contact us to get started.

Journalists – There are too many hacks out there giving journalism a bad name. Worse, those hacks are gradually degrading journalistic standards across the board. Public Editor rewards competent and thoughtful journalism and journalists, while exposing the bad actors and lazy hacks polluting the media landscape.

Governments – People are calling on their governments to regulate the quality of information flowing through the Internet. We take a strong stance against outright censorship. But governments can empower their people to collectively take responsibility for their national conversation by supporting Public Editor. They can start by standing up for existing policies that require social media platforms to support users’ rights to filter out harmful content from their own feeds using third-party filters like Public Editor. They can also institute quality standards, as they do for food and drug products, now that Public Editor can deliver accurate and specific measures of content quality.

Citizens – Ultimately, Public Editor is for everyone who takes seriously the notion that healthy democracy requires a well-informed citizenry. We imagine a world where Public Editor is a household name that people encounter multiple times per week. Many citizens will volunteer for Public Editor, use its browser extension to read articles, experience a better media diet thanks to its credibility scores, and learn to discern over 40 types of misinformation thanks to Public Editor labels. But even those who rarely or never participate will live better lives because their fellow citizens are better informed and sharing reality again.

Social Media Platforms and their users – Facebook’s Mark Zuckerberg has publicly stated that he does not believe Facebook should try to be “the arbiter of truth.” We can sympathize with him. It has not been easy to create a careful, broad-based tool for evaluating the credibility of content. But now that we’ve done it, Facebook, Twitter, and others can hand off the difficult responsibility of monitoring their content to their own users and the Public Editor community. To give their users a legitimate tool for measuring content and filtering out bogus information, these platforms need only provide an API to our team, as indicated by US policy under Section 230.

Who is behind Public Editor?

Public Editor is an open, independent nonprofit that is of, by, and for the people. Led by the nonprofit Goodly Labs, in collaboration with the UC Berkeley Institute for Data Science, we’re a growing community of thoughtful people working together to directly improve the quality of the news people read every day. Anyone can join. We welcome collaboration with individuals, governments, non-profits, and foundations (and even large tech companies, if they can appreciate our commitment to raising the quality of online content). We are grateful for support from forward-looking organizations like Schmidt Futures and the McCune Foundation, and from many individual contributors offering what they can to help expand our reach.

How will Public Editor impact the world?

Public Editor attends to the smallest details in news articles, but it can also be applied at massive scale to improve the quality of national conversations affecting billions of people. Governments can sponsor Public Editor communities in their countries and require social media platforms to allow their users to filter low-quality news from their feeds. News aggregators like Apple News, Google News, or even the Drudge Report can provide Public Editor labels to their readers to show their commitment to serving truthful content. Even newspapers like the New York Times can use Public Editor with their subscribers as an engagement tool that reduces the burden on their professional editors. And, of course, any newsreader can download our browser extension to see how the articles they read anywhere on the Internet may contain misleading information. Classrooms, too, are beginning to use Public Editor’s ‘News Vaccine’ curriculum to help inoculate students against bogus content. With so many ways to deploy Public Editor, newsreaders will learn to distinguish the variety of ways we humans fool ourselves, intentionally or not. And as we become more discerning, we will all be able to share reality again.

How does Public Editor evaluate articles? What mistakes do you look for?

Public Editor works like an assembly line. When we’re evaluating an article, we don’t each read the whole article from top-to-bottom finding all the mistakes within it like an expert editor would. Instead, we each do a portion of the work and then combine our output together. First, triagers (our most experienced volunteers) highlight passages of the article that need closer review. Then, teams of specialist editors evaluate those passages to label the praiseworthy or problematic ‘use of evidence’, ‘discussion of probabilities’, ‘use of language’, and ‘reasoning’ within the passage. Other specialist editors assess the relevance of an article’s various arguments and quoted sources. All of the credibility indicators we use to label articles are open and transparent, and were developed in collaboration with top science educators, journalists, and cognitive scientists, including Nobel Laureate Dr. Saul Perlmutter’s ‘Sense & Sensibility & Science’ curriculum team.
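
For readers who like to see the mechanics, here is a minimal sketch of how that assembly line could be represented in code. The task names, categories, and data structures below are illustrative assumptions made for this FAQ; they are not the actual TagWorks task schema.

```python
# Illustrative sketch of the assembly-line workflow described above.
# Task names, categories, and structures are assumptions, not the real
# TagWorks task schema.
from dataclasses import dataclass, field
from typing import Dict, List

# The four specialist label categories named in the FAQ.
SPECIALIST_CATEGORIES = [
    "use of evidence",
    "discussion of probabilities",
    "use of language",
    "reasoning",
]

@dataclass
class Passage:
    text: str
    labels: Dict[str, List[str]] = field(default_factory=dict)  # category -> labels

def triage(paragraphs: List[str], flagged_indexes: List[int]) -> List[Passage]:
    """Step 1: experienced triagers highlight passages that need closer review."""
    return [Passage(paragraphs[i]) for i in flagged_indexes]

def specialist_pass(passage: Passage, category: str, team_labels: List[str]) -> None:
    """Step 2: an independent specialist team labels one category for one passage."""
    passage.labels[category] = team_labels

# Example: one flagged passage, labeled by two of the four specialist teams.
flagged = triage(["The study proves coffee cures cancer.", "The weather was mild."], [0])
specialist_pass(flagged[0], "use of evidence", ["single study overgeneralized"])
specialist_pass(flagged[0], "reasoning", ["causal overreach"])
print(flagged[0].labels)
```

Each team sees only its own slice of the work; the combined labels, rather than any one volunteer’s read of the whole article, become the article’s annotation.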

How does Public Editor score articles?

Once articles are labeled by multiple independent teams of volunteers, Public Editor’s underlying TagWorks (collaborative annotation) technology finds the consensus among the volunteers. If large majorities of independent volunteers see the same reasoning error or biased tone, the Public Editor scoring algorithm recommends that 1, 2, or 3 points be removed from the article’s score for that mistake. If the mistake occurred within an argument that was highly relevant to the article’s main point, the deduction can be multiplied by as much as 3x. If the mistake was made by a quoted source, the article may not lose any points at all (while the quoted person would lose points in our database). To compute an article’s overall score, the demerits for each mistake are multiplied by the ‘relevance score’ of the argument in which they are embedded, then those products are summed and subtracted from 100. Public Editor also awards points to articles whose authors are particularly careful, thorough, or circumspect. Typically, good-quality articles score above 85 and low-quality articles score below 75.
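
As a rough illustration of that arithmetic, here is a minimal sketch. The function and field names, and the example values, are hypothetical; this is not Public Editor’s production scoring code.

```python
# Hypothetical sketch of the scoring arithmetic described above.
# Field names and example values are illustrative assumptions only.

def article_score(mistakes, bonus_points):
    """
    mistakes: list of dicts with
        'demerit'   -- 1, 2, or 3 points, set by volunteer consensus
        'relevance' -- multiplier (up to 3.0) for how relevant the containing
                       argument is to the article's main point
        'quoted'    -- True if a quoted source, not the author, made the error;
                       the article itself is then not penalized
    bonus_points: points awarded for especially careful or thorough writing
    """
    deductions = sum(
        m['demerit'] * m['relevance']
        for m in mistakes
        if not m['quoted']
    )
    score = 100 - deductions + sum(bonus_points)
    return max(0, min(100, score))  # keep the result on the 0-100 scale

# Example: two authorial errors (one in a highly relevant argument),
# one error inside a quote, and a small bonus for careful sourcing.
mistakes = [
    {'demerit': 2, 'relevance': 3.0, 'quoted': False},
    {'demerit': 1, 'relevance': 1.5, 'quoted': False},
    {'demerit': 3, 'relevance': 2.0, 'quoted': True},  # penalizes the source, not the article
]
print(article_score(mistakes, bonus_points=[2]))  # -> 94.5
```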

How do we know Public Editor isn’t biased?

Public Editor has been built to encourage volunteers’ neutrality by a design team that includes experts in bias detection and mitigation. Instead of asking a single volunteer to evaluate an entire article, Public Editor asks different teams of volunteers to precisely evaluate different aspects of each specific claim in the article. This way, volunteers focus on the merits or mistakes of each idea, regardless of how much they agree with the article’s overall point of view. Public Editor’s data science team also continually evaluates levels of agreement across volunteers completing identical tasks. When particular volunteers are more critical of some news sources than others, we temporarily reduce the weight of their judgments in the system and provide them with additional training. In these and other ways, Public Editor ensures that volunteers are working to promote good reasoning and good evidence, regardless of an article’s conclusions.
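
One way a check like this could work in practice is sketched below. The per-source comparison, the tolerance threshold, and the reduced weight are assumptions for illustration, not Public Editor’s actual algorithm.

```python
# Hypothetical sketch of down-weighting volunteers whose severity toward
# particular news sources drifts away from their own overall baseline.
# Thresholds and field names are illustrative assumptions only.
from collections import defaultdict
from statistics import mean

def volunteer_weights(judgments, tolerance=0.15):
    """
    judgments: list of dicts with keys
        'volunteer' -- volunteer id
        'source'    -- outlet the judged article came from
        'severity'  -- demerit points the volunteer assigned
    Returns a weight per volunteer: 1.0 normally, reduced to 0.5 when their
    average severity for any single source exceeds their overall average by
    more than a tolerance fraction of that baseline (in the real system such
    volunteers would also be offered additional training).
    """
    per_source = defaultdict(lambda: defaultdict(list))
    overall = defaultdict(list)
    for j in judgments:
        per_source[j['volunteer']][j['source']].append(j['severity'])
        overall[j['volunteer']].append(j['severity'])

    weights = {}
    for vol, sources in per_source.items():
        baseline = mean(overall[vol])
        drifts = any(mean(vals) - baseline > tolerance * max(baseline, 1.0)
                     for vals in sources.values())
        weights[vol] = 0.5 if drifts else 1.0
    return weights

# Example: volunteer "v1" is consistently harsher on one outlet than their own baseline.
judgments = [
    {'volunteer': 'v1', 'source': 'Outlet A', 'severity': 3},
    {'volunteer': 'v1', 'source': 'Outlet A', 'severity': 3},
    {'volunteer': 'v1', 'source': 'Outlet B', 'severity': 1},
    {'volunteer': 'v2', 'source': 'Outlet A', 'severity': 2},
    {'volunteer': 'v2', 'source': 'Outlet B', 'severity': 2},
]
print(volunteer_weights(judgments))  # -> {'v1': 0.5, 'v2': 1.0}
```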

What is it like to be a Public Editor volunteer?

“Rewarding.” “Enriching.” “Surprisingly fun, actually.” These are just some of the responses we’ve gotten when we’ve asked our volunteers this question. Some of us get a little rush when we complete tasks, because we know we’re making a real difference. Some of us think of each task as a quick brain teaser, like Sudoku or the crossword puzzle. Others of us are all about the leaderboard. And pretty much all of us like the badges and stickers we earn as we develop our discernment and clean up the Internet’s misinformation. As our community grows, we’re doing more task parties, too, online and in person. Our volunteer forum is not just for advice about how to do tasks; it’s also a place where volunteers in different locales, or those who share common interests, can come together. If you enjoy learning useful thinking tools by doing interesting tasks in service of a good cause, you could enjoy being a Public Editor volunteer. And if you’re interested in more of a community experience, we’ve created that as well.

How can non-experts produce expert evaluations?

Many of us are accustomed to thinking of expertise as something an individual possesses. But expertise can also be captured in a book or held by an organization. And now, with TagWorks technology, expertise can be formalized as a set of analytical protocols and applied directly to documents. A single expert can multiply her own analysis throughput by a thousand (or even a million) by using TagWorks to set up an assembly line of simple annotation tasks that non-experts can complete without face-to-face training. To learn more, visit https://tag.works.

How long has Public Editor been in development?

Even before “Fake News” was a household term, back in 2015, project Co-Directors Adams and Perlmutter were working on the Public Editor concept. They wanted to use TagWorks to identify and label examples of erroneous thinking that Perlmutter could use in his critical thinking curriculum. When so much propaganda affected the 2016 election, the pair focused more energy on the project, convening a “Signal to News” working group at the Berkeley Institute for Data Science in 2016 to gather volunteers and perspectives on measuring news quality. The team then began attending conferences, including the original MisInfoCon at MIT, IC2S2, and others. Adams wrote the founding mission statement and work plan for the group that would rebrand itself as the Credibility Coalition. He taught CredCo’s core team how to operationalize credibility concepts and wrote their first credibility indicator data model, which was also used as the initial model of the W3C Credibility Standards Working Group. Adams and the Public Editor team then focused on improving and scaling their own annotation tools and refining their own comprehensive schema of credibility indicators. Now, in its fourth spiral of development, Public Editor is ready to deploy its new technology with the public.

How can stakeholders collaborate with Public Editor?

We envision ourselves as an important contributor in a cooperative information ecology. We correct misinformation in near real time (<30 minutes to label & score an article). Readers can consume those corrections without interrupting their reading process or even leaving their browser window. We provide free, effective media literacy training to everyone with an Internet connection (training ourselves and newsreaders to discern 40 different kinds of sloppy thinking, broken inference, and twisted reasoning). And, we provide a reliable, nuanced, visualized, and explainable measure of news article credibility via a sensible and transparent scoring rubric that quantifies the qualitative into a 0-100 score people can readily understand.

We also cooperate with the surrounding ecology when there are things we cannot do effectively. For instance, we do not attempt to perform fact-checking. A single fact-checking task can take 20 minutes or 20 weeks to complete, and other organizations already do it very well. We focus on an article’s reasoning, use of evidence, biases, and anything that can be evaluated within the article itself. So, we cooperate with fact-checkers, sharing our data back and forth.

We have a very cooperative stance toward news organizations, news aggregators, and platforms. Public Editor is a way to reward good journalism while discouraging low-quality content. Some newsrooms may use it as a pre-publication tool that engages their own subscribers. We’re sure newsreaders would love to see big tech companies and news aggregators use our credibility scores to promote better content while slowing the spread of misinformation. And since we are offering platforms’ users an opportunity to clean up problematic content, we would love to see the big platforms work with us to improve their own users’ experiences.

We would like to cooperate with governments, too. With sponsorship, we can help ensure their citizens are more literate, sensible, and resilient to misinformation. Governments might even consider using Public Editor’s credibility scores to regulate the quality of information being promoted through social media platforms without resorting to censorship.

Above all, we’re simply happy to be loved by newsreaders. We get a thrill out of knowing that when people use our Chrome extension they begin to observe how their own discernment improves, prompting them to rethink and revise some of their own limiting beliefs about the power of human reason and cooperation. That gives us hope that maybe we can all wise up and share a common reality again someday soon.