Google has enhanced its search quality evaluator guidelines by introducing a new section guiding quality raters on how to flag content that is inaccurate, offensive, or upsetting. This update aims to help Google deliver search results that are more factually accurate and dependable.
Quality raters are tasked with conducting real-life searches and evaluating the quality of the pages returned based on their ability to satisfy the user’s query. The 160-page document now features a new section that provides instructions on how to rate “Upsetting-Offensive Results.”
### Evaluating Upsetting-Offensive Content
The new addition to the guidelines advises quality raters to flag results that might be considered offensive or upsetting, even when those results satisfy the user’s query. In other words, a page that meets the criteria for upsetting or offensive content should be flagged even if the search query explicitly sought out such content.
According to Google, Upsetting-Offensive content typically encompasses the following:
– Content that promotes hate or violence against groups based on factors such as race, ethnicity, religion, gender, nationality, citizenship, disability, age, sexual orientation, or veteran status.
– Content that uses racial slurs or extremely offensive language.
– Graphic violence, including animal cruelty or child abuse.
– Explicit instructions on harmful activities, like human trafficking or violent assault.
– Other content deemed extremely upsetting or offensive by users in various locales.
Google provides an example where a user searching for information about the Holocaust is directed to a white supremacist site promoting Holocaust denial, which should be flagged as Upsetting-Offensive. Conversely, if the same search returns a page with accurate historical information, such as from the History Channel, it should not be flagged.
### What if a User is Intentionally Searching for Upsetting-Offensive Content?
The guidelines also explain how to assign a “Needs Met” rating to Upsetting-Offensive content, since some users genuinely seek educational information on sensitive topics.
> “Remember that users of all ages, genders, races, and religions use search engines for various purposes. One especially important need is exploring difficult-to-discuss subjects. For example, some people might hesitate to ask about the meaning of racial slurs or want to understand why certain racially offensive statements are made. Providing users with resources that explain racism, hatred, and other sensitive topics benefits society.
>
> When the user’s query appears to request or tolerate potentially upsetting, offensive, or sensitive content, we call it an ‘Upsetting-Offensive tolerant query.’ For the purpose of Needs Met rating, assume users have a genuine educational or informational intent.”
Quality raters should give a “Highly Meets” rating when informational results about Upsetting-Offensive topics are found on trustworthy, accurate, and credible sources, unless the user’s query clearly seeks an alternative viewpoint. The results should address the specific topic to help users understand why it is upsetting or offensive and what the underlying sensitivities are.
### What Happens When Content is Flagged?
Flagged content is not immediately demoted or removed. Instead, the data collected by quality raters helps Google’s algorithm developers refine how the search engine automatically identifies Upsetting-Offensive content. If Google’s algorithms determine that flagged content would likely be unwelcome given the intent behind a user’s query, that content is less likely to appear in the user’s search results. It will, however, remain accessible to users who intentionally seek it out.
Ultimately, whether these new guidelines will improve the quality of search results remains to be seen.