Access to information is at the core of Google’s mission, and every day we work to make information from the web available to everyone. We design our systems to return the most relevant and reliable information possible, but our search results include pages from the open web. Depending on what you search for, the results can include content that people might find objectionable or offensive.
While we’re committed to providing open access to information, we also have a strong commitment and responsibility to comply with the law and protect our users. When content violates local law, we make it inaccessible in Google Search.
Overall, our approach to information quality and webpage removals aims to balance ensuring that people have access to the information they need with doing our best to protect against harmful information online. Here’s an overview of how we do that.
Complying with the law
We hold ourselves to a high standard when it comes to our legal requirements to remove pages from Google search results. For many issues, such as privacy or defamation, our legal obligations may vary country by country, as different jurisdictions have come to different conclusions about how to deal with these complex topics.
We encourage people and authorities to alert us to content they believe violates the law. In fact, in most cases this is necessary, because whether content is illegal is not always a determination that Google is equipped to make on its own, especially without notice from those who are affected.
For example, in the case of copyrighted material, we can’t automatically confirm whether a given page hosting that particular content has a license to do so, so we need rightsholders to tell us. By contrast, the mere presence of child sex abuse material (CSAM) on a page is illegal in most jurisdictions, so we develop ways to automatically identify that content and prevent it from showing in our results.
For all legal removals, we share information about government removal requests in our Transparency Report. Where possible, we also inform website owners about requests for removal via Search Console.
Voluntary removal policies
Beyond removing content as required by law, we also maintain a set of voluntary policies, mostly focused on highly personal content appearing on the open web. Examples include financial or medical information, government-issued IDs, and intimate imagery published without consent.
This is information that people generally intend to keep private, and its exposure can cause serious harm, like identity theft, so we give people the ability to request its removal from our search results.
We also look for new ways to carefully expand these policies and offer further protections for people online. For example, we allow people to request the removal of pages about themselves on sites with exploitative removal policies, as well as pages that include contact information alongside personal threats, a form of “doxxing.” While people may want to access such sites to find potentially useful information or understand their policies and practices, the pages themselves provide little value or public interest, and can lead to reputational or even physical harm that we aim to help protect against.
Solving issues at scale
It might seem intuitive to solve content problems by removing more content, either page by page or by limiting access to entire sites. However, in addition to being in tension with our mission, this approach doesn’t scale to the size of the open web, which contains trillions of pages, with more added every minute. Building scalable, automated approaches lets us not only address these challenges more effectively, but also avoid unnecessarily limiting access to legal content online.
Our most effective protection is to design systems that rank high-quality, reliable information at the top of our results. And while we do remove pages in compliance with our policies and legal obligations, we also use insights from those removals to improve our systems overall.
For example, when we receive a high volume of valid copyright removal requests for a given site, we can use that as a quality signal and demote the site in our results. We’ve developed similar approaches for sites whose pages we’ve removed under our voluntary policies. This lets us not only help the people requesting the removals, but also address the issue at scale in other cases.
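To make that idea concrete, here’s a minimal sketch of how a site-level demotion signal of this kind could work in principle. The function name, threshold, and penalty values below are hypothetical illustrations for this post only, not a description of Google’s actual ranking systems.

```python
# Illustrative sketch only: hypothetical names, thresholds, and penalties.
# It shows the general idea of turning a high volume of valid removal
# requests into a site-level demotion signal, not Google's actual code.

from collections import Counter

def demotion_factor(valid_removal_counts: Counter, site: str,
                    threshold: int = 100, max_penalty: float = 0.5) -> float:
    """Return a multiplier in (0, 1] applied to a site's ranking score.

    Sites with few valid removal requests are unaffected (factor 1.0);
    sites well above the threshold are demoted, capped at max_penalty.
    """
    count = valid_removal_counts.get(site, 0)
    if count <= threshold:
        return 1.0
    # Scale the penalty with how far the site exceeds the threshold.
    excess_ratio = min((count - threshold) / threshold, 1.0)
    return 1.0 - max_penalty * excess_ratio

# Example: a site with 250 valid removal requests against a threshold of
# 100 has its ranking score halved; a site with 3 requests is unaffected.
counts = Counter({"example-site.test": 250, "another-site.test": 3})
print(demotion_factor(counts, "example-site.test"))   # 0.5
print(demotion_factor(counts, "another-site.test"))   # 1.0
```

The key property of a signal like this is that it is automated and operates at the site level, so individual removal requests can also help protect against similar pages on the same site.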
An evolving web
Ultimately, it’s important to remember that even when we remove content from Google Search, it may still exist on the web, and only a website owner can remove content entirely. But we do fight against the harmful effects of sensitive personal information appearing in our results, and have strict practices to ensure we’re complying with the law. We’re always evolving our approach to protect against bad actors on the web and ensure Google continues to deliver high-quality, reliable information for everyone.
Beyond how we handle removals of web pages, if you’d like to learn more about how we approach our policies for search features, visit this post. And if you’re still looking for more details about Search, check out other articles in our How Search Works series.
by Danny Sullivan via The Keyword