In a new framework outlining how technology companies should approach children’s safety online, Google is challenging proposed laws that would require online services to implement age checks. The framework, titled the “Legislative Framework for Protecting Children and Teens Online,” is the tech giant’s response to congressional child online safety proposals.
In its set of principles, Google dismisses policies that would require online services to verify users’ ages before granting them access to their platforms. Utah, for instance, passed a law that will require social media companies to verify the age of anyone seeking to open or maintain an account. Google argues that such age verification policies involve significant tradeoffs and could restrict access to important information.
“Good legislative models — like those based on age-appropriate design principles — can help hold companies responsible for promoting safety and privacy, while enabling access to richer experiences for children and teens,” the company wrote in a blog post announcing the framework. “Of course, as policymakers contemplate these issues, they should carefully consider the broader impacts of these bills and avoid side effects like blocking access to critical services, requiring people (including adults) to submit unnecessary identification or sensitive personal information.”
The company states that “data-intrusive methods,” such as verification with government IDs, should be limited to “high-risk” services that deal with alcohol, gambling, or porn. For context, Louisiana recently passed a law that requires age verification to access adult websites in an attempt to prevent kids from seeing online porn. Google’s framework does not oppose age verification of this kind.
Google argues that instead of implementing legislation that would require online services to verify ages, these companies should be required to “prioritize the best interests of children and teens in the design of their products.” Google says that online services used by children and teens should be required to assess the collective interests of children based on “expert research and best practices, to ensure that they are developing, designing and offering age-appropriate products and services.”
In other words, Google says online services shouldn’t be forced to block teens and children from their platforms, and should instead be required to design products appropriately.
Today’s framework comes four years after the Federal Trade Commission (FTC) fined Google and YouTube $170 million for violating children’s privacy. The FTC said YouTube illegally collected personal information from children and used it to profit by targeting them with ads. As part of the settlement, the FTC said YouTube had to develop and maintain a system that asks channel owners to identify their child-directed content to ensure that targeted ads are not placed in such videos.
Interestingly, Google’s framework notes that there should be legislation banning personalized advertising for children and teens. Earlier this year, Senator Ed Markey (D-Mass.) announced the reintroduction of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), which would ban targeted ads to minors. Google argues that “for those under 18, legislation should ban personalized advertising, including personalization based on a user’s age, gender, or interests.”
In a separate online safety framework published today by YouTube, the video platform’s CEO Neal Mohan said the service doesn’t serve personalized ads to kids.
Despite this claim, a recent report from advertising performance optimization platform Adalytics alleges that YouTube continues to serve targeted ads to minors. In a blog post, Google stated that Adalytics’ report was “deeply flawed and uninformed.” The report caught the attention of Senator Marsha Blackburn (R-Tenn.) and Senator Markey, who sent a letter to the FTC asking the agency to investigate the matter.