Meta and Snap are the latest tech firms to receive formal requests for information (RFIs) from the European Commission about the steps they’re taking to safeguard minors on their platforms, in line with requirements set out in the bloc’s Digital Services Act (DSA).

Yesterday, the Commission sent similar RFIs, also focused on child protection, to TikTok and YouTube. The safety of minors has quickly emerged as a priority area for the EU’s DSA oversight.

The Commission designated 19 so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs) back in April, with Meta’s social networks Facebook and Instagram and Snap’s messaging app Snapchat among them.

While the full regime won’t be up and running until February next year, when compliance kicks in for smaller services, larger platforms have been expected to be DSA-compliant since late August.

The latest RFIs ask Meta and Snap for more details on how they are complying with obligations related to risk assessments and mitigation measures to protect minors online — with particular reference to the risks to kids’ mental and physical health.

The two companies have been given until December 1 to respond.

Reached for comment, a Snap spokesperson said:

We have received the RFI and will be responding to the Commission in due course. We share the goals of the EU and DSA to help ensure digital platforms provide an age appropriate, safe and positive experience for their users.

Meta also sent us a statement:

We’re firmly committed to providing teens with safe, positive experiences online, and have already introduced over 30 tools to support teens and their families. These include supervision tools for parents to decide when, and for how long, their teens use Instagram, age verification technology that helps ensure teens have age-appropriate experiences, and tools like Quiet Mode and Take A Break that help teens manage their screen time. We look forward to providing further details about this work to the European Commission.

It’s not the first DSA RFI Meta has received; the Commission also recently asked it for more details about what it’s doing to mitigate illegal content and disinformation risks related to the Israel-Hamas war, and for more detail on the steps it’s taking to ensure election security.

The war in the Middle East and election security have quickly emerged as other priority areas for the Commission’s enforcement of the DSA, alongside child protection.

In recent days, the EU has also issued an RFI to Chinese ecommerce giant AliExpress — seeking more information on measures to comply with consumer protection-related obligations, especially around illegal products such as fake medicines. So risks related to dangerous goods being sold online look to be another early focus.

Priority areas

The Commission says its early focus for enforcing the DSA on VLOPs/VLOSEs is “self-explanatory” — zooming in on areas where it sees an imperative for the flagship transparency and accountability framework to deliver results, and fast.

“When you are a new digital regulator, as we are, you need to start your work by identifying priority areas,” a Commission official said during a background briefing with journalists. “Obviously in the context of the Hamas-Israel conflict — illegal content, antisemitism, racism — that is an important area. We had to be out there to remind the platforms of their duty to be ready with their systems to be able to take down illegal content rapidly.

“Imagine, you know, potential live footages of what might happen or could have happened to hostages, so we really had to engage with them early on. Also to be a partner in addressing the disinformation there.”

Another “important area”, where the Commission has been particularly active this week, is child protection — given the “big promise” that the regulation will improve minors’ online experience. The first risk assessments platforms have produced in relation to child safety show room for improvement, per the Commission.

Disclosures in the first set of transparency reports the DSA requires from VLOPs and VLOSEs — published in recent weeks, ahead of a deadline earlier this month — are “a mixed bag”, an EU official said.

The Commission hasn’t set up a centralized repository where people can easily access all the reports. But they are available on the platforms’ own sites. (Meta’s DSA transparency reports for Facebook and Instagram can be downloaded from here, for example, while Snap’s report is here.)

Disclosures include key metrics like active users per EU Member State. The reports also contain information about platforms’ content moderation resources, including details of the linguistic capabilities of content moderation staff.

Platforms’ failure to field adequate numbers of content moderators fluent in all the languages spoken across the EU has been a long-running bone of contention for the bloc. And during today’s briefing a Commission official described it as a “constant struggle” with platforms, including those signed up to the EU’s Code of Practice on Disinformation, which predates the DSA by around five years.

The official went on to say it’s unlikely the EU will end up demanding that VLOPs/VLOSEs engage a set number of moderators per Member State language. But they suggested the transparency reporting should work to apply “peer pressure” — such as by exposing some “huge” differences in relative resourcing.

During the briefing, the Commission highlighted some comparisons it has already extracted from the first set of reports, including a chart depicting the number of EU content moderators each platform reported — which puts YouTube far in the lead (16,974), followed by Google Play (7,319) and TikTok (6,125).

Meta, by contrast, reported just 1,362 EU content moderators — fewer even than Snap (1,545) or Elon Musk-owned X/Twitter (2,294).
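To make the “relative resourcing” point concrete, here’s a minimal Python sketch that ranks platforms by reported moderator headcount per million EU users. The moderator counts are the figures reported above; the user numbers are hypothetical placeholders, not the platforms’ actual DSA disclosures.

```python
# Moderator headcounts as reported in the platforms' first DSA
# transparency reports (see the figures above).
moderators = {
    "YouTube": 16_974,
    "Google Play": 7_319,
    "TikTok": 6_125,
    "X/Twitter": 2_294,
    "Snap": 1_545,
    "Meta": 1_362,
}

# HYPOTHETICAL EU user figures, in millions — placeholders for
# illustration only; substitute the per-Member-State active user
# numbers each platform actually discloses.
eu_users_millions = {
    "YouTube": 400,
    "Google Play": 300,
    "TikTok": 130,
    "X/Twitter": 100,
    "Snap": 100,
    "Meta": 250,
}

# Moderators per million users: one crude way to surface differences
# in relative resourcing, rather than comparing absolute headcounts.
ranked = sorted(
    moderators,
    key=lambda p: moderators[p] / eu_users_millions[p],
    reverse=True,
)
for platform in ranked:
    ratio = moderators[platform] / eu_users_millions[platform]
    print(f"{platform:12} {moderators[platform]:>7,} moderators, "
          f"~{ratio:.1f} per million users")
```

Even on made-up denominators, the exercise shows how quickly a chart of absolute headcounts can flip once user bases are factored in — which is presumably why the Commission is leaning on comparable disclosures rather than raw totals.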

Still, Commission officials cautioned the early reporting is not standardized. (Snap’s report, for example, notes that its content moderation team “operates across the globe” — and its breakdown of human moderation resources indicates “the language specialties of moderators”. But it caveats that by noting some moderators specialize in multiple languages. So, presumably, some of its “EU moderators” might not be exclusively moderating content related to EU users.)

“There’s still some technical work to be done, despite the transparency, because we want to be sure that everybody has the same concept of what is a content moderator,” noted one Commission official. “It’s not necessarily the same for every platform. What does it mean to speak a language? It sounds stupid but it actually is something that we have to investigate in a little bit more detail.”

Another element officials said they’re keen to understand is “what is the steady state of content moderators” — whether there’s a permanent level or whether, for example, resourcing is dialled up for an election or a crisis event. This, they added, is something the Commission is investigating at the moment.

Regarding X, the Commission also said it’s too early to make any statement on the effectiveness (or otherwise) of the platform’s crowdsourced approach to content moderation (aka X’s Community Notes feature).

But EU officials said X still has some election integrity teams, which they are engaging with to learn more about its approach to upholding its policies in this area.

Unprecedented transparency

What’s clear is that the first set of DSA transparency reports has opened up fresh questions, which, in turn, have triggered a wave of RFIs as the EU seeks to sharpen the resolution of the disclosures it’s getting from Big Tech. The flurry of RFIs, in other words, reflects gaps in the early disclosures as the regime gets off the ground.

This may, in part, be because transparency reporting is not yet harmonized. But that’s set to change: the Commission confirmed it will come forward, likely early next year, with an implementing act (i.e. secondary legislation) that will include reporting templates for these disclosures.
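Such templates would, presumably, turn today’s free-form reports into comparable, machine-readable records. As a purely hypothetical sketch — the field names below are invented for illustration and don’t come from any actual Commission template — a standardized disclosure might look something like this:

```python
from dataclasses import dataclass, field

@dataclass
class ModeratorTeam:
    language: str         # e.g. "de", "fr"
    headcount: int        # moderators covering this language
    multi_language: bool  # True if these moderators also cover other languages

@dataclass
class DsaDisclosure:
    platform: str
    reporting_period: str                        # e.g. "2023-H2"
    active_users_by_member_state: dict[str, int] = field(default_factory=dict)
    moderator_teams: list[ModeratorTeam] = field(default_factory=list)

    def total_moderators(self) -> int:
        # A shared definition of "content moderator" would need to be
        # pinned down for this sum to be comparable across platforms.
        return sum(team.headcount for team in self.moderator_teams)
```

A shared schema along these lines would also force the definitional questions officials raised — what counts as a moderator, and what it means to “speak” a language — to be settled up front.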

That suggests we might — ultimately — expect to see fewer RFIs being fired at platforms down the line, as the information they are obliged to provide becomes more standardized and data flows more steadily and predictably.

But, clearly, it will take time for the regime to bed in and have the impact the EU desires — of forcing Big Tech into a more accountable and responsible relationship with users and wider society.

In the meantime, the RFIs are a sign the DSA’s wheels are turning.

The Commission is keen to be seen actively flexing its powers to get data that it contends has never been publicly disclosed by the platforms before — such as per-market content moderation resourcing, or data about the accuracy of AI moderation tools. So platforms should expect to receive plenty more such requests over the coming months (and years) as regulators deepen their oversight and try to verify whether the systems VLOPs/VLOSEs build in response to the new regulatory risk are really “effective” or not.

The Commission’s hope for the DSA is that it will, over time, open an “unprecedented” window onto how tech giants are operating. Or usher in a “whole new dimension of transparency”, as one of the officials put it today. And that reboot will reconfigure how platforms operate for the better, whether they like it or not.

“It’s important to note that there is change happening already,” a Commission official suggested today. “If you look at the whole area of content moderation you now have it black and white, with the transparency reports… and that’s peer pressure that we will of course continue to apply. But also the public can continue to apply peer pressure and ask, wait a minute, why is X not having the same amount of content moderators as others, for instance?”

Also today, EU officials confirmed the Commission has yet to open any formal DSA investigations. (The RFIs are a necessary preceding step to any formal probes that might be opened in the weeks and months ahead.)

Enforcement — in terms of fines or other sanctions for confirmed infringements — cannot kick in until next spring, as the full regime needs to be operational before formal enforcement procedures can take place. So the next few months of the DSA will be dominated by information gathering, and — the EU hopes — will start to showcase the power of transparency to shape a new, more quantified narrative on Big Tech.

Again, the Commission suggests it’s already seeing positive shifts on this front. Instead of the usual “generic answers and absolute numbers” routinely trotted out by tech giants in voluntary reporting (such as under the aforementioned Disinformation Code), the RFIs, backed by the legally binding DSA, are extracting “much more usable data and information”, according to a Commission official.

“If we see we are not getting the right answers, [we might] open an investigation, a formal investigation; we might come to interim measures; we might come to compliance deals,” noted another official, describing the process as “a whole avalanche of individual steps — and only at the very end would there be the potential sanctions decision”. But they also emphasized that transparency itself can be a trigger for change, pointing back to the power of “peer pressure” and the threat of “reputational risk” to drive reform.


