Citizens should have more rights vis-à-vis major communication platforms such as Google, Facebook, YouTube and Twitter, and should be able to demand transparency from them. That is the view of the Swiss Federal Council, which today commissioned the Federal Department of the Environment, Transport, Energy and Communications (DETEC) to draw up a draft regulation on the matter, without limiting freedom of expression. In a statement, the Executive notes that people increasingly turn to these platforms to get information and form opinions, and that the platforms therefore exert growing influence on public debate. “The Government recognizes that they are currently poorly regulated and that the criteria by which it is decided who sees which content are not transparent. Moreover,” it continues, “users have a weak position towards the big platforms. This becomes particularly apparent when one of them blocks a user’s account or deletes content the user has published. At present, users cannot defend themselves against such measures, or at least not adequately.”
The project should be ready in a year
The new rules are to apply to platform operators (intermediaries). The authorities are not expected to intervene on content beyond what they can already do in the analogue world. Where appropriate, the new provisions should draw on the European Union’s Digital Services Act. In its statement, the Federal Council lists some cornerstones to be included in the new regulation. Platforms will have to designate a contact point and a legal representative in Switzerland. Users whose content is deleted or whose account is blocked should be able to ask the platform directly to review the measure taken. An independent Swiss conciliation authority, financed by the platforms, should also be created. Advertisements should be marked as such and, in the case of advertising targeted at specific groups, the parameters used should be indicated; this would make it possible to verify who receives a given advertisement and on the basis of which criteria. Finally, users should be able to easily report hate speech, depictions of egregious violence, or threats; platforms would be responsible for assessing these reports and informing users of the outcome. The project should be ready in a year.