Friday 26 April 2024

The new Screening Regulation – part 5 of the analysis of new EU asylum laws

 


Professor Steve Peers, Royal Holloway University of London

Photo credit: Rock Cohen, via Wikimedia Commons

Just before Christmas, the European Parliament and the Council (the EU body consisting of Member States’ ministers) reached a deal on five key pieces of EU asylum legislation, concerning asylum procedures, the ‘Dublin’ system on responsibility for asylum applications, the ‘Eurodac’ database supporting the Dublin system, screening of migrants/asylum seekers, and derogations in the event of crises. These five laws joined the previously agreed revised laws on qualification of refugees and people with subsidiary protection, reception conditions for asylum-seekers, and resettlement of refugees from outside the EU. Taken together, all these laws are intended to be part of a ‘package’ of new or revised EU asylum laws.

I’ll be looking at all these agreements for new legislation on this blog in a series of blog posts (see the agreed texts here), unless the deal somehow unravels. This is the fifth post in the series, on the new Regulation on screening of migrants (mostly) at the external borders. The previous blog posts in the series concerned the planned new qualification Regulation (part 1), the revised reception conditions Directive (part 2), the planned new Regulation on resettlement of refugees (part 3), and the revised Regulation on Eurodac (part 4).

As noted in the earlier posts in this series, all of the measures in the asylum package could in principle be amended or blocked before they are adopted, except for the previous Regulation revising the powers of the EU asylum agency, which was separated from the package and already adopted in 2021. I will update this blog post as necessary in light of developments. (On EU asylum law generally, see my asylum law chapter in the latest edition of EU Justice and Home Affairs Law).

The Screening regulation: background

There have been two previous ‘phases’ in development of the Common European Asylum System: a first phase of laws mainly adopted between 2003 and 2005, and a second phase of laws mainly adopted between 2011 and 2013. The 2024 package will, if adopted, in effect be a third phase, although for some reason the EU avoids calling it that.

However, unlike most of the 2024 package of legislation, the Screening Regulation is entirely new – although to some extent it may provide a legal basis for things that were already going on in practice before its adoption. So unlike most of the other laws in the asylum package, there is no current version of the law to compare the new version to – and therefore no prior CJEU case law to consider either.

Having said that, the Screening Regulation will amend a number of other EU measures, to ensure their consistency with it, namely the Regulations on: the Visa Information System; the entry-exit system; ETIAS (the travel authorisation system); and interoperability of databases. Furthermore, a parallel Regulation will amend two EU criminal law measures to ensure that they are also consistent with the main Screening Regulation.

Why two parallel Regulations? Because the Screening Regulation, unlike the rest of the package of EU asylum law measures, is technically a law on external borders, not asylum. As such, it ‘builds upon the Schengen acquis’, and so will be applicable in principle to the Schengen associates (Norway, Iceland, Switzerland and Liechtenstein) too. Ireland is opted out (as it does not participate in this aspect of Schengen) and Denmark is formally excluded (although it may apply the Regulation as a matter of national law). In contrast, the parallel amendment to EU criminal law is only relevant to Member States (but again, there will be an Irish and Danish opt-out from it).

In this context, the preamble to the Regulation makes special provision for Cyprus, which has not yet fully applied Schengen; that country must apply the Regulation to those crossing the line separating the areas controlled by the Cypriot government and the Turkish Cypriot administration, even though it is not legally an international border from the perspective of EU law. As for Denmark and the Schengen associates, the preamble states that for them, references to the EU’s reception conditions Directive in the Screening Regulation should be understood as references to the relevant national law.

As with all the new EU asylum measures, each must be seen in the broader context of all the others – which I will be discussing over the course of this series of blog posts. Furthermore, the new Screening Regulation will have links with the Schengen Borders Code, the main law governing crossing of external EU borders – although the Regulation will not formally amend the Code. It will also link with (but again, not amend) the EU’s Returns Directive.

The legislative process leading to the agreed text of the Screening Regulation started with the Commission proposal in 2020, as part of the attempt to ‘relaunch’ the process of amending EU asylum law, started back in 2016. The proposal was subsequently negotiated between EU governments (the Council) and then between the Council and the European Parliament. But this blog post will look only at the final text, leaving aside the politics of the negotiations.

Like most of the other measures in the asylum package, the application date of the Screening Regulation will be two years after adoption (so in spring 2026). However, the provisions on queries of other EU information systems will only start to apply once those information systems enter into operation.

Scope

The Regulation applies to four categories of people, namely those who: 

without fulfilling the entry conditions [in the Schengen Borders Code], have crossed the external border in an unauthorised manner, have applied for international protection during border checks, or have been disembarked after a search and rescue operation

and to

third-country nationals illegally staying within the territory of the Member States where there is no indication that those third-country nationals have been subject to controls at external borders, before they are referred to the appropriate procedure.

The Regulation distinguishes between the first three categories, who are all connected with the external borders, and the fourth category (illegally staying where there is no indication of having been controlled at external borders). For simplicity’s sake, this blog post refers to the first three categories as ‘external cases’, and the fourth category as ‘internal cases’. Both the first and third groups must be screened regardless of whether they apply for asylum or not.

Member States ‘may refrain’ from screening the fourth category of people (on the territory, having entered without authorisation), if they send the non-EU citizen back, ‘immediately after apprehension, to another Member State under bilateral agreements or arrangements or under bilateral cooperation frameworks.’ In that event, the other Member State must apply a screening process.

The Screening Process

For external borders cases, screening must be ‘carried out without delay’, and in any event completed within seven days of apprehension, disembarkation, or presentation at the border. For internal cases, the deadline is three days. Screening must end if the person concerned is authorised to enter the territory. Screening may end if the person concerned ‘leaves the territory of the Member States, for their country of origin or country of residence or for another third country’ to which they have voluntarily decided to return and by which they have been accepted. In any case, screening ends once the deadline to complete it is reached.

Screening must take place at an ‘adequate and appropriate’ location decided by Member States; for external cases, that location should be ‘generally situated at or in proximity to the external borders’, although it could be at ‘other locations within the territory’. It must entail (referring in part to checks under other EU laws): checks on health, vulnerability, and identity; registration of biometric data ‘to the extent that it has not yet occurred’; a security check; and filling out a screening form.

For those who have made an asylum application, the registration of that application is governed by the asylum procedures Regulation. The preamble to the Screening Regulation explicitly states that an asylum application can be made during the screening process. Furthermore, the Screening Regulation is ‘without prejudice to’ the Dublin rules; and it ‘could be followed by relocation’ (ie movement to a Member State not responsible for the application) under the Dublin rules ‘or another existing solidarity mechanism’.

Member States are obliged to inform the persons being screened about the screening process itself, as well as asylum law and returns law, the Borders Code, national immigration law, the GDPR, and any prospect of relocation. Otherwise, there is no explicit reference to procedural rights. Conversely, the people being screened have procedural obligations: they must ‘remain available to the screening authorities’ and provide both specified personal data and biometric data as set out in the Eurodac Regulation. Finally, after screening ends, the person concerned should be referred to the appropriate procedure – either the asylum process or the returns process.

Treatment During Screening

As regards immigration law status during the screening process, external cases must not be authorised to enter the territory of the Member States, even though the screening might be carried out on the territory de facto. This is obviously a legal fiction, which is exacerbated by the prospect (under the procedures Regulation) of continuing that legal fiction under the ‘borders procedure’ for up to 12 weeks.

Moreover, Member States must provide in their national law that persons being screened ‘remain available to the authorities carrying out the screening for the duration of the screening, to prevent any risk of absconding and potential threats to internal security resulting from such absconding.’ This wording looks like a euphemism for detention, which the Regulation goes on to refer to more explicitly – providing that where the person being screened has not applied for asylum, the rules on detention in the Returns Directive apply.

For those who have applied for asylum, the reception conditions Directive applies to the extent set out in it. This cross-reference is potentially awkward because that Directive applies to those ‘allowed to remain on the territory’ with that status, whereas the Screening Regulation decrees that the people covered by it are not legally on the territory. Logically the reception conditions Directive must apply despite the non-entry rule of the Screening Regulation, otherwise that Regulation’s references to that Directive applying would be meaningless (the preamble to the Regulation also says that the detention rules in the reception conditions Directive ‘should apply’ to asylum seekers covered by the Regulation). Screening is not as such a ground for detention in the exhaustive list of grounds set out in the reception conditions Directive – so Member States will have to find some other ground for it from that list. The preamble to the Regulation sets out general rules on limits to detention, borrowing some language from the reception conditions Directive.

As for other aspects of treatment, the Screening Regulation states that Member States ‘shall ensure that all persons subject to the screening are accorded a standard of living which guarantees their subsistence, protects their physical and mental health and respects their rights under the Charter [of Fundamental Rights].’ For asylum-seekers, this overlaps with the more detailed rules in the reception conditions Directive, but for non-asylum seekers, it in principle goes further than the Returns Directive – although the case law on that Directive has required some minimum treatment of people covered by it. Of course, for many people subject to screening, it will be the provisions on detention conditions under those two Directives which will be relevant in practice. There is a more specific provision on health care, stating that those being screened ‘shall have access to emergency health care and essential treatment of illness.’

The Regulation includes specific provisions on minors. The best interests of the child must always be paramount; the minor must be accompanied by an adult family member, if present, during the screening; and Member States must ensure the involvement of a representative for unaccompanied minors (overlapping with the relevant provisions of the reception conditions Directive).

Finally, as for contact with the outside world, ‘[o]rganisations and persons providing advice and counselling shall have effective access to third-country nationals during the screening’, although Member States may limit that access under national law where the limit is ‘objectively necessary for the security, public order or administrative management of a border crossing point or of a facility where the screening is carried out, provided that such access is not severely restricted or rendered impossible’. Presumably such access can help check that the rules on treatment are being applied, including possible challenges to detention and offering advice as regards subsequent asylum or returns procedures, or potential challenges to screening as discussed above.

Human Rights Monitoring

The Regulation sets out an overarching obligation to comply with human rights obligations, including the principle of non-refoulement (not sending a migrant to an unsafe country), as well as a requirement to have an independent human rights monitoring mechanism, which is specified in some detail. Member States must: ‘investigate allegations of failure of respect for fundamental rights’ as regards screening; ensure civil or criminal liability under national law ‘in cases of failure to respect or to enforce fundamental rights’; and create an independent mechanism to monitor human rights compliance during the screening, ensuring that allegations of human rights breaches are dealt with effectively, with ‘adequate safeguards’ to ensure its independence. The preamble points out that judicial review is not enough to meet these standards. (Also, these rules will apply to monitoring the borders procedure under the procedures Regulation.)

Assessment

To what extent has this Regulation ensured a balance between migration control and human rights? It does aim towards a greater degree of migration control by imposing new legal obligations as regards many asylum seekers; but the key point as regards their rights is that the Regulation provides for a filtering process, not a final decision. In other words, the screening process does not entail in itself a decision on the merits or admissibility of an asylum claim, or a return decision. Whilst it is based on a legal fiction of non-entry, that process is strictly and absolutely limited in time, with no prospect of extending the short screening period even as a derogation under the Exceptions Regulation. (In contrast, the border procedure under the procedures Regulation lasts for longer, and can be extended in exceptional cases.) And the legal fiction does not in any event mean that no law applies at all to the persons concerned: obviously at the very least, the Screening Regulation itself applies, as do other EU laws which it makes applicable. (So does the ECHR: see Amuur v France.) For instance, the Regulation refers to detention on the basis of the returns and reception conditions Directives. And although the lack of authorisation to enter means that the right to remain on the territory as an asylum seeker is not triggered as such, the Regulation nevertheless precludes Member States from taking return decisions to remove asylum seekers, as it only provides for a filtering process.

Despite the absence of any express procedural rights in the Regulation, it is arguable that in light of the right to effective remedies and access to court set out in Article 47 of the Charter, it should at least be possible to challenge the application of the screening procedure on the basis that (for example) there is no legal ground for the screening at all, or that the screening has exceeded its permitted duration. In any event, the absence of express procedural rights should be seen in the context of the screening process not determining the merits of an asylum application.

The drafters of the Regulation chose instead to focus on the prospect of non-judicial processes to protect human rights in the context of the screening process. While non-judicial mechanisms of course play an important role in protection of human rights in general, it is useful if parallel judicial processes can be relied upon too. And one area where the Regulation should have explicitly provided for both judicial and non-judicial mechanisms is pushbacks from the territory – illegal not only under human rights law but also under EU law, as recently confirmed by the CJEU.

 

Monday 22 April 2024

Access to documents: an important victory for transparency in ClientEarth v Council

 



Dimitrios Kyriazis (DPhil, Oxon), Assistant Professor in EU Law at the Law School of the Aristotle University of Thessaloniki.

Photo credit: Bela Geletneky, via Wikimedia Commons

 

In ClientEarth v Council (Joined Cases T-682/21 and T-683/21), the General Court (GC) heard an action for annulment brought by ClientEarth AISBL (and Ms Leino-Sandberg) against a decision by the Council of the EU refusing access to certain documents requested on the basis of the Public Access to Documents Regulation (1049/2001) and the Aarhus Convention Regulation (1367/2006). The GC found against the Council and annulled its decisions refusing access.

This judgment is important for a variety of reasons. First, it sheds light on the proper application of transparency requirements for EU institutions. Second, it does not allow the EU’s legislative process to remain opaque. Third, it reaffirms the correct standards for providing sufficient justifications for EU decisions.

In this post, the background to the dispute is initially set out, as well as the pleas in law raised. Then, the GC’s key dicta are analysed. Finally, the post concludes with an assessment of the ruling’s broader ramifications. 

Background to the dispute and pleas raised

Lodging actions for annulment under Article 263 TFEU, the applicants, ClientEarth AISBL and Ms Päivi Leino-Sandberg, sought annulment of the decisions contained in the letters with reference numbers SGS 21/2869 and SGS 21/2870 of the Council of 9 August 2021, refusing them access in part to document 8721/21. This document was issued by the Council’s legal service and contained its legal opinion on the then proposed amendment of the EU Aarhus Regulation.

To provide some context, Regulation (EC) No 1367/2006 (“Aarhus Regulation”) was adopted by the EU in late 2006 in order to comply with the requirements of the Aarhus Convention, i.e. the Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters.

In March 2017, the Aarhus Convention Compliance Committee (‘the Aarhus Committee’), which was set up in order to verify compliance by the parties to that convention with the obligations arising therefrom, found, inter alia, that the EU was not in compliance with Article 9(3) and (4) of that convention regarding access to justice by members of the public and recommended that the EU Aarhus Regulation be amended. Its two main concerns were as follows. First, the Aarhus Regulation should not be restricted to acts of individual scope with legally binding and external effects adopted under environmental law, but had to be extended to all acts running counter to environmental law. Second, the mechanism should not be open only to certain NGOs entitled to make use of it, but had to be open to ‘members of the public’ as well.

In October 2020, the European Commission published a proposal to amend the Aarhus Regulation, and the Aarhus Committee issued advice on the Commission’s proposal stating that, notwithstanding certain concerns that remained to be addressed, the proposal constituted a ‘significant positive development’. In May 2021, the Council’s legal service issued an opinion relating to the Commission’s proposal and the advice of the Aarhus Committee in document 8721/21. This is the document ClientEarth requested full access to a few days later. The Council only partly granted the request, giving access to only certain paragraphs of the document. ClientEarth then made confirmatory applications pursuant to Article 7(2) of Regulation No 1049/2001 and in August 2021, the Council adopted the (now challenged) decisions, by which it determined the applicants’ confirmatory applications. While confirming its previous decision to refuse full access to the requested document, the Council granted additional partial access to some more paragraphs of that document.

The applicants brought an action for annulment against said Council decisions refusing them full access. In support of the action, ClientEarth relied on four pleas in law, alleging that the Council had committed several errors of law and a manifest error of assessment.

The first three pleas were based formally on errors of law, while the fourth one was subsidiary in nature. We will follow the order which the GC followed in its judgment, thus examining the second plea first, then the first one, and finally the third one. Only the key legal dicta are repeated and analysed.

Second plea in law (paras 26-87)

The applicants’ second plea in law alleged that the Council committed an error of law and of assessment in applying the exception provided for in the second indent of Article 4(2) of Regulation No 1049/2001 relating to the protection of legal advice. In summary, this provision provides that access to a document is to be refused where disclosure would undermine the protection of legal advice, unless there is an overriding public interest in disclosure of that document. A three-step test has been set out in settled case law in order to apply this exception.

First, the institution concerned, here the Council, must satisfy itself that the document which it is asked to disclose does indeed relate to legal advice and, if so, it must decide which parts of it are actually concerned and may, therefore, be covered by the exception at issue. Second, the institution must examine whether disclosure of the parts of the document in question which have been identified as relating to legal advice would undermine the protection which must be afforded to that advice. The question to be asked here is whether it would be harmful to the institution’s interest in seeking legal advice and receiving frank, objective and comprehensive advice. The risk of that interest being undermined must, in order to be capable of being relied on, be reasonably foreseeable and not purely hypothetical. Finally, even if said institution considers that disclosure of a document would undermine the protection of legal advice, it is incumbent on it to ascertain whether there is any overriding public interest justifying disclosure despite the fact that its interest in seeking legal advice and receiving frank, objective and comprehensive advice would thereby be undermined.

These conditions were examined in turn. The applicants disputed whether the opinion contained legal advice to begin with, but their argument was swiftly (and rightly) rejected by the GC, which stressed that ‘the analysis of the requested document shows that its content is intended to answer questions of law and, as a result, is covered by the exception relating to the protection of legal advice’ (para 42).

Moving on to the second condition, the applicants had asserted that the document was not particularly sensitive and did not have a particularly wide scope, so that the Council erred in assessing that its disclosure was liable to undermine the protection that must be afforded to legal advice. More specifically, they submitted that the Council did not establish that there was an actual, specific, reasonably foreseeable and non-hypothetical risk that would result from disclosure of that document, nor that the document had a particularly wide scope having regard to the legislative context in which it was adopted.

Regarding the sensitive nature of the requested document, the Council had substantiated it by relying on three considerations. The first consideration was the context in which that document had been drawn up and its content; the second was the risk of external pressure if the document was released; and the third, the fact that the issues addressed could be the subject of litigation before the EU Courts.

The GC very systematically and methodically tore down these defences. First, it stressed that the document itself must be particularly sensitive in nature, not, as argued by the Council, the context of which it forms part (para 58). If it comprises only legal assessments that have no originality and does not contain, in addition to those assessments, sensitive information or does not refer to confidential facts, it cannot be considered sensitive in nature (para 59). The Council’s position on this matter was not endorsed by the GC.

The Court next focused on the Council’s assertion that the disclosure of the requested document would expose its legal service to external pressure which could subsequently affect how its advice is drafted and therefore prejudice that legal service’s ability to express its views free from such pressure. The GC was not receptive to such abstract “dangers” either. First, it reiterated settled case law stressing that openness in the legislative process of the EU institutions contributes to conferring greater legitimacy on the institutions in the eyes of EU citizens and increasing their confidence in those institutions by allowing divergences between various points of view to be openly debated (para 64). Therefore, mere statements relying, in a general and abstract way, on the risk of ‘external pressure’ did not suffice to establish that the protection of legal advice would be undermined. This argument was, accordingly, also rejected by the GC.

As regards the Council’s argument that the requested document was particularly sensitive in so far as the issues addressed could be the subject of litigation before the EU Courts, the GC was not particularly sympathetic here either. In essence, the nub of the Council’s argument here was that it would be difficult for the legal service of an institution which had initially expressed a negative opinion regarding a draft legislative act subsequently to defend the lawfulness of that act before the EU Courts, if its opinion had been published. This, prima facie at least, does make sense. However, the GC reminded the Council that it is settled case law that such an argument was too general an argument to justify an exception to the openness provided for by Regulation No 1049/2001 (para 74). More specifically, the Council had not specified exactly how disclosure of the requested document could harm its ability to defend itself in the event of litigation concerning the interpretation or application of the Aarhus Regulation. Furthermore, it was not apparent from the examination of the content of that document that it could be regarded as expressing a negative opinion regarding the Commission’s proposal for amendment of that regulation. Concluding on this matter, the GC stressed (para 76) that the Council’s refusal was vitiated by an error of assessment and, consequently, the first complaint had to be upheld.

The GC then moved on to the second complaint of the applicants, which alleged that, contrary to what the Council had claimed, the scope of the requested document was not particularly wide. The arguments of the Council were twofold. First, the Commission’s proposal entailed broadening the scope of the internal review mechanism provided for by the Aarhus Regulation to acts of general application which run counter to environmental law, but the pre-existing limitations were based on the similar limitations of standing under Article 263 TFEU. Therefore, in the Council’s view, the analysis contained in the requested document entailed implications which allegedly went beyond the legislative process in question. Second, the Council maintained that the requested document touched upon issues that could affect the Commission’s choices regarding future legislative proposals in the context of the ‘European Green Deal’, which was being drawn up at that time.

The Council was, once again, rapped over the knuckles by the GC, with the latter asserting that the Council did ‘no more than rely on the possible impact of the requested document in relation to future legislative proposals of the Commission in environmental matters, while the Commission’s proposal for amendment of the Aarhus Regulation [was] restricted to those matters alone’ (para 82). Moreover, the GC (very logically) dismantled the argument relating to an analogy with Article 263 TFEU, stating that the Council had not proven that the Commission’s proposal on the Aarhus Regulation had consequences for the conditions for the admissibility of actions for annulment brought by legal or natural persons, which are provided for by Article 263 TFEU and cannot be amended other than by revision of the Treaties (para 84). The second complaint was, thus, also upheld, and the applicants’ second plea in law was upheld in its entirety (para 87). The GC then went on to briefly examine their first plea in law.

First plea in law (paras 88-103)

The applicants’ first plea in law alleged that the Council committed an error of law and of assessment in applying the exception provided for in Article 4(3) of Regulation No 1049/2001 relating to the protection of the decision-making process. Under the first subparagraph of Article 4(3) of Regulation No 1049/2001, access to a document, drawn up by an institution for internal use, which relates to a matter where the decision has not been taken by the institution, is to be refused if disclosure of the document would seriously undermine the institution’s decision-making process, unless there is an overriding public interest in disclosure.

The applicants argued that, since on the date on which the contested decisions were adopted, the Council had already adopted its position on the Commission’s proposal and, moreover, the provisional agreement had already been concluded, there was no longer an ongoing decision-making process which disclosure of the requested document could have seriously undermined.

The GC reminded both parties of the ratio underpinning the relevant provision of Regulation No 1049/2001: it is intended to ensure that those institutions are able to enjoy a space for deliberation in order to be able to decide as to the policy choices to be made and the potential proposals to be submitted (para 93). However, said provision may no longer be relied on in respect of a procedure closed on the date on which the request for access was made (para 96). In practice, as the GC very pragmatically observed, agreements reached in the course of trilogues are subsequently adopted by the co-legislators without substantial amendment. This meant that it was appropriate to consider that the decision-making process of which the adoption of the requested document formed part was closed at the date on which the Council approved the provisional agreement (para 99). Therefore, the Council’s reliance on this provision of the Regulation in order to refuse disclosure was also vitiated by an error of law (para 101).

Third plea in law (paras 104-120)

The applicants’ third plea in law, i.e. the final plea examined by the GC, alleged that the Council committed an error of law and a manifest error of assessment in applying the exception provided for in the third indent of Article 4(1)(a) of Regulation No 1049/2001 relating to the protection of the public interest as regards international relations (for this point in particular, see this excellent piece by Peter and Ankersmit). The applicants submitted that there was no risk that international relations would be undermined and that the exception based on the protection of international relations was inapplicable, given that the requested document is purely legal in nature.

The Council, to justify the application of the exception relating to the protection of international relations within the meaning of the third indent of Article 4(1)(a) of Regulation No 1049/2001, had argued that the full disclosure of the requested document would amount to revealing considerations relating to the ‘legal feasibility of solutions that the European Union could implement to address the alleged non-compliance with the Aarhus Convention’. The Council also stressed that the risk that the public interest would be undermined as far as international relations were concerned was reasonably foreseeable and not purely hypothetical, in so far as the question whether the Aarhus Regulation complied with the Aarhus Convention was to be examined during an upcoming meeting of the parties concerned in 2021. Thus, the requested document could be used by other parties to the Aarhus Convention during discussions at the meeting of the parties, which could weaken the position that the European Union might have intended to take in that institutional context.

The GC’s strict approach to such assertions will by now be familiar to the reader. The GC noted (para 112) that the existence of a mere link between the elements contained in a document (which is the subject of an application for access) and the objectives pursued by the European Union in the negotiation and implementation of an international agreement is not sufficient to establish that disclosure of those elements would undermine the public interest protected as regards international relations. Even more crucially, the GC noted, the adoption of an act of secondary EU legislation necessarily implies legal analyses from each institution participating in the legislative procedure, which entails a risk of divergences of legal assessment or interpretation. But this is an integral part of any legislative procedure and such divergences are therefore liable to be explained to non-member countries or international organisations in an international body such as the meeting of the parties to the Aarhus Convention, without necessarily weakening the European Union’s position resulting from the final version of the act ultimately adopted (para 114). Consequently, the Council failed to provide sufficient explanations as to the specific, actual, reasonably foreseeable and non-hypothetical risk on which it relied regarding the international relations of the European Union and the other parties to the Aarhus Convention (para 118).

The applicants’ fourth plea in law, raised in the alternative, alleged infringement of Article 4(6) of Regulation No 1049/2001, in that the Council had failed to grant the applicant wider access to the requested document. This plea was not even examined by the GC, since it had already found that the decisions had to be annulled, without there being any need to examine the (subsidiary) fourth plea (para 120).

Broader Ramifications and Conclusion

This very detailed and well-substantiated ruling by the GC is significant for a number of reasons. Firstly, it sheds light on the exact conditions that need to be fulfilled for access to documents to be validly refused. Secondly, it reiterates, and clarifies, that any “risk” on which an EU institution might wish to rely to refuse disclosure has to be specific, actual, reasonably foreseeable and non-hypothetical. Thirdly, it demonstrates the pragmatic way in which the EU Courts understand the everyday reality of EU rulemaking.

Most importantly, the ruling matters as a matter of principle: even when the political stakes are high, the EU Courts will side with transparency. Brandeis’s observation that “sunlight is said to be the best of disinfectants” echoes in Luxembourg just as it did before the US Supreme Court.

 

Saturday 20 April 2024

‘Trusted’ rules on trusted flaggers? Open issues under the Digital Services Act regime

Alessandra Fratini and Giorgia Lo Tauro, FratiniVergano European Lawyers

Photo credit:  Lobo Studio Hamburg, via Wikimedia Commons

 

1 Introduction

The EU’s Digital Services Act (DSA) institutionalises the tasks and responsibilities of ‘trusted flaggers’, key actors in the online platform environment that have existed, with roles and functions of variable scope, since the early 2000s. The newly applicable regime fits with the rationale and aims pursued by the DSA (Article 1): establishing a targeted set of uniform, effective and proportionate mandatory rules at Union level to safeguard and improve the functioning of the internal market (recital 4 in the preamble), with the objective of ensuring a safe, predictable and trusted online environment, within which fundamental rights are effectively protected and innovation is facilitated (recital 9), and for which responsible and diligent behaviour by providers of intermediary services is essential (recital 3). This article, after retracing the main regulatory initiatives and practices at EU level that paved the way for its adoption, looks at the DSA’s trusted flaggers regime and at some open issues that remain to be tested in practice.

 

2 Trusted reporters: the precedents paving the way to the DSA

The activity of flagging can be generally recognised as that of third parties reporting harmful or illegal content to intermediary service providers that hold that content in order for them to moderate it. In general terms, it refers to flaggers that have “certain privileges in flagging”, including “some degree of priority in the processing of notices, as well as access to special interfaces or points of contact to submit their flags”. This, in turn, poses issues in terms of both the flaggers’ responsibility and their trustworthiness since, as rightly noted, “not everyone trusts the same flagger.”

In EU law, the notion of trusted flaggers can be traced back to Directive 2000/31 (the ‘e-Commerce Directive’), the foundational legal framework for online services in the EU. The Directive exempted intermediaries from liability for illegal content they managed if they fulfilled certain conditions: under Articles 12 (‘mere conduit’), 13 (‘caching’) and 14 (‘hosting’) – now replaced by Articles 4-6 DSA – intermediary service providers were liable for the information stored at the request of the recipient of the service if, once they became or were made aware of any illegal content, they did not remove that content or disable access to it “expeditiously” (also recital 46). The Directive encouraged mechanisms and procedures for removing and disabling access to illegal information to be developed on the basis of voluntary agreements between all parties concerned (recital 40).

This conditional liability regime encouraged intermediary services providers to develop, as part of their own content moderation policies, flagging systems that would allow them to rapidly treat notifications so as not to trigger liability. The systems were not imposed as such by the Directive, but adopted as a result of the liability regime provided therein.

Following Article 16 of the Directive, which encourages the drawing up of codes of conduct at EU level, in 2016 the Commission launched the EU Code of Conduct on countering illegal hate speech online, signed by the Commission and several service providers, with others joining later on. The Code is a voluntary commitment made by signatories to, among other things, review the majority of the flagged content within 24 hours and remove or disable access to content assessed as illegal, if necessary, as well as to engage in partnerships with civil society organisations, to enlarge the geographical spread of such partnerships and to enable them to fulfil the role of a ‘trusted reporter’ or equivalent. Within the context of the Code, trusted reporters are entrusted to provide high quality notices, and signatories are to make information about them available on their websites.

Subsequently, in 2017 the Commission adopted the Communication on tackling illegal content online, to provide guidance on the responsibilities of online service providers in respect of illegal content online. The Communication suggested criteria based on respect for fundamental rights and for democratic values, to be agreed by the industry at EU level through self-regulatory mechanisms or within the EU standardization framework. It also recognised the need to strike a reasonable balance between ensuring a high quality of notices coming from trusted flaggers, the scope of additional measures that companies would take in relation to trusted flaggers, and the burden of ensuring these quality standards, including the possibility of removing the privilege of trusted flagger status in case of abuse.

Building on the progress made through the voluntary arrangements, the Commission adopted Recommendation 2018/334 on measures to effectively tackle illegal content online. The Recommendation establishes that cooperation between hosting service providers and trusted flaggers should be encouraged, in particular, by providing fast-track procedures to process notices submitted by trusted flaggers, and that hosting service providers should be encouraged to publish clear and objective conditions for determining which individuals or entities they consider as trusted flaggers. Those conditions should aim to ensure that the individuals or entities concerned have the necessary expertise and carry out their activities as trusted flaggers in a diligent and objective manner, based on respect for the values on which the Union is founded.

While the 2017 Communication and 2018 Recommendation are the foundation of the trusted flaggers regime institutionalized by the DSA, further initiatives took place in the run-up to it.

In 2018, further to extensive consultations with citizens and stakeholders, the Commission adopted a Communication on tackling online disinformation, which acknowledged once again the role of trusted flaggers to foster credibility of information and shape inclusive solutions. Platform operators agreed on a voluntary basis to set self-regulatory standards to fight disinformation and adopted a Code of Practice on disinformation. The Commission’s assessment in 2020 revealed significant shortcomings, including inconsistent and incomplete application of the Code across platforms and Member States and lack of an appropriate monitoring mechanism. As a result, the Commission issued in May 2021 its Guidance on Strengthening the Code of Practice on Disinformation, containing indications on the dedicated functionality for users to flag false and/or misleading information (p. 7.6). The Guidance also aimed at developing the existing Code of Practice towards a ‘Code of Conduct’ as foreseen in (now) Article 45 DSA.

Further to the Guidance, in 2022 the Strengthened Code of Practice on Disinformation was signed and presented by 34 signatories who had joined the revision process of the 2018 Code. For signatories that are VLOPs, the Code aims to become a mitigation measure and a Code of Conduct recognized under the co-regulatory framework of the DSA (recital 104).

Finally, in the context of provisions/mechanisms defined before the DSA, it is worth mentioning Article 17 of Directive 2019/790 (the ‘Copyright Directive’), which draws upon Article 14(1)(b) of the e-Commerce Directive on the liability limitation for intermediaries and acknowledges the pivotal role of rightholders when it comes to flagging unauthorised use of their protected works. Under Article 17(4), in fact, “[i]f no authorisation is granted, online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter, unless the service providers demonstrate that they have: (a) made best efforts to obtain an authorisation, and (b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information; and in any event (c) acted expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from their websites, the notified works or other subject matter, and made best efforts to prevent their future uploads in accordance with point (b)” (emphasis added).

 

3 Trusted flaggers under the DSA

The DSA has given legislative legitimacy to trusted flaggers, granting formal (and binding) recognition to a practice that had thus far developed on a voluntary basis.

According to the DSA, a trusted flagger is an entity that has been granted such status within a specific area of expertise by the Digital Services Coordinator (DSC) of the Member State in which it is established, because it meets certain legal requirements. Online platform providers must process and decide upon - as a priority and without undue delay - notices from trusted flaggers concerning the presence of illegal content on their online platform. That requires that online platform providers take the necessary technical and organizational measures with regard to their notice and action mechanisms. Recital 61 sets out the rationale and scope of the regime: notices of illegal content submitted by trusted flaggers, acting within their designated area of expertise, are treated with priority by providers of online platforms.

The regime is mainly outlined in Article 22.

Eligibility requirements

Article 22(2) sets out the three cumulative conditions to be met by an applicant wishing to be awarded the status of trusted flagger: 1) expertise and competence in detecting, identifying and notifying illegal content; 2) independence from any provider of online platforms; and 3) diligence, accuracy and objectivity in how it operates. Recital 61 clarifies that only entities - be they public in nature, non-governmental organizations or private or semi-public bodies - can be awarded the status, not individuals. Hence, (private) entities only representing individual interests, such as brands or copyright owners, are not excluded from accessing the trusted flagger status. However, the DSA displays a preference for industry associations representing their members’ interests applying for the status of trusted flagger, which appears to be justified by the need to ensure that the added value of the regime (the fast-track procedure) be maintained, with the overall number of trusted flaggers awarded under the DSA remaining limited. As clarified by recital 62, the rules on trusted flaggers should not be understood to prevent providers of online platforms from giving similar treatment to notices submitted by entities or individuals that have not been awarded trusted flagger status, or from otherwise cooperating with other entities, in accordance with the applicable law. The DSA does not prevent online platforms from using mechanisms to act quickly and reliably against content that violates their terms and conditions.

The status’ award

Under Article 22(2), the trusted flagger status shall be awarded by the DSC of the Member State in which the applicant is established. Unlike the voluntary trusted flagger schemes, which are a matter for individual providers of online platforms, the status awarded by a DSC must be recognized by all providers falling within the scope of the DSA (recital 61). Accordingly, the DSC shall communicate to the Commission and to the European Board for Digital Services details of the entities to which it has awarded the status of trusted flagger (and whose status it has suspended or revoked - Article 22(4)), and the Commission shall publish and keep up to date such information in a publicly available database (Article 22(5)).

Under Article 49(3), Member States were to designate their DSCs by 17 February 2024; the Commission makes available the list of designated DSCs on its website. The DSCs, who are responsible for all matters relating to supervision and enforcement of the DSA, shall ensure coordination in its supervision and enforcement throughout the EU. The European Board for Digital Services, among other tasks, shall be consulted on the Commission’s guidelines on trusted flaggers, to be issued “where necessary”, and for matters “dealing with applications for trusted flaggers” (Article 22(8)).

The fast-track procedure

Article 22(1) requires providers of online platforms to deal with notices submitted by trusted flaggers as a priority and without undue delay. In doing so, it refers to the generally applicable rules on notice and action mechanisms under Article 16. On the priority to be granted to trusted flaggers’ notices, recital 42 invites providers to designate a single electronic point of contact, which “can also be used by trusted flaggers and by professional entities which are under a specific relationship with the provider of intermediary services”. Recital 62 explains further that the faster processing of trusted flaggers’ notices depends, among other things, on the “actual technical procedures” put in place by providers of online platforms. The organizational and technical measures that are necessary to ensure a fast-track procedure for processing trusted flaggers’ notices remain a matter for the providers of online platforms.

Activities and ongoing obligations of trusted flaggers

Article 22(3) requires trusted flaggers to regularly (at least once a year) publish detailed reports on the notices they submitted, make them publicly available and send them to the awarding DSCs. The status of trusted flagger may be revoked or suspended if the required conditions are not consistently upheld and/or the applicable obligations are not correctly fulfilled by the entity. The status can only be revoked by the awarding DSC following an investigation, either on the DSC’s own initiative or on the basis of information received from third parties, including providers of online platforms. Trusted flaggers are thus granted the possibility to react to, and fix where possible, the findings of the investigation (Article 22(6)).

On the other hand, if trusted flaggers detect any violation of the DSA provisions by the platforms, they have the right to lodge a complaint with the DSC of the Member State where they are located or established, according to Article 53. Such a right is granted not only to trusted flaggers but to any recipient of the service, to ensure effective enforcement of the DSA obligations (also recital 118).

The role of the DSCs

With the DSA it becomes mandatory for online platforms to ensure that notices submitted by the designated trusted flaggers are given priority. While online platforms maintain discretion as to entering into bilateral agreements with private entities or individuals they trust and whose notices they want to process with priority (recital 61), they must give priority to entities that have been awarded the trusted flagger status by the DSCs. From the platforms’ perspective, the DSA ‘reduces’ their burden in terms of decision-making responsibility by shifting it to the DSCs, but ‘increases’ their burden in terms of executive liability (for the implementation of measures ensuring the mandated priority). From the reporters’ perspective, the DSA imposes a set of (mostly) harmonised requirements to be awarded the status by a DSC, once and for all platforms, and to maintain such status afterward.

While the Commission’s guidelines are in the pipeline, some DSCs have proposed and adopted guidelines to assist potential applicants with the requirements for the award of the trusted flagger status. Among others, the French ARCOM published “Trusted flaggers: conditions and applications” on its website; the Italian AGCOM published for consultation its draft “Rules of Procedure for the award of the trusted flagger status under Article 22 DSA”; the Irish CoimisiĂşn na Meán published the final version of its “Application Form and Guidance to award the trusted flagger status under Article 22 DSA”; as did the Austrian KommAustria, the Danish KFST and the Romanian ANCOM. The national guidelines have been developed following exchanges with the other authorities designated as DSCs (or about to be so) with the view to ensuring a consistent and harmonised approach in the implementation of Article 22. As a matter of fact, the published guidelines are largely comparable.

 

4 Open issues

While the DSA’s regime is in its early stages and no trusted flagger status has been awarded yet, some of its merits have been acknowledged already, such as the fact that it has standardised existing practices, harmonised eligibility criteria, complemented special regimes – such as the one set out in Article 17 Copyright Directive – confirmed the cooperative approach between stakeholders, and finally formalised the role of trusted flaggers as special entities in the context of notice and action procedures.

At the same time, the DSA’s regime leaves some open issues on the table – such as the respective roles of trusted flaggers and of other relevant actors in tackling illegal/harmful content online, including end users and reporters that reach bilateral agreements with the platforms – which remain to be addressed in practice for the system to work effectively.

The role of trusted flaggers vis-Ă -vis end users

While the DSA contains no specific provision on the role of trusted flaggers vis-Ă -vis end users, some of the national guidelines published by the DSCs require that the applicant entity, as part of the condition relating to due diligence in the flagging process, indicates whether it has mechanisms in place to allow end users to report illegal content to it. In general, applicants have to indicate how they select content to monitor (which may include end users’ notices) and how they ensure that they do not unduly concentrate their monitoring on any one side and apply appropriate standards of assessment taking all legitimate rights and interests into account. As a matter of fact, the organisation and management of the relationship with end users (onboarding procedures, collection and processing of their notices, etc.) are left to the trusted flaggers. For example, some organisations (such as those part of the INHOPE network, operating in the current voluntary schemes) offer hotlines to the public to report to them, including anonymously, illegal content found online.

Although it is clear from the DSA that end users retain the right to flag their notices directly to online platforms (Article 16) with no duty to notify trusted flaggers, as well as their right to autonomously lodge a complaint against platforms (Article 53) and to claim compensation for damages (Article 54), it remains unclear whether, in practice, it will be more convenient for end users to rely on specialised trusted flaggers for their notices to be processed more expeditiously – in other words, whether the regime provides sufficient incentives, at least for some end users, to go the trusted flaggers’ way. On the other hand, it remains unclear to what extent applicant entities will be actually ‘required’ to put in place effective mechanisms to allow end users to report illegal or harmful content to them – in other words, whether the due diligence requirements will imply the trusted flaggers’ review of end users’ notices, within their area of expertise.

From another perspective, in connection with the reporting of illegal content, trusted flaggers may come across infringements by the platforms, as any recipient of online services. In such cases, Article 53 provides the right to lodge a complaint with the competent DSC, with no difference being made between complaints lodged respectively by trusted flaggers and by end users. If ‘priority’ is to be understood as the main feature of the privileged status granted to trusted flaggers when flagging illegal content online to platforms, a question arises about the possibility of granting them a corresponding priority before the DSCs when they complain about an infringement by online platforms. And in this context, one may wonder whether lodging a complaint to the DSC on behalf of end users might also fall within the scope of action of trusted flaggers (to the extent of claiming platforms’ abusive practices such as shadow banning, recital 55).

The role of trusted flaggers vis-Ă -vis other reporters

The DSA requires online platforms to put in place notice and action mechanisms that shall be “easy to access and user-friendly” (Article 16) and to ensure an internal complaint-handling system to recipients of the service (Article 20). However, as noted above, these provisions concern all recipients, with no difference in treatment for trusted flaggers. Although their notices are granted priority by virtue of Article 22, which leaves platforms free to choose the most suitable mechanisms, the DSA says nothing about ‘how much priority’ should be guaranteed to trusted flaggers with respect to notices filed not only by end users, but (also - and especially) by other entities/individuals with whom platforms have agreements in place.

In this respect, guidance would be welcome as to the degree of prevalence that platforms are expected to give trusted flaggers’ notices compared to other trusted reporters’, as would a clarification as to whether the nature of the content may influence such prevalence. From the trusted flaggers’ perspective, there should be a rewarding incentive to engage in a role that comes with the price tag of ongoing obligations.

 

5 Concluding remarks

While the role of trusted flaggers is not new when it comes to tackling illegal content online, the tasks newly entrusted to the DSCs in this context are. This results in a different allocation of responsibilities for the actors involved, with the declared aims of ensuring harmonisation of best practices across sectors and territories in the EU and a better protection for users online. Some open issues, such as the ones put forward above, appear at this stage to be relevant, in particular for ensuring that the trusted flaggers’ mechanism effectively works as an expeditious remedy against harmful and illegal content online. It is expected that the awaited Commission guidelines under Article 22(8) DSA will shed a clarifying light on those issues. In their absence, there is a risk that the cost-benefit analysis - with the costs being certain and the benefits in terms of actual priority uncertain - might make the “trusted flagger project” unattractive for a potential applicant.

Podchasov v. Russia: the European Court of Human Rights emphasizes the importance of encryption

Mattis van ’t Schip & Frederik Zuiderveen Borgesius*

*Both authors work at the iHub and the Institute for Computing and Information Sciences, Radboud University, The Netherlands - mattis.vantschip[at]ru.nl & frederikzb[at]cs.ru.nl

Photo credit: Gzen92, on wikimedia commons 

 

In a judgment from February 2024 in the case Podchasov v. Russia, the European Court of Human Rights emphasised the role of encryption in protecting the right to privacy. The judgment comes at a time when encryption is central to many legal debates across the world. In this blog post, we summarise the main findings of the Court and add some reflections.

Summary

Podchasov, the applicant in the case, is a user of Telegram. Russia listed Telegram as an ‘internet communication organiser’ in 2017. This registration meant that Telegram, according to Russian law, had to store all its communications data for one year, and the contents of communication data for six months. The obligation concerns all electronic communications (e.g., textual, video, sound) received, transmitted, or processed by internet users. Law enforcement authorities could request access to that data, including access to the decryption key in case communications are encrypted (para 6 of the judgment).

Telegram is a messaging app that users often employ because of its end-to-end encrypted messaging. For instance, Telegram is an important communication channel for Ukrainians to receive updates about the current war. End-to-end encryption means, roughly summarised, that only the sender and the intended recipient can access the content of the encrypted data, in this case Telegram messages.
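
The core idea can be illustrated with a deliberately simplified sketch in Python (a toy construction for illustration only, not a real cipher and not how Telegram’s MTProto actually works; real messengers use vetted protocols and primitives): the two endpoints share a secret key, and the relaying server only ever handles ciphertext it cannot read.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the shared key.
    Toy construction for illustration only, not a vetted cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the message with the keystream; XORing again with the
    # same keystream recovers the original message.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# Only the endpoints hold the shared key; the server relays ciphertext.
shared_key = secrets.token_bytes(32)
ciphertext = encrypt(shared_key, b"meet at noon")   # all the server sees
assert decrypt(shared_key, ciphertext) == b"meet at noon"
```

Without the shared key, the stored ciphertext is useless to the relaying server; conversely, any mechanism that hands that key (or an equivalent master key) to a third party decrypts every message protected by it, which is the backdoor problem at the heart of the case.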

In July 2017, the Russian Federal Security Service (FSB) required Telegram to disclose data that would allow the FSB to decrypt messages of suspects of ‘terrorism-related’ activities (para 7 of the judgment). Telegram refused. Telegram said that it was impossible to allow the FSB to access encrypted messages without creating a backdoor to their encryption that malicious actors might also use. Because of Telegram’s refusal, a District Court in Moscow ordered the nation-wide blocking of Telegram in Russia. The applicants challenged the disclosure order, but their challenge was dismissed across several Moscow courts. Meanwhile, Telegram remains operational in Russia today. Finally, the applicants lodged their complaint with the European Court of Human Rights. They complained that Russia violated their right to private life in Article 8 of the European Convention on Human Rights (ECHR).

Russia is no longer a member of the Council of Europe. The Council of Europe terminated Russia’s membership in March 2022, in response to Russia’s invasion of parts of Ukraine. Six months later, on 16 September 2022, Russia ceased to be party to the European Convention on Human Rights. Nevertheless, the Court delivered this judgment: it held that it has jurisdiction over the case, as the alleged violations occurred before the date on which Russia ceased to be a party to the Convention.

The Court quotes several documents that are not directly related to the ECHR, including surveillance case law of the Court of Justice of the European Union, a report on the right to privacy in the digital age by the Office of the United Nations High Commissioner for Human Rights, a statement by Europol and the European Union Agency for Cybersecurity, and an Opinion of the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB).

The surveillance scheme before the European Court of Human Rights resembles earlier Russian surveillance schemes, which the Court held to violate Article 8 ECHR because they failed to provide adequate and sufficient safeguards against indiscriminate breaches of the right to private life. Those earlier holdings thus also apply in the present case. Unlike in previous judgments about surveillance in Russia, however, the Court here discusses the role of encryption in protecting the right to private life.

On encryption, the Court holds that the underlying case only concerns the encryption scheme of ‘secret chats’. Telegram offers ‘cloud chats’ by default with ‘custom-built server-client encryption’, but users can also decide to activate ‘secret chats’, which are end-to-end encrypted (para 5 of the judgment). The Court explicitly excludes any considerations of so-called ‘cloud chats’ in the case, as the complaints only concern the ‘secret chats’. The scope of the Court’s holdings is therefore limited to end-to-end encryption as used for ‘secret chats’.

The applicants and several privacy-focused civil society organisations argued that an obligation to decrypt end-to-end encrypted messages would affect all users of the system (in this case, Telegram), as it is technically impossible to create an encryption backdoor for a specific instance, case, or user. The Russian government did not refute these submissions. The Court therefore held that the Russian authorities had interfered with the right to private life under Article 8 ECHR. The Court then examined whether that interference could be justified, in particular whether it was necessary in a democratic society. The Court analysed encryption in this light.
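The technical point made by the applicants can be illustrated with a deliberately simplified sketch (a toy XOR scheme, not real cryptography; all names and the key-escrow idea are hypothetical illustrations, not a description of Telegram’s actual protocol). With end-to-end encryption, only the conversation partners hold the key; a ‘backdoor’ in practice means the provider or the state holds copies of the keys, and whoever controls that store can read any conversation, not just a targeted one:

```python
# Toy illustration (NOT real cryptography) of why a decryption
# "backdoor" cannot be limited to a single user or case.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    return bytes(b ^ k for b, k in zip(data, key))

# End-to-end: each pair of users holds its own key; the provider never sees it.
alice_bob_key = secrets.token_bytes(32)
msg = b"meet at noon"
ciphertext = xor(msg, alice_bob_key)
assert xor(ciphertext, alice_bob_key) == msg  # only the key holders can read it

# A "backdoor" amounts to the provider (or the state) also holding the keys.
escrow = {"alice-bob": alice_bob_key}  # hypothetical key-escrow database
# Whoever controls the escrow can decrypt ANY escrowed conversation,
# not only the one belonging to a specific suspect:
for key in escrow.values():
    print(xor(ciphertext, key))
```

The sketch is only meant to make the structural point concrete: the weakness is not scoped to one investigation, because the same escrow mechanism opens every conversation it covers.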

The Court emphasises that encryption contributes to ensuring the enjoyment of the right to private life and other fundamental rights, such as freedom of expression:

[T]he Court observes that international bodies have argued that encryption provides strong technical safeguards against unlawful access to the content of communications and has therefore been widely used as a means of protecting the right to respect for private life and for the privacy of correspondence online. In the digital age, technical solutions for securing and protecting the privacy of electronic communications, including measures for encryption, contribute to ensuring the enjoyment of other fundamental rights, such as freedom of expression (…) (para 76).

The Court adds that encryption is important to secure one’s data and communications:

Encryption, moreover, appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information. This should be given due consideration when assessing measures which may weaken encryption. (para 76)

The Court observes that a legal obligation to decrypt cannot be confined to specific cases or circumstances: once a messaging provider creates a backdoor, that backdoor exists for all communications on the platform:

Weakening encryption by creating backdoors would apparently make it technically possible to perform routine, general and indiscriminate surveillance of personal electronic communications. Backdoors may also be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications. The Court takes note of the dangers of restricting encryption described by many experts in the field. (para 77)

Based on these arguments, the Court holds that the requirement to decrypt communications cannot be ‘regarded as necessary in a democratic society’ (para 80 of the judgment). The Court concludes that Russia breached the right to private life, protected in Article 8 ECHR.

Comments

The Podchasov case comes amid a long-running global debate about the value of end-to-end encryption in democratic societies. As the Court notes, end-to-end encryption is valuable for privacy because it enables people to communicate in such a way that no third party can access the communication. Experts accordingly praise end-to-end encryption for its capacity to support, for instance, journalists in doing their work safely and historically marginalised groups in expressing themselves freely.

At the same time, some law enforcement agencies consider end-to-end encryption a threat to public safety, as malicious actors too can benefit from the privacy provided by secure messaging and related techniques such as data encryption.

For instance, the FBI has been engaged in a long-running battle with Apple over the encryption of iPhones, which several suspects used to keep their data private. On each occasion, Apple refused to provide the FBI with decryption keys or software, citing the security risks that enabling such backdoors would create.

The tension between security and privacy is, of course, long-standing, and encryption is now central to it. The European Commission recently joined the debate with a proposal for a Child Sexual Abuse Material Regulation (CSAM proposal). Roughly summarised, the proposal would require communication providers (such as Telegram or WhatsApp) to analyse people’s communications in order to find, block, and report child sexual abuse material. Experts agree that providers can only do so if they do not encrypt communications, if they build in some form of backdoor, or if they analyse communications on people’s devices before the communications are encrypted. Experts warn that such on-device analysis can itself be regarded as a kind of backdoor to encrypted communications. Many civil society organisations, technical experts, and academics oppose the CSAM proposal, and its opponents can be expected to cite this judgment.

The European Court of Human Rights is clear about the role of end-to-end encryption for the right to private life, stating plainly that end-to-end encryption is vital to privacy. The Court bases its reasoning partly on an Opinion of the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) which discusses encryption in the context of the above-mentioned CSAM proposal. The Court also refers to submissions from civil society organisations, which can present their views to the Court as amici curiae. The Court follows the reasoning of the EDPS, the EDPB, and privacy organisations in concluding that once encryption is broken, the entire system is no longer secure for any of its users.

The Court also notes that encryption is vital to the security of users. Without adequate encryption, people cannot be sure that the data they store in, for instance, cloud storage is accessible only to them. Encryption therefore also protects against hacking, identity fraud, and data theft (para 76 of the judgment).

The Podchasov case is straightforward: encryption is vital to the protection of the right to privacy. The Court’s clear statements will influence ongoing encryption debates, but the end of those debates is not in sight.