
January Data Protection Newsletter

By: Pasquale Esposito
Grant Thornton Luxembourg welcomes you to the January Data Protection Newsletter!

As we begin 2026, we continue to share clear and practical insights on the latest developments in data protection, AI, and tech regulation, helping you stay informed and compliant in this ever-changing digital landscape.

Whether you manage compliance or simply want to stay safer and better informed online, this newsletter is for you.

As always, our Data Protection Team is here to help. If you would like tailored advice or to discuss a specific issue, please contact us using the details at the end of this page.

 

EDPB-EDPS Joint Opinion on the Digital Omnibus on AI

On 20 January 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) adopted Joint Opinion 1/2026 on the European Commission’s November 2025 proposal to simplify the implementation of the EU Artificial Intelligence (AI) Act, commonly referred to as the Digital Omnibus.

🧩Key takeaway

The Digital Omnibus is a recent proposal from the European Commission (November 2025) aiming to simplify and clarify how the EU AI Act is put into practice across Europe. It includes adjustments to make AI regulation more efficient and addresses practical challenges for both AI providers and users.

The EDPB and the EDPS (together “the Authorities”) support the overall objective of the Digital Omnibus particularly where it helps ensure the effective and consistent application of the AI Act.

The Authorities conditionally support the Digital Omnibus where it aims to:

  • Extend the exception for processing special categories of personal data for bias detection and correction beyond high-risk AI systems, to also cover other AI systems and deployers, subject to strict necessity conditions.
  • Create EU-level AI regulatory sandboxes by the AI Office for certain AI systems, such as those based on general-purpose AI models, to promote innovation.
  • Centralise supervision at EU level for certain AI systems, notably by extending the AI Office’s exclusive competence to AI systems that are part of or integrated into very large online platforms (VLOPs) or very large online search engines (VLOSEs) under the Digital Services Act.

Why is it important?

This joint position matters because it provides early guidance on how EU data protection authorities expect the AI Act to be applied in practice, especially where the Commission proposes simplification. 

By distinguishing between:

  • acceptable flexibility (such as EU-level sandboxes and centralised supervision), and
  • non-negotiable safeguards (including registration, AI literacy, and timelines),

the Authorities signal clear regulatory red lines.

 

EU Commission adopts the adequacy decision for Brazil

On 27 January 2026, the European Commission and Brazil adopted reciprocal data adequacy decisions, formally recognising that the EU and Brazil each provide comparable levels of protection for personal data.

🧩Key takeaway

The European Commission has concluded that Brazil’s data protection regime ensures a level of protection for personal data essentially equivalent to that of the EU under the GDPR.

In turn, Brazil has recognised the European Union’s level of data protection as adequate.

These mutual adequacy decisions create a framework allowing personal data to flow freely between the EU and Brazil without the need for additional safeguards such as standard contractual clauses.

Why is it important?

For businesses, it means simpler compliance and greater legal certainty. For individuals, it confirms that their personal data remains protected to comparable standards on both sides.

For more details on how these adequacy decisions may affect your data transfers, please contact our team via externaldpo@lu.gt.com.

 

CNIL’s Recommendations on the Application of the GDPR to the Development of AI systems

On January 5th, 2026, the French data protection authority (CNIL) published its first recommendations (the Recommendations) for the development of AI systems and the creation of datasets used to train them, where personal data is involved.

🧩Key takeaway

The Recommendations consist of several practical AI guidance sheets (AI how-to sheets), a synthesis document, and a checklist of key points. They focus on the development phase (system design, dataset creation, and model training, before deployment of the AI system) wherever personal data is involved in any part of that process.

Among these sheets, CNIL provides a step-by-step guide to follow for AI development:

  • Define a clear purpose: why you are developing the AI system and ensure the purpose is understandable, specific, and legitimate.
  • Determine roles and responsibilities: Establish who is the data controller (deciding why and how personal data is used) and who is a data processor (processing personal data on behalf of someone else).
  • Choose a legal basis: Select the appropriate GDPR legal basis (e.g., consent, legitimate interest) and document why it applies.
  • Review data reuse: Check whether you can legally reuse existing personal data, especially if it comes from public or third-party sources.
  • Minimise personal data: Use only the personal data that is strictly necessary for the training and development of the AI system.
  • Set retention periods: Decide how long personal data will be kept and explain retention limits.
  • Inform individuals: Make sure people know their data is being used to develop an AI system.
  • Enable data subject rights: Ensure individuals can exercise their GDPR rights (access, correction, deletion, etc.) regarding data used in development.
  • Secure your AI system: Apply appropriate technical and organisational measures to protect data throughout development.
  • Assess model status: Determine whether the model itself may contain or continue to process personal data.
  • Comply during annotation: Make sure the annotation and labelling phase of training data also respects GDPR principles.


Why is it important?

These recommendations send a strong signal that the GDPR and AI innovation can coexist but only if developers embed data protection principles from the earliest stages of AI design. 

For organisations, this provides a practical roadmap to reduce legal and regulatory risks by embedding data protection into AI design.

For more details on how to implement CNIL’s guidance into practice, please contact our team via externaldpo@lu.gt.com.

 

CNIL fined FREE MOBILE and FREE €42 Million for Data Breach

On 13 January 2026, the French Data Protection Authority (CNIL) imposed fines of €27 million and €15 million respectively on FREE MOBILE and FREE (French telecom operators), for inadequate measures to ensure the security of their subscribers’ data.

🧩Key takeaway

These fines stem from a major data breach in October 2024, when an external attacker accessed personal data belonging to approximately 24 million subscribers.

Following a large number of complaints (more than 2,500) from individuals affected by this data breach, the CNIL carried out an inspection which revealed breaches of several obligations under the GDPR:

  • Inadequate data security: At the time of the breach, the companies lacked basic security safeguards, including weak VPN authentication for remote access and ineffective monitoring to detect abnormal system activity.
  • Insufficient breach notification: Communications to affected individuals did not include all the information required under Article 34 of the GDPR.
  • Excessive data retention: FREE MOBILE kept millions of former customers’ personal data longer than necessary, without proper sorting or deletion processes, and only took corrective action after the CNIL inspection.

Why is it important?

The decision highlights that core security measures (such as strong authentication and monitoring), clear and complete breach notifications, and strict data-retention controls are not optional, even for large, well-resourced companies. It also confirms that remedial actions taken only after an inspection will not mitigate liability.

 

Contact

Need advice on Data Protection, AI, or Whistleblowing compliance?

Our Data Protection team is here to support you. Contact us today to discuss your needs and explore how we can assist you: Dara Kelly, Head of Advisory, or Pasquale Esposito, Data Protection Officer.