- What is Pseudonymization?
- Pseudonymized data is not anonymous
- How pseudonymization can be used under the GDPR
1. What is Pseudonymization?
Pseudonymization is a technique in which sensitive data is replaced with realistic fictional data for security purposes. Pseudonymized data has the following attributes:
- It cannot be attributed to any specific individual without additional information, which, according to GDPR Article 4(5), must be "kept separately and subject to technical and organisational measures to ensure non-attribution to an identified or identifiable person."
- It maintains statistical accuracy and referential integrity, enabling testing and development systems, business processes, analysis, and training programs to operate normally.
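These two attributes can be illustrated with a minimal sketch in Python. The field and record values below are hypothetical; the point is that the mapping table (the "additional information" of Article 4(5)) is held apart from the pseudonymized records, and that the same original value always receives the same pseudonym, preserving referential integrity:

```python
import secrets

def pseudonymize(records, field, key_store):
    """Replace a sensitive field with a stable pseudonym.

    key_store is the re-identification mapping: it must be stored
    separately from the output, as GDPR Article 4(5) requires.
    The same original value always maps to the same pseudonym,
    which preserves referential integrity across records.
    """
    out = []
    for record in records:
        original = record[field]
        if original not in key_store:
            key_store[original] = f"user-{secrets.token_hex(4)}"
        out.append({**record, field: key_store[original]})
    return out

# Hypothetical records for illustration
records = [
    {"email": "anna@example.com", "purchase": "laptop"},
    {"email": "anna@example.com", "purchase": "mouse"},
    {"email": "ben@example.com", "purchase": "keyboard"},
]
key_store = {}  # kept separately from the pseudonymized data
masked = pseudonymize(records, "email", key_store)

# Anna's two purchases still share one pseudonym (referential integrity),
# but without key_store the pseudonym cannot be traced back to her.
assert masked[0]["email"] == masked[1]["email"]
assert masked[0]["email"] != masked[2]["email"]
```

Without access to `key_store`, the pseudonyms cannot be attributed to anyone; with it, the controller can still re-identify individuals, which is exactly why such data remains personal data under the GDPR.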
The GDPR encourages pseudonymization in several of its provisions:
- Article 6(4)(e) permits processing personal data for a purpose other than the one for which it was collected where there are appropriate safeguards, "which may include encryption or pseudonymisation." Such other purposes include business analysis, profiling, outsourcing data processing to non-EEA/EU countries, and use of the data for historical, statistical, and scientific purposes.
- Article 11(2) exempts the Data Controller from complying with an individual's rights to access, rectification, erasure, and data portability (Articles 15–20) when the personal data can no longer be identified or linked to that individual.
- Article 25(1) makes pseudonymization a central feature of data protection by design and by default.
- Article 32(1)(a) lists pseudonymization as an appropriate technical measure to ensure the security of personal data processing.
- Article 34(1) governs the process to be followed after a security breach: Data Controllers must notify the affected individuals. Where a breach allows an individual to be identified, notification is mandatory when
- The breach discloses the pseudonymization key.
- The individual can be identified by linking the pseudonymized data with additional, non-pseudonymized information.
- Article 40(2)(d) provides for Codes of Conduct that set out rules and standards, including rules for pseudonymization.
- Article 89(1) enables processing of personal data for historical, statistical, and scientific purposes when the data is safeguarded by appropriate security measures such as pseudonymization.
2. Pseudonymized data is not anonymous
In anonymized data, personal data is permanently de-linked from identified or identifiable persons, for example by encrypting the data and completely destroying the encryption key. Because of this, the GDPR does not apply to anonymous data. Pseudonymization, however, cannot be treated the same as anonymization, because the specific individual remains identifiable if:
- The pseudonymized data is combined with additional, non-pseudonymized information to identify the individual.
- The pseudonymization key is disclosed in a security breach.
To make clear that pseudonymized data is not anonymous, the GDPR specifies:
- Recital 26 states that pseudonymized data remains personal data when a specific individual can be identified through the use of additional information.
- Recital 29 states that pseudonymized data is to be maintained separately from any additional information that attributes the personal data to a specific data subject.
- Recital 75 requires appropriate technical safeguards (e.g., hashing, encryption, tokenization) and organizational policies to prevent the unauthorized reversal of pseudonymization.
3. How pseudonymization can be used under the GDPR
Hashing and data masking are common examples of sensitive-data pseudonymization. Data masking is the standard approach: it replaces sensitive data with realistic but fictitious data, preserving data utility while reducing risk. Once pseudonymization tools are applied, the data can be used safely in application development and testing, training programs, analysis, and business processes, both within and beyond the EEA/EU.
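A minimal data-masking sketch makes the "realistic but fictitious" idea concrete. The field names and fake values are hypothetical; real masking tools preserve formats, lengths, and checksums so that downstream tests and reports keep working:

```python
import random

def mask_record(record: dict, seed: int = 0) -> dict:
    """Data-masking sketch: swap sensitive fields for realistic fakes.

    The card number keeps its exact format (digits stay digits,
    separators stay in place), so validation logic and layouts in
    test systems continue to behave as they would on real data.
    """
    rng = random.Random(seed)
    fake_names = ["Alex Doe", "Sam Roe", "Pat Poe"]
    masked = dict(record)
    masked["name"] = rng.choice(fake_names)
    masked["card"] = "".join(
        str(rng.randrange(10)) if ch.isdigit() else ch
        for ch in record["card"]
    )
    return masked

# Hypothetical record for illustration
print(mask_record({"name": "Anna Schmidt", "card": "4111-1111-1111-1111"}))
```

The masked output looks like a genuine record but no longer contains the individual's data, which is what allows it to circulate in test and training environments.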
In conclusion, pseudonymized data is distinct from anonymous data and is endorsed by the GDPR, which specifically mentions it as a security measure for protecting sensitive and personal data.
So, have you made up your mind to make a career in Cyber Security? Visit our Master Certificate in Cyber Security (Red Team) for further help. It is the first program in offensive technologies in India and allows learners to practice in a real-time simulated ecosystem that will give them an edge in this competitive world.