With the coming of GDPR, the stakes in application security get still higher. There are plenty of reasons for concern already; theft of confidential data can lead to liability, serious financial losses, and damage to reputation. GDPR adds fines of up to 20 million euros or 4% of a business's annual worldwide turnover, whichever is greater. It applies not just to businesses in the EU, but to any business that holds and processes personal data of individuals in the EU.
The new regulations, which go into effect on May 25, 2018, don't just raise the stakes but shift the landscape. The effect will likely be similar to that of HIPAA on personal healthcare data in the United States. GDPR covers a broad range of personal information, not just high-risk categories.
These requirements have serious implications for application security. Businesses need to look at both the design and the configuration of their applications. The risk of exposing personal data needs to be as low as possible.
Those needs extend existing good security practices; what is genuinely new and important are the reporting requirements. Even businesses with excellent security records will need to add new reporting policies to be in compliance.
Responsibilities fall on an entity called the “controller,” which may be an individual or group. The controller is whoever “determines the purposes and means of the processing of personal data.” Another entity is the “processor,” which is whoever carries out the work of processing the data. This typically means the IT department. The controller is sometimes required to designate a data protection officer, and having one is a good idea for any organization that has significant GDPR issues.
Under GDPR, personal data should be protected “by design and by default.” Software should handle information only to the extent that it serves a legitimate purpose. For instance, the lazy way to process an individual’s record from a database is to copy the whole record. The secure way is to copy only the fields that are needed.
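The difference can be sketched in a few lines of Python. The table and column names below are purely illustrative, not from any real schema; the point is simply to request only the fields a task needs rather than the whole record.

```python
import sqlite3

# Hypothetical "users" table holding personal data (all names and
# values here are made up for illustration).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER, name TEXT, email TEXT, "
    "street_address TEXT, iban TEXT)"
)
conn.execute(
    "INSERT INTO users VALUES (1, 'Ada', 'ada@example.com', "
    "'1 Main St', 'DE89370400440532013000')"
)

# Lazy (over-broad): pulls every field, including financial data
# that a mailing feature never needs.
row_all = conn.execute("SELECT * FROM users WHERE id = 1").fetchone()

# Data-minimizing: request only the fields the task requires.
name, email = conn.execute(
    "SELECT name, email FROM users WHERE id = 1"
).fetchone()
print(name, email)
```

The second query means a leak in the mailing feature can expose at most a name and an email address, never the bank details that happened to live in the same row.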
While all personally identifying information needs protection, the requirements state that the protection measures need to take the "likelihood and severity" of a breach into account. Disclosing financial information is much worse than disclosing a street address. Information that is especially useful to thieves needs extra protection: it should be held only in the most secure locations and given an additional layer of defense, such as encryption.
For certain special categories of personal data, the default is to forbid processing altogether. These categories include ethnic origin, genetic data, trade union membership, and even philosophical beliefs. There's a long list of exceptions to the prohibition, but the message is that there needs to be an affirmative reason for handling these types of information. The days of including information just because it might be useful are over.
The extent of disclosure is also significant. GDPR specifically demands that individuals’ personal data not be made accessible to “an indefinite number of natural persons” without that person’s action. It’s not just software bugs that can make this happen; employee carelessness can be the cause. Some of the biggest HIPAA breaches have happened when people put unencrypted patient data on a portable device which was then stolen. Information that travels in bulk to locations outside a secure perimeter should always be encrypted, without excuses.
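A minimal sketch of that rule, using the third-party Python `cryptography` package (a choice of convenience here; any vetted symmetric encryption scheme serves the same purpose):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Generate a symmetric key. In practice the key lives in a key store,
# never alongside the exported data.
key = Fernet.generate_key()
fernet = Fernet(key)

# A bulk export of personal data (illustrative content only).
export = b"id,name,email\n1,Ada,ada@example.com\n"

# Encrypt before the data travels to a laptop, USB stick, or backup.
token = fernet.encrypt(export)

# If the device is stolen, the thief gets ciphertext; only a holder
# of the key can recover the plaintext.
assert fernet.decrypt(token) == export
```

The discipline matters more than the library: if the export is encrypted before it crosses the perimeter and the key stays inside it, a stolen laptop is an inconvenience rather than a reportable breach of plaintext data.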
What processing measures are required depends on the level of risk. Information that covers more people, is more sensitive, or is more likely to be under attack requires a greater security level. Encryption and pseudonymization are two recommended measures. Verification processes such as penetration testing should be part of the security program.
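Pseudonymization can be as simple as replacing a direct identifier with a keyed hash. The key name and record fields below are illustrative assumptions, not a prescribed scheme:

```python
import hashlib
import hmac

# Placeholder secret; real key management keeps this with the
# controller, never in the pseudonymized dataset itself.
SECRET_KEY = b"keep-this-out-of-the-dataset"

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for an identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "ada@example.com", "purchase_total": 42.50}
safe_record = {
    "subject": pseudonymize(record["email"]),
    "purchase_total": record["purchase_total"],
}
```

Because the same input always yields the same token, records can still be linked for analysis, but without the key the tokens cannot be traced back to the raw identifiers.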
Handling personal information for statistical purposes presents special challenges. If a third party handles the analysis, or even if it’s done on an in-house system with reduced security, all the information needs to be anonymized. That reduces the amount of harm done in case of a breach. It’s often possible to pick out individuals using a few data points (e.g., postal code and age), so even anonymized data needs protection.
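The re-identification risk above can be checked mechanically: count how many records share each combination of quasi-identifiers, a simplified form of a k-anonymity test. The records here are invented for illustration:

```python
from collections import Counter

# "Anonymized" records: names removed, but postal code and age
# remain as quasi-identifiers (all values are made up).
records = [
    {"postal_code": "10115", "age": 34},
    {"postal_code": "10115", "age": 34},
    {"postal_code": "10115", "age": 71},
    {"postal_code": "20095", "age": 28},
]

# Count records sharing each (postal_code, age) combination.
groups = Counter((r["postal_code"], r["age"]) for r in records)

# Any combination appearing only once singles out one individual,
# so the dataset is not k-anonymous for k >= 2.
unique = [combo for combo, n in groups.items() if n == 1]
print(unique)
```

Here two of the four records are uniquely identifiable from just two data points, which is why anonymized datasets still warrant access controls.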
Policies and procedures are important, and an organization with well-designed ones will have a lower chance of experiencing a breach. It will also most likely get more favorable treatment from the supervisory authority — provided, of course, that it has consistently carried them out. GDPR recommends adoption of a code of conduct or a certification procedure.
The processor (IT department) is supposed to work with personal information only under the controller’s instructions, or as required by law. This places the decision-making responsibility squarely on the controller and sharply limits the initiative the processor can take.
If taking on a new technology could produce a high risk of the misuse of personal data, GDPR requires the controller to carry out a risk assessment. Supervisory authorities can set rules for when this requirement applies. The assessment needs to cover what the technology will do, the reason for it, an assessment of the risks, and a description of planned safeguards and mitigation measures.
Risk assessments are a wise practice in general when making significant changes. Rushing the deployment of software without a close look at the security issues is a good way to run into trouble. Fixing problems before deploying an application is easier than fixing them after they appear in a production environment.
Good security practices can greatly reduce the chance of a breach but can't eliminate it completely. If personal information is leaked, GDPR imposes stringent reporting requirements. A breach must be reported to the appropriate supervisory authority within 72 hours of its discovery, and the report has to describe the breach's scope, its likely consequences, and the planned mitigation measures. If it isn't possible to provide everything by the deadline, as much of the required information as possible should be reported, with the rest to follow.
In addition, where the risk is high, the controller has to notify the people whose data was breached. In some cases, this can be a public announcement rather than individual notifications. There isn’t a specific time limit, but it has to be “without undue delay.”
A breach response plan is an important part of a security policy, and GDPR makes it a legal necessity as well.
The direct benefit of complying with GDPR’s appsec requirements is a reduced chance of inquiries and penalties from supervisory authorities, and that’s reason enough. At the same time, carrying out these requirements improves an organization’s overall security posture. The result is a greatly lowered chance of breaches and panicked responses, as well as less downtime.