By Brad Reimer, Deluxe Corporation
Reposted with permission. Originally posted on Deluxe Blog
I recently attended the 2015 Privacy. Security. Risk. (P.S.R.) conference presented by the International Association of Privacy Professionals (IAPP) and the Cloud Security Alliance (CSA). The keynote speakers offered good reminders about the ongoing duel that privacy and security professionals face in protecting sensitive information from those who would exploit it for gain. They also highlighted a second duel that I hadn’t given much consideration. There are key lessons from both.
Kristin Lovejoy, President of Acuity Solutions, highlighted the idea that if organizations were human beings, they would all be infected with security issues. IBM statistics indicate that an average company with 15,000 employees sees 1.7 million security events a week, with 324 of those events initiated by “motivated” attackers. Even if your institution is not that large, the reality is that there is an active, ongoing infection in your organization. Based on the IBM findings, you have to fight the infection on two fronts: making sure your own people don’t undermine the health of your security and privacy programs, and ensuring that malicious attacks from the outside don’t compromise your systems.
Brian Krebs, Investigative Journalist and Cybersecurity Expert, highlighted some of his experiences and findings related to security. He specifically called out the fact that common practices for customer authentication using readily available information – date of birth, address, Social Security number – and knowledge-based challenge questions make customers more vulnerable. That information can easily be bought or accessed on the Internet, and he illustrated to the attendees at P.S.R. how this can be accomplished. He used that as the basis to discuss instances when the Internal Revenue Service and Social Security Administration were compromised and fraudulent returns and claims were submitted through online authentication mechanisms.
The other point that Lovejoy, Krebs, and Arthur Coviello, Jr. highlighted in the keynotes was the tension between security work and privacy work. Lovejoy stated that security and privacy professionals need to be honest about some of the inherent conflicts in their roles. To provide better security, more information needs to be collected and understood. This violates the privacy principle of collecting only the minimum amount of data necessary, and it creates a richer, more valuable set of data to be defended. Krebs echoed this sentiment, stating that “Better verification requires collecting more information and increasing privacy risk.”
There were some key takeaways to consider when evaluating how your institution protects sensitive personal information:
FLEXIBLE, RISK-BASED CONTROLS
Lovejoy suggested considering a flexible approach to security controls based on risk. In her view, most organizations standardize one set of rigorous controls and then force everyone to conform to them. The problem with this approach is that employees will find ways around the controls to get their work done. The alternative she proposed was to calibrate controls and oversight to the level of access to sensitive information: individuals with no access to sensitive information should have the loosest controls, while super users and master system administrators should have the most rigorous.
NEW AUTHENTICATION METHODS
Krebs’ presentation called out the need to move beyond information that is readily available online or that can be reasonably inferred from various sources. It was apparent that the use of this type of information for authentication needs to be re-evaluated. Personally, when I choose knowledge-based questions for self-authentication, I pick the ones I feel are least likely for someone else to know, e.g., “What was the first name of your prom date?”
Krebs also pointed out the need to make sure that employee and contractor terminations include a rigorous process for removing systems access when it is no longer necessary. I would expand this to include access between systems and software programs, which should be removed or updated when those programs are retired or their roles change.
Lovejoy and Jean Yang, Assistant Professor at Carnegie Mellon University’s School of Computer Science, reminded attendees to keep abreast of innovation. Lovejoy discussed how machine learning can be used to help develop more secure software: by providing a computer with an abundance of examples of “secure” software and applying machine learning techniques, the computer can then identify software code that is not secure without a human reviewing it. Yang also discussed the methodologies she used when developing the Jeeves programming language, which automatically incorporates security controls during development.
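To make the learn-from-examples idea concrete, here is a toy sketch of the kind of approach Lovejoy described: a naive Bayes classifier over code tokens, trained on labeled snippets. The tiny corpus, the labels, and the tokenizer below are all illustrative assumptions of mine, not anything presented at the conference, and a real system would need far more data and richer features.

```python
import math
from collections import Counter

# Hypothetical, hand-labeled training corpus (illustrative only).
TRAIN = [
    ('query = "SELECT * FROM users WHERE id = " + user_input', "insecure"),
    ('os.system("rm " + filename)', "insecure"),
    ("eval(request_body)", "insecure"),
    ('query = db.execute("SELECT * FROM users WHERE id = ?", (user_id,))', "secure"),
    ('subprocess.run(["rm", "--", filename], check=True)', "secure"),
    ("data = json.loads(request_body)", "secure"),
]

def tokens(code):
    # Crude tokenizer: strip common punctuation, split on whitespace.
    for ch in '()",':
        code = code.replace(ch, " ")
    return code.split()

class NaiveBayes:
    def __init__(self, examples):
        # Per-label token counts and totals.
        self.counts = {"secure": Counter(), "insecure": Counter()}
        self.totals = Counter()
        for code, label in examples:
            for t in tokens(code):
                self.counts[label][t] += 1
                self.totals[label] += 1

    def classify(self, code):
        # Log-probability score per label with Laplace (+1) smoothing.
        vocab = len(set(self.counts["secure"]) | set(self.counts["insecure"]))
        scores = {}
        for label in self.counts:
            score = 0.0
            for t in tokens(code):
                score += math.log(
                    (self.counts[label][t] + 1) / (self.totals[label] + vocab)
                )
            scores[label] = score
        return max(scores, key=scores.get)

model = NaiveBayes(TRAIN)
print(model.classify("eval(user_data)"))           # flagged as insecure
print(model.classify("data = json.loads(payload)"))  # flagged as secure
```

The point of the sketch is the principle Lovejoy described: the model learns what risky code looks like from labeled examples rather than from hand-written rules, so no human needs to review each new snippet.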
HEALTHY SECURITY/PRIVACY RELATIONSHIPS
The other area that was highlighted was the need to ensure that your institution’s privacy and security functions have a healthy working relationship. They need to agree on what data is necessary to provide robust authentication given the institution’s risk appetite, and on the best ways to balance keeping account holder information private with keeping it available to authenticate the user.
Protecting sensitive information in your institution is a duel that goes on every day. Your privacy and security teams need to make sure they are working on their form, studying their environment, and thinking of new approaches and defenses. They also need to be aware of the potential damage they can do if they are dueling each other.
One of the duels is very public, conjuring images of the Musketeers fighting Cardinal Richelieu’s forces in the streets. The other is very subtle and hidden, conjuring images of back alleys, dark nights, and isolated corridors. The first is one your institution must fight intensely. The second is one your institution must prevent by ensuring a healthy and beneficial partnership.