You’ve likely never heard the term “risk surface” before, but it’s an important concept that captures the way modern enterprises must manage risk. To that end, we’re providing an in-depth definition of what risk surface is so you can begin to expand your understanding of cyber risk management in the current landscape.
If there’s a lack of collegiate enthusiasm among today’s high schoolers, it’s not difficult to discover why: community colleges are woefully packed, universities are prohibitively expensive, and students are crippled by loans. And while those factors alone are enough to steer high school graduates away from the college route, there’s another cost that is almost never factored into the price of higher education—data.
Universities have access to their students’ highly sensitive information, ranging from how much money their parents have in savings to how often they experience bouts of depression. Think about the contents of a student record, for instance: contact information, high school records, test scores, academic grades, parents’ names, emergency contacts, social security numbers, detailed financial information, scholarships, and much more. Then there are the health records, which contain immunization dates, medical histories, and other extremely sensitive health information. And since 68 percent of universities have full law enforcement agencies, most universities also hold student police records.
We hold banking institutions to a high degree of accountability with regard to safeguarding our personal information, but when it comes to higher learning institutions, our standards are inexplicably lax. Exactly how lax? We decided to find out.
Putting Data Risk in Context
RiskRecon benchmarked the software vulnerability management practices of a wide range of universities against a broad set of banking institutions. In doing so, we risk-contextualized every discovered software vulnerability based on a proprietary mix of issue severity and the sensitivity of the system in which the issue exists.
Systems that required user authentication or that collected sensitive data, such as email addresses or credit card numbers, were considered to be highly sensitive systems. Issue severity was based on the Common Vulnerability Scoring System or CVSS.
Using this method, we not only determined which industries had higher rates of issues, but also measured how each industry performed in protecting the most sensitive systems from the most critical vulnerabilities.
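To make the method above concrete, here is a minimal sketch of risk-contextualized scoring: each issue's CVSS base score is mapped to a severity band and then combined with the sensitivity of the host system. The thresholds, tier names, and function names are illustrative assumptions for this sketch, not RiskRecon's actual model.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0-10) to a severity band,
    using the standard CVSS v3 qualitative ranges."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def risk_priority(cvss_score: float, system_is_sensitive: bool) -> str:
    """Contextualize an issue: critical/high-severity issues on
    sensitive systems (authentication, email, payment data) rank
    highest; low-severity issues on non-sensitive systems rank lowest."""
    severity = cvss_severity(cvss_score)
    if system_is_sensitive and severity in ("critical", "high"):
        return "priority-1"
    if system_is_sensitive or severity in ("critical", "high"):
        return "priority-2"
    return "priority-3"
```

For example, a CVSS 9.8 flaw on a login portal would land in `priority-1`, while the same flaw on a static marketing page would be `priority-2`. Aggregating these tiers per organization is one way to compare industries on how well they shield their most sensitive systems from the most critical issues.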
If you were betting that universities performed worse than banking organizations at safeguarding data, pat yourself on the back: you win.
The alarming part of our study wasn’t that universities perform worse in software patching—it was discovering by how much they underperform. Analyzing the data without regard to issue severity or system sensitivity, the rate of software vulnerabilities in internet-facing systems at universities is 10.6 times higher than that of banks.
Universities performed even worse in protecting highly sensitive systems that process regulated data such as personal health information, credit card numbers, email addresses, and authentication credentials. For highly sensitive systems, universities have an issue rate 13.5 times higher than banks.
Outdated Software Means Vulnerable Data
In case you thought it wasn’t going to get worse...it does. Besides having dramatically higher rates of vulnerabilities in systems that process sensitive data, universities also have an extraordinary number of highly critical issues that have been present in their systems for a very long time.
For example, 24% of universities have one or more internet-facing systems running OpenSSL 0.9.7, which has not been supported since February 2007 and has had multiple high-severity vulnerabilities since 2010. More than a decade on, these universities have yet to remove OpenSSL 0.9.7 from their systems.
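One common way such stale builds surface during external assessment is in the software banners that internet-facing servers advertise, for example an HTTP `Server` header such as `Apache/2.2.3 (Unix) OpenSSL/0.9.7e`. The sketch below flags end-of-life OpenSSL branches found in such a banner; the EOL list and function name are hand-picked assumptions for illustration, not a complete or authoritative inventory.

```python
import re
from typing import Optional

# Long-unsupported OpenSSL branches (illustrative subset, not exhaustive).
EOL_BRANCHES = {"0.9.7", "0.9.8", "1.0.0", "1.0.1"}

def flag_eol_openssl(server_header: str) -> Optional[str]:
    """Return the OpenSSL branch advertised in an HTTP Server header
    if that branch is on the EOL list, else None."""
    match = re.search(r"OpenSSL/(\d+\.\d+\.\d+)", server_header)
    if not match:
        return None
    branch = match.group(1)  # e.g. "0.9.7" from "OpenSSL/0.9.7e"
    return branch if branch in EOL_BRANCHES else None
```

A banner like `Apache/2.2.3 (Unix) OpenSSL/0.9.7e` would be flagged as branch `0.9.7`, while a header with no OpenSSL token, or a supported branch, passes clean. Banner checks like this are a coarse signal; versions can be hidden or misreported, so they complement rather than replace direct vulnerability scanning.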
The table below shows the percent of organizations with one or more of the selected critical severity issues present in their internet-facing systems.
It’s unfair to saddle students with astronomical debt before they’ve even graduated, but it’s downright criminal to also leave their sensitive information exposed. The most frustrating element in all this is that universities just don’t seem to care. After all, they have the resources to do a good job of managing information risk. Many offer courses and degrees in cyber security, staffed with experts in the field and students eager to learn the discipline. Why not take the research taught in their classrooms and published in academic papers and apply it to protecting their own institutions?
Universities are also good at establishing and complying with performance standards. They do it in the areas of academics, student admissions, and athletics, so why not in information risk management?
It’s time for universities to band together as an industry to self-regulate their information risk management practices. In doing so, universities could achieve good information risk management while providing the world some much-needed practical research in managing information risk that universities are uniquely qualified to provide.
If they don’t self-regulate, government regulators will eventually step in and impose regulations that have real consequences. It’s very possible that a federal regulatory framework would condition funding on certification to regulatory requirements.
Now is the time to act. Sensitive data held by universities is at significant risk of compromise; it’s clear that they’re not doing a good job of managing that risk. Further, public disclosure of a major breach of a university is inevitable. When that happens, the regulatory wheels will start turning in Washington, D.C. Universities and their stakeholders will be much better off if they act now as individual institutions and as an industry to manage information risk well.
We owe it to our students to take better care of their information, otherwise we’re setting them up to fail and delivering them into a world that has no regard for their sensitive data or their success.
We’re well-versed in security breaches by now, but there’s still some uncertainty about whom to blame when things go wrong. A clear example is the recent Ascension Breach, which involved Rocktop Partners, OpticsML, and various financial institutions in the mishandling of mortgage information.
We’ve delved into the Ascension Breach in a recent article published in Information Management. There are four important takeaways from the Ascension Breach:
- Information security matters – Regardless of the size of your organization, you’re responsible for protecting the privacy of your data. Being a small business is no excuse.
- Risk surface is expansive – Your risk surface isn’t limited to your immediate systems; it’s anywhere the confidentiality, integrity, or availability of your data or transactions is at risk. That risk extends to your third-party vendors, and often to their vendors in turn.
- You’re responsible for investigating your partners’ information security – If your customers have given you data—in this case, sensitive mortgage information—you’re responsible for protecting that information even if you sell it.
- Regulations need to expand – While banks are strongly regulated, entities that deal with financial institutions and interact with their data are often not. Regulation needs to cover every organization that handles consumer information.
And what about the customer? Where do they stand? Read the full article to delve into the details of the breach.
We’re running a blog post series on the “Seven Deadly Sins of Third-Party Cyber Risk Management.” Here’s the second deadly sin: failing to make third-party risk management about business risk management.
Get our all-new Playbook, built on real-life data from executives at 30 companies, which offers a window into how organizations are confronting persistent breach risks stemming from third parties.
We are excited to announce the release of our inaugural Third-Party Security Risk Management Playbook, an inside look at how real companies are managing third-party cyber risk. To gather this information, we conducted in-depth interviews with security executives from 30 participating organizations across multiple industries. The Playbook reveals how companies are managing the security risks of their complex digital supply chains and sensitive business partnerships. Our study identified 14 vendor-neutral capability sets comprising 72 common, emerging, and pioneering practices that firms have implemented to manage third-party security risk. As a study of real-world third-party risk management programs, the Playbook is a valuable tool executives can use to benchmark their own programs and gain insight into pioneering practices other firms are adopting.
In the second part of this two-part blog series, we look at the reality of your risk processes.
The complex, extensive vendor ecosystems in today’s enterprises have impacted the effectiveness of risk control processes. Local or otherwise decentralized IT and business functions procure SaaS solutions on their own, entirely bypassing the formal IT governance process. Paper-based risk control processes were developed for a time when your vendor population was much smaller, data storage was mostly on premise, and third parties were only a small piece of your security programs. Today, risk control processes must be adapted to new risk realities.
You just received an updated security attestation from your third-party provider, but the hair on your arms stands straight up when news of the latest hack appears on your screen. Your vendors may talk the talk, but you anxiously wonder if they're walking the walk. Checklist compliance is not good enough. It's time to confront your risk reality. In part one of this two-part blog series, we look at risk measurement.