A Look at What Makes a Vulnerability Survive in the Remediation Race

In the first of our three-part series, Tenable Research unveils the key findings from our new report on common persistent vulnerabilities, including their likely causes and the importance of prioritization to effectively reduce cyber risk.

Why do some vulnerabilities persist longer than others? And how should that influence your remediation process? 

In its latest report, Tenable Research looks at the common persistent vulnerabilities that often linger on enterprise systems for months, even years. As the number of potential attack vectors multiplies each year, our findings show that many dangerous threats persist longer than they should, in large part because traditional remediation models are ineffective. 

The first of our three-part blog series provides an overview of the background, research methods and key findings behind this report.

The challenge: CVSS is risk-blind

The last few years have seen a staggering growth in the number of vulnerabilities disclosed. In 2019, over 17,000 vulnerabilities were added to the U.S. National Vulnerability Database (NVD). Given this large volume of vulnerabilities, remediating every one present on an organization’s systems is unsustainable. 

Security teams must prioritize vulnerabilities to ensure they are effectively reducing risk and not misapplying limited resources. However, they have largely been left to their own devices for prioritization. Many organizations have adopted the Common Vulnerability Scoring System (CVSS), a metric designed to describe the technical nature of vulnerabilities, to drive prioritization. But the misinterpretation and misuse of CVSS only compound the problem, as CERT researchers noted in their paper on the topic:

“CVSS is designed to identify the technical severity of a vulnerability. What people seem to want to know, instead, is the risk a vulnerability or flaw poses to them, or how quickly they should respond to a vulnerability.” 1

This lack of prioritization exposes organizations to risk, as vulnerabilities go unremediated even as they are actively being exploited in the wild. 

Research methodology

To better understand the nature of this remediation gap, we sought answers to the following research questions: 

  1. Causes of persistence: Do the characteristics of vulnerabilities affect their persistence? Or, is persistence merely related to the remediation process and its pace? 
  2. Variance in remediation: Are there vulnerability remediation differences between organizations? And, are there differences within each organization?

Tenable has one of the most extensive vulnerability and intelligence datasets in the industry. It is derived from a 4.5-petabyte data lake of vulnerability data collected from over 10 different sources, including open-source and commercial intelligence feeds.

We analyzed Time to Remediate data and used the interquartile range (IQR) technique to detect outliers. The main goal was to understand what makes a vulnerability survive in the remediation race.

We considered only Time to Remediate data for vulnerabilities assessed within three months of their initial publication to the NVD (i.e., vulnerabilities assessed from one month before to three months after their NVD publication date). The one-month lead time accounts for the average delay between disclosure and NVD publication.2
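
To make the methodology concrete, here is a minimal sketch of that filtering and outlier-detection step in Python. The column names, the 30/90-day window boundaries and the 1.5 × IQR upper fence are illustrative assumptions, not details taken from the report.

```python
import pandas as pd

def find_persistent_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Flag vulnerabilities whose time to remediate is an IQR outlier.

    Assumed columns (hypothetical, not Tenable's actual schema):
      - 'nvd_published'     : date the CVE was published to the NVD
      - 'first_assessed'    : date the vulnerability was first assessed
      - 'days_to_remediate' : observed time to remediate, in days
    """
    # Keep vulnerabilities assessed from roughly -1 to +3 months of NVD publication
    offset_days = (df["first_assessed"] - df["nvd_published"]).dt.days
    window = df[(offset_days >= -30) & (offset_days <= 90)]

    # Interquartile range (IQR) outlier detection on time to remediate
    q1 = window["days_to_remediate"].quantile(0.25)
    q3 = window["days_to_remediate"].quantile(0.75)
    upper_fence = q3 + 1.5 * (q3 - q1)

    # The "survivors" are the persistent outliers beyond the upper fence
    return window[window["days_to_remediate"] > upper_fence]
```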

Key findings: Why prioritization beats the remediation race

Our analysis revealed many shortcomings in traditional remediation practices. Even as security teams work around the clock to defend their attack surface, if they remediate threats based on CVSS data alone, there is no guarantee those efforts are effectively reducing their overall cyber risk. A few key statistics support this conclusion: 

  • Exploitable vulnerabilities often fly under the radar. Despite their higher risk, vulnerabilities with exploits show roughly the same persistence as those with no available exploit. Defenders are still operating as though all vulnerabilities have the same likelihood of exploitation.
  • Client-side vulnerabilities are the most persistent threats. Over 60 percent of persistent client-side vulnerabilities have been exploited in the wild, compared to just 38 percent across the population at large. Vendors ought to make it easier for customers to fix their products, and security teams must prioritize patching software that is difficult to upgrade. 
  • Few teams can afford to win the remediation race. Only 5.5 percent of organizations prevail in remediating more vulnerabilities than they discover during a given timeframe. This again points to the need for greater prioritization, as attaining 100-percent remediation is unsustainable for most organizations.

Security teams need data-driven tools that can help them work smarter and drive effective remediation. Predictive Prioritization combines proprietary Tenable-collected data with third-party threat intelligence to continually reassess vulnerabilities based on proper threat modeling. This approach, powered by an advanced data science algorithm developed by Tenable Research, enables organizations to focus on the small fraction – roughly three percent – of vulnerabilities that pose actual risk.
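
As a purely illustrative sketch of the underlying idea (not Tenable's Predictive Prioritization model, whose data sources and weights are proprietary), the snippet below re-ranks vulnerabilities by blending a CVSS base score with simple threat-intelligence signals such as exploit availability. The fields and weights are assumptions chosen for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str              # placeholder identifier
    cvss: float              # CVSS base score, 0-10
    exploit_available: bool  # public exploit code exists
    exploited_in_wild: bool  # exploitation observed in the wild

def threat_aware_score(v: Vuln) -> float:
    """Blend technical severity with threat context (illustrative weights only)."""
    score = v.cvss
    if v.exploit_available:
        score += 2.0   # assumed bump for available exploit code
    if v.exploited_in_wild:
        score += 4.0   # assumed bump for observed exploitation
    return min(score, 10.0)

vulns = [
    Vuln("CVE-EXAMPLE-1", cvss=9.8, exploit_available=False, exploited_in_wild=False),
    Vuln("CVE-EXAMPLE-2", cvss=7.5, exploit_available=True, exploited_in_wild=True),
]

# The lower-CVSS vulnerability outranks the higher one once threat context is added
for v in sorted(vulns, key=threat_aware_score, reverse=True):
    print(v.cve_id, round(threat_aware_score(v), 1))
```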

The best way for organizations to gain ground against cyberthreats is to change the remediation game altogether. In the next two installments of this series, we’ll dive into the data to look at the lifespan of vulnerabilities as well as remediation trends across the global population. If you’d like to get a head start, you can download the full report below.

Download the Free Report

1. CERT, "Towards Improving CVSS," December 2018
2. Recorded Future, "The Race Between Security Professionals and Adversaries," June 2017
