Cybersecurity Snapshot: Why Organizations Struggle to Prevent Attacks and How They Can Do Better
Find out about the top people, process and technology challenges hurting cybersecurity teams, as identified in a commissioned study conducted by Forrester Consulting on behalf of Tenable.
Dive into six things that are top of mind for the week ending November 3.
1 - Tenable report: Average org fails to prevent 43% of attacks
A combination of people, process and technology challenges is getting in the way of organizations’ efforts to effectively reduce cyber risk, as the attack surface becomes larger and more complex.
That’s a key finding from a commissioned study of 825 global cybersecurity and IT leaders conducted in 2023 by Forrester Consulting on behalf of Tenable.
Specifically, in the last two years, the average organization preventively defended 57% of the cyberattacks it faced, and had to reactively mitigate the remaining 43% of attacks.
Here are other findings from the 24-page report based on the Forrester study, titled "Old Habits Die Hard: How People, Process and Technology Challenges Are Hurting Cybersecurity Teams":
- 75% identified cloud infrastructure as their highest source of risk
- 74% believe their organization would more successfully defend against cyberattacks if it devoted more resources to preventive cybersecurity
- 58% say the security team is too busy fighting critical incidents to take a preventive approach
- Three out of four of the most frequently used cyber tools are reactive, which makes practicing preventive cybersecurity difficult
The study points to the practices of high-maturity organizations as examples to follow. Specifically, the study found that in these organizations:
- Preventive practices, communication and collaboration are better
- Cybersecurity teams are engaged earlier
- Coordination among teams is more effective
- A wider variety of data sources and methodologies are used to make decisions
(Source: "Old Habits Die Hard: How People, Process and Technology Challenges Are Hurting Cybersecurity Teams", based on a commissioned study of 825 global cybersecurity and IT leaders conducted in 2023 by Forrester Consulting on behalf of Tenable.)
Here’s a small sampling of the study’s detailed recommendations for overcoming critical people, process and technology challenges:
- People: To help resolve internal conflicts and organizational silos, rethink how you measure the performance of your IT and cybersecurity teams
- Process: Incorporate cyber risk metrics into all decision-making processes by having cybersecurity work closely with business leaders
- Technology: Re-evaluate your siloed cybersecurity tools to determine which ones prevent effective communication and coordination between IT and cybersecurity
Overall, the study recommends adopting an exposure management program to help cyber teams tame the complexity of the modern attack surface by bringing together vulnerability management, web application security, cloud security, identity security, attack path analysis and external attack surface management.
To get more details:
- Download the report
- Read the blog “How People, Process and Technology Challenges Are Hurting Cybersecurity Teams”
For more information about exposure management, check out these Tenable resources:
- “The 7 Benefits of a Unified Exposure Management Platform” (video)
- “Exposure Management: How To Get Ahead Of Cyber Risk” (resource page)
- “Exposure Management for the Modern Attack Surface” (on-demand webinar)
- “3 Real-World Challenges Facing Cybersecurity Organizations: How an Exposure Management Platform Can Help” (white paper)
- “Exposure Management: Reducing Risk in the Modern Attack Surface” (blog)
2 - White House tackles promise and peril of AI with exec order
For cybersecurity leaders tracking the U.S. government’s evolving oversight of artificial intelligence, here’s a major development to check out. This week, the Biden administration issued an executive order outlining how it plans to maximize AI’s benefits while reducing its risks.
The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” mandates a variety of actions across multiple areas, including:
- New standards to tackle AI systems’ potential dangers, including threats to critical infrastructure facilities and sophisticated attempts to defraud and deceive individuals
- Safeguards for people’s privacy and personal data
- Protections for the rights of consumers, patients and students
- Support for workers who might face increased surveillance, bias and job loss
Of particular relevance to AI vendors, the executive order requires AI system developers to share safety test results with the U.S. government. They, along with organizations that use AI, will also be required to comply with stringent standards and tests to be established by a variety of federal agencies.
“The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more,” reads a White House fact sheet.
Also this week, the White House announced the launch of the U.S. AI Safety Institute, which will be tasked with operationalizing NIST’s AI Risk Management Framework through the creation of tools, benchmarks and best practices.
To get more details, check out:
- The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”
- The “President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” fact sheet
- The “Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence” fact sheet
For more information about the U.S. government’s initiatives to address AI risks:
- “US federal AI governance: Laws, policies and strategies” (International Association of Privacy Professionals)
- “The AI rules that US policymakers are considering, explained” (Vox)
- “The Supreme Court’s major questions doctrine and AI regulation” (Brookings Institution)
- “Everything you need to know about the government’s efforts to regulate AI” (Fast Company)
- “Senators use hearings to explore regulation on artificial intelligence” (Roll Call)
3 - Study: AI models lack transparency
And speaking of the safety and trustworthiness of AI, it looks like the major AI vendors are failing at transparency. At least, that’s the conclusion of a Stanford University study.
Its main takeaway: Visibility into the inner workings of the most popular generative AI tools is, well, murky and getting more opaque – and that’s not good.
“Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance,” reads the study “The Foundation Model Transparency Index,” which is also the name of the scoring system used.
The study rates the transparency of 10 major foundation model companies using 100 criteria. The researchers, a team from Stanford, MIT and Princeton, find the results disappointing, with a mean score of only 37%.
“No major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry,” reads a study highlights page.
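To make the scoring mechanics concrete, here’s a toy sketch of how an index like this aggregates results: each developer is graded on a set of binary transparency indicators, a developer’s score is the share of indicators it satisfies, and the headline figure is the mean across developers. The developer names and values below are invented placeholders, not FMTI data; the real index uses 100 indicators across 10 developers.

```python
from statistics import mean

# Toy scores: each developer is graded on binary transparency indicators
# (1 = information disclosed, 0 = not disclosed). These values are invented,
# not FMTI data; the real index uses 100 indicators per developer.
indicator_scores = {
    "developer_a": [1, 0, 1, 0, 0],
    "developer_b": [0, 0, 1, 0, 0],
    "developer_c": [1, 1, 0, 1, 0],
}

# A developer's score is the share of indicators it satisfies.
per_developer = {name: sum(vals) / len(vals) for name, vals in indicator_scores.items()}

# The headline figure is the mean score across all developers.
overall = mean(per_developer.values())

for name, score in sorted(per_developer.items()):
    print(f"{name}: {score:.0%}")
print(f"Mean transparency score: {overall:.0%}")
```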
To get more details, check out:
- The Foundation Model Transparency Index study highlights page
- The Foundation Model Transparency Index study
- A blog from researcher Sayash Kapoor titled “How Transparent Are Foundation Model Developers?”
- The announcement “Introducing The Foundation Model Transparency Index” from the Stanford Institute for Human-Centered Artificial Intelligence
- The Foundation Model Transparency Index GitHub page
4 - Global cybersecurity workforce up 8.7% yet record-high skills gap persists
There are 5.5 million people employed in cybersecurity worldwide in 2023, an 8.7% increase from 2022, according to the 2023 ISC2 Cybersecurity Workforce Study, which surveyed 14,864 cyber pros worldwide. Yet demand for skilled workers has reached a record high: the vast majority of respondents (92%) report skills gaps at their organization, and the report estimates that 4 million more professionals are needed worldwide to adequately safeguard digital assets.
Source: 2023 ISC2 Cybersecurity Workforce Study
The top three skills in shortest supply are:
- cloud computing security (35%)
- artificial intelligence/machine learning (32%)
- zero trust implementation (29%)
In fact, 47% of respondents see cloud computing security as the most sought-after skill for career advancement.
AI is a source of anxiety for many respondents:
- 45% foresee AI as their top challenge over the next two years
- 47% say they have no or minimal knowledge of artificial intelligence (AI)
Source: 2023 ISC2 Cybersecurity Workforce Study
The study also discusses how the current economic climate is impacting staffing, provides data on the benefits of diversity, equity and inclusion (DEI) and offers insights into the value of certifications.
The report includes the following recommendations for organizations looking to bridge the skills gap:
- Implementing initiatives, such as training and DEI, to attract and retain top talent and upskill existing workers. The study found that organizations investing in training today are only half as likely to have critical skills gaps as those that aren’t investing and have no plans to.
- Using DEI initiatives to help navigate times of economic uncertainty. “Preventing your organization from unintentionally excluding large swaths of the available talent pool (by hiring with significant bias or creating an uninclusive environment) will be critical in ensuring that you have the right balance of skills needed to operate effectively during difficult times,” states the report. “In addition, the long-term effects are exceedingly valuable. A workplace where all cybersecurity professionals feel comfortable keeps workers happy, productivity high and attrition low.”
- Rethinking hiring parameters. Organizations seeking to nurture a skilled cybersecurity team should look for those with non-traditional backgrounds and expand their internal and external recruiting.
- Expanding basic cybersecurity training to everyone. To create more well-rounded and knowledgeable cybersecurity employees, try offering basic training/professional development to other departments within the organization. Encouraging basic skills development on a holistic level can also organically promote your cybersecurity team as a new career pathway.
To get more details:
- Read the press release
- Download the 2023 ISC2 Cybersecurity Workforce Study
5 - Let’s quit paying ransoms: the International Counter Ransomware Initiative pledge
The International Counter Ransomware Initiative (CRI) —which includes 48 countries as well as the European Union and INTERPOL — will be developing its first-ever joint policy statement declaring that member governments should not pay ransoms.
The joint policy statement is one of several initiatives emerging from the group’s third annual summit, held in Washington, D.C., Oct. 31 – Nov. 1. Other efforts will include:
- Launching innovative information-sharing platforms enabling CRI member countries to rapidly share threat indicators, including Lithuania’s Malware Information Sharing Platform (MISP) and Israel and the UAE’s Crystal Ball platforms (see the sketch after this list)
- Creating a shared blacklist of wallets, made possible by the U.S. Department of the Treasury’s pledge to share data on illicit wallets used by ransomware actors with all CRI members
- Committing to assist any CRI member with incident response if its government or lifeline sectors are hit with a ransomware attack
- Launching a project to leverage artificial intelligence to counter ransomware
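The MISP platform named in the information-sharing bullet above is open source and scriptable, which is what makes the rapid, machine-speed exchange of indicators the summit envisions practical. Below is a minimal sketch of what publishing a single ransomware indicator to a MISP community could look like using the PyMISP client; the instance URL, API key, distribution settings and indicator value are placeholders and assumptions, not anything published by the CRI.

```python
from pymisp import MISPEvent, PyMISP

# Placeholder connection details -- a real deployment would point at a
# community's MISP instance and use an API key issued by that community.
MISP_URL = "https://misp.example.org"
MISP_KEY = "YOUR_API_KEY"


def share_ransomware_indicator(ip: str, description: str) -> None:
    """Publish a single ransomware-related indicator as a MISP event."""
    misp = PyMISP(MISP_URL, MISP_KEY, ssl=True)

    event = MISPEvent()
    event.info = f"Ransomware infrastructure: {description}"
    event.distribution = 3      # "All communities" -- share broadly (assumed policy)
    event.threat_level_id = 1   # High
    event.analysis = 2          # Completed

    # Attach the indicator; to_ids=True marks it as usable for detection.
    event.add_attribute("ip-dst", ip, to_ids=True, comment=description)

    misp.add_event(event, pythonify=True)


if __name__ == "__main__":
    share_ransomware_indicator("203.0.113.7", "C2 server seen in recent campaign")
```

In practice, the distribution level and sharing groups would follow the community’s sharing policy rather than being hard-coded as above.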
To get more details on the initiative, check out:
- Alliance of 40 countries to vow not to pay ransom to cybercriminals, US says (Reuters)
- White House hosts Counter Ransomware Initiative summit, with a focus on not paying hackers (The Record)
- Governments should not pay ransoms, International Counter Ransomware Initiative members agree (CSO)
- US Leads 40-Country Alliance to Cut Off Ransomware Payments (Dark Reading)
6 - MITRE ATT&CK v14: what’s new?
MITRE ATT&CK v14, released on Oct. 31, offers enhanced detection guidance for many techniques, expanded scope on Enterprise and Mobile, and new Assets in industrial control systems (ICS), according to the organization’s blog post on Medium.
Areas of focus for this release include:
- Lateral Movement, which now features over 75 BZAR (Bro/Zeek ATT&CK-based Analytics and Reporting) analytics. A subset of CAR analytics, BZAR analytics enable defenders to detect and analyze network traffic for signs of ATT&CK-based adversary behavior (see the sketch after this list)
- Cataloging more adversary activities that are adjacent to direct interactions — and can lead to direct network impact. The increased range incorporates deceptive practices and social engineering techniques that may not have a direct technical component, including financial theft, impersonation and spearphishing voice.
- The addition of 14 inaugural Assets representing the primary functional components of the systems associated with the industrial control systems (ICS) domain. The Assets pages include “in-depth definitions, meticulous mappings to techniques, and a list of related Assets,” according to the MITRE blog post.
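Because all of this content ships in ATT&CK’s public STIX bundles, the new release can also be explored programmatically. Here’s a minimal sketch, not from the MITRE post, that pulls the Enterprise ATT&CK bundle and lists the techniques tagged with the Lateral Movement tactic (the area the new BZAR analytics target); the bundle URL reflects the public attack-stix-data GitHub repository and should be treated as an assumption, as should the field-handling details.

```python
import requests

# Assumed path to MITRE's published Enterprise ATT&CK STIX bundle; adjust if
# the attack-stix-data repository layout changes.
BUNDLE_URL = (
    "https://raw.githubusercontent.com/mitre-attack/attack-stix-data/"
    "master/enterprise-attack/enterprise-attack.json"
)


def lateral_movement_techniques():
    """Return (ATT&CK ID, name) pairs for Lateral Movement techniques."""
    bundle = requests.get(BUNDLE_URL, timeout=60).json()
    results = []
    for obj in bundle.get("objects", []):
        # Techniques are STIX attack-pattern objects; skip revoked/deprecated ones.
        if obj.get("type") != "attack-pattern":
            continue
        if obj.get("revoked") or obj.get("x_mitre_deprecated"):
            continue
        phases = obj.get("kill_chain_phases", [])
        if any(p.get("kill_chain_name") == "mitre-attack"
               and p.get("phase_name") == "lateral-movement" for p in phases):
            ext_ids = [r.get("external_id") for r in obj.get("external_references", [])
                       if r.get("source_name") == "mitre-attack"]
            results.append((ext_ids[0] if ext_ids else "?", obj["name"]))
    return sorted(results)


if __name__ == "__main__":
    for attack_id, name in lateral_movement_techniques():
        print(f"{attack_id}  {name}")
```

Swapping the phase name pulls other tactics the same way, and the ICS bundle in the same repository should similarly expose the new Assets.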
The organization has also refined the navigation bar of its ATT&CK website, with a single dynamic menu display and access to secondary links in associated dropdown menus. And, as always, MITRE welcomes your input. You can email [email protected] or reach out via Twitter or Slack.
To get more details on MITRE ATT&CK v14:
- Read the blog on Medium: ATT&CK v14 Unleashes Detection Enhancements, ICS Assets, and Mobile Structured Detections
- View the release notes
- Catch up with the changelog