
Cybersecurity Snapshot: U.S., U.K. Governments Offer Advice on How To Build Secure AI Systems 


Looking for guidance on developing AI systems that are safe and compliant? Check out new best practices from the U.S. and U.K. cyber agencies. Plus, a new survey shows generative AI adoption is booming, but security and privacy concerns remain. In addition, CISA is warning municipal water plants about an active threat involving Unitronics PLCs. And much more!

Dive into six things that are top of mind for the week ending December 1.

1 - U.S., U.K. publish recommendations for building secure AI systems

If you’re involved with creating artificial intelligence systems, how do you ensure they’re safe? That’s the core question that drove the U.S. and U.K. cybersecurity agencies to publish this week a joint document titled “Guidelines for Secure AI System Development.”

“We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems,” reads the guide.

The 20-page document, jointly issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the Department of Homeland Security (DHS) and the U.K. National Cyber Security Centre (NCSC), focuses on four core areas:

  • Secure design, including risk awareness and threat modeling
  • Secure development, including supply chain security, documentation and technical debt management
  • Secure deployment, including protecting infrastructure and AI models from compromise
  • Secure operation and maintenance, such as logging and monitoring, and update management (a minimal logging sketch follows this list)
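
As a rough illustration of the "logging and monitoring" item above, here is a minimal Python sketch, not taken from the guidelines themselves, that wraps a model call and writes an audit record for each request. The audited_model_call helper, the stand-in model and the log file name are assumptions for illustration only.

```python
# Minimal sketch of logging and monitoring AI model usage: record each call
# with a timestamp, the caller, and a hash of the prompt so security teams
# can audit suspicious use later. Helper names and paths are placeholders.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def audited_model_call(model_call, prompt: str, user_id: str) -> str:
    """Invoke a model and write an audit record describing the request."""
    response = model_call(prompt)
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
    }
    logging.info(json.dumps(record))
    return response

# Example with a stand-in model that simply upper-cases the prompt:
print(audited_model_call(lambda p: p.upper(), "hello model", user_id="analyst-1"))
```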

To get more details, check out:

For more information about secure AI development:

VIDEOS

Inside the Race to Build Safe AI (Bloomberg)

Stanford Webinar - Building Safe and Reliable Autonomous Systems (Stanford University)

2 - Survey: Orgs embrace GenAI, as security and privacy worries loom

Enterprises are rushing to adopt generative AI with unprecedented speed, while assessing risks including security vulnerabilities, privacy concerns, and safety and reliability issues.

That’s according to the “Generative AI in the Enterprise” report from tech publishing and training company O’Reilly, which polled more than 2,800 technology professionals primarily from North America, Europe and Asia-Pacific who use the company’s learning platform.

“The adoption of generative AI is certainly explosive, but if we ignore the risks and hazards of hasty adoption, it is certainly possible we can slide into another AI winter,” Mike Loukides, O’Reilly VP of content strategy and the report’s author, said in a statement.

When asked what risks they’re testing generative AI for, respondents ranked these as the top five:

  • Unexpected outcomes (49%)
  • Security vulnerabilities (48%)
  • Safety and reliability (46%)
  • Fairness, bias, and ethics (46%)
  • Privacy (46%)

Other findings include:

  • A whopping 67% of respondents said their organizations are already using generative AI
  • The most common usage of generative AI is for programming (77%), followed by data analysis (70%) and customer-facing apps (65%)
  • Generative AI adoption is creating a need for AI programmers, data analysts and AI/machine learning operators
  • The top constraint impeding implementation of generative AI is the inability to identify appropriate use cases, cited by 53% of respondents

Factors holding back AI adoption

(Source: O’Reilly’s “Generative AI in the Enterprise” report, November 2023)

To get more details:

3 - CISA to water authorities: Beware of Unitronics PLC exploit

Unitronics programmable logic controllers (PLCs) used by water and wastewater operators are being actively breached by hackers, CISA warned this week, while outlining mitigation steps that water authorities should take immediately.

In its alert “Exploitation of Unitronics PLCs used in Water and Wastewater Systems,” CISA mentioned an incident at a U.S. water plant tied to this exploit that prompted the facility to take the affected system offline. There is no known risk to the unidentified municipality’s drinking water.


CISA recommendations include:

  • Change the Unitronics PLC default password
  • Require multi-factor authentication for all remote access to the OT network
  • Disconnect the PLC from the open internet; if remote access is necessary, control it with a firewall and a virtual private network (VPN)
  • Use a different port from TCP 20256, which attackers are actively targeting (a quick reachability check is sketched after this list)
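
As a rough companion to the last two recommendations, the Python sketch below checks whether a given host still answers on TCP 20256, the default Unitronics PCOM port cited in the alert. The helper name and the placeholder address are assumptions for illustration; run it only against systems you are authorized to test.

```python
# Check whether a host still exposes TCP 20256, the default Unitronics PCOM
# port that CISA says attackers are targeting. Host value is a placeholder.
import socket

def port_is_open(host: str, port: int = 20256, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    plc_host = "192.0.2.10"  # placeholder address for a PLC you manage
    if port_is_open(plc_host):
        print(f"{plc_host}: TCP 20256 is reachable -- review firewall/VPN rules")
    else:
        print(f"{plc_host}: TCP 20256 is not reachable from this network")
```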

"These attacks are continued evidence that industrial security is in need of significant improvements, and government regulation at some capacity is necessary to ensure the cyber safety of public services like water and wastewater systems," Marty Edwards, Tenable Deputy CTO for OT and IoT, told Cybersecurity Dive.

To get more details, read the alert “Exploitation of Unitronics PLCs used in Water and Wastewater Systems” and check out CISA’s “Water and Wastewater Cybersecurity” page.

For more coverage about this alert:

4 - A prioritization plan for new CISOs

After CISOs start a new job, especially if it’s their first time as the cybersecurity chief, it’s critical that they clearly delineate their most important priorities. A new checklist from IANS Research published this week aims to help.

 


 

The “New CISO Priority Checklist: Ensure a Fast, Successful Start” document outlines priorities for the first 30 days on the job; the first six months; and the first year. Here’s a sampling of entries.

For the first 30 days:

  • Review security policies, standards and procedures, making sure they’re clear, current and comprehensive
  • Verify that a risk register exists, and that it contains the most significant risks; that it has executive buy-in; and that risk ownership and accountability are clear
  • Define and document the information that needs protection

For the first six months:

  • Establish key performance indicators and metrics, and make sure they reflect risks and mitigation measures
  • Assess third-party risk management capabilities
  • Review the organization’s security incidents, as well as security audit findings

For the first year:

  • Identify and address the main organizational complaints about the security team
  • Enhance relationships with other departments, such as human resources and finance, and ensure you have continuous communication with them
  • Review the software development lifecycle, and ensure security is part of the process from the early stages

View the full checklist here.

VIDEOS

Do You Really Want to Be a CISO? Spencer Mott, Booking.com CISO (Security Weekly)

Becoming a CISO: Leading Transformation (SANS Institute)

5 - CISA to vendors: Boost security of your web management interfaces

Software vendors should make their web management interfaces secure by design, instead of putting the onus on their customers. So says CISA in its new “Secure by Design Alert: How Software Manufacturers Can Shield Web Management Interfaces from Malicious Cyber Activity” document.


CISA’s recommendations for software vendors include:

  • Disable your products’ web interfaces by default, and list the risks involved when loosening this default configuration
  • Consistently enforce authentication throughout your products, especially on administrator portal interfaces (a minimal sketch of this idea follows the list)
  • Be radically transparent and accountable regarding your products’ vulnerabilities
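
To make the authentication recommendation concrete, here is a minimal sketch, assuming a Python/Flask web application, of rejecting unauthenticated requests to an administrator path. The route names, token handling and header check are illustrative assumptions, not part of CISA's guidance.

```python
# Minimal sketch: enforce authentication on every /admin request before it
# reaches a handler. Token value and routes are placeholders for illustration.
from flask import Flask, abort, request

app = Flask(__name__)

ADMIN_TOKEN = "replace-with-a-real-secret"  # placeholder; load from a secrets store in practice

@app.before_request
def require_auth_for_admin() -> None:
    """Reject any /admin request that lacks the expected bearer token."""
    if request.path.startswith("/admin"):
        header = request.headers.get("Authorization", "")
        if header != f"Bearer {ADMIN_TOKEN}":
            abort(401)

@app.route("/admin/settings")
def admin_settings() -> str:
    return "admin settings page"

@app.route("/status")
def status() -> str:
    return "ok"
```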

To get more details, read the 2-page guide.

For more information about vulnerabilities in software management interfaces:

6 - California to regulate AI systems

If your organization is using AI for business operations, here’s yet another AI regulation to keep tabs on. 

This week, the California Privacy Protection Agency published a draft of proposed rules governing “automated decision-making technology” (ADMT) systems, including those that use AI.

The draft regulations define new protections related to businesses’ use of ADMT, including customers’ rights to opt out of these systems and to access information about how businesses are using them. The draft rules would also require businesses to inform consumers via “pre-use notices” how they intend to use ADMT.

The California Privacy Protection Agency’s board will offer feedback on the draft regulations in December. The agency expects to begin “formal rulemaking” in 2024.

To get more details, check out:

For more information about AI regulation:
