
Tenable Blog


AfterBites: My Hospital Robo-Surgeon Has a What?

(This column commences what I am going to call "afterbites" - extended random commentary on topics raised in SANS' Newsbites column. As some of you know, I am one of the volunteer editors/commenters on the weekly Newsbites and it probably won't surprise you to discover that sometimes the discussions we have on the editors' mailing list can get - interesting. Usually, there's not enough space, nor would it be appropriate for the editors to engage in hand-to-hand combat, so I'm going to periodically fire unaimed salvoes from the safety of my blog, here.)

The story:

 --London Hospitals' Worm Infection "Entirely Avoidable"
(February 2, 2009)
A review of the worm infection that affected three London hospitals last
November found that the incident was "entirely avoidable." The Mytob
worm infected 4,700 PCs at St. Bartholomew's, the Royal London Hospital
in Whitechapel and The London Chest Hospital; as a result, some
ambulances were rerouted and some recordkeeping had to be done with pen
and paper. While administrative systems were running again within three
days, it took two additional weeks to scan all the machines to ensure
they were clear of infection. The review determined that the initial
infection resulted from misconfigured anti-virus software and spread so
widely due to a decision by administrators to disable security updates
because they had caused some computers to reboot while surgery was
underway.
http://www.theregister.co.uk/2009/02/02/nhs_worm_infection_aftermath

There is so much wrong with this picture that it's hard to know where to start. "Ambulances rerouted" could be extremely unpleasant if you were, say, waiting patiently for help after a car crash, or something. "Recordkeeping with pen and paper" is, perhaps, a useful survival drill. The part that makes my blood run cold is "caused some computers to reboot while surgery was underway." I know that if I were a patient and heard the distinctive "cdrom-whirr, beep" of a computer rebooting, I would leap off the table and make a bloody trail toward the taxi stand, if I had working legs.

What this really illustrates, though, is a deeper problem with how computer security is done, with regards to production systems. The first rule(s) of production computing are best described with the following little piece of BASIC code:
10 Set up the system
20 Make it work
30 Do not F* with it
40 While it works GOTO 30
50 Fix it
60 GOTO 30

Security - most particularly the "penetrate and patch" security model - has completely broken how we do production systems. In 1998 I predicted that software would start to self-patch (people thought the idea was crazy, then!) and I was right. Today, it's effectively impossible to use software if you're not internet-connected. I know this in detail because my home internet connection is ultra-secure dial-up, and every so often my iPod insists that it needs to download a new version weighing in at 56 megs, or Windows Update wants to tie up my line for 3.2 years installing a new service pack. And, yes, the driver model for what passes for operating systems these days requires a system-wide garbage collection (a reboot) any time you change anything remotely interesting. For production systems, this is a disaster.

It's worse than that, even. Back in the 1980s I worked at a large hospital in Baltimore as a UNIX system administrator/programmer - and we were constantly getting zinged by the mainframe guys because we had such pathetic change control. "Change Control" is the mythical process of understanding your software release levels and managing the transitions between them. In today's operating system environment, unless you're so fortunate as to have standardized hardware, "software release" is a meaningless concept because the operating system is a mass of shared libraries loaded from wherever, and most administrators are left with a change control process that looks like:
10 Set it up
20 Make it work
30 Fiddle with it
40 GOTO 30
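
For contrast, here is a minimal sketch (in Python, since nobody is handing me real BASIC anymore) of the first step toward actual change control: record what's installed, then diff every later snapshot against the baseline, so you at least notice when your "release level" has silently slid out from under you. The dpkg-query call and the baseline path are assumptions for a Debian-style box, not anything from the hospitals in the story; substitute rpm -qa or whatever your system provides.

#!/usr/bin/env python3
# Sketch: snapshot installed package versions and diff against the last
# baseline, so "what release level are we at?" has an answer.
# Assumptions: a Debian-style host with dpkg-query; the baseline path is made up.
import json
import subprocess
from pathlib import Path

BASELINE = Path("/var/lib/change-control/baseline.json")  # hypothetical location

def current_state():
    # Capture "package -> version" for everything currently installed.
    out = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = {}
    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2:          # skip entries with no version recorded
            state[parts[0]] = parts[1]
    return state

def diff(old, new):
    # Report anything added, removed, or changed since the baseline.
    changes = []
    for pkg in sorted(set(old) | set(new)):
        if pkg not in old:
            changes.append("ADDED   %s %s" % (pkg, new[pkg]))
        elif pkg not in new:
            changes.append("REMOVED %s (was %s)" % (pkg, old[pkg]))
        elif old[pkg] != new[pkg]:
            changes.append("CHANGED %s %s -> %s" % (pkg, old[pkg], new[pkg]))
    return changes

if __name__ == "__main__":
    state = current_state()
    if BASELINE.exists():
        for change in diff(json.loads(BASELINE.read_text()), state):
            print(change)
    else:
        print("no baseline yet; recording one")
    BASELINE.parent.mkdir(parents=True, exist_ok=True)
    BASELINE.write_text(json.dumps(state, indent=2))

That isn't change control, of course - it's only the ability to notice, after the fact, that something changed - but even that much is more than most shops have.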

Release levels, nowadays, are nonexistent; they're a sliding target. Nobody can be sure anything will actually work - they have to try it and see, then hope, once they get something that works, that it won't change. And, just to make matters worse, security demands constant change.

The obvious old-school fix would be (as I have suggested many times) to build separated networks. And, for ages, people have pointed out that it's not cost-effective, that it's a pain to manage, etc. All of that is true, but here's the question: at what point is having a patient die in telesurgery "cost effective," when the lawsuits are over? At what point is having ambulances driving around at random, while ER admitting staff try to remember how to write, merely "a pain to manage"? Eventually, will it be cheaper to address security by setting some systems up so that security simply isn't an issue for them?
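
If you do go the separated-network route, the cheap sanity check is to verify, from inside the production segment, that the things which must be reachable are, and the things which must not be reachable aren't. The addresses below are placeholders I made up, not anything from the NHS story - a rough sketch only:

#!/usr/bin/env python3
# Sketch: verify a "separated" production segment really is separated.
# All hosts and ports here are hypothetical placeholders.
import socket

MUST_REACH = [("10.10.0.5", 5432)]                     # e.g. an in-segment records server
MUST_NOT_REACH = [("8.8.8.8", 53), ("192.0.2.10", 443)]  # anything outside the segment

def can_connect(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

problems = 0
for host, port in MUST_REACH:
    if not can_connect(host, port):
        print("FAIL: expected to reach %s:%d" % (host, port))
        problems += 1
for host, port in MUST_NOT_REACH:
    if can_connect(host, port):
        print("FAIL: isolation breached - reached %s:%d" % (host, port))
        problems += 1
print("segment looks properly isolated" if problems == 0
      else "%d problem(s) found" % problems)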

Check back with me in another decade so I can say "I told you so."


mjr.
