
Tenable Blog


Information Sharing: Learn From Past Mistakes

I've been asked repeatedly for my opinion about the APT1 report, and every time I try to respond I find myself waffling. The reason is simple: I think the report is a good thing, a sign of deep dysfunction in security, a stimulant to information sharing, an indicator of failed foreign policy, a brilliant marketing maneuver, and a bit of business as usual. It's hard to pull all of that together into a simple "yes, it's a good thing!" answer. If nothing else, it's going to fuel worthwhile discussion for at least the next five years.

One possibility is that it will be the only such report to serve in that role; another is that we'll see more information sharing as a consequence of the attention the report has garnered. I believe almost any information sharing is good, provided the information hasn't been scrubbed so heavily that its usefulness and relevance are diluted. In that sense, the APT1 report is interesting, but a bit redundant. Did you need to be told that there are professionalized attackers breaking into your networks (and ours)? Isn't that obvious? And does knowing anything about the presumed origin of the attacks help you defend against them?

That's what I meant by "business as usual" above. There is relatively little in the APT1 report that can be considered "actionable" other than the Indicators of Compromise (IOCs), which, in another context, we can think of as just a set of useful intrusion detection signatures. The "business as usual" aspect of the situation, to me, is that organizations appear to a) understand that targeted malware attacks are a problem, while b) trying desperately to act as though the problem doesn't apply to them. So we learn that organizations you'd expect to be targets of attack still have employees who open Adobe PDF documents on vulnerable systems, systems that apparently aren't running application whitelisting, sandboxing, or system logging that records the creation of new processes and the dropping of new DLLs. I am not blaming the victims; rather, I look at the reports of how APTs are penetrating networks and it's the same as what we read about in the '80s and '90s: the technology has changed a bit, but the organizational inertia that allows security disasters to happen unnoticed remains.
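To make that kind of telemetry concrete, here is a minimal sketch, not the tooling from the report or anyone's production agent, of logging new process creations and new DLL drops. It assumes the third-party psutil and watchdog Python libraries as polling stand-ins; a real deployment would use an EDR agent or Windows Sysmon (event IDs 1 and 7), and the watched directory is invented for the example.

```python
# Illustrative sketch only: record new process creations and newly dropped
# DLL files, the kind of host telemetry mentioned above. A real deployment
# would use an EDR agent or Windows Sysmon (event IDs 1 and 7); this polling
# loop just makes the idea concrete. WATCH_DIR is an assumption.
import time

import psutil                                    # pip install psutil
from watchdog.observers import Observer         # pip install watchdog
from watchdog.events import FileSystemEventHandler

WATCH_DIR = r"C:\Windows\Temp"  # hypothetical drop location to watch


class DllDropHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Log any newly created .dll file under the watched tree.
        if not event.is_directory and event.src_path.lower().endswith(".dll"):
            print(f"[dll-drop] {event.src_path}")


def log_new_processes(known_pids):
    """Print any process that appeared since the last poll; return current pids."""
    current = {p.pid: p for p in psutil.process_iter(["name", "exe"])}
    for pid in current.keys() - known_pids:
        info = current[pid].info
        print(f"[new-proc] pid={pid} name={info['name']} exe={info['exe']}")
    return set(current)


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(DllDropHandler(), WATCH_DIR, recursive=True)
    observer.start()
    seen = {p.pid for p in psutil.process_iter()}
    try:
        while True:
            seen = log_new_processes(seen)
            time.sleep(2)
    finally:
        observer.stop()
        observer.join()
```

The point isn't the script; it's that collecting this data is cheap, and a record like this is exactly what lets you answer "when did that DLL first appear?" after the fact.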

In that sense, the APT1 report is a good thing: it's too long for a typical pointy-haired boss to read, but it serves to document that, yes, this stuff is real, and that it's better to build your systems well to begin with than to go back and hire extremely expensive consultants to come sweep them clean. By the way, when they're done, you're still in the same situation of having badly built systems. The only difference is that you now know it can happen to you, and that it will happen again and again as long as your security practices are mediocre. (Note: I didn't say "poor," I said "mediocre"; poor security isn't even in the ballpark.) What the APT1 report illustrates conclusively is that "mediocre" isn't either. To me, that's the most valuable aspect of the report. It will serve to show executive management that we've always been in the deep end of the swimming pool and that it never was "amateur hour."

I think that releasing the IOCs was a very good thing for all of us, since many organizations will now be able to use them to look back into the past and discover things they might have been happier not knowing. What this will do is illustrate the depth of the gap between "poor" security, "mediocre" security, and "good" security. The IOCs will provide indisputable data; I've already heard a few security execs ask, "If we look for this, and we find something, what does that mean?" What I hope will come out of the APT1 report is an industry-wide reassessment of the effectiveness of some of our tools and techniques. It's that kind of information that remains sadly lacking.
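As a concrete illustration of that look-back, here is a minimal sketch of checking historical logs against a published IOC list. The assumptions are mine, not the report's: a flat file of IOC FQDNs (one per line) and a CSV proxy log with timestamp, client IP, and FQDN columns; both file names are invented.

```python
# Illustrative sketch only: scan historical proxy/DNS logs against a
# published IOC domain list, the "look back into the past" described above.
# Assumed inputs: a flat file of IOC FQDNs and a three-column CSV proxy log
# (timestamp, client IP, FQDN). File names and format are hypothetical.
import csv

def load_ioc_domains(path):
    """Load one IOC domain per line, lowercased; blank lines and comments skipped."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def scan_proxy_log(log_path, ioc_domains):
    """Yield (timestamp, client_ip, fqdn) for every hit on the IOC list,
    matching parent domains too (evil.example.com hits example.com)."""
    with open(log_path, newline="") as f:
        for ts, client, fqdn in csv.reader(f):
            fqdn = fqdn.lower()
            labels = fqdn.split(".")
            for i in range(len(labels) - 1):
                if ".".join(labels[i:]) in ioc_domains:
                    yield ts, client, fqdn
                    break

if __name__ == "__main__":
    iocs = load_ioc_domains("apt1_fqdns.txt")      # hypothetical file name
    for ts, client, fqdn in scan_proxy_log("proxy_2012.csv", iocs):
        print(f"{ts} {client} -> {fqdn}  ** IOC hit, investigate **")
```

The published indicators also included file hashes and SSL certificate data, so a real retrospective sweep would normalize those into similar lookups; the domain case above is just the simplest one to show.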

What I'd like to do is sit down with the CSO of NYT or RSA or Google and ask them how it really happened. What techniques did they have in place (which obviously failed), and what techniques worked? Did they learn about the penetrations through sheer luck, through someone else, or through their existing security practices? And, if the latter: which, how, and why? I heard from one source (for example) that one of the penetrations was detected because the attacker's horizontal exploitation landed them on a system where someone had installed application whitelisting as an experiment. Is that true? I'd like to hear how some of those organizations built networks in which their corporate "crown jewels" were connected to their office automation systems and the systems where people read email, or how their source code repositories were accessible from the marketing department, etc. What I'm curious about is whether we're really making any progress at all: whether the problems are the kind of basic mistakes that we simply cannot solve by throwing technology at them, or whether there are techniques being overlooked that shouldn't be.
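One inexpensive way to start answering the segmentation questions above is to test them directly. The sketch below is hypothetical, not anyone's documented practice: run from a workstation in an office zone, it checks whether hosts that should be walled off actually are. The hostnames and ports are invented for the example.

```python
# Hypothetical sketch: a tiny segmentation audit. Run from a workstation
# zone, it checks whether hosts that should be unreachable from here
# (source repos, "crown jewels") actually are. Names/ports are invented.
import socket

SHOULD_BE_UNREACHABLE = [
    ("repo.internal.example.com", 22),          # hypothetical source-code repo
    ("repo.internal.example.com", 443),
    ("finance-db.internal.example.com", 1433),  # hypothetical crown jewels
]

def reachable(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in SHOULD_BE_UNREACHABLE:
        if reachable(host, port):
            print(f"SEGMENTATION GAP: {host}:{port} reachable from this zone")
        else:
            print(f"ok: {host}:{port} blocked from this zone")
```

If a check like this passes from the marketing VLAN, you have your answer about whether the repositories were really segmented off.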

That's the kind of information sharing we need: techniques and practices, tied to strong statements about what worked and what didn't. We need battlefield after-action reviews from the noncoms who were there when it happened, not S.L.A. Marshall's carefully polished histories, written from the rear echelons a decade later. Finger-pointing at China is all well and good, but it's irrelevant as long as we continue to fail to learn from the mistakes of the past decade. Until we start talking about that, our learning experiences will remain private and much more costly, since they will be repeated over and over again. The important point about the IOCs is that they're a measure of how "too late" you may be.
