

Malware’s Journey from Hobby to Profit-Driven Attacks

While most of my posts focus on malware attacking systems today, the history of malware is a fascinating topic that provides insights into the current landscape. As one of the authors of the AVIEN Malware Defense Guide, I contributed to the book's chapter on history, and I will be leveraging and expanding on some of that content here to give context to where we are today.

First, what is malware? Malware is a contraction of "malicious software." It runs the gamut from traditional viruses and worms to botnets, potentially unwanted programs (PUPs), adware and spyware. Generally speaking, it's software running on your system that causes unwanted side effects, ranging from slowdowns and wasted resources to data corruption, compromise and leakage of sensitive information.

So how did we get here?

The first widely acknowledged virus on a personal computer was Elk Cloner in 1981. Elk Cloner infected Apple II machines, attaching itself to the boot sector of floppy diskettes, and spread as infected disks were shared. By 1986, boot sector and file infecting viruses had spread to the world of DOS computers. As a little side note, I enjoy using this bit of trivia every time someone says they run a Mac and are therefore immune to viruses.

Prior to and around this time there were some self-replicating curiosities on mainframes, such as Core War, which wasn't a virus but a game built on self-replicating code, and CHRISTMA EXEC, which was viral in nature.


1986 is the generally agreed-upon start of the antivirus industry, with early companies creating software to combat the threat. Names like McAfee and NOVI (which became Norton AntiVirus) were staking a claim to detect and clean the bad software. These early packages used a fingerprinting-style technique to acquire a signature of each piece of software they were looking to detect. It was not uncommon to find a 'signature' for a specific virus posted on technical USENET forums, and signatures were shared in industry magazines (such as Virus Bulletin) and at conferences.
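
To make the idea concrete, here is a minimal sketch of what signature scanning boils down to: look for a distinctive byte sequence extracted from a known sample. The detection names and byte patterns below are entirely made up for illustration; this is not how any particular vendor's engine worked.

```python
# Minimal sketch of early signature-based scanning (hypothetical signatures).
# A "signature" is just a distinctive byte sequence taken from a known sample.

SIGNATURES = {
    # name: byte pattern (made-up illustrative values, not real signatures)
    "Example.BootVirus": bytes.fromhex("eb3c904d414c574152"),
    "Example.FileInfector": b"THIS PROGRAM IS SICK",
}

def scan_file(path: str) -> list[str]:
    """Return the names of any signatures found in the file's raw bytes."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    import sys
    for target in sys.argv[1:]:
        hits = scan_file(target)
        if hits:
            print(f"{target}: possible infection -> {', '.join(hits)}")
        else:
            print(f"{target}: no known signatures found")
```

The obvious weakness is also visible here: change even one byte of the pattern in the sample and the exact-match lookup no longer fires.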

In those early days there were many discussions about what the antivirus utilities should detect. Many researchers wanted to stick to self-replicating code and not touch Trojans. One line of reasoning was that what's a Trojan to one person is a legitimate tool to another. One example is the delete command: used properly, it's a way for the user to get rid of unwanted files, but used maliciously it can erase critical system files and render the computer temporarily unusable. At the time this seemed reasonable, as viruses were spreading at a rate of 'months to wild,' as opposed to the mass infections that would be seen in the 1999-2001 timeframe.

In the mid-90s the first macro infectors were seen, and as networks were becoming more widely implemented at the time, it only took a day or two before entire companies were infected. It was also recognized that simple signature-based scanning was not going to be sustainable, so heuristic and rule-based detection was added. This had the side effect of improving detection of new minor variants, since any person with a rudimentary understanding of computers could make a minor edit to a virus and thereby change its signature. Then there were the debates of, "Do we as researchers 'upvert' old MS Word 95 infecting viruses to the new MS Word 97 format and add detection before users accidentally infect themselves and send them to us, or not?"


It was also around this time that antivirus software began being deployed on a large scale. As it was deployed, management started asking administrators why it couldn't detect more and more kinds of threats. With home and corporate users increasing pressure on the antivirus vendors, the companies had to decide whether to keep their focus on detecting the traditional threats or to expand into the new things users wanted detected. Those who didn't adapt were quickly swallowed up by those who did, or simply went out of business. An unintended consequence of this increased detection was the additional overhead needed to protect against the new threats. This has continued to the current day, where antivirus utilities are only one part of "desktop security suites" that include firewalls, intrusion detection and access control as well as antivirus, to the point where we have stopped calling it antivirus and started calling it anti-malware.

Back in the '80s there were two types of antivirus products: the scanner and the integrity checker. Corporations, trade magazines and reviewers favored scanners because they produced "accurate names," with the philosophy of "we need to know what we're fighting in order to deal with it efficiently." While scanners were very good at giving an exact name, in order to get that name they needed a signature. On the other side of the coin, integrity checkers only reported that something had changed. They didn't tell you what caused the change, only that it happened. This did cause some conflicts and consternation, as legitimate changes, such as the early self-modifying Windows executable WIN.COM, would cause unknowing administrators to think they had a virus. Intelligent integrity checking is making its way back into some desktop security applications, as it doesn't need a signature to detect new malware.
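
As a rough illustration of the difference, an integrity checker can be as simple as the sketch below: hash everything while the system is believed clean, then report anything that later changes, appears or disappears. The directories and file names are only examples, not a recommendation of what to monitor.

```python
# Sketch of an integrity checker: record hashes of files while the system is
# believed clean, then report anything that later changes, appears or disappears.
import hashlib
import json
import os

def snapshot(root: str) -> dict[str, str]:
    """Walk a directory tree and record the SHA-256 of every readable file."""
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    baseline[path] = hashlib.sha256(fh.read()).hexdigest()
            except OSError:
                continue  # unreadable file; skip it
    return baseline

def compare(old: dict[str, str], new: dict[str, str]) -> None:
    """Report changed, added and removed files -- no signatures required."""
    for path in sorted(set(old) | set(new)):
        if path not in old:
            print(f"ADDED   {path}")
        elif path not in new:
            print(f"REMOVED {path}")
        elif old[path] != new[path]:
            print(f"CHANGED {path}")

if __name__ == "__main__":
    # Example: baseline once, compare on later runs. The directory is illustrative.
    old = json.load(open("baseline.json")) if os.path.exists("baseline.json") else {}
    new = snapshot("C:/Windows/System32" if os.name == "nt" else "/usr/bin")
    compare(old, new)
    json.dump(new, open("baseline.json", "w"))
```

Notice that nothing here says *what* changed a file, which is exactly the weakness (and the strength) the early debates were about.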

The long-acknowledged weakness in these systems continues to be detection. Even with heuristic detection, rule bases and now sandboxing technology, a "sample zero," or first sample, must be seen by the malware research community before a threat can be accurately named and detected. Integrity checking is seeing a resurgence by being integrated with intrusion detection systems (IDS). With IDS, unless a rule is set up, malware can slip by. In fact, much of today's malware targets and disables the security suites on compromised hosts.

A side trip: the evolution of the threat

While we've looked at the evolution of the anti-malware side of the equation, we haven't really touched on why the threat itself changed. We've already mentioned that part of the evolution was driven by customer demand, but we also need to look at the threats.


In the early days, for the most part, viruses and worms were written by hobbyists. Most often the goal was to show it could be done. Some people wanted to spread a political message; Stoned was one of these, with a message of "Legalize Marijuana." Some people said they were trying to increase security awareness, while others wanted bragging rights, e.g. "First to infect..." Even back then, there were some authors looking to profit, such as the authors of the AIDS Trojan, which was mailed out on diskette and then demanded a ransom payment to restore access to the hard drive. Interestingly, in later years, the CryptoLocker Trojan would take the same approach, demanding payment to decrypt an affected hard drive.

In today's climate, the roles are reversed. Very few malware samples are written by "bored teens in their mothers' basements." The bulk are written for profit, whether for added revenue, bank login credentials, identity theft, or even espionage (not only corporate, but state sponsored) and sabotage. Today's malware authors include criminal gangs and governments.

A lot is being discussed about 'offensive cyber capability,' and more malware is targeted in nature, with the rest being delivered by drive-by download. Back in 2002-2003 I was part of conversations about fast-moving threats such as LoveLetter. We acknowledged that the threat of the day was easy to detect because of the amount of network traffic it generated. We were concerned that an opponent might insert a second, quieter piece of malware into the traffic stream, hiding it behind the high-volume threat. We also had concerns about limited-use executables that generated little traffic. Unfortunately, we are seeing exactly that threat today with targeted APTs. Some mutate on the server, so each infecting component is used only once, then changes before infecting the next machine. The authors also test their product, both in a QA sense, looking for bugs and defects, and against anti-malware products, so they can avoid detection.

So where do we go from here?

Early on, it was stated that once a system is compromised, you can no longer trust it. Once malware has control of a host, it can make any changes it wants, and the system can be told to ignore and not report those changes. Anti-malware can be disabled, or blocked from scanning or updating effectively.

Some of the earliest advice given was, "If you suspect your computer is infected, boot from a known clean, write-protected diskette." I really can't remember how many times I gave that advice myself. Today we CAN do this with live-CD-based antivirus scanners, but it is often impractical and very time consuming.

We can also identify infected machines by their communications pattern. Today's malware is largely bot and targeted espionage software. As such, there needs to be bi-directional communications. Since the malware's primary purpose is information collection, it needs a method to export that data to a location where the author can collect it and leverage the stolen information.


So what does that mean? In short, all successful malware must create two things. The first is artifacts on the compromised host: unique or modified files, registry entries and memory consumption. The second is network communications. The artifacts allow the malware to run and perform its programmed activity; without them, the malware doesn't function or even exist. Without network traffic, the program can't report back to its controller/author or replicate across the network.

Let's look at scanning for artifacts first. With remote scanning for artifacts, we have the benefit of a scanner that is unaffected by any malware on the suspected compromised host. The drawback is that we need credentials to scan the machine. This is not a showstopper, as most vulnerability scanners, including Tenable's own Nessus, allow embedding credentials to do this. Modern scanners allow inclusion of custom hashes, plus memory and registry scanning. This lets us create our own 'signatures' to look for newer malware, PUPs, or simply a specific application or application version in a corporate environment. While this isn't as accurate as an antivirus scanner, it can help us identify problem spots and flag likely infections for remediation by antivirus tools.
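
As a loose illustration of the custom-hash idea (and only that; this is not how Nessus itself is implemented, and the hash values below are placeholders rather than real indicators), a credentialed or local sweep can simply compare file hashes against a list of known-bad ones:

```python
# Sketch: flag files whose SHA-256 matches a list of known-bad hashes.
# The hashes below are placeholders -- in practice they would come from
# threat intelligence or your own incident investigations.
import hashlib
import os
import sys

KNOWN_BAD = {
    "0" * 64: "Placeholder.MalwareFamilyA",   # hypothetical indicator
    "f" * 64: "Placeholder.MalwareFamilyB",   # hypothetical indicator
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for root in sys.argv[1:]:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                digest = sha256_of(path)
            except OSError:
                continue  # unreadable file; skip it
            if digest in KNOWN_BAD:
                print(f"{path}: matches {KNOWN_BAD[digest]}")
```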

Network activity is very indicative in this day and age. Network scanning can be done without credentials, at either the packet or the destination level. Communication to or from a known command and control (C&C) node is a strong indicator of a bot-compromised host. As C&C nodes are discovered they are added to industry-maintained blacklists of known hostile sites, but that doesn't mean there aren't unidentified C&C nodes out there.
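
Conceptually, the blacklist check amounts to something like the sketch below. The addresses come from the reserved documentation ranges and the connection-log format is invented for the example.

```python
# Sketch: match observed connection destinations against a blocklist of known
# C&C addresses. The blocklist entries and log format are illustrative only.
KNOWN_CC = {
    "203.0.113.7",      # documentation/example address, not a real indicator
    "198.51.100.23",    # documentation/example address, not a real indicator
}

def check_connections(log_lines):
    """Each log line is assumed to be 'src_ip dst_ip dst_port' (hypothetical format)."""
    for line in log_lines:
        src, dst, port = line.split()
        if dst in KNOWN_CC:
            print(f"ALERT: {src} talked to known C&C node {dst}:{port}")

if __name__ == "__main__":
    sample = [
        "10.0.0.5 203.0.113.7 443",
        "10.0.0.8 192.0.2.10 80",
    ]
    check_connections(sample)
```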

The good news is that abnormal traffic is also indicative of a possible new C&C node. In order to identify abnormal traffic, you have to monitor your traffic and learn what normal looks like. Once you have a normalized profile, you can filter out one-offs and other legitimate variations, leaving you with a list of abnormal activities that require further investigation.
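
A highly simplified way to picture that baselining process: record which destinations each internal host normally talks to during a learning period, then flag anything outside that profile for review. The flow tuples below are hypothetical.

```python
# Sketch: build a per-host baseline of destinations seen during a learning
# period, then flag connections that fall outside that baseline.
from collections import defaultdict

def build_baseline(training_flows):
    """Map each internal host to the set of destinations it normally talks to."""
    baseline = defaultdict(set)
    for src, dst in training_flows:
        baseline[src].add(dst)
    return baseline

def find_anomalies(baseline, new_flows):
    """Yield flows whose destination was never seen for that host during training."""
    for src, dst in new_flows:
        if dst not in baseline.get(src, set()):
            yield src, dst

if __name__ == "__main__":
    training = [("10.0.0.5", "192.0.2.10"), ("10.0.0.5", "192.0.2.11")]
    live = [("10.0.0.5", "192.0.2.10"), ("10.0.0.5", "203.0.113.99")]
    for src, dst in find_anomalies(build_baseline(training), live):
        print(f"UNUSUAL: {src} -> {dst} (not in baseline; investigate)")
```

In practice the "baseline" would be built from days or weeks of flow data and would tolerate legitimate variation, but the principle is the same: you can only call traffic abnormal after you know what normal looks like.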

Packet tracking is similar in that an unusual flow can indicate the presence of a compromised host. Remember to look not only for outbound traffic but inbound as well. Malware such as the Zeus family can update itself, and nearly all bots accept some form of command from their C&C servers. Today's malware authors know corporations need to keep at least one port open for Internet browsing and connectivity, and will actively leverage that, including but not limited to checking the Windows proxy server setting and using it to transmit and receive their own traffic.
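
One way to sketch that "watch both directions" advice is to summarize per-flow byte counts in and out, and flag heavy transfers over the always-open web ports. The threshold, the flow-record format and the example values here are illustrative assumptions, not recommended settings.

```python
# Sketch: summarize flow records in both directions and flag unusual volumes
# over the "always open" web ports. Flow records here are hypothetical tuples:
# (internal_host, external_host, dst_port, bytes_out, bytes_in).
THRESHOLD_BYTES = 50 * 1024 * 1024  # 50 MB; tune for your environment

def flag_flows(flows):
    for host, peer, port, bytes_out, bytes_in in flows:
        if port in (80, 443) and (bytes_out > THRESHOLD_BYTES or bytes_in > THRESHOLD_BYTES):
            direction = "outbound" if bytes_out > bytes_in else "inbound"
            print(f"REVIEW: {host} <-> {peer}:{port} heavy {direction} volume "
                  f"(out={bytes_out}, in={bytes_in})")

if __name__ == "__main__":
    flag_flows([
        ("10.0.0.5", "203.0.113.99", 443, 120_000_000, 4_000),   # possible exfiltration
        ("10.0.0.8", "192.0.2.10", 80, 6_000, 90_000_000),       # possible update/download
    ])
```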

Summary

So malware has evolved from a hobby into a profit-driven activity. Not only is it stealing CPU cycles and memory from company machines, but it is compromising company intellectual property and exposing employees and customers to identity theft. The malware of today is not just an annoyance, but an active spy for someone else. Examples like Stuxnet show that production can be directly impacted and sabotaged. Imagine if a similar piece of badware were to gain access to a pharmaceutical company and change the formula for a medication. This would be devastating for the company's reputation, and if not caught by QA it could be life threatening to those taking the medication. Centralized scanning and monitoring give us a vantage point the malware can't hide from; while this central approach is less precise, it also puts less overhead on the local hosts. All in all, this should be part of your defense in depth, providing more return on investment the more you use it.

Further discussion and malware topics can be found on the Tenable Discussions Forum.
