Bashing Big Data
Big data is everywhere. So is Bash, the command line shell found on most Linux and Unix machines. New Bash bugs are still being uncovered, and new patches come out regularly, but hackers in the wild continue to look for and find ways to exploit Bash weaknesses. While most Unix-based systems carried the Shellshock vulnerability, many of them were not immediately exploitable. With this type of vulnerability, your security team needs to know which servers run Bash, but more importantly, it needs to know which services or processes on those servers invoke Bash, whether through a web service, a CGI script, or other services such as DHCP, PureFTPd, OpenSSH, qmail, OpenVPN, and more. It's a lot of big data to sift through.
Big data and web logs
Most big data solutions are all about web logs. Web logs are relatively easy to track and send to a big data solution, but you can collect every web log your systems can possibly generate and still miss vulnerabilities such as the Bash bugs. Once a hacker actually breaks in and achieves command access, he no longer shows up in the web logs; what you need is a record of the commands the intruder executed or that a botnet ran. You can have all the big data in the world, but if you don't have the right data, your systems are susceptible. You don't need all the big data, you need the right data, otherwise you may miss something crucial.
Capturing command logs
With big data systems, you need multiple processes to track and detect vulnerabilities like those in Bash. One of the best ways to do this is to enable process accounting on Unix-based systems to log the details of all executed commands.
Unix process accounting allows you to view every command executed by a user along with the system resources used (CPU and memory time). The original intention was to let administrators charge users for system access, though a side effect has often been to uncover suspicious activity that could indicate a system compromise. Cliff Stoll famously wrote about tracking a US $0.75 accounting error to a spy ring in his book The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage (Stoll, C., New York: Doubleday, 1989).
Process accounting provides a complete replay of what has been happening on your system. From a forensics standpoint, process analysis is invaluable for tracking down attackers or insider threats. Information gathered from process accounting includes the command name, user time, system time, elapsed time, starting time, user ID, group ID, average memory usage, count of I/O blocks, and controlling TTY. Additional flags indicate whether the process forked without exec'ing, used superuser privileges, dumped core, or was killed by a signal.
Here’s an example of the level of detail that process accounting can provide:
User 'sshd' on TTY 0 executed command 'sshd' with PID 12914 and parent process 12913 on Oct 06, 2014 at 15:39:06. The process executed for 0 seconds (0 CPU seconds), during which an average of 68032K of memory was used. There were 425 minor page faults, 0 major page faults, and 0 swaps. The process exited with code 0. A fork was executed without an exec. Superuser privileges were used.
To put it in layman’s terms, process accounting can help answer questions such as:
- Why was a tcpdump run last night on a web server?
- Which users attempted sudo? Were they successful?
- Who added a new entry to the /etc/passwd file?
- When did a username change on a process?
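Answering questions like these starts with reading the accounting records. As a minimal sketch, assuming the psacct tools covered in the deployment section below are installed: lastcomm prints one line per completed process, including the command name, the flags described above (S for superuser, F for forked without exec, D for dumped core, X for killed by a signal), the user, the tty, the CPU time, and the start time, while sa summarizes the accounting file per command.

# lastcomm --user root
# sa

lastcomm also accepts a command name as a filter, so "why was a tcpdump run last night?" becomes a one-line query: # lastcomm tcpdump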
Interpreting process logs
But of course, to take advantage of process logs, you need to be analyzing them. Tenable's Log Correlation Engine™ (LCE™), a sensor for SecurityCenter Continuous View™, closes the gap between collecting and interpreting logs. It aggregates, normalizes, correlates, and analyzes event log data across your big data infrastructure, and it includes many correlation rules that watch for anomalies, data breaches, advanced threats, and trends in client behavior. LCE also summarizes Unix and Windows process names daily and hourly, and identifies unique processes.
Unlike most log files, process accounting logs are binary. Even after the logs are captured, you still have copious data to sift through, and the command records still need to be interpreted. Some big data solutions have difficulty with binary logs, creating more development work for the security team. You need a tool that can correlate all the logs regardless of format; otherwise you have thousands of records and no insight; it's still just raw big data. This is where a tool like the LCE sensor, an integral part of Tenable's SecurityCenter CV™, stands above generic big data systems: the LCE Client can interpret the binary data produced by process accounting and provide human-readable correlation.
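You can see this for yourself. As a quick sketch, assuming the psacct package installed below is present and accounting is writing to /var/account/pacct: file shows that the log is not plain text, and dump-acct (also shipped with psacct) renders each binary record as a line of text.

# file /var/account/pacct
# dump-acct /var/account/pacct | tail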
Deploying LCE process accounting
The LCE Client will automatically enable process accounting if the psacct package is installed on a system that is running the Red Hat LCE Client and the <accounting-file> tag is included in the client policy. The psacct package is typically available in every default installation of RHEL or CentOS, but the following step will confirm whether it's installed and enabled:
# chkconfig --list | grep psacct
psacct 0:off 1:off 2:off 3:off 4:off 5:off 6:off
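Note that chkconfig applies to SysV-init releases such as RHEL 6. On a systemd-based host (RHEL 7 and later), assumed equivalents for the same check would be:

# rpm -q psacct
# systemctl status psacct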
If it isn’t installed, run the following command:
# yum install psacct
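The LCE Client will switch accounting on for you once the policy below is in place, but if you want to enable the service by hand on a SysV-init host, a minimal sketch looks like:

# chkconfig psacct on
# service psacct start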
Next, make sure that the default_rhel_lceclient.lcp policy file that is in use contains the following line:
<accounting-file>/var/account/pacct
After the line is added to the policy, save it under a different name (such as edited_rhel_lceclient.lcp), then import it into SecurityCenter CV and assign it to the web server. The policy will then be pushed to the client. Editing and assigning policies is covered in the LCE 4.2 Client Guide.
Next, restart the lce_client service, and the collection of process accounting data will start.
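Assuming a SysV-init host like the RHEL/CentOS systems above, the restart might look like:

# service lce_client restart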
Take a look at the blog article on Detecting Shellshock with LCE Process Accounting for an example of how LCE can identify a Bash exploitation with process accounting.
Bashing with process accounting
With respect to Shellshock, first patch your systems and close out the known bugs. For unknown vulnerabilities that may be exploited, implement process accounting, and make sure you have a sound configuration management program and continuous monitoring in place.
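As a quick check that the known bugs really are closed, the widely circulated test for the original Shellshock flaw (CVE-2014-6271) still applies. On a patched Bash, the command below prints only the test string; on a vulnerable Bash, it also prints "vulnerable":

# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"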
With big data systems, you can't rely on web logs alone. Bash is going to be exploited, so you need to monitor for the possibilities, and process accounting provides that assurance by gathering the right data. Process accounting won't prevent a breach, but it will help you detect one, and together, detecting and researching anomalies is a form of prevention. Process accounting increases your ability to remediate and to keep your big data systems safe.