The SANS Institute recently published its 9th log management survey (2014). The paper identifies strengths and weaknesses in log management systems and practices, and provides advice on improving visibility across systems through proper log collection, normalization and analysis. Log management is very important to Compass because it heavily influences forensic investigations: accurate information must be available to track down incidents. This post provides a short summary of the paper and reflects Compass research and experience in these fields.

TL;DR: On the positive side, most companies have some sort of log management in place; at least most collect logs in some form, and many log to a central log server. In summary, log management is a well-established control within companies, but there are challenges (e.g. cloud services, differences in logging between vendors) which companies cannot solve on their own and where they depend on vendors and hosting providers. Distinguishing between "good" and "bad" traffic remains one of the biggest challenges.

The respondents of the survey rated the following activities as the biggest challenges in log management:

  • Distinguishing between normal and suspicious traffic
  • Analyzing "big data" (large volumes and many types of logs and events)
  • Normalizing and categorizing logs and security information
  • Correlating logs from various sources
  • Managing logs from cloud services
  • Handling vendors that log similar events differently

The first point, distinguishing between normal and suspicious traffic, is clearly a problem – especially if the infrastructure includes different technologies and vendors and exceeds a small environment. To make matters worse, malware often hides its command and control (C&C) communication within otherwise legitimate-looking traffic. Here, baselining your logs can help. You might also want to understand your applications in depth and derive meaning from user behavior – network analysis of the relevant segments can help you understand what "average" traffic looks like, as sketched below.
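As a rough illustration of baselining (not from the survey; the CSV input format, field names and threshold are assumptions made up for this example), the following Python sketch learns the average hourly request count per client from historic proxy logs and flags hours that deviate strongly from that baseline:

# baseline_traffic.py -- minimal baselining sketch (illustrative only)
# Assumes a CSV proxy log with columns: timestamp,client_ip,url (hypothetical format)
import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

def hourly_counts(logfile):
    """Count requests per (client_ip, hour) bucket."""
    counts = defaultdict(lambda: defaultdict(int))
    with open(logfile, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromisoformat(row["timestamp"])
            counts[row["client_ip"]][ts.replace(minute=0, second=0)] += 1
    return counts

def flag_outliers(counts, threshold=3.0):
    """Flag hours where a client's request count deviates > threshold stddevs from its mean."""
    for client, per_hour in counts.items():
        values = list(per_hour.values())
        if len(values) < 2:
            continue  # not enough history to build a baseline
        mu, sigma = mean(values), stdev(values)
        for hour, count in per_hour.items():
            if sigma > 0 and abs(count - mu) > threshold * sigma:
                print(f"{client} at {hour}: {count} requests (baseline {mu:.0f} +/- {sigma:.0f})")

if __name__ == "__main__":
    flag_outliers(hourly_counts("proxy_log.csv"))

A log engine such as Splunk can of course compute similar statistics directly on indexed data; the point is that a baseline turns "is this traffic suspicious?" into "does this traffic deviate from what we normally see?".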

The challenges of cloud log management are rather new – but behind the scenes the same old challenges exist. Treat cloud systems as systems managed "by others", not simply "by the cloud". Challenge yourself with the same questions you would ask a hosted Unified Communications (UC) or storage provider: What is logged? Where is it logged? How long are the logs kept? Are the logs collected by the central log server? Will they be processed by the security information and event management (SIEM) system?

The respondents in the survey clearly stated that collecting logs from the cloud is still difficult. Around half of the respondents feel no need to monitor apps in the cloud, and many say they rely on their cloud operator's ISMS and its security services, management and controls. Compass Security has some concerns with this view – one should log and monitor all the required information, just as one would with in-house services. Graham Cluley put it nicely in a blog post: "Don't call it 'the cloud'. Call it 'someone else's computer'." Moreover, with the shift to the cloud, forensic analysis is becoming a big challenge for companies. If a cloud provider is not willing or simply unable to provide logs, you might want to evaluate another one. Some cloud providers actually allow you to export logs – see Amazon and CloudStack, and the example below.
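As one example (the bucket name and local path are placeholders, and this assumes AWS CloudTrail has been configured to deliver its logs to an S3 bucket and that the AWS CLI is set up), the logs can be pulled down for local processing or forwarding to a central log server:

aws s3 sync s3://example-cloudtrail-bucket/AWSLogs/ ./cloudtrail-logs/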

The top three reasons to collect logs are:

  • detect and/or track suspicious behavior (e.g. unauthorized access, insider abuse)
  • support IT/Network routine maintenance and operations
  • support forensic analysis

Unfortunately, the respondents struggle to make meaningful use of the logs for:

  • detecting/tracking suspicious behavior
  • detecting APT-style malware
  • preventing incidents

In a recent presentation in Jona, Compass Security highlighted the difficulties of detecting suspicious behavior and thus of detecting APTs. It was shown how monitoring and APT traffic detection can be achieved by correlating DNS, mail, proxy and firewall logs. For this purpose, the logs were enriched with external data such as IP reputation lists, ZeuS Tracker, DNS blacklists and mail black- and greylists to identify potentially malicious traffic; a simplified version of this enrichment is sketched below. There are many other tricks that help to identify malicious traffic.
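A heavily simplified sketch of this kind of enrichment (the file names and log format are assumptions made up for this example; a real setup would use a log engine and live threat feeds):

# enrich_dns.py -- correlate DNS query logs with a domain blacklist (illustrative only)
# Assumes one log line per query: "<timestamp> <client_ip> <queried_domain>" (hypothetical format)

def load_blacklist(path):
    """Load a blacklist file with one domain per line (e.g. an exported tracker list)."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip() and not line.startswith("#")}

def flag_queries(logfile, blacklist):
    """Print every DNS query whose domain appears on the blacklist."""
    with open(logfile) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip malformed lines
            timestamp, client_ip, domain = parts
            if domain.lower().rstrip(".") in blacklist:
                print(f"ALERT {timestamp}: {client_ip} queried blacklisted domain {domain}")

if __name__ == "__main__":
    flag_queries("dns_queries.log", load_blacklist("domain_blacklist.txt"))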

Besides the challenges and difficulties, the survey pointed out that SIEM infrastructures have become widely used to achieve some form of automated processing of and/or alerting on suspicious events. Automation is the key to managing and analyzing the large amounts of data. Normalization has improved in recent years, but fully "normalized" log information is still not available. Log engines help to normalize and categorize events and log information across many different formats; an illustrative sketch of what that means follows.
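To illustrate normalization in practice (the two vendor formats and the target schema below are invented for this example), a script can parse vendor-specific lines into one common schema before they are fed into a SIEM:

# normalize_logs.py -- normalize two vendor-specific formats into one schema (illustrative only)
import re

# Hypothetical vendor formats:
#   Vendor A: "2014-10-20T10:15:00 DENY src=10.0.0.5 dst=198.51.100.7"
#   Vendor B: "Oct 20 10:15:00 firewall blocked 10.0.0.5 -> 198.51.100.7"
PATTERNS = [
    re.compile(r"(?P<time>\S+) (?P<action>ALLOW|DENY) src=(?P<src>\S+) dst=(?P<dst>\S+)"),
    re.compile(r"(?P<time>\w+ \d+ [\d:]+) firewall (?P<action>allowed|blocked) (?P<src>\S+) -> (?P<dst>\S+)"),
]

# Map vendor-specific action words onto one common vocabulary.
ACTION_MAP = {"ALLOW": "permit", "allowed": "permit", "DENY": "deny", "blocked": "deny"}

def normalize(line):
    """Return a normalized event dict, or None if no known pattern matches."""
    for pattern in PATTERNS:
        m = pattern.match(line)
        if m:
            event = m.groupdict()
            event["action"] = ACTION_MAP[event["action"]]
            return event
    return None

for raw in ["2014-10-20T10:15:00 DENY src=10.0.0.5 dst=198.51.100.7",
            "Oct 20 10:15:00 firewall blocked 10.0.0.5 -> 198.51.100.7"]:
    print(normalize(raw))

Both lines come out as the same {time, action, src, dst} structure, which is exactly what correlation rules in a SIEM need to work across vendors.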

It is interesting to see how much time the respondents spend on analyzing their logs. Most of them spend around 4-8 hours a week on log analysis (this of course depends on the company size). Not surprising was the fact that regulatory compliance is one of the main drivers for determining log data retention policies.

Regarding the current SSL discussions (the POODLE padding vulnerability), here are two examples of SSLv3 logging that can be used to identify downgrade attacks or simply to see which clients still use SSLv3. For Apache, the following directive could be used:

CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

For nginx, the following line could be used in the configuration:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';
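To activate the format, reference it from an access_log directive in the http or server block (the log path here is just an example):

access_log /var/log/nginx/ssl_request.log ssl;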

Off-topic: there is a good overview of different products and how to disable SSLv3 in each of them.

How Compass can help you

If you would like to get some hands-on practice and a deeper insight into how APTs work and how to detect them, Compass has the following upcoming courses on this hot topic:

These trainings use our Hacking-Lab to practice with log engines and analyze real-world examples. Furthermore, our classic "Beer Talk" series in September covered APTs.

Compass Security can help you test your log environment by simulating directed attacks or APT-style malware, and by analyzing your log management concept.

Conclusion

While companies have implemented log management with some basic log search functionality, detecting malware in real environments and collecting logs from the cloud are still difficult. Environments grow over time, and understanding the traffic within the infrastructure is key, but it is a somewhat tedious and time-consuming task. Log engines (e.g. IDH Framework, Splunk, Log Correlation Engine (LCE from Tenable), ELK) help to collect and analyze log information. SIEM systems help to match and correlate different events. Scripting languages are needed to normalize data where the log engines reach their limits. Cloud providers must support companies in logging the relevant information or provide connectors for log engines. Furthermore, there is also a "Splunk in the cloud" solution.

Please comment if I have missed any challenges or difficulties. I would also be interested in your experiences with log management.

Keywords: SIEM, log management, logging, normalization, APT, cloud, SANS
