ThreatSpike Blog

Why We Built The Slowest Analysis Product.

Posted by Adam Blake on 23rd June 2015

Recent figures show that there are over 100 million companies in the world, 99% of which are small and medium-sized enterprises (SMEs). It is therefore concerning that a search on LinkedIn reveals only 3 million Information Security professionals globally. That works out to roughly 33 companies for every security professional, which is not a good ratio.

You may think that ratio is unfair because SMEs don't have the same need for security as large enterprises - this view is very common, although not necessarily accurate. Many SMEs supply products and services to large enterprises. If an enterprise is looking to apply the latest predictive retail algorithms to customer data or to run high speed trading algorithms on custom hardware, it will often turn to an SME for assistance. In the case of Target, which had invested millions in building a strong security monitoring capability, it was Fazio Mechanical Services, a supplier of theirs with fewer than 100 employees, that was the initial point of compromise in the 2013 breach. If Fazio had access to better security technology then perhaps they would have detected and responded to the threat.

Attackers will usually target the weakest link in the chain, so if we are to rely on technologies to protect our networks then those technologies must be available to and usable by everybody, regardless of their size and expertise. This was the key principle behind our product - ThreatSpike Wire.

We wanted our product to work for everybody, so we could not rely on other vendor technologies being available to provide data feeds - and, as noted in my previous post, relying on them often leads to data quality issues anyway. We also couldn't rely on any in-house expertise to interpret the output of our analysis. We would need to collect the data, analyse it and present a complete story without any confusion or gaps.

To collect the necessary data, we decided to extract it directly from traffic on the network. Network protocols tend to be very standardised and common across different environments, and their content is incredibly rich. This does, however, mean that a lot of work needs to be done in order to get meaningful information out of them. If we consider a simple example where a user in a Windows environment copies a file from a file share and uploads it to a file sharing website, we encounter the following protocols:

  • Transmission Control Protocol / Internet Protocol (TCP/IP): Used for exchanging data between systems on the network.
  • Server Message Block (SMB): Used for accessing files on a remote Windows system.
  • Hypertext Transfer Protocol (HTTP): Used for making requests and receiving content back from a web server. Also supports uploading of file data.
  • Transport Layer Security (TLS): Used for encrypting network protocols such as SMB and HTTP.
  • Zip: Used for grouping and compressing collections of files. [The latest Microsoft file formats (xlsx, docx, pptx) are basically zip files containing lots of separate files that each represent part of the document (footer, header, etc)]
  • Extensible Markup Language (XML): Used for encoding textual data inside the latest Microsoft file formats.
  • Hypertext Markup Language (HTML): Used for representing the layout and content of web pages.
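
To make the layering concrete, here is a minimal Python sketch (my own illustration, not ThreatSpike Wire's actual code) of just the last two layers in the list: treating an already reassembled upload payload as a modern Microsoft document, which is a Zip archive of XML parts, and pulling the readable text back out. It only handles the .docx case; the payload reassembly and the calling code are assumed.

    import io
    import zipfile
    import xml.etree.ElementTree as ET

    # Namespace used by WordprocessingML for the visible text of a .docx
    WORD_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def extract_docx_text(payload: bytes) -> str:
        """Treat an uploaded payload as a .docx (a Zip of XML parts) and recover its text."""
        with zipfile.ZipFile(io.BytesIO(payload)) as archive:
            document_xml = archive.read("word/document.xml")  # main body of the document
        root = ET.fromstring(document_xml)
        # Every <w:t> element holds a run of visible text
        return "".join(node.text or "" for node in root.iter(WORD_NS + "t"))

A real sensor has to do the equivalent for every format in the list, and reassemble the TCP, TLS and HTTP layers before it even gets this far.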

In order to detect that something harmful has potentially happened in our example, we must parse each of the protocols listed to understand the discrete activities occurring (file being copied, website being accessed, file being uploaded). None of these activities is individually significant, so they must be pieced together into a story, and that story must itself be judged to determine whether it represents harmful behaviour. By piecing together the story, we take on the role of a security analyst, who on seeing a file being uploaded to the Internet might ask themselves questions such as:

  • What is the content of the file being uploaded?
  • Where did this file come from?
  • Where is the file going?

In order to piece together the story, we must remember what the user has done historically.
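
As a rough illustration of what that means in code, the sketch below (again a simplification of my own, not the product's real engine) keeps a per-file history keyed on content, so that an upload event can be tied back to the earlier copy from the file share. The event shape and field names are assumptions made for the example.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Event:
        kind: str           # e.g. "smb_read" or "http_upload"
        user: str
        content_hash: str   # hash of the file content seen on the wire
        detail: str         # share path or destination URL

    class StoryBuilder:
        def __init__(self) -> None:
            # content_hash -> every earlier event involving that content
            self.history: dict[str, list[Event]] = {}

        def observe(self, event: Event) -> Optional[str]:
            past = self.history.setdefault(event.content_hash, [])
            story = None
            if event.kind == "http_upload":
                origin = next((e for e in past if e.kind == "smb_read"), None)
                if origin is not None:
                    story = (f"{event.user} uploaded to {event.detail} a file that "
                             f"{origin.user} copied from {origin.detail}")
            past.append(event)
            return story

    builder = StoryBuilder()
    builder.observe(Event("smb_read", "alice", "9f2c01ab", r"\\fileserver\finance\forecast.docx"))
    print(builder.observe(Event("http_upload", "alice", "9f2c01ab", "https://upload.example.com")))

The two events are individually unremarkable; only their combination - the same content read from an internal share and then pushed to the Internet - is worth raising.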

All of this analysis represents a significant amount of work, and it must be done quickly enough to keep up with constant network activity. Many vendor technologies typically tackle only one of the tasks described - for example, IDS analyses individual connections for suspicious indicators and SIEM correlates events from multiple sources to recognise multi-step attacks. By doing all of the analysis together in one product, however, we get a high degree of accuracy and incredible simplicity in deployment. It may be the slowest analysis process, but we are still moving as fast as possible since there is no time spent plumbing together infrastructure, developing rules or carrying out manual investigations. There is a saying in the military: "Slow is smooth, smooth is fast".

In order to put this model to the test, we decided to see if we could install ThreatSpike Wire on a device and successfully detect our example scenario, all within 2 minutes. Watch the video below to see the outcome...



Why SIEM Sucks.

Posted by Adam Blake on 15th June 2015

I left my consultancy job in 2012 to start ThreatSpike Labs because of SIEM.

With an extensive background in programming I was always looking for opportunities to code solutions to client problems; however, this is rarely an option when consulting due to the inevitable maintenance issues that arise once the project finishes. Imagine my excitement on seeing SIEM for the first time: a tool which allowed the user to take their ideas and run them against a rich data set of web, firewall and intrusion detection logs to get immediate insight into threats on the network. I was hooked.

I worked mostly with financial industry clients to improve their security monitoring, but it was during these projects that I came to realise that the technology is more vendor hype than reality. The types of problems I encountered included:

  • Poor quality data: Since the data available to SIEM rules is provided by other software products (operating systems, firewalls, web proxies), the correctness/completeness of this data is dependent on what the developer chose to log, the configuration of the application when installed in production and the accuracy/availability of the SIEM vendor's connector. SIEM products claim to normalise data input - this is something of a myth. There are distinct differences between Windows security events 528, 672 and 680, but trust me, it is up to you to work them out - don't count on SIEM products to interpret the inputs they receive. When Shellshock and Heartbleed were announced it would have been useful to turn to SIEM and its historical data to see if there'd already been an attack, but unfortunately the information needed is simply not there.
  • Lack of agility: It takes a huge amount of work to keep a SIEM solution up and running. Some of my clients were processing billions of events each week requiring an extensive data collection infrastructure and significant inter-team collaboration. Over time new threats materialise which inevitably test the current detection capabilities and it can be a difficult and painful process to gather more data to support new detection rules. The lead time required to collect new data significantly impairs the agility that you would hope to achieve using SIEM.
  • Poor programming support: For an experienced programmer it can be a shock to be confronted with the dumbed down rule creation functionality provided by SIEM products. The lack of key programming constructs such as loops makes it impossible to do any sort of complex analysis. An example is recursively searching backwards through historic file movements in order to identify all the people who had handled a file before it was uploaded to the Internet (see the sketch after this list). With such limited functionality it's no surprise that some companies choose to dump log data in a SQL database or Hadoop cluster and query it rather than relying on an off-the-shelf product.
  • No investigation support: Whilst most of us appreciate that intrusion detection systems require a skilled analyst to translate the output into actionable intelligence, it's more of a surprise that SIEM is no different. SIEM rules are targeted at detecting activities of interest rather than investigating them. For example, if an event is received from a web proxy indicating that a file upload has taken place, then it is up to a security analyst working in the operations team to determine what the file was, where it came from, who has handled it, what type of site it was uploaded to, whether the content was sensitive and so forth. All this requires time and money on top of the technology cost.
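
To illustrate the point about programming support, here is a short Python sketch of that backwards search through file movements. It is a toy model over an assumed data structure (a flat list of recorded copies), not a real SIEM query or ThreatSpike code; the point is simply that a few lines of recursion solve a problem most SIEM rule languages cannot express at all.

    from dataclasses import dataclass

    @dataclass
    class Movement:
        source: str        # where the file came from, e.g. a share or a host
        destination: str   # where it was copied to
        user: str          # who performed the copy

    def handlers_before_upload(location: str, movements: list[Movement]) -> set[str]:
        """Recursively walk backwards from `location`, collecting everyone who handled the file."""
        handlers: set[str] = set()
        for move in movements:
            if move.destination == location:
                handlers.add(move.user)
                # Follow the chain one hop further back (cycles not handled in this toy).
                handlers |= handlers_before_upload(move.source, movements)
        return handlers

    movements = [
        Movement(r"\\fileserver\finance", "bob-laptop", "bob"),
        Movement("bob-laptop", "alice-laptop", "alice"),
    ]
    print(handlers_before_upload("alice-laptop", movements))   # {'bob', 'alice'}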

It was these issues that drove us to begin thinking about security monitoring from a new perspective, in the hope of realising the many benefits that SIEM advertises without the unfortunate drawbacks. In my next post I will discuss the approach we decided to take and its benefits over conventional technologies.

More in-depth information about our SIEM experience can be found here.