Flipping TQ on its back as a File Integrity Management System to Discover Webshells
POSTED BY MIKE CLARK

In August of 2015, Dell SecureWorks released a fascinating report on a threat group they track as TG-3390. The TG-3390 write-up shows the adversary group often leverages the “China Chopper” webshell in their operations.
ThreatQ isn’t a SIEM, but by twisting its usage it can serve as an innovative file integrity management system, with some automated log analysis, to discover malicious webshells.
What is a webshell?
Webshells are a class of Remote Access Trojans (RATs) which are generally quite simple but extremely prevalent. The scenario usually goes something like this: an attacker exploits a web application vulnerability to upload a malicious file to the server’s webroot. When the attacker accesses this file, the web server executes the code, essentially giving them shell access to the system through their browser. From this access they can exfiltrate data, escalate to root privileges, or pivot through the target’s network. Since the uploaded file is designed to be camouflaged and look like any other file you would expect on a web server, it often goes unnoticed for a long period of time. Compounding the issue, many web servers use SSL, so unless your security defenses perform SSL inspection, traditional network-based rules (e.g., IPS rules) will not be of much help!
ThreatQ and File Integrity Management
While it is possible to set up a system to grep through your logs for webshell signatures, that is a “hopeful” game of catch-up, like most signature-based methods. Instead, we can confront webshell attacks at their weak spot. Most webshells are new files placed on the system in the webroot directory, which are then remotely accessed by an attacker over the network. This interaction gives us an opportunity to detect their presence regardless of encryption.
Web servers should be set up to send their access logs to a SIEM for aggregation and analysis. Once they are, ThreatQ can make use of this information to accomplish our goal. Using the Splunk API, we can create a script that periodically searches the web server logs for URL paths.
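As a rough sketch of that periodic search, the snippet below builds an SPL query for Splunk’s REST export endpoint and parses one line of its JSON output. The index and sourcetype names are assumptions for illustration; adjust them to match your own environment.

```python
# Sketch: pull distinct URL paths from web server access logs via the
# Splunk REST API. Index/sourcetype names here are hypothetical examples.
import json

# Splunk's export endpoint streams results as one JSON object per line
# when called with output_mode=json (host/port are placeholders).
SPLUNK_EXPORT_URL = "https://splunk.example.com:8089/services/search/jobs/export"

def build_search(index="web", earliest="-15m"):
    """Return an SPL query listing URL paths and their HTTP status codes."""
    return (
        f"search index={index} sourcetype=access_combined earliest={earliest} "
        "| stats count by uri_path, status"
    )

def parse_export_line(line):
    """Extract (uri_path, status) from one JSON line of export output."""
    row = json.loads(line).get("result", {})
    return row.get("uri_path"), row.get("status")
```

In practice the script would POST `{"search": build_search(), "output_mode": "json"}` to `SPLUNK_EXPORT_URL` with credentials, then feed each response line through `parse_export_line`.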
For each URL path the script sees, it creates a new indicator in ThreatQ using the ThreatQ Platform API. It is very important that we find a way to ensure these indicators are NEVER deployed to blocking technologies, as they would block legitimate traffic and cause an instant “resume-generating event”! This can be accomplished by setting the status of these URL path indicators to something that will not be exported, such as Legitimate. We do not include the query parameters as part of the URL path, since many websites use these dynamically; instead, we store them as attributes of the indicator for proper context. We can also exclude URLs based on their response code so we are not polluted by random vulnerability scans.
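The indicator construction described above might look like the following. The field names in the payload are assumptions modeled on a generic REST indicator schema, not the exact ThreatQ API format, so treat this as a sketch of the path/parameter split rather than a drop-in call.

```python
# Sketch: turn a raw request URL into an indicator payload, keeping only
# the path as the indicator value and stashing query parameters as
# attributes. Payload field names are illustrative assumptions.
from urllib.parse import urlsplit, parse_qsl

def make_indicator(raw_url, status="Legitimate"):
    """Build an indicator payload for a URL path observed in access logs."""
    parts = urlsplit(raw_url)
    return {
        "type": "URL Path",
        "value": parts.path,   # path only -- query string is excluded
        "status": status,      # Legitimate => never exported to blocking tools
        "attributes": [        # query parameters preserved for context
            {"name": k, "value": v} for k, v in parse_qsl(parts.query)
        ],
    }

def should_ingest(http_status, allowed=("200", "301", "302")):
    """Skip responses (e.g., 404s from scanners) outside the allowed codes."""
    return http_status in allowed
```

A real script would then POST each payload to the ThreatQ Platform API, gated by `should_ingest` to filter out vulnerability-scan noise.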
We start with a baselining period, which marks all URL paths seen with a status of Legitimate, thus ensuring they are not sent out to detection technologies. After the baselining period is over, the script considers any new URL path it sees in Splunk suspicious. It should be noted this may not work well for websites that are updated frequently, as it would generate a lot of noise and false positives to chase.
If the script does discover a new URL path after the baselining period, it is still created as an indicator inside ThreatQ, but with a status of Review. An Event is also created within ThreatQ to group any new indicators together in a place where an analyst can easily see them. The new URLs could be newly created files such as webshells, or rarely visited URLs that slipped past the baseline. The analyst will need to make that determination and escalate as needed.
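The baseline-then-triage logic from the two paragraphs above can be sketched as a small function. The function name and shape are hypothetical; the actual indicator-status updates and Event creation would be ThreatQ API calls layered on top of this.

```python
# Sketch: classify observed URL paths against a baseline set.
# During baselining, every path is absorbed as known-Legitimate;
# afterwards, unknown paths are returned for Review + Event creation.
def triage(seen_paths, baseline, baselining=False):
    """Return the list of paths that warrant analyst review."""
    new = [p for p in seen_paths if p not in baseline]
    if baselining:
        baseline.update(new)  # grow the baseline; nothing is suspicious yet
        return []
    return new                # candidates for status=Review, grouped in an Event
```

Running this on each polling cycle keeps the baseline as a simple set of paths; anything it returns after the baselining window closes is what the analyst sees grouped under the new Event.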
Summary
Webshells are simple backdoor RATs, but they are widely used because they are difficult to detect: they hide in plain sight and can be uploaded easily once an attacker exploits a web application vulnerability. This “outside the box” approach detects webshells precisely because they masquerade as legitimate files in the webroot directory. Using the ThreatQ platform, we can alert an analyst when new URLs have been accessed remotely. The analyst can then verify whether these are legitimate files or have been placed there with malicious intent.