Hacking Team Inspired Anti-VM Trick Spotted in the Wild

    Two days ago we came across an interesting sample (MD5: 9437eabf2fe5d32101e3fbf9f6027880, source: ThreatWave). The sample was unknown at the time and did not look interesting from a dynamic behavior analysis perspective. However, there were some tiny outliers which caught our attention:

    We first ran the sample on a virtual machine. The overall score was suspicious, and some of the behavior signatures (Joe Sandbox's behavior signature set currently includes over 850 signatures) detected several anti-VM, anti-sandbox and anti-debugging tricks.

    To verify that the sample had detected the virtual machine, we ran it on a native analysis machine. A native analysis machine is a pure physical machine, such as a real laptop or PC. All our products, including Joe Sandbox Cloud, enable analysis on physical machines. Compared to virtual machines or emulators (e.g. QEMU or BOCHS), physical machines cannot be easily detected. In addition, you can directly use an existing laptop or PC from your (company) network environment for analysis. This makes for a perfect malware analysis system, since there is no difference to a target system. Some analysis results from the run on the physical machine:

    As the report excerpts show, the sample persisted itself and also exhibited some very interesting network behavior. We analyzed the anti-VM, anti-sandbox and anti-debugging tricks in more depth. Here is a list of them:

    • HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi\Scsi Port 0\Scsi Bus 0\Target Id 0\Logical Unit Id 0  Identifier
    Another interesting trick used by the malware is checking for PCI devices unique to virtual machine hardware:

    What is actually compared are the device strings (PCI vendor IDs) VEN_80EE (VirtualBox), VEN_1AB8 (Parallels) and VEN_15AD (VMware). This detection is very similar to the one used by Hacking Team and was also recently added to Pafish:
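    The comparison can be sketched in a few lines of Python. This is a simplified re-implementation, not the malware's actual code: the real sample enumerates PCI device strings from the registry, while here we only model the vendor-ID string matching.

```python
# Simplified sketch of the PCI vendor-ID anti-VM check: scan enumerated
# PCI device strings for vendor IDs that only appear on virtual hardware.
VM_VENDOR_IDS = {
    "VEN_80EE": "VirtualBox",
    "VEN_1AB8": "Parallels",
    "VEN_15AD": "VMware",
}

def detect_vm(pci_device_strings):
    """Return the detected hypervisor name, or None on physical hardware."""
    for device in pci_device_strings:
        for vendor_id, product in VM_VENDOR_IDS.items():
            if vendor_id in device.upper():
                return product
    return None
```

    On a physical machine none of the vendor IDs match, which is why native analysis machines defeat this check.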

    We have updated all our products to evade this detection on virtual machines. Some full Joe Sandbox 12.5.0 analyses:

    The Power of Execution Graphs 2/3


    This is the second part of our three-part “Power of Execution Graphs” blog series. The first part, which introduces Execution Graphs, can be found here.

    As you may recall, Execution Graphs are highly condensed control flow graphs showing which parts of the code have been executed and which have not. Execution Graphs highlight additional attributes such as API calls, thread starts, and key decisions.

    Analyzing Packers

    In this blog post, we focus on an interesting sample we have previously analyzed with pure Hybrid Code Analysis (HCA). The sample includes various sandbox detection tricks, including one that specifically identifies Joe Sandbox. In the following, we outline how to spot these tricks using the Execution Graph.

    The analyzed sample relies on packing and encryption as a first layer of evasion. This technique is quite challenging to inspect manually from the PE file and generally poses a major problem for static analysis approaches. Hybrid Code Analysis is resilient against packing and therefore facilitates the analysis of unpacked code.

    Let us start with having a look at the Execution Graph summary tab:

    The first striking fact is that 99% of the code is tagged as “Dynamic/Decrypted”. Looking at the prefix of the Execution Graph in some detail, we notice the following:

    • The code starts by allocating dynamic memory using NtAllocateVirtualMemory native API calls.
    • Once the allocation is performed, the code reaches the node labeled 401065 which is flagged as Unpacker code. At this point, the code is written into the previously allocated memory sections.
    • After execution, the unpacker code then branches to the dynamically generated code.

    By clicking on the node 401065, we can check that it indeed contains unpacker code:

    Checking the basic blocks leads us to the unpacker code itself:

    Similarly, by clicking on the following node 164a00, we can see that the corresponding code is located in dynamically allocated memory (as is almost every node reachable from this point):
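    Conceptually, tagging a node as “Dynamic/Decrypted” boils down to checking whether its address falls inside a memory region allocated at runtime. A hypothetical sketch (the region list and addresses below are illustrative, not taken from the report):

```python
# Hypothetical sketch: a graph node is dynamic code if its address lies
# inside a region returned by NtAllocateVirtualMemory during execution.
def is_dynamic(node_addr, allocated_regions):
    """allocated_regions: list of (base, size) tuples observed at runtime."""
    return any(base <= node_addr < base + size
               for base, size in allocated_regions)

# e.g. one runtime allocation covering node 164a00 but not node 401065:
regions = [(0x160000, 0x10000)]
```

    With this classification, node 401065 (inside the PE image) is unpacker code, while node 164a00 falls into the freshly allocated region.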

    Please note that the nodes are highlighted in different colors according to their dynamic or unpacked code status:

    There are three different branches starting from node 401065. By clicking on node 401065 (which covers several basic blocks) and then following the hyperlink to basic block 4010CC, we jump to the following disassembled code:

    The computed call call edx at virtual address 04010E5 represents the execution branching to the unpacked code.

    Sandbox Evasion

    Hybrid Code Analysis found three potential targets, represented by the three target nodes; the executed code starting at node 164a00 is the most interesting with respect to its behavior.

    The sub-graph below outlines the various evasion tricks:

    • The sample first checks its own file name (call to GetModuleFileName) and may stall by sleeping (branch to node 164838) if the name looks suspicious (e.g. in this case a file name like “sample”).
    • After checking the serial ID of volume C: (call to GetVolumeInformationA), it may stall again if the serial matches a given magic value (sandbox detection via disk serial number).
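    The two checks above can be modeled as follows. This is a sketch only: the magic serial constant is a placeholder, and the real sample performs the checks via the Windows APIs named above rather than on plain strings.

```python
# Sketch of the two stall checks (file name and volume serial number).
# MAGIC_SERIAL is a placeholder -- the actual constant is sample-specific.
MAGIC_SERIAL = 0xDEADBEEF

def should_stall(module_path, volume_serial):
    """Mimic the evasion decision: stall (sleep) or run the payload."""
    if "sample" in module_path.lower():   # suspicious analysis file name
        return True
    if volume_serial == MAGIC_SERIAL:     # known sandbox disk serial
        return True
    return False
```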

    Here both evasions fail since the execution proceeds as illustrated by the node coloring.
    Later in the code, at node 1548dc, the sample tries to detect whether it is running in a virtual machine. To do so, it reads the disk names via the registry key System\CurrentControlSet\Services\Disk\Enum and compares them to well-known product names such as VMware. Checking disk names of virtualization products is a well-known anti-VM trick which we see in nearly 70% of all samples.
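    A minimal model of this check (the real code reads the registry values via Windows APIs; here only the string matching is shown, and the marker list is an illustrative subset):

```python
# Sketch: compare disk identifier strings (as found under the registry key
# System\CurrentControlSet\Services\Disk\Enum) against virtualization products.
VM_DISK_MARKERS = ("VMWARE", "VBOX", "VIRTUAL", "QEMU")

def disk_names_look_virtual(disk_enum_values):
    return any(marker in value.upper()
               for value in disk_enum_values
               for marker in VM_DISK_MARKERS)
```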

    Finally, one last check has more success and the execution ends up stalling in an endless Sleep loop:

    Selecting the key-decision node 16499d shows us the disassembly, which indicates that the trick is related to the string AutoItv3CCleanerWIC:

    The code enumerates all software uninstallers, which allows it to collect a list of all software installed on the machine. The fingerprint AutoItv3CCleanerWIC is then used to check whether AutoIt, CCleaner and WIC are installed. If so, the sample falls asleep. AutoIt and CCleaner are two additional tools we often install on machines to ease administration. Most likely, the people behind this malware extracted the fingerprint using our free Joe Sandbox Cloud Basic online service.
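    The fingerprint logic can be modeled as concatenating the names of the enumerated uninstall entries and comparing the result to the hard-coded string. The exact concatenation scheme is our assumption; the software names below are illustrative.

```python
# Sketch of the uninstaller fingerprint check: build a string from installed
# software names and compare it to the hard-coded sandbox fingerprint.
FINGERPRINT = "AutoItv3CCleanerWIC"

def matches_sandbox(installed_names):
    """True if the concatenated software list equals the fingerprint."""
    return "".join(installed_names) == FINGERPRINT
```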

    Process Injection

    Besides the evasion tricks, the Execution Graph can also be browsed for hidden / non-executed functionality in the form of suspicious sub-graphs. Here is an example:

    This sub-graph outlines a remote process injection technique which was not executed during the analysis but can still be found easily in the graph. The various edges to the lower nodes are error handling, e.g. if CreateToolhelp32Snapshot fails, CloseHandle is called directly. The code is quite extensive and spans several functions. Thanks to the condensed and connected graph, it is easy to detect and understand.

    An Execution Graph often consists of a main graph as well as several independent graphs:

    The main graph contains executed nodes (marked red), while the independent graphs do not contain any executed code. The reason behind this is the difficulty of generating a completely connected graph. E.g., consider the non-executed instruction call eax, where eax is computed beforehand: it is not possible to determine statically which code location is being called.

    In order to focus on the main graph, we added a new feature to hide independent graphs. Simply click on the Hide Nodes/Edges label found at the top-left of the Execution Graph panel to hide independent graphs and focus on the main graph. Click again to restore the full view.

    Graph based Signatures

    Of course, manually browsing through the Execution Graph is not the only way to detect evasive behavior. Execution Graph Analysis uses an extensive set of behavior signatures to automatically detect evasion tricks. A nice feature we recently added is the ability to jump from a signature hit to the incriminated Execution Graph nodes using the links in the report:


    The sample covered in this blog post uses a broad range of techniques to avoid detection by sandboxes. Thanks to Execution Graph Analysis, the following information could be quickly obtained:

    • The execution starts by dynamically generating code. The Execution Graph makes it easy to find both the unpacker code and the newly generated code.
    • The unpacked code uses various evasion tricks that Execution Graph Analysis automatically detected and rated as malicious. The evasion tricks can be further analyzed in-depth by navigating from the signature hits to the Execution Graph nodes and from the nodes to the disassembly code.
    • Besides detecting evasive behavior, the Execution Graph provides a good way of spotting complex malware functionality (such as remote process injection) in the form of sub-graphs.

    Stay tuned for the last blog post in our Power of Execution Graphs series!

    Report available at:

    Dynamically Analyzing Office Macros by Instrumenting VBE


    As you all know, Microsoft Office documents have become a new attack vector. They allow exploit or dropper code to be easily transferred to victims by e-mail as embedded macro code. Since sending executable files such as exe, scr or cpl files as e-mail attachments is usually blocked, Office documents remain one of the last options. However, a further obstacle is that macros are often disabled on the victim's host, so the code will not be executed directly. To lure the user into enabling macros, various social engineering tricks are used:

    Macros can be analyzed with static analysis very easily. To do so, one parses the document structure, searches for OLE streams, and then extracts the VBA code:

    Signatures can be used to detect suspicious API calls inside the code:
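    Such a signature can be as simple as a keyword scan over the extracted VBA source. The keyword list below is a small illustrative subset, not Joe Sandbox's actual signature set:

```python
import re

# Illustrative static signature: flag VBA source containing API names
# commonly used by macro droppers.
SUSPICIOUS_APIS = ("Shell", "CreateObject", "URLDownloadToFile", "Environ")

def scan_vba(source):
    """Return the list of suspicious API names found in the macro source."""
    return [api for api in SUSPICIOUS_APIS
            if re.search(r"\b" + api + r"\b", source, re.IGNORECASE)]
```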

    Writing static deobfuscators is a dead end

    Such static signatures have been part of Joe Sandbox since we first saw malicious Office documents with macro payloads. As you may guess, it did not take long before macro code was no longer easily human-readable but source-code obfuscated:

    Such obfuscations are simple and work well to evade static signatures on the code. To get clean code, one may develop deobfuscators. However, this is a dead end. First, it is always reactive: you have to understand the obfuscation technique before you can write a deobfuscator. Second, it is very easy to randomize obfuscations. Finally, it takes time and effort to develop each new deobfuscator. For instance, the following code does not use any Chr-based string obfuscation but rather a more complex algorithm (note that all the variables have person names):
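    To illustrate the problem, here is a toy deobfuscator for the simplest Chr-based pattern. It handles exactly one obfuscation scheme, which is precisely the weakness: any variation, such as the person-name algorithm above, immediately requires a new deobfuscator.

```python
import re

# Toy static deobfuscator for one specific VBA obfuscation pattern:
#   Chr(104) & Chr(116) & Chr(116) & Chr(112)   ->   "http"
# Any change to the scheme (arithmetic on the codes, different function
# names, string reversal, ...) breaks it.
def decode_chr_concat(expr):
    codes = re.findall(r"Chr\((\d+)\)", expr)
    return "".join(chr(int(c)) for c in codes)
```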

    Dynamically Analyzing VBA Code by instrumenting VBE

    The solution to the obfuscation problem of VBA code is dynamic analysis. We have successfully instrumented the Visual Basic runtime interpreter to track code execution. We already used the same approach to capture JavaScript compilation and DOM modification events in Internet Explorer. This greatly helps in understanding obfuscated JavaScript and browser exploits:

    The VBE instrumentation we have added to Joe Sandbox allows us to see live VBA data, for instance string decryption:


    Signatures then detect suspicious strings inside the decrypted data:

    The cool thing about the VBE instrumentation is that, as long as the VBA code is executed, it sees everything, no matter how sophisticated the obfuscation is. In addition, it enables Joe Sandbox to inspect live execution data for malware written in Visual Basic. Many APTs have a crypter or obfuscation stub written in VB.


    Using pure static analysis to deobfuscate the source code of script languages is a dead end. It costs a lot of time to develop a deobfuscator, while it is super easy to randomize or change the obfuscation to evade it. Custom dynamic analysis which instruments the script interpreter core does not care about code obfuscation; it sees everything, including decrypted data. This facilitates the malware reverse engineering and analysis process and makes generic detection more sound.

    Full Analysis Report:

    The Power of Execution Graphs Part 1/3


    We have been quite busy and will soon release Joe Sandbox 12. It is one of the biggest releases we have made so far and includes several new features such as:

    • Execution graphs
    • Yara rule generator (see http://www.yara-generator.net/)
    • MITM SSL proxy to inspect HTTPS (credits to Daniel Roethlisberger)
    • 63 behavior signatures
    • Behavior signatures to detect unpacked / dynamic code
    • More than 10 behavior signatures to detect evasive behavior
    • Score algorithm with lower FP and FN
    • System event logging
    • Slim PCAPs
    • Per process memory and CPU stats
    In this and two follow-up blog posts we are going to outline a new feature called Execution Graphs. 

    Evading sandboxes is a key feature of today’s advanced threats. To do so malware uses various tricks for checking whether it is running on an analysis system, such as trying to detect if the current system is a virtual / emulated machine or checking whether it is being debugged or analyzed. In such cases, the malware will keep a low profile and avoid exhibiting its actual malicious behavior, potentially evading detection by the malware analysis system. Latest threats also implement generic evasion such as validating user behavior or time and sleep tricks (see blog post http://bit.ly/1uZBmN2 and http://bit.ly/1qNT3Bu).

    Since version 7, released in 2012, Joe Sandbox has implemented a variety of techniques to prevent or detect evasive malware. This includes execution on native systems, analysis of non-executed functions through Hybrid Code Analysis (HCA), specific signatures for identifying evasive patterns, as well as cookbooks.

    In the last months we have seen a strong increase in more sophisticated evasion techniques in malware, which are harder to find. Therefore we have decided to make this topic a key part of Joe Security's research roadmap.

    Execution Graphs

    One of the new features we added to Joe Sandbox 12 is Execution Graphs. Execution Graphs have been designed to automatically spot evasions, and also to help quickly understand how the malware implements them.

    In general, an Execution Graph is a highly condensed control flow graph with a focus on API-rich paths. Since it is highly compressed, it is easier to understand than a full control flow graph. The graph is composed of nodes representing sections of code and edges corresponding to the control flow (calls, jmps, etc.) of the malware. Each node is labeled with the set of API calls it executes. Nodes are colored to highlight additional properties:

    • Yellow: the node is a program / thread entry point or a top level function
    • Orange: the code has been triggered during execution
    • Red: the code has been unpacked and executed
    • Grey / blackish: the code has not been executed

    Different shapes are used to highlight graph locations. The diamond-shaped nodes are so-called key decision nodes, in the sense that at such a node the process decides to avoid executing a branch which could lead to interesting key behavior. Key decision nodes are therefore especially relevant when browsing the Execution Graph for evasive behavior. Note that determining whether a decision node is key depends on the execution status of the nodes reachable through its branches (one branch should lead to executed APIs, the other to different, non-executed APIs), so different executions may lead to different key decision nodes.
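    The key-decision property can be computed mechanically from the graph: a decision node is key when one branch reaches executed API calls while the other reaches different, non-executed ones. A simplified sketch (the graph representation is our own, not Joe Sandbox's internal format):

```python
# Simplified sketch: a node maps to (api_calls, executed_flag, successors).
def reachable_apis(graph, node):
    """Collect (api, executed) pairs reachable from node."""
    seen, stack, apis = set(), [node], set()
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        calls, executed, succs = graph[n]
        apis.update((api, executed) for api in calls)
        stack.extend(succs)
    return apis

def is_key_decision(graph, node):
    """True if one branch reaches executed APIs and the other reaches
    different, non-executed APIs."""
    _, _, succs = graph[node]
    if len(succs) != 2:
        return False
    a, b = (reachable_apis(graph, s) for s in succs)
    for x, y in ((a, b), (b, a)):
        executed = {api for api, ex in x if ex}
        skipped = {api for api, ex in y if not ex}
        if executed and skipped - executed:
            return True
    return False
```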

    The following figure shows the initial part of the execution graph for our demo sample (MD5: 0af4ef5069f47a371a0caf22ae2006a6). 

    Notice how the first few nodes after the entry point (colored yellow) are orange/red, while the other nodes are grey/black. Recall that red indicates that the corresponding code has been executed, while black is used for non-executed code.

    When zooming in on the graph entry node, the following control-flow pattern appears:

    The sample's execution graph clearly exhibits very straightforward evasive behavior: there is a key decision point where the GetSystemTime API is called, followed by another key decision and a call to the ExitProcess API. All these nodes are colored red and thus were executed; the part of the graph starting at GetVersionExA was not executed (grey and black). The full execution graph includes a lot of non-executed malicious behavior not shown here. The green edges represent so-called rich paths, which allow the analyst to track the most API-intensive paths of the execution graph, independently of their actual execution status. A path is considered “intensive” if it executes many APIs that appear in malicious code. Here the rich path leads to a non-executed part of the graph:

    The blue edges represent thread creations, and the yellow nodes are thread entry points. In the given sample each created thread has its own malicious payload:

    • Thread 4098a0: its task is to terminate debugging tools and antivirus software. Function 4095e0 is registered as a callback using the EnumWindows API: it enumerates all top-level windows and checks their titles against strings such as "avast", "avira" or "kaspersky", among many others. If the title matches, the process is killed instantly.

    • Thread 407230 is in charge of persistence and installation behavior.
    • Thread 407180 spreads the main executable to external drives: it checks for available system drives and uses API call chains often found in USB drive infection routines (GetDriveType, CopyFile, SetFileAttributes).

    • Thread 407a80: parses remote commands. It is the main payload thread which acts as a broker.
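    An API call chain like the USB infection routine above can be matched as an ordered subsequence over a thread's recorded calls. The matching logic is our own sketch, not the signature engine's implementation:

```python
# Sketch: detect a USB-infection pattern by checking whether the chain
# GetDriveType -> CopyFile -> SetFileAttributes appears, in order, in the
# sequence of API calls recorded for a thread.
USB_CHAIN = ["GetDriveType", "CopyFile", "SetFileAttributes"]

def contains_chain(api_calls, chain=USB_CHAIN):
    # "step in it" consumes the iterator, so every step must appear after
    # the previous one -- an ordered (not necessarily contiguous) match.
    it = iter(api_calls)
    return all(step in it for step in chain)
```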

    The structure of the graph as well as all additional properties, such as execution coverage or decision nodes, are passed directly to the signature interface of Joe Sandbox. This makes it possible to write behavior rules that detect evasive behavior.

    We can navigate between the execution graphs and the corresponding assembly code. In the case of sample MD5 0af4ef5069f47a371a0caf22ae2006a6, we can determine that the current system time returned by GetSystemTime is checked in the code associated with the key decision nodes; depending on its value, the sample decides to exit the process or continue execution:
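    The decision logic amounts to comparing the returned system time against a hard-coded date. The cutoff value and the comparison direction below are placeholders; both are sample-specific.

```python
import datetime

# Sketch of the time-based kill switch: the sample exits on one side of a
# hard-coded cutoff date and runs its payload on the other. CUTOFF and the
# comparison direction are hypothetical.
CUTOFF = datetime.date(2015, 1, 1)

def decide(system_date):
    """Mirror the key decision: 'exit' or 'continue'."""
    return "exit" if system_date >= CUTOFF else "continue"
```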

    Same for the command handler found in thread 407a80:


    Execution graphs are a powerful tool for detecting and understanding evasive behavior. Due to their form, coloring and node shapes, we can spot evasion patterns very efficiently. Since the graph is reduced and simplified, this also works for very complex and extensive code. The structure of the graph and all attributes are fed into the Joe Sandbox signature interface; therefore we can easily rate and classify evasive behavior within seconds. Since the graph describes the complete behavior and not just the executed path, any behavior can be rated and classified.

    During development, execution graphs have already proven to be very useful. We will therefore present some of our detections of more complex behaviors / evasions in two additional blog posts. Stay tuned!

    Example Reports for the sample used in the post:

    Introducing the Yara Rule Generator

    A couple of months ago we started working on a new feature for Joe Sandbox which we call the Yara Rule Generator. Yara is a well-known pattern matching engine built for writing simple malware detection rules:


    Yara's main use is to detect APTs and advanced threats which AV does not detect that quickly. A big part of Joe Security's customers use Yara on a daily basis. Because of that, we received many requests to add a feature to Joe Sandbox that automatically generates Yara rules, and we finally decided to take up that challenge.

    Today we release a new free service which you can find at www.yara-generator.net. The Yara Rule Generator creates Yara rules automatically based on behavior data, such as files and memory, captured by Joe Sandbox.

    How does the Joe Sandbox Yara Rule Generator work and what kind of rules does it generate? The generator creates three different rules per submitted sample:

    File rules enable searching for the submitted sample. Dropped rules are generated from files which were created or downloaded by the initial sample during dynamic analysis. Memory opcode rules, finally, are generated from memory dumps. File and dropped rules enable scanning for the particular sample on the file system. Memory opcode rules, on the other hand, allow finding malware in the process memory of a target system (you can specify a process id as a target when launching Yara, or use our batch file to scan all processes).
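    As a rough illustration of what a generated file rule looks like, here is a sketch that emits a simple rule from a sample's MD5 and a few extracted strings. The layout approximates, but does not reproduce, the generator's real output format.

```python
# Illustrative sketch: emit a simple Yara file rule from a sample's MD5 and
# a handful of extracted strings ("all of them" makes the rule specific to
# the submitted sample, like the generator's simple rules).
def make_file_rule(name, md5, strings):
    defs = "\n".join('        $s%d = "%s"' % (i, s)
                     for i, s in enumerate(strings))
    return ("rule %s\n{\n"
            "    meta:\n        md5 = \"%s\"\n"
            "    strings:\n%s\n"
            "    condition:\n        all of them\n}"
            % (name, md5, defs))
```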

    Furthermore, a rule can be a simple or a super rule. Simple rules are specific to the submitted sample and its behavior; therefore they do not match variants of the same malware. Super rules are generic and are built over a set of uploaded samples / behaviors. Since they only capture common behavior, they often find malware variants:

    To generate rules, the Joe Sandbox Yara Rule Generator extracts different kinds of behavior data, such as:

    • PE structure data (e.g. section names)
    • Strings (Unicode and ASCII)
    • Code sequences (e.g. entrypoint)
    • Opcode sequences from HCA (Hybrid Code Analysis)

    All the extracted artifacts are then rated based on knowledge, entropy and location information. After artifact selection, a test rule is generated and its false positive rate is measured against a reference goodware set. Finally, the rule is kept if the false positive rate is acceptable.
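    One of the rating inputs mentioned above is entropy. The standard Shannon entropy over an artifact's bytes might be computed like this (a generic formula, not the generator's exact scoring code):

```python
import math

# Shannon entropy in bits per byte: one of the signals used to rate
# extracted artifacts (0.0 for a constant string, up to 8.0 for
# uniformly random bytes).
def shannon_entropy(data):
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```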

    For super rules Joe Sandbox Yara Rule Generator uses an efficient clustering algorithm to find common opcode sequences.

    The results look very promising. To test super rules, we generated rules using malware family sets: we took three samples out of a set and generated super rules, then infected a test system with a fourth sample of the same family and searched for it with our rules:

    Of course, the file and dropped rules also work well:

    However, please note that the Yara Rule Generator is no silver bullet. The creation of simple and super rules is tricky and far from perfect. During the development of version 1.0.0 we spotted a lot of areas for improvement. All the rules are well commented and documented; therefore it is simple to extend or change them.

    The Yara Rule Generator has already been deeply integrated into the Joe Sandbox platform and will be shipped with the next major release.

    Happy Rule Creation!

    Update 1:

    We were inspired by yarGen from Florian Roth as well as https://yaragenerator.com.