If you enjoy security research, pentesting, red/blue teaming, or need to prep for the OSCP, GXPN, or other certifications, a home lab build may be on your radar. Everyone with these security interests has probably reached the point of running multiple VMs on their home machine only to have everything bog down.
It can be very frustrating, especially with environments that eat up a ton of resources. With a lab, you'll have an abundance of resources at your disposal.
If you’re not jumping onto a home server build right off the bat, you may be weighing a few options:
- New PC build to host VMs locally
- Hosting VMs on AWS
- Home server/lab environment
Building a new PC would be the best option if you're interested in keeping environments light. Will you be running the VMs constantly in the background, or powering them up and down whenever you want access?
Powering down may be problematic for Windows domain testing, and an annoyance when testing other machines, since you'd have to spin up your environments on a continual basis.
Purchasing space on AWS is another option. This isn't something I've looked into much since it's another bill to add to the list. Energy, space, and the initial investment in a server/PC are covered, but the control you have over the environment is limited. Also, costs may start to stack up as your environment scales.
A home lab is the option I'll be covering in this post. Initial costs are higher and can range from a few hundred dollars to multiple thousands, depending on your requirements. A home server doesn't have to be a full rack taking up space in your residence, nor does your electrical bill have to be outrageous. In my opinion, the pros definitely outweigh the cons.
You have full control, a lab is modular, environments can be DMZ’ed, and you’ll learn more about infrastructure and virtualization.
I recently completed a home lab build with the goal to create a research lab or “cyber range” for Pentesting, Malware research, IoT and other Blue/Red team activities. This post will cover the process I went through for the build, with the hopes it helps others.
A few requirements of mine:
- Fits in Full ATX Tower or Ikea Rack
- Ability to host a generous number of VMs
- OpenVPN Access
- Hypervisor with VLAN/DMZ capabilities
- Lab WLAN access for IoT and Mobile devices
- Budget under $1,500. Buy used if reasonable.
Before you start purchasing hardware, think of your requirements. How many machines do you expect to be running at a given time? Budget? Keep in mind any room for growth - physical and virtual.
Although my build hasn't been under heavy load, I've been pleasantly surprised by how cool and energy efficient it has been.
Although I’m biased, I’d definitely recommend a similar build if your requirements are analogous to mine. I don’t think I would change anything if I had to do it over. It provides a ton of resources for the money.
I’ll cover each component for the server build, then move on to the Networking and Software equipment. Hopefully this will help anyone in a similar situation trying to iron out the specs and compatibility. Prices may have changed since writing this.
If you're unfamiliar with the Intel Xeon E5-2670 as I was, it's an 8-core (16-thread) CPU launched back in 2012. It has since been discontinued, so many datacenters/enterprises surplused these beasts in favor of newer CPUs. They're now easily available used at a low price point of $60-80 a pop - great for home lab enthusiasts.
There are a few dual-socket LGA2011 motherboards available that fit in a full ATX case and are compatible with the 2670. This allows for two E5-2670 CPUs, which means 32 logical cores in an ATX tower!
These builds aren’t completely uncommon for virtualization enthusiasts or folks into CAD systems, so documentation is available throughout the web if you want to research more.
Check out eBay for the Intel Xeon E5-2670 CPUs. There are a few different models, but the SR0KX E5-2670 Xeon at 2.60GHz is the one I went with from eBay - it's compatible with all the other parts in this post.
As mentioned previously, there are a few dual-socket motherboards that fit the 2670s and an ATX tower. I went with the ASRock ATX DDR3 1066 motherboard, which has 16 DDR3 DIMM slots allowing for up to 512GB of RAM.
A DDR3 motherboard will significantly reduce the cost of RAM. Other mobos are available, but make sure they are Socket R (LGA2011) compatible.
DDR3 is very cheap and in surplus all over. The above two components are also compatible with DDR3 ECC memory, which can be found in larger quantities. Server Fault has good write-ups on ECC vs non-ECC memory if you want the details.
This is another part you can buy on eBay for dirt cheap. I went with 128GB of used low-profile DDR3 ECC RAM for $300 on eBay. Without going down the rabbit hole on what each RAM spec means, my purchase matched a common eBay listing with the same specs. The mobo can handle up to 512GB, but 128GB averages out to 4GB per logical core, which is more than enough.
Since the server is dedicated to running virtual machines, having an SSD for the VMs to run on is a must. Backups, ISOs, and other data can be stored on a cheap HDD, but I would keep all VMs and containers on the SSD.
I went with a Samsung 860 Evo 500GB SSD for now, along with a spare HDD for backups and an SSD mounting bracket. Keep in mind what you'll actually be putting on the VMs. Are you going to load them with data? Assuming the server isn't doubling as a Plex server, you shouldn't need more than OS space plus some room for random apps to test.
I've built a few gaming rigs in the past, so I've always shot for 1000W+ PSUs due to power-hungry GPUs. In this case there's no GPU, which lowers the power requirement - even considering the dual CPUs. After doing some research on price vs performance, I decided on a Rosewill 850W PSU. Load hasn't been very heavy, but I've had zero issues with this PSU so far.
Note you’ll need a power supply with two 8-pin connectors for the dual CPU’s, which can limit your options.
Since the eBay CPU purchase came with CPUs only, I needed to get some aftermarket CPU fans that fit without space issues on the dual-socket mobo. I was a bit nervous there would be clearance issues, but everything worked out (a rare case).
Space between RAM and Heatsink/Fan:
The two CPU fans are Cooler Master Hyper RR-T4-18PK-R1, which were $20 each at the time of purchase. I've run into zero heat issues - it stays nice and cool.
If you've gone with the same motherboard, you'll need a compatible case. The mobo has an SSI EEB form factor, which isn't very common. After some digging I went with a Phanteks Enthoo Pro Full Tower PHS614PC_BK. It fits the motherboard, PSU, fans, etc. perfectly. It's sleek, clean, and spacious - nothing snooty.
For the setup of the hypervisor/OS you'll need a USB or disk drive to boot from, along with a wired mouse and keyboard. I also got a Corsair SSD mounting bracket for the drive - you'll need an SSD bracket as well if you go with the same case, plus a SATA cable if you don't have a spare.
The network infrastructure setup will very much depend on your requirements. Will you be creating any dangerous VLANs running malware? Maybe you'll have other people connecting via VPN? Do you want a separate WLAN segregated from your home network?
I needed a reasonably priced router, with no subscription costs, that allowed for firewall rules, VPN, network metrics, and at least two physical LAN ports - one for my home network and the other for the lab network.
To fit these necessities I purchased the Ubiquiti UniFi Security Gateway (USG). It's a perfect little router/firewall for a home lab. It's quiet, doesn't produce a ton of heat, and doesn't need to be mounted in a rack.
Personally, I really like Ubiquiti USG’s administrative interface. As a home lab user, it offers plenty of options to secure and monitor your networks. You can easily set firewall rules, create network groups, add port forwarding and much more.
Another cool feature the USG offers is Deep Packet Inspection (DPI). This is a great feature if you’re into collecting metrics. It can be turned off if you prefer not to log websites your home or lab users access.
Switch and WiFi
I had a spare AP laying around for lab WiFi, and used a Netgear 8-port gigabit managed switch to bring everything together. I found some great reviews and resources searching through a few subreddits: r/homelab and r/homenetworking. Be sure to check those out if you haven't already.
Regardless of which hypervisor you go with, you can create a pfSense VM to do all of the virtual routing between lab VLANs. This allows for centralized management of VLANs, firewall rules, log collection, etc. Make sure firewall rules are added on the physical firewall to block any traffic to/from your home network.
If you've never worked with pfSense, it's fairly simple to use and allows for plenty of configurations. You can configure however many virtual NICs you want, allowing for plenty of VLANs. More about VLAN configuration under Software.
Another awesome feature of pfSense is the ability to configure OpenVPN. This is a solid option if you want to connect to your lab off premises, allow friends/co-workers to join, or if you simply want a secure VPN while you're on the road. pfSense has an OpenVPN wizard which guides you through the setup. There are more than a few steps, but it's relatively straightforward.
You can add a port forwarding rule on your physical router to send all VPN traffic to your virtual pfSense instance. This also allows for centralized firewall management and VPN log access - all configurable through the GUI.
Choosing a hypervisor was a tough call. I have experience with ESXi, but didn't want to pay the costs for VMware products. In reality, a home security research lab doesn't need all the bells and whistles.
I preferred something free and compatible with all OSes. After weighing the pros and cons, I decided on Proxmox, and it's been absolutely perfect - it fits the bill for a home security research lab or cyber range.
Proxmox is a hypervisor based on KVM that can be managed via a straight-forward administrative web-GUI or CLI. For a non-enterprise virtualized cyber range, it’s perfect (and free).
There's not a ton of documentation on Proxmox - at least not as much as you'd find for VMware - but enough if you dig, and it's growing as time goes on.
If you're setting up VLANs, you can use VLAN tags or create virtual bridges. I went the bridge route since it was straightforward. I may look into the OVS bridge option in the future, but I see no performance issues with Linux bridging at the moment.
As an example, start a VLAN setup by adding a new Linux Bridge via the Proxmox GUI. Make this first bridge your physical lab LAN - the virtual WAN, i.e. your primary VLAN allowing for internet access.
Create a second Linux Bridge with completely empty fields. Everything will be configured through pfSense; you just need a virtual network interface for pfSense to recognize, which these Proxmox Linux Bridges provide.
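Under the hood, bridges created in the Proxmox GUI land in /etc/network/interfaces on the host. A minimal sketch of what the result looks like - the interface names, addresses, and the physical NIC name eno1 are examples, not my exact config:

```text
# /etc/network/interfaces (sketch - names and addresses are placeholders)

# First bridge: tied to the physical lab NIC, gives Proxmox and pfSense's
# WAN side access to the physical lab LAN / internet
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Second bridge: no physical port, no IP - purely virtual.
# pfSense handles addressing and firewalling on this segment.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

Reloading networking (or rebooting) after edits applies the bridges; creating them through the GUI does this for you.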
If you want more than two VLANs, add as many Linux Bridges as needed - the IP ranges, firewall rules, etc. will all be configured through pfSense. Once set, go through the pfSense Interface Assignments and dish out your custom IP sets. This may take some finagling if you don't work with networking day-to-day.
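If you prefer the CLI, the pfSense VM itself can be created and attached to those bridges with Proxmox's qm tool. A rough sketch - the VM ID, sizes, storage names, and ISO filename are all assumptions you'd adjust for your setup:

```text
# Create a pfSense VM with a WAN NIC on vmbr0 and a LAN NIC on vmbr1
# (ID 100, local-lvm storage, and the ISO name are placeholders)
qm create 100 --name pfsense --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --cdrom local:iso/pfSense-CE-amd64.iso --ostype other
qm start 100
```

Each additional `--netN virtio,bridge=vmbrN` gives pfSense another interface, which then shows up under its Interface Assignments.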
For Windows boxes and some Linux distros, make sure to use the Intel E1000 network adapter. VirtIO is a paravirtualized network device with higher performance that can be used once the proper drivers are installed. However, I don't notice any speed issues with the Intel E1000 - again, home lab, non-enterprise.
I’ve found Windows XP needs the Realtek RTL8139 network device set, otherwise networking will NOT work. I spent way too much time troubleshooting this one.
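These adapter models can be set in the GUI or via qm on the Proxmox host - a sketch, with the VM IDs and bridge name as placeholders:

```text
# Most guests: Intel E1000 on the lab bridge (IDs and bridge are examples)
qm set 105 --net0 e1000,bridge=vmbr1

# Windows XP guests: Realtek RTL8139, or networking won't come up
qm set 106 --net0 rtl8139,bridge=vmbr1
```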
If a VM is hanging (e.g. VM ID 105) after an attempt to stop it within the GUI, SSH into your Proxmox server and run:

```
qm stop 105
```
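If the stop command itself hangs, the VM may be locked by a stuck task; clearing the lock first usually helps (105 is an example ID):

```text
qm unlock 105
qm stop 105
```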
See my post about creating Proxmox backup storage.