* If you don’t count the amount of time spent maintaining the on-premise equipment.
Abstract

My 48-VM (virtual machine) homelab configuration costs me approximately $430/month in hardware, electricity, virtualization software, and internet, but an equivalent configuration on AWS (Amazon Web Services) would cost $1,660/month (almost four times as expensive)!
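The headline ratio is straightforward to check from the two figures above:

```python
# Monthly cost figures from the abstract.
on_prem = 430   # $/month: hardware, electricity, virtualization software, internet
aws = 1660      # $/month: equivalent 48-VM configuration on AWS

ratio = aws / on_prem
print(f"AWS is {ratio:.2f}x the on-premise cost")  # ~3.86x, i.e. almost four times
```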
Disclosures:
I work for VMware, which sells on-premise virtualization software (i.e. vSphere).

I didn't put a dollar value on the time spent maintaining my on-premise equipment because I had a hard time assigning one. For one thing, I don't track how much time I spend maintaining the equipment. For another, I enjoy maintaining it, so it doesn't feel like work.

Shortcomings of On-Premise

Time and Effort: Before you leap into on-premise, you need to ask yourself, "Am I interested in maintaining my own infrastructure, and do I have the time?" If you like swapping out broken hard drives, troubleshooting failed power supplies, creating VLANs, building firewalls, configuring backups, and flashing BIOS—if you like getting your hands dirty—then on-premise is for you.

Only one IPv4 address: This is a big drawback. Who gets the sole IPv4 address's (73.189.219.4) HTTPS port: the Kubernetes cluster or the Cloud Foundry foundation? In my case, the Cloud Foundry foundation won that battle. On the IPv6 front there's no scarcity: Xfinity has allocated me 2601:646:100:69f0::/60 (sixteen /64 subnets!).

Poor upload speed: Although my 1.4 Gbps Xfinity download speed can rival the cloud VMs', the anemic 40 Mbps upload speed can't. I don't host large files on my on-premise homelab. This may not be a problem if your internet connection has symmetric speeds (e.g. fiber).

Scalability: I can't easily scale up my homelab. For example, my 15-amp outlet won't support more than what's already plugged in (two ESXi hosts, one TrueNAS ZFS fileserver, two switches, an access point, and a printer). Similarly, my modestly-sized San Francisco apartment's closet doesn't have room for additional hardware.

Widespread outages: When I upgraded the TrueNAS ZFS fileserver that backs the VMs' storage, I had to power off every single VM; only then could I safely upgrade the fileserver.
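Python's ipaddress module can enumerate the /64s inside a /60 delegation like the one mentioned above (the prefix below is the one from the text, with host bits zeroed):

```python
import ipaddress

# The /60 delegation from the ISP, as quoted in the text.
prefix = ipaddress.ip_network("2601:646:100:69f0::/60")

# A /60 contains 2**(64 - 60) == 16 /64 subnets.
subnets = list(prefix.subnets(new_prefix=64))
print(len(subnets))  # 16
```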
Ground-up Rebuilds: One time I made the mistake of not powering down my 48 VMs before rebooting my fileserver, and I spent a significant portion of my winter break recovering corrupted VMs (re-installing vCenter, rebuilding my UniFi console from scratch).

How I Calculated the AWS Costs

First, I pulled a list of my VMs and their hardware configuration (number of CPUs (cores), amount of RAM (Random Access Memory)) using the following govc command:
...
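With the VM inventory in hand, the spec-to-instance mapping can be sketched as follows. This is a minimal illustration, not the author's actual methodology: the t3 instance names and their vCPU/RAM specs are real, but the monthly prices and the sample inventory are assumed placeholders.

```python
# Sketch: map each VM's (vCPUs, RAM GiB) to the smallest t3 instance that
# fits both dimensions, then sum an assumed monthly price.
INSTANCE_TYPES = [
    # (name, vCPUs, RAM GiB, assumed $/month -- illustrative, not current AWS rates)
    ("t3.micro",  2,  1,   8),
    ("t3.small",  2,  2,  16),
    ("t3.medium", 2,  4,  31),
    ("t3.large",  2,  8,  61),
    ("t3.xlarge", 4, 16, 122),
]

def smallest_fit(cpus, ram_gib):
    """Return the cheapest listed instance type satisfying both CPU and RAM."""
    for name, vcpu, ram, price in INSTANCE_TYPES:
        if vcpu >= cpus and ram >= ram_gib:
            return name, price
    raise ValueError(f"no listed instance fits {cpus} vCPU / {ram_gib} GiB")

# Hypothetical inventory rows (name, vCPUs, RAM GiB), e.g. parsed from govc output.
vms = [("vcenter", 2, 12), ("unifi", 2, 2), ("k8s-worker", 4, 8)]
total = sum(smallest_fit(cpus, ram)[1] for _, cpus, ram in vms)
print(f"estimated monthly total for {len(vms)} VMs: ${total}")
```

Scaling this roll-up to all 48 VMs (and adding storage and bandwidth) is what produces a real monthly estimate like the $1,660 figure in the abstract.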