Trying to run the mighty NSX in a home lab can be challenging… Forget about a single host, forget about 16GB per host, and if you want to have a bit of fun, you will need at least a couple of VLANs. Enter Ravello (www.ravellosystems.com).
Thanks to this incredible product, your expensive and noisy home lab days are over.
Spin up 2 ESXi hosts and a vCenter on Google Cloud or AWS in a minute from this extremely well crafted web interface! There you go, you have your management cluster. In reality things are a bit more challenging, but workable. We will see how in a minute. Ravello is free for 2 weeks for you to try and perform your initial installations, on the cloud platform of your choice. After the trial expires, and depending on how demanding your setup is, you will have to pay US$1-3/h. Reasonable if you consider that a similar physical setup would cost at least a couple of grand.
Step 1 (this post): Run an ESXi cluster, a VCenter server and basic required services in the cloud
Step 2: Install NSX in a nested environment and configure controllers
Step 3: Connect a remote environment to the Ravello NSX setup using a Layer 2 VPN!
This will demonstrate that securely connecting a company network to a cloud instance is possible using NSX, whether you use vCloud Air (where it is much simpler), AWS or Google Cloud. Ravello advertises itself as a lab/testing platform, but I don’t see why such services wouldn’t be available for production in the future.
A couple of gotchas:
– I have tried to complete this setup using vSphere 6, unsuccessfully. While ESXi 6 will install without a problem on Ravello’s ESX instances, the new version of the vCenter appliance asks for an ESXi host as a target and therefore needs to run in a nested environment (deployed on a virtual ESXi host). The workload created by such an installation (4 vCPUs and 8GB of RAM) is not -yet- supported by Ravello and WILL crash. Stick to VC5 or install VC6 on a Windows server (no nested deployment). One thing to remember: if you install ESXi 6, VC5 will not work.
– I have experienced a few “Cloud Errors” which will make your setup unavailable. They usually don’t last long, but can happen at random times. (Update 11/08/15: 2 causes, the ESXi nested workload, and, at the time I tried this, Google experienced a storage outage).
– This setup assumes that you have the right (and the licenses) to download VC5 and NSX 6 virtual appliances. Licensing is not covered in this document.
– These instructions also assume that you know your way around ESX, NSX, Ravello and Linux.
– Opt for Ravello’s performance tier; otherwise the NSX Manager and a single controller will crash your nested environment. You have been warned.
Some steps of my setup are similar to these instructions. They are great, have a read, but we will go a little further and will try to simplify the setup.
Now, for our basic setup we will need:
– A Windows jumphost to perform vCenter deployments, access the ESXi hosts, etc.
– An authoritative DNS and NTP server for the zone tomlab.com (centOS 6/bind/NTPd) that we can use to ssh into the ESXi hosts as well.
– A software iSCSI SAN (presenting a 100GB target)
– 1 ESXi host to emulate NSX management cluster.
– 1 VCenter appliance version 5.5
– We will use just one network for our setup to keep things simple (192.168.1.0/24):
- Static IPs only
- 192.168.1.1 is the default gateway (a very elegant feature of Ravello is automatic creation of default GW, DNS servers, etc when you configure a VM)
- 192.168.1.101 will be our DNS server running bind
- 192.168.1.11: our virtual ESXi host (MGMT)
- 192.168.1.150: our SAN (linux machine running open filer)
- 192.168.1.100: our nested VCenter server (running on the MGMT cluster, we will set it up last)
- 192.168.1.200: our NSX manager appliance (also set up later)
Our setup should look like this once everything is up and running, from the Ravello side:
Detailed network view:
Let’s get started.
Just follow the steps:
1) Open an account on Ravello’s website
2) Once logged in, add a new application. Just click on “Applications” and “Create Application”.
Give it a name, don’t use a blueprint for now
3) Next let’s upload:
– The ESX 5 iso image
– CentOS 6.6 iso image
– OpenFiler iso image (or any software SAN of your preference, I have tested iSCSI but any NFS server should work, I just wanted to avoid the NFS/Firewall setup).
– A Windows client OS image (I used windows 7, tried the supplied XUbuntu but had issues with X11’s performance)
– The VCenter 5 Appliance ovf/ova image.
While iso images will just need to be attached when deploying a new VM, you will have to configure some settings for the OVA/OVF before you can use it. Stick to LSI logic (Parallel) for your virtual disks and the rest should be fine.
Click on Library > Disk Images and install the Windows or Mac uploader on your machine; then fire it up, log in and upload the images.
If you already have a vSphere environment running, you can even export your existing VMs and upload them directly.
Once you are done with your 5 images, proceed with the next steps. (When you add VMs to your canvas, you will have to “Update” your configuration. This publishes them to the cloud platform of your choice; pay attention to your target cloud on the following screen.)
4) First, install your jump host: click on the + sign and create an empty image. Attach the Windows ISO to it and install Windows. Give it an IP address. Once the installation is complete, go to your Ravello network preferences and make the IP you configured inside Windows match the Ravello one. Then enable RDP on your Windows machine and, in Ravello’s config, go to Services and add a service labeled RDP / TCP / 3389.
Using a public IP or an elastic one if you want it to stick, test your RDP connection.
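If you have a Linux or Mac machine handy, a quick way to confirm the RDP service is reachable before launching a full client is a port check; a sketch (substitute your actual Ravello public/elastic IP for the placeholder):

```shell
# Check that TCP 3389 is open on the jump host's public IP
nc -zv <ravello-public-ip> 3389
```

A “succeeded” / “open” result means Ravello is forwarding the service correctly; a timeout usually means the service entry or the Windows firewall rule is missing.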
5) Install your DNS server by clicking on the + sign and creating an empty image. Do not use the Ravello CentOS image if you do not plan on using key-based authentication. Install CentOS 6.6 minimal from scratch, and configure its IP address, DNS client settings, hostname, etc. 1 CPU, 2GB of RAM and a 20GB HDD will be sufficient.
The 2 services we will need are bind (named) and NTPd.
Install bind using:
yum install bind bind-utils
Configure your bind server and a zone file similar to this one:
[root@ns1 ~]# cat /var/named/tomlab.zone
$TTL 86400
@	IN	SOA	ns1.tomlab.com. hostmaster.tomlab.com. (
			2015070701	; serial
			21600		; refresh after 6 hours
			3600		; retry after 1 hour
			604800		; expire after 1 week
			86400 )		; minimum TTL of 1 day
	IN	NS	ns1.tomlab.com.
ns1	IN	A	192.168.1.101
esx1	IN	A	192.168.1.11
san	IN	A	192.168.1.150
nsx	IN	A	192.168.1.200
vcenter	IN	A	192.168.1.100
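For the zone file to be served, named also needs a matching zone declaration. A minimal sketch, assuming the CentOS 6 default paths (/etc/named.conf, zone files under /var/named):

```shell
# Append a master zone stanza for tomlab.com to named.conf.
# Also make sure the listen-on and allow-query statements in the
# options block permit queries from 192.168.1.0/24.
cat >> /etc/named.conf <<'EOF'
zone "tomlab.com" IN {
        type master;
        file "tomlab.zone";
        allow-update { none; };
};
EOF
```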
Configure your ntpd service like this:
[root@ns1 ~]# cat /etc/ntp.conf
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server au.pool.ntp.org iburst
# Leave the rest as is
Start your services, and make sure that they are enabled on boot (with chkconfig) and that your firewall allows requests on port 53 (UDP) for DNS and port 123 (UDP) for NTP.
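On CentOS 6 this whole step boils down to a few commands; a sketch (your existing iptables ruleset and rule ordering may differ):

```shell
# Start both daemons now and enable them on boot
service named start && chkconfig named on
service ntpd start && chkconfig ntpd on

# Open DNS (53/udp) and NTP (123/udp) in the firewall and persist the rules
iptables -I INPUT -p udp --dport 53 -j ACCEPT
iptables -I INPUT -p udp --dport 123 -j ACCEPT
service iptables save

# Quick sanity checks: the zone should answer and NTP peers should appear
dig @127.0.0.1 esx1.tomlab.com +short
ntpq -p
```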
6) Create another empty host using the + sign and name it san.tomlab.com.
Give it a 100GB HDD, (LSI logic Generic) and attach your open filer ISO to it.
Give it an IP address and configure the network settings, make them match the static settings on the Ravello interface, and create a service to access OpenFiler’s web interface. For guidance:
If you need help getting Openfiler installed you will find a lot of tutorials around, this one details the steps very well.
7) Now let’s create our nested ESXi servers VMs
On your canvas, just click on the “+” sign and add an empty ESX host (our plan uses a single management host; add more the same way if you want a bigger cluster). Connect your ESXi ISO to it and complete the install using the console. Tip: use the visual keyboard to press F11.
Once installed, configure your networking and enable SSH from the troubleshooting options.
Important steps to enable nested virtualisation on your ESXi hosts and avoid trouble. From Ravello’s excellent blog entry: using your freshly installed name server as a jumphost (or Windows with PuTTY, whatever you are comfortable with), SSH into your ESX hosts and perform the following steps:
DELETE ESX UUID
- run “vi /etc/vmware/esx.conf”
- go to the line in the file in which “/system/uuid” is defined. Delete this line and save the file.
SET UNIQUE MAC ADDRESSES
- In Ravello GUI, in the Network tab of the VM, make sure “Auto MAC” is checked for both interfaces.
- run ‘esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1’
ENABLE NESTING ON ALL ESX GUESTS
- This step is important in order to be able to power on VMs running on ESXi. It replaces the need to configure each guest with the ‘vmx.allowNested’ flag.
- run ‘vi /etc/vmware/config’.
- add the following line to the file and save: vmx.allowNested = "TRUE"
ENSURE CHANGES ARE SAVED
- run ‘/sbin/auto-backup.sh’ (ignore any warnings in its output)
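If you prefer not to edit files in vi, the steps above can be applied non-interactively from the ESXi shell; a sketch of the same changes:

```shell
# Delete the /system/uuid line so ESXi regenerates a unique UUID on next boot
sed -i '/\/system\/uuid/d' /etc/vmware/esx.conf

# Make the vmkernel interfaces follow the (Ravello-assigned) hardware MACs
esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

# Allow nested guests to power on without per-VM vmx.allowNested flags
grep -q 'vmx.allowNested' /etc/vmware/config || \
    echo 'vmx.allowNested = "TRUE"' >> /etc/vmware/config

# Persist the changes across reboots
/sbin/auto-backup.sh
```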
Once you are done with this on your host, you should be able to access it from a traditional vsphere client on your Windows jump host, but we won’t need to do this for now.
8) Let’s install the vCenter appliance: simply click “+” again and drop your VCA image onto the canvas. Give it an IP address and configure your settings as per the beginning of this document (IP/GW/DNS). Don’t use Ravello’s DNS, as some of our infrastructure will be running in the nested environment; use your CentOS named instance instead. Once deployed, let the virtual machine boot, log into it using Ravello’s console (root/vmware), then, as instructed on the console, run the network configuration script.
And follow the menus to configure your VCenter networking, default GW, DNS…
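On the 5.5 appliance, the script in question is, as far as I know, the standard VAMI network configuration tool (path assumed from VMware’s usual appliance layout; check the console banner if yours differs):

```shell
# Run from the VCSA console after logging in as root/vmware
/opt/vmware/share/vami/vami_config_net
```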
Once done, you will be able to access https://192.168.1.100:5480 and finish your VC configuration; when it is complete, you can finally access the vCenter Web Client at https://192.168.1.100:9443
It’s now time to create a DC, a cluster (call it MGMT) and add your virtual ESXi to it.
Once done, add a software iSCSI initiator, and add your SAN IP (192.168.1.150) as a dynamic target.
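If you prefer the command line, the same can be done over SSH on the ESXi host; a sketch, assuming the software initiator comes up as vmhba33 (check the adapter list output for yours):

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Find the adapter name (often vmhba33 on a fresh host)
esxcli iscsi adapter list

# Add the OpenFiler box as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.150:3260

# Rescan so the 100GB LUN shows up
esxcli storage core adapter rescan --all
```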
The result of all this work should give you something like this (without the error):
That’s it for now. In our next post, we will deploy our NSX manager, a controller, and start deploying virtual switches and edge gateways, and later VPN in to this setup!