
Run vSphere 6 + NSX + iSCSI on a laptop – Part 2 (NSX)

 

NSX Manager Appliance Deployment and Installation

In Part 1 we configured a full vSphere 6 infrastructure; now let's work on NSX.

1) Let’s deploy the NSX Manager Appliance as a nested VM.

– In vCenter, right-click on your host and select "Deploy OVF template"
– Choose the NSX OVA downloaded from my.vmware.com
– Follow the wizard

– In section 2d, choose your NSX Manager appliance's IP address, hostname, default gateway, DNS and NTP server (we will use our ns1 VM settings there). Use the BIND/DNS configuration we set up in Part 1 as a guide.

I will use:
Hostname: nsx.tomlab.com
IP address: 172.16.127.153
Netmask: 255.255.255.0
GW: 172.16.127.2
DNS/NTP: 172.16.127.160
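Before deploying anything, it is worth checking that the name we picked already resolves against the ns1 server built in Part 1 (the hostname and IPs are the ones used throughout this lab; adjust them if yours differ):

nslookup nsx.tomlab.com 172.16.127.160
# expected answer: 172.16.127.153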

If everything goes well, you should see this:

Do not start the VM straight away: we will first reduce its resource footprint to 8GB of RAM and 2 vCPUs (less will not work). Once this is done, you can power up the VM.

2) Once the NSX manager VM is running, head to its web interface:


and log in with the admin account and the password chosen during the OVF deployment. Click on the Summary tab and make sure that the NSX Management Service is running:
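If you prefer a quick command-line check that the web interface is answering, a minimal sketch (assuming the IP chosen above and the self-signed certificate shipped with the appliance, hence -k):

curl -k -I https://172.16.127.153/
# any HTTP response (200 or a redirect to the login page) means the UI is up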

Then make sure your SSO and vCenter associations are configured on the setup page:

Once deployed and linked to your vCenter and vCenter Lookup Service, log out from both vCenter and NSX Manager and log back into vCenter. You should now see the Networking & Security menu item; click on it.

 

3) It's time to deploy our single controller. VMware only officially supports three-controller deployments in production, but we won't need more than one for our lab.

You will have to:
– Select the NSX Manager in the drop-down
– Select the DC
– Select the cluster/resource pool created in Part 1
– Select your datastore (our iSCSI one, Tier1)
– Select your Distributed Switch port group
– Create an IP pool in the same subnet (it's still a lab!) for the controller(s) IPs, as sketched below
– Enter the controller password twice
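As a purely illustrative example of what that IP pool could look like (the name and range are hypothetical; just keep them in the lab's /24 and outside any addresses already in use):

Name:        controller-pool
Gateway:     172.16.127.2
Prefix:      24
Primary DNS: 172.16.127.160
Range:       172.16.127.170 - 172.16.127.179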

The deployment of your controller will start; wait (a while) until its status becomes "Normal". Once it is up and running, shut it down and change its settings to 2 vCPUs (down from 4) and 1024MB of RAM (down from the default). Thanks to Dale Coghlan from VMware, who came up with these lab specs. Start your controller again and wait until its status goes back to "Normal".

4) Prepare your host in the Host Preparation tab

Once done, click on Configure for VXLAN and set up its IP pool (leave 0 in the VLAN field).

And finally, in the Logical Network Preparation tab, configure your segment IDs as well as your transport zone (leave it set to Unicast).
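For illustration only (the exact values are up to you; NSX expects segment IDs of 5000 or higher, and the transport zone name below is hypothetical):

Segment ID pool: 5000-5999
Transport zone:  "TZ-Lab", control plane mode Unicast, containing our single cluster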

That's it… NSX is now running on your laptop and you can test its API, deploy logical switches and routers, configure dynamic routing, or even create a VPN to a remote NSX implementation.
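As a starting point for playing with the API, here is a minimal sketch of a read-only call against the NSX Manager deployed above (basic auth as admin, -k because of the self-signed certificate); it lists the transport zones (scopes) known to the manager:

curl -k -u admin:'<your password>' https://nsx.tomlab.com/api/2.0/vdn/scopes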

Keep in mind that you can pause your whole lab following this sequence in Fusion (a scripted alternative is sketched below):

– Pause ESX
– Pause NS1
– Pause iSCSI SAN

To restore your lab, reverse the order:

– Restore iSCSI SAN
– Restore NS1
– Restore ESX
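If you would rather script this than click through Fusion, vmrun (bundled with Fusion and Workstation) can pause and unpause VMs from the command line. A minimal sketch, with hypothetical .vmx paths that you will need to point at your own VMs:

# pause the lab
vmrun pause ~/VMs/esxi1.vmwarevm/esxi1.vmx
vmrun pause ~/VMs/ns1.vmwarevm/ns1.vmx
vmrun pause ~/VMs/san.vmwarevm/san.vmx

# restore it, in reverse order
vmrun unpause ~/VMs/san.vmwarevm/san.vmx
vmrun unpause ~/VMs/ns1.vmwarevm/ns1.vmx
vmrun unpause ~/VMs/esxi1.vmwarevm/esxi1.vmx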

Bonus: the total size of our setup:

Run vSphere 6 + NSX + iSCSI on a laptop – Part 1 (vSphere)

Quite a few of the organisations I have been dealing with recently are wondering how to test NSX, and specifically its API. While you can do some of that online using VMware's excellent HOL, you might want to experience it offline, on your own machine and on your own terms, and this is made possible by VMware Workstation or Fusion.

I have chosen a nested setup over a native one because it offers a great advantage: you can bring a relatively complex lab back to life in less than two minutes and run API calls or a demo for your customers immediately after.

The proof (video):

 

Our lab will include:

– An ESXi 6 host
– A vCenter 6 server
– An iSCSI software SAN
– An NSX 6.2 manager appliance
– An NSX 6.2 controller
– An NTP/DNS Linux server

Now let’s proceed with the first part, ESXi 6 and vCenter 6.

A laptop or a desktop will work the same way, but 16GB of RAM and a decent CPU with VT-x enabled are mandatory.

At a high level, the steps are to:
– Run ESXi 6 and vCenter 6 in a nested environment
– Install and configure NSX 6 on this environment
– Finally, deploy virtual switches and test connectivity

My configuration is an i7 MacBook Pro from last year, with 16GB of RAM running Fusion Pro. Please let us know in the comments if you got this to work on a different / more modest configuration.

Let’s get vSphere to run first.

In order to keep things simple (and quick), we will configure 3 VMs at the Fusion (or Workstation) level:

1) A DNS/NTP server

Virtual specs: 1 vCPU, 512MB of RAM, 1 NIC, NAT'ed (use the "Share with my Mac" feature, or the Windows equivalent).

Install the OS using a CentOS 6 ISO.

Once done, you should be able to log into your CentOS machine via console or SSH (after grabbing the IP address) and ping the outside world.

Just run the following commands to install and configure the required services (thanks Gilles for the comment):

yum update
yum install ntp bind bind-utils
chkconfig ntpd on
chkconfig named on

Then edit the networking configuration file /etc/sysconfig/network-scripts/ifcfg-eth0:
Set a static IP (172.16.127.160/24), the gateway (your computer's NAT gateway, 172.16.127.2 in this lab) and a public DNS server (8.8.8.8).
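A minimal sketch of what that file could look like with the addressing used throughout this lab (keep the HWADDR line from your existing file if there is one):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.127.160
NETMASK=255.255.255.0
GATEWAY=172.16.127.2
DNS1=8.8.8.8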

Then edit your named.conf file:

/etc/named.conf

And change the following:

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
    // Allows any query, it's a lab...
    listen-on port 53 { any; };
    listen-on-v6 port 53 { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };
    recursion yes;
    // The following setting forwards non-authoritative stuff to:
    forwarders { 172.16.127.2; };
    dnssec-enable no;
    dnssec-validation no;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    managed-keys-directory "/var/named/dynamic";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

// Definition of our master zone
zone "tomlab.com" IN {
    type master;
    file "tomlab.zone";
};

zone "." IN {
    type hint;
    file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Now let's edit our zone file, /var/named/tomlab.zone (we will also put the nested VMs' records in there):

$ORIGIN tomlab.com.
$TTL 86400
@        IN SOA   ns1.tomlab.com. hostmaster.tomlab.com. (
                  2015070701 ; serial
                  21600      ; refresh after 6 hours
                  3600       ; retry after 1 hour
                  604800     ; expire after 1 week
                  86400 )    ; minimum TTL of 1 day

@        IN NS    ns1.tomlab.com.

ns1      IN A     172.16.127.160
esxi1    IN A     172.16.127.150
esxi2    IN A     172.16.127.151
san      IN A     172.16.127.152
nsx      IN A     172.16.127.153
vcenter  IN A     172.16.127.155

And finally let’s disable SELinux

setenforce 0

Disable at startup in /etc/sysconfig/selinux

SELINUX=disabled

Now add this in /etc/ntp.conf

restrict 172.16.127.0 mask 255.255.255.0 nomodify notrap
# and
server au.pool.ntp.org iburst

Start your NTP and DNS services and make sure they will start with your VM.
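On CentOS 6 that boils down to the following (the chkconfig commands run earlier already take care of starting them at boot):

service named start
service ntpd start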

To check that everything is working fine, run nslookup against google.com and esxi1.tomlab.com; you should get answers for both. Running ntpdate from another VM on your network (or from your computer itself) should also succeed.
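For example, using our ns1 server at 172.16.127.160 (the -q flag makes ntpdate query the server without changing the local clock):

nslookup google.com 172.16.127.160
nslookup esxi1.tomlab.com 172.16.127.160
ntpdate -q 172.16.127.160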
Finally, make sure that your firewall allows requests on ports 53 (UDP) for DNS and 123 (UDP) for NTP.

2) An iSCSI SAN

Virtual specs: 1 vCPU, 1GB of RAM, 1 NIC, NAT'ed (use the "Share with my Mac" feature, or the Windows equivalent).

Once your VM is deployed, follow these instructions to configure Openfiler. We will configure the iSCSI software adapter on our ESXi host later on (a command-line sketch follows below).

If you don't want to use iSCSI, just give your ESXi host more storage in step 3); 100GB should be more than enough. If you do, give the iSCSI SAN 100GB, thin provisioned.
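For reference, once the ESXi host from the next step is up, the iSCSI software adapter can be pointed at this SAN from the ESXi command line. A rough sketch, where the adapter name vmhba33 is an assumption (check yours with the list command) and 172.16.127.152 is the SAN address from our DNS zone:

# enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# find the adapter name (vmhba3x)
esxcli iscsi adapter list
# add the Openfiler target and rescan
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 172.16.127.152:3260
esxcli storage core adapter rescan --all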

3) An ESXi 6 host (4 vCPUs, 13656MB of RAM, 2 NICs)

Virtual specs: 4 vCPUs, 13656MB of RAM, 2 NICs (we will use the second one later), NAT'ed (use the "Share with my Mac" feature, or the Windows equivalent).

In Fusion, you should be able to select ESXi as the OS type (it is actually detected from the ISO), customise your settings to add more CPU/RAM, and start your installation.

This is what my host looks like after installation:

Once this third VM is installed, we are done with the top-level virtual infrastructure and will focus on the nested infrastructure, starting with vCenter 6.

vCenter 6 Installation

Use a Windows host or VM to perform the installation of vCenter 6. Unlike version 5, the vCenter appliance has to be deployed from a guest OS onto an ESXi host. Just mount the vCenter 6 ISO, install the Client Integration Plugin and run vcsa-setup (it will open a browser-based wizard).

Use the ESXi VM as a target and deploy your server. Once deployed (and it might take a while), don't start the VM; instead, connect to your virtual ESXi host and make sure that your settings are as follows:

Now you can start the VM. Once all the services are started, open a browser and navigate to:

https://<the IP you chose during setup>:443

This is where things get interesting: you will add your ESXi VM as a host in your nested vCenter… why not? They're on the same subnet after all. Create a DC and a cluster (important for NSX), enable HA/DRS if you like, and add a host to the cluster using the IP address and credentials of your ESXi installation.

Finally, let's put the second NIC in a distributed switch; it will make working with NSX easier.

Once your host is added and you can deploy nested OVFs, it's time to deploy your NSX Manager appliance. We will focus on NSX installation and configuration in Part 2.