Cloud Computing Lab vSphere

Hardware and Licensing Requirements

This page catalogs the hardware, software, and licensing used to successfully test and ultimately create the Cloud Computing Lab. It is not necessarily a minimum-requirements document, because some aspects may be achievable with a less robust configuration.

Required Hardware

CCL Master Server Box: Dell PowerEdge R710

  • BIOS 2.1.9
  • 2x Intel Xeon E5620, 2.40 GHz, quad-core
  • Level 2 cache: 4x 256 KB
  • Level 3 cache: 12 MB shared (3 MB per core)
  • Memory: 144 GB ECC DDR3, 1067 MHz
  • DVD-ROM
  • 4x 146 GB 10,000 RPM HDD
  • 4x 900 GB 10,000 RPM HDD
  • Hardware RAID controller: PERC H700 Integrated
  • 1x 10 Gb/s Network Interface Card (NIC) (additional)
  • 4x 1 Gb/s Ethernet ports
  • 2x 10 Gb/s Ethernet ports

CCL Switch Environment: 2 Cisco 2960-S switches (Cisco Catalyst 2960S-24TD-L)

  • 24 Gigabit Ethernet ports
  • 1G/10G SFP+ slots
  • USB interfaces for management and file transfers
  • LAN Base or LAN Lite Cisco IOS® Software feature set

CCL Lab Workstation Environment: 20 Dell OptiPlex 790 workstations

  • Windows 8
  • Intel Core i7 @ 3.40 GHz
  • 8 GB RAM
  • 500 GB hard drive
  • Integrated NIC (PXE boot enabled)

Required Software and Licensing

CCL Master Server Box OS: ESXi 5.5

CCL VMware Environment: vSphere 5.5

  • 1 vSphere 5.5 license
  • 21 ESXi 5.5 licenses (for the server and hosts)
    • each host machine needs at least 2 CPU cores
    • each host machine needs a minimum of 4 GB of RAM

Setting up Auto Deploy

Auto Deploy flowchart

VMware Auto Deploy Administrator’s Guide: http://labs.vmware.com/wp-content/uploads/2010/08/autodeploy_fling.pdf


Required client/server resources:

  • vCenter Server Appliance (vCSA) to host the DHCP, TFTP, and Auto Deploy services (a sample DHCP scope is sketched after this list)
  • ESXi 5.5 server
  • PXE-bootable client workstation(s)
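
As one illustration of the DHCP piece, the following is a minimal sketch of a PXE scope, assuming an ISC-style dhcpd configuration; the subnet, address range, and TFTP server address are placeholders for this lab, and the boot file name comes from the Auto Deploy TFTP Boot ZIP:

    # Hypothetical DHCP scope for the PXE-boot workstations (placeholder addresses)
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.120;        # addresses set aside for PXE-booted hosts
      next-server 192.168.1.10;                 # TFTP server (the vCSA)
      filename "undionly.kpxe.vmw-hardwired";   # gPXE loader from the TFTP Boot ZIP
    }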

Required implementation software:

  • VMware vCenter Server 5.5 and the vSphere Client
  • VMware vSphere PowerCLI
  • The Auto Deploy GUI fling from VMware
  • ESXi-Customizer (for the Vib2Zip tool)
  • An ESXi 5.5 offline depot (.zip) and the net-e1001e NIC driver .vib

Optional (but helpful) software:

  • A non-IE browser (Chrome, Firefox, etc.)

Server Setup

The steps for configuring the CCL environment can be found in the Configuration Guide.


Setting up Auto Deploy

  • Once the TFTP server is configured and vCenter and the vSphere Client are installed, you can begin preparing for Auto Deploy. First, open Windows PowerShell and enter the command “Set-ExecutionPolicy RemoteSigned”. This allows PowerCLI commands and scripts to run so that you can administer Auto Deploy from the GUI (a PowerCLI sketch of the depot, image profile, and deploy rule steps below follows this list).
  • Second, download the Auto Deploy GUI fling from VMware (obtainable at http://labs.vmware.com/flings/autodeploygui - be sure to select the correct version in the drop-down box!) and install it.
  • Next, provide the necessary TFTP boot information. Go to the Home menu in the vSphere Client and click the Auto Deploy button (the green arrow). There should be a link that reads “Download TFTP Boot ZIP”. (NOTE: In Internet Explorer, first click Tools > Internet Options > Security > Custom Level and select the radio button that enables file downloads, or you will be unable to obtain the file!) Once the file is downloaded, unzip it and place its contents in the TFTP_Root folder you created earlier.
  • Next, use the Vib2Zip application included in the ESXi-Customizer download (Dropbox URL: dl.dropbox.com/u/97021501/ESXi-Customizer-v2.7.1exe) to convert the NIC drivers packaged in the .vib file downloaded from Dropbox (Dropbox URL: http://dl.dropbox.com/u/97021501/net-e1001e-1.0.0.x86_64.vib) into .zip format.
  • In the vSphere Client’s Home menu, click the Auto Deploy button. In the Software Depot tab, right-click the upper frame and select Add .zip Depot. Navigate to the folder containing your ESXi depot and add it, then do the same with the newly converted drivers. Then right-click again and select Add HA Depot to pull the required files from VMware’s servers.
  • In the Image Profile tab, right-click the VMWare-ESXi-799733-standard image profile and select Clone to create an editable copy with whatever name you choose. This is the image that will ultimately be deployed to the PXE-booted workstations. Be sure to set the acceptance level to community supported in the drop-down menu so that you can add non-VMware software packages to the image. When the client asks if you wish to commit this change, click NO.
  • Right-click the new image and select Add Software Packages, then specify the drivers you converted from the .vib file and commit the change.
  • In the Deploy Rule tab, right-click the upper frame and create a new rule, specifying the domain on which the rule will be active and the IP range corresponding to the DHCP scope you set aside for PXE booting your workstations.
  • After the rule has been created, right-click it and set it to Active.
  • Attempt to PXE boot a workstation. If the Auto Deploy configuration was successful, a boot dialogue should run automatically, ending with “Sleeping for five minutes and then rebooting.” Now that the hosts and the deploy environment are provisioned, you are ready to create answer files and proceed to the next deployment steps.
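
The depot, image profile, and deploy rule steps above can also be scripted instead of using the GUI. The following is a minimal PowerCLI sketch, run from the VMware vSphere PowerCLI console; the vCenter address, depot paths, the CCL-Deploy profile name, the driver package name, and the IP range are placeholder assumptions and should be replaced with the values used in your environment.

    # Connect to vCenter (placeholder address)
    Connect-VIServer -Server 192.168.1.10

    # Add the ESXi offline depot and the converted driver .zip (placeholder paths)
    Add-EsxSoftwareDepot C:\Depots\ESXi-5.5.0-offline-bundle.zip
    Add-EsxSoftwareDepot C:\Depots\net-e1001e-drivers.zip

    # Clone the standard image profile into an editable, community-supported copy
    New-EsxImageProfile -CloneProfile "VMWare-ESXi-799733-standard" -Name "CCL-Deploy" -Vendor "CCL" -AcceptanceLevel CommunitySupported

    # Add the converted NIC driver package to the cloned profile
    Add-EsxSoftwarePackage -ImageProfile "CCL-Deploy" -SoftwarePackage "net-e1001e"

    # Create a deploy rule matching the DHCP scope set aside for PXE workstations, then activate it
    New-DeployRule -Name "CCL-Workstations" -Item "CCL-Deploy" -Pattern "ipv4=192.168.1.100-192.168.1.120"
    Add-DeployRule -DeployRule "CCL-Workstations"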

Integrating the vCenter Server Appliance with NETLAB

Setting up a trunk line between the large ESXi server and NETLAB:

  • At least one NIC needs a cable running from the ESXi host to the control switch associated with NETLAB. The switch port must also be configured as a trunk in order to allow proper communication between NETLAB and the contents of the vCSA’s datastore.
  • Console into the control switch using the appropriate credentials (you should use the defaults suggested by the NETLAB documentation to maintain proper automation and support compatibility).
  • Input the following commands:
    • interface x/x
    • description inside connection for ESXi Server
    • switchport mode trunk
    • switchport nonegotiate
    • no switchport access vlan
    • no shutdown

Create a NETLAB+ user on the appliance:

  • Log in to the appliance’s CLI with the username and password you configured when you built it out from the .ovf file
  • Enter useradd -m NETLAB to create the new user
  • To set the new user’s password, enter passwd NETLAB. You will be prompted to enter and then confirm the new password for the NETLAB user

Create a NETLAB role in the appliance

  • Enter the appliance through vSphere and click on Administration > Roles.
  • Right click the Administrator role and select Clone, entering NETLAB for the new role object’s name.
  • Right-click on the NETLAB role and select Add Permission.
  • In the window that appears, click Add and then select the NETLAB account and click OK.
  • Back in the Assign Permissions window, use the drop-down menu on the right to select NETLAB and associate the cloned administrative permissions with the NETLAB user you created earlier (a PowerCLI sketch of this role setup follows this list).
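
For reference, the same role and permission assignment can also be done from PowerCLI. This is a minimal sketch under assumptions not in the original procedure: the appliance address is a placeholder, the permission is applied at the root of the inventory, and the NETLAB principal name may need to be qualified depending on how the account was created.

    # Connect to the appliance (placeholder address)
    Connect-VIServer -Server 192.168.1.10

    # Clone the built-in Administrator ("Admin") privileges into a new NETLAB role
    New-VIRole -Name "NETLAB" -Privilege (Get-VIPrivilege -Role (Get-VIRole -Name "Admin"))

    # Grant the NETLAB user that role at the top of the inventory, propagating to child objects
    New-VIPermission -Entity (Get-Folder -NoRecursion) -Principal "NETLAB" -Role "NETLAB" -Propagate:$true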

Create a new vSwitch and bind it to a physical NIC

  • In the appliance’s vSphere view, navigate to Inventory > Hosts and Clusters and click on the ESXi host you want to configure in the left pane.
  • In the main pane, click Configuration and then click Networking in the Hardware Group box, then click Add Networking in the upper left.
  • To allow the ESXi host kernel to communicate with the inside NETLAB network, select the VMkernel radio button and click Next.
  • Select the Create a Virtual Switch radio button, then select the physical NIC that’s associated with the trunk line to the control switch.
  • In the next screen, set the Network Label to “NETLAB Inside” and check the box labeled “Use this port group for management traffic”.
  • Enter a unique IP address from the table that appears on page 77 of NetDevGroup’s “Remote PC Guide Series – Volume 2” document (a PowerCLI sketch of this vSwitch setup follows this list).
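
The same vSwitch and VMkernel port can also be created from PowerCLI. This is a minimal sketch with assumed values: the host name, the vmnic1 uplink, and the IP address are placeholders; use the physical NIC cabled to the NETLAB control switch and the address assigned from the NETLAB guide’s table.

    # Placeholder host name and uplink NIC (use the NIC attached to the trunk line)
    $vmhost  = Get-VMHost -Name "esxi01.ccl.local"
    $vswitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic "vmnic1"

    # Create the "NETLAB Inside" VMkernel port for management traffic
    # (placeholder IP; use the address from the NETLAB Remote PC Guide table)
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "NETLAB Inside" -IP 169.254.0.241 -SubnetMask 255.255.255.0 -ManagementTrafficEnabled:$true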