Cloud Computing Lab PXE Booting

The Steps for PXE Booting

Pre-Boot Execution Environment (PXE):

  • An extension of DHCP, with TFTP used for file transfer
  • Implemented in the NIC and BIOS
  • Used to load operating system images onto a target computer
  • Requires NIC capable of PXE (e.g. Intel® 82574L Gigabit Ethernet Controller)
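
At a high level the exchange works as follows (a simplified outline of the standard PXE flow; boot file names vary by environment):

  1. The client NIC broadcasts a DHCP discover identifying itself as a PXE client
  2. The DHCP server responds with an IP address, the TFTP server address (option 66), and a boot file name (option 67)
  3. The client downloads the boot file over TFTP and executes it
  4. The boot loader then loads the operating system image (in our case, ESXi 5.5) into RAM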

Hardware Required for PXE

[Photo: Intel® 82574L Gigabit Ethernet Controller NIC]


  • We needed NICs that support PXE booting and Wake on LAN
  • Ordered three Intel® 82574L Gigabit Ethernet Controllers
  • Cost ~$40-50 each
  • Installed the NICs and the current Intel driver

Features and Benefits of the Intel® 82574L Gigabit Ethernet Controller:

  • Compatible with Fast Ethernet and Ethernet
  • 10/100/1000 Mbps auto-negotiation
  • Support for most network operating systems
  • Advanced configuration and power interface (ACPI)
  • Wake on LAN (WoL); Preboot Execution Environment (PXE)
  • Allows low-power consumption, remote wake, and remote booting
  • Remote management support
  • Intel® PROSet Utility for Microsoft Device Manager
  • Advanced cable diagnostics
  • Backed by Intel support
  • Optimized queues: 2 Transmit (Tx) and 2 Receive (Rx)
  • MSI-X support

PXE Setup and Diskless Hosts

In our environment, the host systems (the lab desktop workstations) must be prepared in advance for their virtual state. To do this, the systems are set up to PXE boot:

  • The boot order is set in BIOS
  • Devices are set to Wake on LAN
  • Systems will be diskless (local hard drive not accessed)
  • Systems will boot ESXi 5.5 into RAM from the TFTP server

Diskless hosts:

  • No local disk needed
  • Requires PXE bootable system
  • Loads OS into RAM from network
  • Install and post-install configuration are not persistent
  • Persistent storage required to save changes
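
Because nothing written to a diskless host survives a reboot, per-host settings are typically pushed back in at boot time or pointed at persistent storage. As one hedged example, the host's logs can be redirected to a syslog collector with PowerCLI (the vCenter, host, and collector addresses below are illustrative):

Connect-VIServer -Server vcenter.example.edu   # hypothetical vCenter address
$esx = Get-VMHost -Name "lab-esxi-01.example.edu"   # hypothetical host name
# Point the host's syslog output at a persistent collector
Get-AdvancedSetting -Entity $esx -Name "Syslog.global.logHost" |
    Set-AdvancedSetting -Value "udp://192.0.2.20:514" -Confirm:$false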

Wake on LAN

[Screenshot: BIOS Wake on LAN setting]

The image above shows the BIOS settings required for the NIC to support Wake on LAN.
NOTE: Machines must be fully powered off for ESXi to boot on wake-up. If a machine is only asleep, it will boot to the installed OS.

[Screenshot: NIC Power Management tab]



To set up Wake on LAN for the NIC:

  • Open Network and Sharing Center
  • Select Local Area Connection
  • This opens a "Local Area Connection Status" window
  • Select Properties
  • Select the Configure button
  • This opens the "Intel Gigabit CT Desktop Adapter Properties" window
  • Select the Power Management Tab
  • Configure the Wake on LAN settings as shown in the Power Management tab screenshot above
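
Once the adapter is armed, the configuration can be tested by sending a magic packet from another machine. Below is a minimal PowerShell sketch; the MAC address is a placeholder for the target NIC's address:

# Build the magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times
$mac = "00-1B-21-AA-BB-CC" -split "-" | ForEach-Object { [byte]("0x" + $_) }   # placeholder MAC
$packet = [byte[]]((,0xFF) * 6 + ($mac * 16))
# Broadcast it on UDP port 9, one of the ports commonly used for WoL
$udp = New-Object System.Net.Sockets.UdpClient
$udp.Connect([System.Net.IPAddress]::Broadcast, 9)
[void]$udp.Send($packet, $packet.Length)
$udp.Close()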

Configure the PXE Service Point

To share the lab equipment between a virtualized lab designed for remote users and a classroom setting for DTCC students physically on campus, we had to determine how to move the systems from one working state to another and automate the process (i.e. power on/off, Windows 7 on the local drive versus ESXi 5.5 running in memory). Several scenarios are outlined below, covering the possible workstation states and the items to consider when changing state, including security.

Requirements

  • PCs power on in the morning via the BIOS auto power-on setting
  • PCs restart/boot into ESXi in the evening
  • PCs shut down ESXi at night

PC States

  • Off
  • Running lab OS (Windows 7)
  • Running ESXi 5.5

Here are the three state transitions and how each is handled:

  1. PC boots in the morning from OFF to the lab OS (Windows 7)
    • Done in the BIOS through auto power-on settings
    • The local HDD is accessed
    • The BIOS can be locked with a password for added security
  2. PC boots in the evening from OFF to ESXi
    • PCs must be manually shut down at the end of the last class each day
    • A Wake on LAN magic packet is used
    • The PC is woken at a specified time in the evening (e.g. 6 pm)
    • The PXE boot request is fulfilled
    • The ESXi image is pushed out to the PC
  3. PC shuts down at night from ESXi
    • Set up a script using PowerCLI to shut down the PC from vCenter Server (see the sketch after this list)
    • The VMs are shut down first, then the host
    • A clean shutdown requires that the VMs have VMware Tools installed
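
Below is a minimal PowerCLI sketch of the nightly shutdown; the vCenter and host names are illustrative, and the fixed sleep stands in for proper polling of the guest shutdowns:

Connect-VIServer -Server vcenter.example.edu   # hypothetical vCenter address
$esx = Get-VMHost -Name "lab-esxi-01.example.edu"   # hypothetical host name
# Ask each powered-on VM's guest OS to shut down (requires VMware Tools)
Get-VM -Location $esx | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Shutdown-VMGuest -Confirm:$false
# Give the guests time to finish shutting down
Start-Sleep -Seconds 180
# Then power off the host itself
Stop-VMHost -VMHost $esx -Force -Confirm:$false
Disconnect-VIServer -Confirm:$false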

Configuring Scheduling:

  • Configure the lab computers for PXE booting and Wake on LAN
  • Onboard NIC: in the BIOS, enable PXE booting and Wake on LAN, and also enable them in the internal NIC settings in the BIOS
  • Add-in NIC card: in the BIOS, enable PXE booting and Wake on LAN; then configure the NIC card in Windows Device Manager, setting Wake on Magic Packet, Wake on Pattern Match, and Wake on Magic Packet from power off state under Power Management
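
The evening wake-up can then be scheduled from a management machine, for example as a Windows scheduled task that runs a wake script each day at 6 pm (the task name and script path are illustrative):

schtasks /create /tn "WakeLabPCs" /tr "powershell.exe -File C:\Scripts\Wake-Lab.ps1" /sc daily /st 18:00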

Add boot images to the PXE Service Point

  1. Set up vCenter server
  2. Create C:\TFTP_Root\
  3. Install Auto Deploy on vCenter
  4. Download the “TFTP Boot ZIP” from the Auto Deploy page in vCenter and extract these files to C:\TFTP_Root\
  5. Install Tftpd64 with “Run as administrator”
  6. Allow Tftpd64 in the Windows firewall as an inbound rule
  7. Set the “Base Directory” of Tftpd64 to the folder you extracted the “TFTP Boot ZIP” to. (Example: C:\TFTP_Root\ or C:\TFTP_Root\deploy-tftp)
  8. Check the box for PXE Compatibility, and also check the box for “Allow ‘\’ As virtual root”
  9. Leave the Tftpd64 open to allow clients to PXE boot
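
The DHCP server must also point PXE clients at this TFTP server: option 66 carries the TFTP server address and option 67 the boot file name (for Auto Deploy, the undionly.kpxe.vmw-hardwired file from the TFTP Boot ZIP). Finally, Auto Deploy needs a rule mapping hosts to an ESXi image. A minimal PowerCLI sketch, with the depot path and rule name as illustrative assumptions:

Connect-VIServer -Server vcenter.example.edu   # hypothetical vCenter address
# Load an ESXi 5.5 offline depot and pick an image profile from it
Add-EsxSoftwareDepot C:\Depots\VMware-ESXi-5.5.0-depot.zip
$img = Get-EsxImageProfile | Select-Object -First 1
# Create a rule that serves this image to all hosts, then activate it
New-DeployRule -Name "LabImage" -Item $img -AllHosts
Add-DeployRule -DeployRule "LabImage"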

Time-Based ACLs for DHCP Blocking

Initial configs for the time-based ACL test:

[Screenshot: initial configs]



Cisco link for performing the setup on a WS-C2960:

http://www.cisco.com/en/US/docs/switches/lan/catalyst2960/software/release/12.2_55_se/configuration/guide/swacl.html#wp1035167

The switch can either keep its own time or be configured to synchronize with an NTP server. Using an NTP server requires configuration on both the switches and the server itself, including matching MD5 key strings and other settings.
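
A hedged sketch of the switch-side NTP commands (the server address and key string are illustrative):

ntp authentication-key 1 md5 LabNtpKey
ntp authenticate
ntp trusted-key 1
ntp server 192.0.2.50 key 1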

The actual configs that need to be placed on the switches so that DHCP is directed as required are shown below.

These are the configs needed to make the blade server stop handing out DHCP addresses while actual classes are taking place in the Newton Building, room 456.

[Screenshot: time-range config for the blade server]

[Screenshot: IP access list]

[Screenshot: FastEthernet interface config]
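
As a sketch of the general shape of these configs (the names, times, and interface below are illustrative; the screenshots show the production values):

time-range BLADE-CLASS-HOURS
 periodic weekdays 8:00 to 17:00
!
ip access-list extended BLOCK-BLADE-DHCP
 deny udp any any eq bootps time-range BLADE-CLASS-HOURS
 permit ip any any
!
interface FastEthernet0/1
 ip access-group BLOCK-BLADE-DHCP in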

These are the configs needed to block incoming DHCP traffic from the rest of Durham Tech into the Newton 456 lab while the computers are acting as part of the vSphere cluster.

[Screenshot: extended IP access list]

[Screenshot: switchport trunk config]

A final access list will need to be placed on the trunk port where the closet switch connects to the rest of the Durham Tech network.

This access list will block all DHCP traffic coming into that port at all times. This will keep Newton 456 from creating problems on the rest of the Durham Tech network.

The commands for this ACL are:

access-list 130 deny udp any any eq 67
access-list 130 deny udp any any eq 68
access-list 130 permit ip any any
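
UDP port 67 is the BOOTP/DHCP server port and UDP port 68 is the client port, so the two deny lines drop all DHCP traffic while the final line permits everything else.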

The time-range commands used in the selective blocking and permitting of the other ACLs will need to be modified each semester to ensure the lab DHCP traffic is managed properly. The time-range command can hold multiple time periods per named time-range, so the ACLs can remain unchanged, with only the time ranges being altered to control the traffic.
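
For example, updating the schedule is just a matter of replacing the periodic entries under the named time-range (the times shown are illustrative):

conf t
time-range BLADE-CLASS-HOURS
 no periodic weekdays 8:00 to 17:00
 periodic weekdays 9:00 to 18:00
 periodic Saturday 9:00 to 13:00
end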