I recently decided to upgrade my vSphere 5 lab and move my storage off to a separate network. The following diagram describes my new topology.
Before doing this I had everything on the same network; by isolating all of the storage traffic I was able to enable Jumbo Frames and bring this environment a bit closer to VMware's best practices for NFS storage.
To create the new network, I first changed the Iomega IX4-200D's network settings.
I put each Gigabit network card on a separate network: the 192.x.x.x network is for managing the ESXi environment, and the 10.x.x.x network is the storage network. Next I enabled Jumbo Frames on the NIC and set the MTU to 9000.

Following this change I created a new VMkernel port on each ESXi host using the vSphere Client and added two physical network cards to it. I made sure to enable Jumbo Frames on both the new vSwitch that is created for the VMkernel port and the VMkernel port itself, setting both to 9000 to match the Iomega settings. I also enabled NIC Teaming on the two network cards.

Note: although NIC Teaming will not increase speed for a single NFS datastore, it does help when there are multiple NFS datastores, and VMware recommends using multiple NFS datastores.
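For anyone who prefers the command line, the same vSwitch and VMkernel setup can also be done with esxcli on an ESXi 5 host. This is only a sketch: the names and addresses (vSwitch1, the NFS-Storage port group, uplinks vmnic2/vmnic3, host IP 10.0.0.11, NAS at 10.0.0.50) are assumptions for illustration and will differ in your environment.

```shell
# Assumed names for illustration: vSwitch1, port group "NFS-Storage",
# spare uplinks vmnic2/vmnic3, host storage IP 10.0.0.11, NAS at 10.0.0.50.

# Create the new vSwitch and set its MTU to 9000 for Jumbo Frames
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Attach the two physical NICs (these two uplinks form the NIC team)
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Create a port group and a VMkernel interface with a matching 9000 MTU
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS-Storage
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS-Storage
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.11 --netmask=255.255.255.0 --type=static

# Verify Jumbo Frames end to end: an 8972-byte payload plus 28 bytes of
# IP/ICMP headers equals 9000, and -d forbids fragmentation
vmkping -d -s 8972 10.0.0.50
```

The vmkping check at the end is worth running from every host; if it fails while a plain vmkping succeeds, some device in the storage path is not passing Jumbo Frames.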
Once the network and NAS settings are complete, the NFS datastores can be created normally.
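Mounting the datastores can likewise be scripted with esxcli instead of the vSphere Client. The NAS address, export path, and datastore name below are assumptions for illustration.

```shell
# Assumed values for illustration: NAS at 10.0.0.50 exporting /nfs/vmstore
esxcli storage nfs add --host=10.0.0.50 --share=/nfs/vmstore --volume-name=IX4-NFS01

# Confirm the datastore is mounted
esxcli storage nfs list
```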