
10GbE and 1GbE mixed setup best practice


Hi there!

 

Soon we'll finally have 10GbE with our new ESXi hosts, and I'm wondering how I should set up the different vSS (and probably vDS next year, after the planned upgrade) to get the best balance of speed and redundancy. The hosts will have 2x 10GbE (on one NIC) and 4x 1GbE (those are onboard and guaranteed to be on one NIC, too).

 

The current hosts have 6x 1GbE across two NICs. One port from each NIC is in a vSwitch for the vMotion (vmk0) and management (vmk1) adapters; the remaining four are in another vSwitch for the VMs. All physical ports are active. We use NFS, which is in the same subnet as vmk0.

 

I have no good idea how to combine the 10GbE and 1GbE ports. First, I'll probably create a new vmk for the NFS traffic*, or is it okay to run those two in the same VLAN like we do now? After the migration to the new hosts we will have a maximum of three hosts.

Then I'll put either one or two 1GbE ports into the first vSwitch for the management vmk adapter, and the rest into another vSwitch. There, both 10GbE adapters will be active and the 1GbE adapters will be on standby.
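For reference, here's a rough esxcli sketch of what I mean for that second vSwitch (the vmnic numbering and the vSwitch name are just placeholders, of course):

```shell
# Create the VM/storage vSwitch and attach all four uplinks
# (assuming vmnic4/vmnic5 = the 10GbE ports, vmnic0/vmnic1 = two onboard 1GbE ports)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# 10GbE active, 1GbE on standby
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic4,vmnic5 \
    --standby-uplinks=vmnic0,vmnic1
```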

 

Would this be a valid and sensible way to distribute the physical ports among the vSwitches? The physical switch on the other side is of course stacked, and I'll connect to it redundantly like we already do.

 

Kind regards,

Chris

 

*I'll probably just create new vMotion vmkernel adapters instead, so I don't have to change anything on the storage side, and just create a new VLAN and subnet for vMotion.
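In case it helps, this is roughly what I have in mind for the new vMotion vmk (the VLAN ID, subnet, and interface names are made up for the example):

```shell
# New port group on its own VLAN for vMotion
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=42

# New vmkernel adapter in the new subnet, tagged for vMotion traffic
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.42.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion
```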

