Measuring Virtual Networking Overhead

After discussing [ovs-discuss] OVS performance in a 'worst case scenario', with ovs-vswitchd pegged at up to 100% CPU, one of my colleagues had a good idea: tcpdump the physical interface and the vif at the same time. The difference between when a packet reaches the vif and when it reaches the physical device can help measure the time spent in the userspace-to-kernelspace transit. Of course, virtual switches aren't the only culprit in virtual networking overhead; virtual networking is a very complex topic....
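The timestamp-diff idea can be sketched roughly as follows. Assuming simultaneous captures taken with `tcpdump -tt` (epoch timestamps) on the vif and on the physical NIC, packets can be matched up (here by ICMP sequence number) and the per-packet timestamp delta gives an estimate of the transit time. The interface names, sample capture lines, and the `parse` helper below are all hypothetical, for illustration only.

```python
# Hypothetical tcpdump -tt output; real captures would come from something like:
#   tcpdump -tt -ni vnet0 icmp > vif.txt &
#   tcpdump -tt -ni eth0  icmp > phy.txt &
import re

vif_lines = [
    "1369800000.000100 IP 10.0.0.5 > 10.0.0.1: ICMP echo request, id 1234, seq 1, length 64",
    "1369800001.000150 IP 10.0.0.5 > 10.0.0.1: ICMP echo request, id 1234, seq 2, length 64",
]
phy_lines = [
    "1369800000.000350 IP 10.0.0.5 > 10.0.0.1: ICMP echo request, id 1234, seq 1, length 64",
    "1369800001.000420 IP 10.0.0.5 > 10.0.0.1: ICMP echo request, id 1234, seq 2, length 64",
]

def parse(lines):
    """Map ICMP seq number -> capture timestamp (seconds since epoch)."""
    out = {}
    for line in lines:
        m = re.match(r"(\d+\.\d+) .*seq (\d+)", line)
        if m:
            out[int(m.group(2))] = float(m.group(1))
    return out

vif, phy = parse(vif_lines), parse(phy_lines)
# Delta = time between the packet appearing on the vif and hitting the wire.
deltas_us = [(phy[s] - vif[s]) * 1e6 for s in sorted(vif) if s in phy]
print(deltas_us)  # per-packet transit estimates in microseconds
```

In practice you would match on something sturdier than the ICMP sequence number (e.g. IP ID or a payload tag), and kernel timestamping precision puts a floor on how small a delta you can trust.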

May 29, 2013 · itsahill00

iSCSI SAN performance woes with VMware ESX 3.5

We filed support requests with IBM and VMware and went through a very lengthy process without any results. Each of our hosts had the following iSCSI HBAs: a QLA4010 and a QLA4050C. A while ago we found out the QLA4010 is not on the ESX 3.5 HCL, even though it runs with a legacy driver. As our virtual environment grew, we noticed storage performance lagging. This was particularly evident on our Oracle 10g Database server running our staging instance of Banner Operational Data Store....

August 27, 2009 · itsahill00