After a discussion on [ovs-discuss] about OVS performance in a ‘worst case scenario’, with ovs-vswitchd pegged at 100% CPU, one of my colleagues had a good idea: tcpdump the physical interface and the vif at the same time. The difference between when a packet reaches the vif and when it reaches the physical device helps measure the time spent in the userspace->kernelspace transit. Of course, virtual switches aren’t the only culprit in virtual networking overhead; virtual networking is a very complex topic.
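
To make the idea concrete, here is a minimal sketch of that comparison. The file names, the use of scapy, and matching packets by IP ID are my own assumptions for illustration, not necessarily how netweaver does it:

```python
#!/usr/bin/env python
# Compare timestamps for the same packets seen on the vif and on the
# physical interface, e.g. captures taken on the hypervisor with:
#   tcpdump -i vif2.0 -w vif.pcap
#   tcpdump -i eth0   -w eth.pcap
from scapy.all import rdpcap, IP

def timestamps_by_ip_id(pcap_path):
    """Map IP identification field -> earliest capture timestamp (seconds)."""
    stamps = {}
    for pkt in rdpcap(pcap_path):
        if IP in pkt:
            stamps.setdefault(pkt[IP].id, float(pkt.time))
    return stamps

vif = timestamps_by_ip_id("vif.pcap")
eth = timestamps_by_ip_id("eth.pcap")

# For traffic arriving at the destination hypervisor, the packet shows up on
# the physical device first and on the vif afterwards, so vif - eth roughly
# measures the time spent crossing the virtual switch path.
deltas = [vif[i] - eth[i] for i in eth if i in vif]
for d in deltas:
    print("%.6f s" % d)
```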

I created a new tool to help measure this overhead for certain traffic patterns: netweaver. There’s lots of info in the README, so head on by!

NetWeaver does the following:

  • Retrieve the vif details from the hypervisor
  • Start a traffic-generating command on the source instance(s)
  • Gather packet captures from the destination instance’s hypervisor
  • Analyze the packet captures from the vif and eth devices
  • Perform some basic statistical analysis (average, max, min, stdev) on the result set (see the sketch after this list)
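
For that last step, the math is just ordinary summary statistics over the per-packet deltas. Something along these lines, as a sketch rather than netweaver’s actual code:

```python
import statistics

def summarize(deltas):
    """Basic summary statistics over per-packet latency deltas (seconds)."""
    return {
        "average": statistics.mean(deltas),
        "max": max(deltas),
        "min": min(deltas),
        "stdev": statistics.stdev(deltas) if len(deltas) > 1 else 0.0,
    }

# Example with made-up deltas in seconds:
print(summarize([0.000120, 0.000095, 0.000210, 0.000133]))
```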

I intend to use this to analyze various configurations of Xen, guest OSes, and Open vSwitch.