Hypervisors need the ability to bridge traffic between VMs and with the
outside world. On Linux-based hypervisors, this used to mean using the
built-in L2 switch (the Linux bridge), which is fast and reliable. So,
it is reasonable to ask why Open vSwitch is used.
The answer is that Open vSwitch is targeted at multi-server
virtualization deployments, a landscape for which the previous stack is
not well suited. These environments are often characterized by highly
dynamic end-points, the maintenance of logical abstractions, and
(sometimes) integration with or offloading to special-purpose switching
hardware.
The following characteristics and design considerations help Open
vSwitch cope with the above requirements.
* The mobility of state: All network state associated with a network
  entity (say a virtual machine) should be easily identifiable and
  migratable between different hosts. This may include traditional
  "soft state" (such as an entry in an L2 learning table), L3 forwarding
  state, policy routing state, ACLs, QoS policy, monitoring
  configuration (e.g. NetFlow, sFlow), etc.
  Open vSwitch supports configuring and migrating both slow
  (configuration) and fast network state between instances. For
  example, if a VM migrates between end-hosts, it is possible to
  migrate not only the associated configuration (SPAN rules, ACLs, QoS)
  but also any live network state (including, for example, existing
  state which may be difficult to reconstruct). Further, Open vSwitch
  state is typed and backed by a real data model, allowing the
  development of structured automation systems.
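  Because every piece of configuration lives in a typed OVSDB table, it
  can be read and written field by field with `ovs-vsctl`. A minimal
  sketch, assuming a bridge `br0` and a VM port `vnet0` already exist
  (both names are illustrative):

  ```shell
  # Dump the Bridge record: every column is typed (string, integer,
  # set, map), not free-form text.
  ovs-vsctl list Bridge br0

  # Read a single typed column from a port record.
  ovs-vsctl get Port vnet0 tag

  # Update it atomically; an orchestration system migrating a VM can
  # replay such settings on the destination host.
  ovs-vsctl set Port vnet0 tag=101
  ```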
* Responding to network dynamics: Virtual environments are often
  characterized by high rates of change: VMs coming and going, VMs
  migrating between hosts over time, changes to the logical network
  environment, and so forth.
  Open vSwitch supports a number of features that allow a network
  control system to respond and adapt as the environment changes. This
  includes simple accounting and visibility support such as NetFlow and
  sFlow. Perhaps more usefully, Open vSwitch supports a network state
  database (OVSDB) with remote triggers, so a piece of orchestration
  software can "watch" various aspects of the network and respond if
  and when they change. This is used heavily today, for example, to
  respond to and track VM migrations.
  Open vSwitch also supports OpenFlow as a method of exporting remote
  access to control traffic. There are a number of uses for this,
  including global network discovery through inspection of discovery
  or link-state traffic (e.g. LLDP, CDP, OSPF).
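  The "remote trigger" mechanism is the OVSDB protocol's monitor
  request (standardized in RFC 7047): a client names the tables and
  columns it cares about, and the server streams update notifications
  whenever they change. A minimal sketch of constructing such a
  request in plain Python, without the OVS client library (table and
  column names below are from the standard Open_vSwitch schema):

  ```python
  import json

  def make_monitor_request(db, tables, request_id=0):
      """Build an OVSDB "monitor" JSON-RPC request (RFC 7047, sec. 4.1.5).

      `tables` maps a table name to the list of columns to watch.  The
      server replies with the current contents, then sends "update"
      notifications whenever a watched column changes -- the trigger
      an orchestration system uses to track, e.g., VM migrations.
      """
      monitor_requests = {
          table: {
              "columns": columns,
              "select": {"initial": True, "insert": True,
                         "delete": True, "modify": True},
          }
          for table, columns in tables.items()
      }
      return {
          "method": "monitor",
          # params: [<db-name>, <value echoed back in the reply>, <requests>]
          "params": [db, None, monitor_requests],
          "id": request_id,
      }

  # Watch Interface rows so software can react to ports appearing/vanishing.
  req = make_monitor_request("Open_vSwitch", {"Interface": ["name", "ofport"]})
  print(json.dumps(req))
  ```

  In a real client this JSON object would be written to the ovsdb-server
  socket; here it only illustrates the shape of the protocol message.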
* Maintenance of logical tags: Distributed virtual switches (such as
  VMware vDS and Cisco's Nexus 1000V) often maintain logical context
  within the network by appending or manipulating tags in network
  packets. This can be used to uniquely identify a VM (in a manner
  resistant to hardware spoofing), or to hold some other context that
  is only relevant in the logical domain. Much of the problem of
  building a distributed virtual switch is to efficiently and correctly
  manage these tags.
  Open vSwitch includes multiple methods for specifying and maintaining
  tagging rules, all of which are accessible to a remote process for
  orchestration. Further, in many cases these tagging rules are stored
  in an optimized form so they don't have to be coupled with a
  heavyweight network device. This allows, for example, thousands of
  tagging or address remapping rules to be configured, changed, and
  migrated.
  In a similar vein, Open vSwitch supports a GRE implementation that can
  handle thousands of simultaneous GRE tunnels and supports remote
  configuration for tunnel creation, configuration, and tear-down.
  This, for example, can be used to connect private VM networks in
  different data centers.
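  Both mechanisms are driven through the same configuration interface.
  A sketch, assuming an integration bridge `br-int` and a peer
  hypervisor reachable at 192.0.2.20 (both illustrative):

  ```shell
  # Tag a VM's port so its traffic carries logical context
  # (here a VLAN tag, 101).
  ovs-vsctl set Port vnet0 tag=101

  # Build a GRE tunnel to a peer hypervisor.  Each additional peer is
  # just another tunnel port, which is how thousands of tunnels can be
  # created and torn down remotely.
  ovs-vsctl add-port br-int gre0 -- \
      set Interface gre0 type=gre options:remote_ip=192.0.2.20
  ```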
* Hardware integration: Open vSwitch's forwarding path (the in-kernel
  datapath) is designed to be amenable to "offloading" packet processing
  to hardware chipsets, whether housed in a classic hardware switch
  chassis or in an end-host NIC. This allows the Open vSwitch control
  path to control either a pure software implementation or a hardware
  switch.
  There are many ongoing efforts to port Open vSwitch to hardware
  chipsets. These include multiple merchant silicon chipsets (Broadcom
  and Marvell), as well as a number of vendor-specific platforms. (The
  PORTING file discusses how one would go about making such a port.)
  The advantage of hardware integration is not only performance within
  virtualized environments. If physical switches also expose the Open
  vSwitch control abstractions, both bare-metal and virtualized hosting
  environments can be managed using the same mechanism for automated
  network control.
In many ways, Open vSwitch targets a different point in the design space
than previous hypervisor networking stacks, focusing on the need for
automated and dynamic network control in large-scale Linux-based
virtualization environments.
The goal with Open vSwitch is to keep the in-kernel code as small as
possible (as is necessary for performance) and to re-use existing
subsystems when applicable (for example, Open vSwitch uses the existing
QoS stack). As of Linux 3.3, Open vSwitch is included as part of the
kernel, and packaging for the userspace utilities is available on most
popular distributions.
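Re-using the kernel QoS stack looks like this in practice: Open vSwitch
records the policy in OVSDB and realizes it with the kernel's existing
HTB qdisc. A sketch, with the port name `eth0` and the 100 Mbps rate
chosen purely for illustration:

```shell
# Create a linux-htb QoS record with one queue, cap it at 100 Mbps,
# and attach it to the port.  The shaping itself is done by the
# kernel's HTB scheduler, not by Open vSwitch code.
ovs-vsctl set port eth0 qos=@newq -- \
    --id=@newq create qos type=linux-htb \
        other-config:max-rate=100000000 queues:0=@q0 -- \
    --id=@q0 create queue other-config:max-rate=100000000
```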