X-Git-Url: http://git.onelab.eu/?a=blobdiff_plain;f=INSTALL.XenServer;h=e31788aef5a2e727085a8bf6f33d204894645207;hb=015ac88281952a1b43ad46e9e6300db1c6e3647b;hp=9d9012b862851a2c77d52e4f7c749a3bd5ca77ff;hpb=be55976089659d082834aae58acd1173f10004e7;p=sliver-openvswitch.git

diff --git a/INSTALL.XenServer b/INSTALL.XenServer
index 9d9012b86..e31788aef 100644
--- a/INSTALL.XenServer
+++ b/INSTALL.XenServer
@@ -3,46 +3,75 @@

 This document describes how to build and install Open vSwitch on a
 Citrix XenServer host. If you want to install Open vSwitch on a
-generic Linux host, see INSTALL.Linux instead.
+generic Linux or BSD host, see INSTALL instead.

 These instructions have been tested with XenServer 5.6 FP1.

 Building Open vSwitch for XenServer
 -----------------------------------

-The recommended build environment to build RPMs for Citrix XenServer
-is the DDK VM available from Citrix. If you are building from an Open
-vSwitch distribution tarball, this VM has all the tools that you will
-need. If you are building from an Open vSwitch Git tree, then you
-will need to first create a distribution tarball elsewhere, by running
-"./boot.sh; ./configure; make dist" in the Git tree, because the DDK
-VM does not include Autoconf or Automake that are required to
-bootstrap the Open vSwitch distribution.
+You may build from an Open vSwitch distribution tarball or from an
+Open vSwitch Git tree. The recommended build environment to build
+RPMs for Citrix XenServer is the DDK VM available from Citrix.

-Once you have a distribution tarball, copy it into
-/usr/src/redhat/SOURCES inside the VM. Then execute the following:
+1. If you are building from an Open vSwitch Git tree, then you will
+   need to first create a distribution tarball by running "./boot.sh;
+   ./configure; make dist" in the Git tree. You cannot run this in
+   the DDK VM, because it lacks tools that are necessary to bootstrap
+   the Open vSwitch distribution. Instead, you must run this on a
+   machine that has the tools listed in INSTALL as prerequisites for
+   building from a Git tree.
+
+2. Copy the distribution tarball into /usr/src/redhat/SOURCES inside
+   the DDK VM.
+
+3. In the DDK VM, unpack the distribution tarball into a temporary
+   directory and "cd" into the root of the distribution tarball.
+
+4. To build Open vSwitch userspace, run:
+
+       rpmbuild -bb xenserver/openvswitch-xen.spec
+
+   This produces three RPMs in /usr/src/redhat/RPMS/i386:
+   "openvswitch", "openvswitch-modules-xen", and
+   "openvswitch-debuginfo".
+
+Build Parameters
+----------------
+
+openvswitch-xen.spec needs to know a number of pieces of information
+about the XenServer kernel. Usually, it can figure these out for
+itself, but if it does not do it correctly then you can specify them
+yourself as parameters to the build. Thus, the final "rpmbuild" step
+above can be elaborated as:

    VERSION=<Open vSwitch version>
-   XENKERNEL=<Xen kernel version>
-   cd /tmp
-   tar xfz /usr/src/redhat/SOURCES/openvswitch-$VERSION.tar.gz
+   KERNEL_NAME=<Xen Kernel name>
+   KERNEL_VERSION=<Xen Kernel version>
+   KERNEL_FLAVOR=<Xen Kernel flavor>
    rpmbuild \
        -D "openvswitch_version $VERSION" \
-       -D "xen_version $XENKERNEL" \
-       -bb openvswitch-$VERSION/xenserver/openvswitch-xen.spec
+       -D "kernel_name $KERNEL_NAME" \
+       -D "kernel_version $KERNEL_VERSION" \
+       -D "kernel_flavor $KERNEL_FLAVOR" \
+       -bb xenserver/openvswitch-xen.spec

 where:

     <Open vSwitch version> is the version number that appears in the
     name of the Open vSwitch tarball, e.g. 0.90.0.

-    <Xen kernel version> is the version number of the Xen kernel,
-    e.g. 2.6.32.12-0.7.1.xs5.6.100.307.170586xen. This version number
-    appears as the name of a directory in /lib/modules inside the VM.
-    It always ends in "xen".
+    <Xen Kernel name> is the name of the XenServer kernel package,
+    e.g. kernel-xen or kernel-NAME-xen, without the "kernel-" prefix.
+
+    <Xen Kernel version> is the output of:
+    rpm -q --queryformat "%{Version}-%{Release}" <kernel devel package>,
+    e.g. 2.6.32.12-0.7.1.xs5.6.100.323.170596, where <kernel devel package> is
+    the name of the -devel package corresponding to <Xen Kernel name>.

-Three RPMs will be output into /usr/src/redhat/RPMS/i386, whose names begin
-with "openvswitch", "openvswitch-modules-xen", and "openvswitch-debuginfo".
+    <Xen Kernel flavor> is either "xen" or "kdump".
+    The "xen" flavor is the main running kernel flavor and the "kdump" flavor is
+    the crashdump kernel flavor. Commonly, one would specify "xen" here.

 Installing Open vSwitch for XenServer
 -------------------------------------
@@ -76,7 +105,7 @@ When Open vSwitch is installed on XenServer, its startup script
 /etc/init.d/openvswitch runs early in boot. It does roughly the
 following:

-    * Loads the OVS kernel module, openvswitch_mod.
+    * Loads the OVS kernel module, openvswitch.

     * Starts ovsdb-server, the OVS configuration database.

@@ -126,10 +155,13 @@ command. The plugin script does roughly the following:
       configuration to a known state. One effect of emer-reset is to
       deconfigure any manager from the OVS database.

-    * If XAPI is configured for a manger, configures the OVS
+    * If XAPI is configured for a manager, configures the OVS
       manager to match with "ovs-vsctl set-manager".

-The Open vSwitch boot sequence only configures an OVS configuration
+Notes
+-----
+
+* The Open vSwitch boot sequence only configures an OVS configuration
 database manager. There is no way to directly configure an OpenFlow
 controller on XenServer and, as a consequence of the step above that
 deletes all of the bridges at boot time, controller configuration only
@@ -137,6 +169,14 @@ persists until XenServer reboot. The configuration database manager
 can, however, configure controllers for bridges. See the BUGS section
 of ovs-controller(8) for more information on this topic.

+* The Open vSwitch startup script automatically adds a firewall rule
+to allow GRE traffic. This rule is needed for the XenServer feature
+called "Cross-Host Internal Networks" (CHIN) that uses GRE. If a user
+configures tunnels other than GRE (e.g. VXLAN, LISP), they will have
+to either manually add an iptables firewall rule to allow the tunnel traffic
+or add it through a startup script (please refer to the "enable-protocol"
+command in the ovs-ctl(8) manpage).
+
 Reporting Bugs
 --------------
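
As a worked example of the "Build Parameters" section added above, the
following sketch shows how the elaborated "rpmbuild" invocation might be
driven from a shell inside the DDK VM. It is illustrative only and not
part of the diff: the version number 0.90.0, the kernel package name
"kernel-xen", and its "kernel-xen-devel" -devel package are assumptions;
substitute the values that match your XenServer release.

    # Illustrative values only -- adjust for your XenServer release.
    VERSION=0.90.0        # version in the Open vSwitch tarball name (assumed)
    KERNEL_NAME=xen       # "kernel-xen" without the "kernel-" prefix (assumed)
    KERNEL_FLAVOR=xen     # main running kernel; use "kdump" for the crashdump kernel

    # Query the -devel package for <Xen Kernel version>, as described above.
    # "kernel-xen-devel" is an assumed package name; use the -devel package
    # that corresponds to your kernel package.
    KERNEL_VERSION=$(rpm -q --queryformat "%{Version}-%{Release}" kernel-xen-devel)

    rpmbuild \
        -D "openvswitch_version $VERSION" \
        -D "kernel_name $KERNEL_NAME" \
        -D "kernel_version $KERNEL_VERSION" \
        -D "kernel_flavor $KERNEL_FLAVOR" \
        -bb xenserver/openvswitch-xen.spec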