From: Ben Pfaff Date: Wed, 11 May 2011 19:13:10 +0000 (-0700) Subject: ofproto: Break apart into generic and hardware-specific parts. X-Git-Tag: v1.2.0~329^2~27 X-Git-Url: http://git.onelab.eu/?p=sliver-openvswitch.git;a=commitdiff_plain;h=abe529af477b8311a1fd68c130374bd7442003c3 ofproto: Break apart into generic and hardware-specific parts. In addition to the changes to ofproto, this commit changes all of the instances of "struct flow" in the tree so that the "in_port" member is an OpenFlow port number. Previously, this member was an OpenFlow port number in some cases and an ODP port number in other cases. --- diff --git a/PORTING b/PORTING index 6af9f51d1..b83bf64c5 100644 --- a/PORTING +++ b/PORTING @@ -7,6 +7,7 @@ are most likely to be necessary in porting OVS to Unix-like platforms. (Porting OVS to other kinds of platforms is likely to be more difficult.) + Vocabulary ---------- @@ -26,164 +27,65 @@ is a concordance, indexed by the area of the source tree: Open vSwitch Architectural Overview ----------------------------------- -The following diagram shows the conceptual architecture of Open +The following diagram shows the very high-level architecture of Open vSwitch from a porter's perspective. - _ _ - | +-------------------+ | - | | ovs-vswitchd | |Generic - | +-------------------+ |code - userspace | | ofproto | _| - | +---------+---------+ _ - | | netdev |dpif/wdp | | - |_ +---||----+----||---+ |Code that - _ || || |may need - | +---||-----+---||---+ |porting - | | |datapath| _| - kernel | | +--------+ - | | | - |_ +-------||----------+ - || - physical - NIC - -Some of the components are generic. Modulo bugs, these components -should not need to be modified as part of a port: - - - Near the top of the diagram, "ofproto" is the library in Open vSwitch - that contains the core OpenFlow protocol implementation and switching - functionality. It is built from source files in the "ofproto" - directory. - - - Above ofproto, "ovs-vswitchd", the main Open vSwitch userspace - program, is the primary client for ofproto. It is built - from source files in the "vswitchd" directory of the Open - vSwitch distribution. - - ovs-vswitchd is the most sophisticated of ofproto's clients, but - ofproto can have other clients as well. Notably, ovs-openflowd, - in the utilities directory, is much simpler (though less - capable) than ovs-vswitchd, and it may be easier to get up and - running as part of a port. - -The other components require attention during a port: - - - "dpif" or "wdp" is what ofproto uses to directly monitor and - control a "datapath", which is the term used in OVS for a - collection of physical or virtual ports that are exposed over - OpenFlow as a single switch. A datapath implements a flow - table. - - - "netdev" is the interface to "network devices", e.g. eth0 on - Linux. ofproto expects that every port exposed by a datapath - has a corresponding netdev that it can open with netdev_open(). - -The following sections talk about these components in more detail. - -Which Branch? -------------- - -The architectural diagram shows "dpif" and "wdp" as alternatives. -These alternatives correspond to the "master" and "wdp" branches, -respectively, of the Open vSwitch Git repository at -git://openvswitch.org/openvswitch. Both of these branches currently -represent reasonable porting targets for different purposes: - - - The "master" branch is more mature and better tested. Open - vSwitch releases are made from this branch, and most OVS - development and testing occurs on this branch. 
- - - The "wdp" branch has a software architecture that can take - advantage of hardware with support for wildcards (e.g. TCAMs or - similar). This branch has known important bugs, but is the basis - of a few ongoing hardware projects, so we expect the quality to - improve rapidly. - -Since its architecture is better, in the medium to long term we will -fix the problems in the "wdp" branch and merge it into "master". - -In porting OVS, the major difference between the two branches is the -form of the flow table in the datapath: - - - On "master", the "dpif" datapath interface maintains a simple - flow table, one that does not support any kind of wildcards. - This flow table essentially acts as a cache. When a packet - arrives on an interface, the datapath looks for it in this - exact-match table. If there is a match, then it performs the - associated actions. If there is no match, the datapath passes - the packet up to "ofproto", which maintains a flow table that - supports wildcards. If the packet matches in this flow table, - then ofproto executes its actions and inserts a new exact-match - entry into the dpif flow table. (Otherwise, ofproto sends the - packet to the OpenFlow controller, if one is configured.) - - Thus, on the "master" branch, the datapath has little - opportunity to take advantage of hardware support for wildcards, - since it is only ever presented with exact-match flow entries. - - - On "wdp", the "wdp" datapath interface maintains a flow table - similar to that of OpenFlow, one that supports wildcards. Thus, - a wdp datapath can take advantage of hardware support for - wildcards, since it is free to implement the flow table any way - it likes. - -The following sections describe the two datapath interfaces in a -little more detail. - -dpif: The "master" Branch Datapath ----------------------------------- - -struct dpif_class, in lib/dpif-provider.h, defines the -interfaces required to implement a dpif for new hardware or -software. That structure contains many function pointers, each -of which has a comment that is meant to describe its behavior in -detail. If the requirements are unclear, please report this as -a bug and we will clarify. -There are two existing dpif implementations that may serve as -useful examples during a port: + +-------------------+ + | ovs-vswitchd |<-->ovsdb-server + +-------------------+ + | ofproto |<-->OpenFlow controllers + +--------+-+--------+ + | netdev | | ofproto| + +--------+ |provider| + | netdev | +--------+ + |provider| + +--------+ + +Some of the components are generic. Modulo bugs or inadequacies, +these components should not need to be modified as part of a port: + + - "ovs-vswitchd" is the main Open vSwitch userspace program, in + vswitchd/. It reads the desired Open vSwitch configuration from + the ovsdb-server program over an IPC channel and passes this + configuration down to the "ofproto" library. It also passes + certain status and statistical information from ofproto back + into the database. + + - "ofproto" is the Open vSwitch library, in ofproto/, that + implements an OpenFlow switch. It talks to OpenFlow controllers + over the network and to switch hardware or software to an + "ofproto provider", explained further below. + + - "netdev" is the Open vSwitch library, in lib/netdev.c, that + abstracts interacting with network devices, that is, Ethernet + interfaces. The netdev library is a thin layer over "netdev + provider" code, explained further below. + +The other components may need attention during a port. 
You will +almost certainly have to implement a "netdev provider". Depending on +the type of port you are doing and the desired performance, you may +also have to implement an "ofproto provider" or a lower-level +component called a "dpif" provider. - * lib/dpif-linux.c is a Linux-specific dpif implementation that - talks to an Open vSwitch-specific kernel module (whose sources - are in the "datapath" directory). The kernel module performs - all of the switching work, passing packets that do not match any - flow table entry up to userspace. This dpif implementation is - essentially a wrapper around calls to "ioctl". - - * lib/dpif-netdev.c is a generic dpif implementation that performs - all switching internally. It delegates most of its work to the - "netdev" library (described below). Using dpif-netdev, instead - of writing a new dpif, can be a simple way to get OVS up and - running on new platforms, but other solutions are likely to - yield higher performance. - -"wdp": The "wdp" Branch Datapath --------------------------------- - -struct wdp_class, in ofproto/wdp-provider.h, defines the interfaces -required to implement a wdp ("wildcarded datapath") for new hardware -or software. That structure contains many function pointers, each of -which has a comment that is meant to describe its behavior in detail. -If the requirements are unclear, please report this as a bug and we -will clarify. +The following sections talk about these components in more detail. -The wdp interface is preliminary. Please let us know if it seems -unsuitable for your purpose. We will try to improve it. -There is currently only one wdp implementation: +Writing a netdev Provider +------------------------- - * ofproto/wdp-xflow.c is an adaptation of "master" branch code - that breaks wildcarded flows up into exact-match flows in the - same way that ofproto always does on the "master" branch. It - delegates its work to exact-match datapath implementations whose - interfaces are identical to "master" branch datapaths, except - that names have been changed from "dpif" to "xfif" ("exact-match - flow interface") and similar. +A "netdev provider" implements an operating system and hardware +specific interface to "network devices", e.g. eth0 on Linux. Open +vSwitch must be able to open each port on a switch as a netdev, so you +will need to implement a "netdev provider" that works with your switch +hardware and software. -"netdev": Interface to network devices --------------------------------------- +struct netdev_class, in lib/netdev-provider.h, defines the interfaces +required to implement a netdev. That structure contains many function +pointers, each of which has a comment that is meant to describe its +behavior in detail. If the requirements are unclear, please report +this as a bug. -The netdev interface can be roughly divided into functionality for the -following purposes: +The netdev interface can be divided into a few rough categories: * Functions required to properly implement OpenFlow features. For example, OpenFlow requires the ability to report the Ethernet @@ -196,15 +98,9 @@ following purposes: table. These functions must be implemented if the corresponding OVS features are to work, but may be omitted initially. - * Functions that may be needed in some implementations but not - others. The dpif-netdev described above, for example, needs to - be able to send and receive packets on a netdev. - -struct netdev_class, in lib/netdev-provider.h, defines the interfaces -required to implement a netdev. 
That structure contains many function -pointers, each of which has a comment that is meant to describe its -behavior in detail. If the requirements are unclear, please report -this as a bug and we will clarify. + * Functions needed in some implementations but not in others. For + example, most kinds of ports (see below) do not need + functionality to receive packets from a network device. The existing netdev implementations may serve as useful examples during a port: @@ -213,14 +109,137 @@ during a port: network devices, using Linux kernel calls. It may be a good place to start for full-featured netdev implementations. - * lib/netdev-vport.c provides support for "virtual ports" + * lib/netdev-vport.c provides support for "virtual ports" implemented by the Open vSwitch datapath module for the Linux kernel. This may serve as a model for minimal netdev implementations. + * lib/netdev-dummy.c is a fake netdev implementation useful only + for testing. + + +Porting Strategies +------------------ + +After a netdev provider has been implemented for a system's network +devices, you may choose among three basic porting strategies. + +The lowest-effort strategy is to use the "userspace switch" +implementation built into Open vSwitch. This ought to work, without +writing any more code, as long as the netdev provider that you +implemented supports receiving packets. It yields poor performance, +however, because every packet passes through the ovs-vswitchd process. +See INSTALL.userspace for instructions on how to configure a userspace +switch. + +If the userspace switch is not the right choice for your port, then +you will have to write more code. You may implement either an +"ofproto provider" or a "dpif provider". Which you should choose +depends on a few different factors: + + * Only an ofproto provider can take full advantage of hardware + with built-in support for wildcards (e.g. an ACL table or a + TCAM). + + * A dpif provider can take advantage of the Open vSwitch built-in + implementations of bonding, LACP, 802.1ag, 802.1Q VLANs, and + other features. An ofproto provider has to provide its own + implementations, if the hardware can support them at all. + + * A dpif provider is usually easier to implement. + +The following sections describe how to implement each kind of port. + + +ofproto Providers +----------------- + +An "ofproto provider" is what ofproto uses to directly monitor and +control an OpenFlow-capable switch. struct ofproto_class, in +ofproto/private.h, defines the interfaces to implement a ofproto +provider for new hardware or software. That structure contains many +function pointers, each of which has a comment that is meant to +describe its behavior in detail. If the requirements are unclear, +please report this as a bug. + +The ofproto provider interface is preliminary. Please let us know if +it seems unsuitable for your purpose. We will try to improve it. + + +Writing a dpif Provider +----------------------- + +Open vSwitch has a built-in ofproto provider named "ofproto-dpif", +which is built on top of a library for manipulating datapaths, called +"dpif". A "datapath" is a simple flow table, one that supports only +exact-match flows, that is, flows without wildcards. When a packet +arrives on a network device, the datapath looks for it in this +exact-match table. If there is a match, then it performs the +associated actions. If there is no match, the datapath passes the +packet up to ofproto-dpif, which maintains an OpenFlow flow table +(that supports wildcards). 
If the packet matches in this flow table, +then ofproto-dpif executes its actions and inserts a new exact-match +entry into the dpif flow table. (Otherwise, ofproto-dpif passes the +packet up to ofproto to send the packet to the OpenFlow controller, if +one is configured.) + +The "dpif" library in turn delegates much of its functionality to a +"dpif provider". The following diagram shows how dpif providers fit +into the Open vSwitch architecture: + + _ + | +-------------------+ + | | ovs-vswitchd |<-->ovsdb-server + | +-------------------+ + | | ofproto |<-->OpenFlow controllers + | +--------+-+--------+ + | | netdev | |ofproto-| + userspace | +--------+ | dpif | + | | netdev | +--------+ + | |provider| | dpif | + | +---||---+ +--------+ + | || | dpif | + | || |provider| + |_ || +---||---+ + || || + _ +---||-----+---||---+ + | | |datapath| + kernel | | +--------+ + | | | + |_ +--------||---------+ + || + physical + NIC + +struct dpif_class, in lib/dpif-provider.h, defines the interfaces +required to implement a dpif provider for new hardware or software. +That structure contains many function pointers, each of which has a +comment that is meant to describe its behavior in detail. If the +requirements are unclear, please report this as a bug. + +There are two existing dpif implementations that may serve as +useful examples during a port: + + * lib/dpif-linux.c is a Linux-specific dpif implementation that + talks to an Open vSwitch-specific kernel module (whose sources + are in the "datapath" directory). The kernel module performs + all of the switching work, passing packets that do not match any + flow table entry up to userspace. This dpif implementation is + essentially a wrapper around calls into the kernel module. + + * lib/dpif-netdev.c is a generic dpif implementation that performs + all switching internally. This is how the Open vSwitch + userspace switch is implemented. + + Miscellaneous Notes ------------------- +ovs-vswitchd is the most sophisticated of ofproto's clients, but +ofproto can have other clients as well. ovs-openflowd, in the +utilities directory, is much simpler than ovs-vswitchd. It may be +easier to initially bring up ovs-openflowd as part of a port. + lib/entropy.c assumes that it can obtain high-quality random number seeds at startup by reading from /dev/urandom. You will need to modify it if this is not true on your platform. @@ -228,6 +247,7 @@ modify it if this is not true on your platform. vswitchd/system-stats.c only knows how to obtain some statistics on Linux. Optionally you may implement them for your platform as well. 
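The provider interfaces that the PORTING text above describes (struct netdev_class, struct ofproto_class, struct dpif_class) all follow the same pattern: a table of function pointers that the port fills in for its platform and registers with the corresponding library. Below is a minimal sketch of that pattern only; the structure members and all example_* names are simplified, hypothetical stand-ins for illustration, not the real fields declared in lib/netdev-provider.h or the other provider headers.

/* Illustrative sketch of the provider pattern; names are hypothetical,
 * not the real struct netdev_class members. */
#include <stdio.h>

/* A trimmed-down stand-in for a provider class: a table of function
 * pointers that a port implements for its hardware or OS. */
struct example_netdev_class {
    const char *type;                  /* Provider name, e.g. "system". */
    int (*open)(const char *name);     /* Open a network device by name. */
    int (*get_etheraddr)(const char *name, unsigned char mac[6]);
                                       /* OpenFlow needs the MAC address. */
};

static int
example_open(const char *name)
{
    printf("example provider: opening %s\n", name);
    return 0;
}

static int
example_get_etheraddr(const char *name, unsigned char mac[6])
{
    int i;

    (void) name;                /* A real port would query the OS or ASIC. */
    for (i = 0; i < 6; i++) {
        mac[i] = 0x02;          /* Placeholder locally administered MAC. */
    }
    return 0;
}

/* The port exposes its callbacks in one class structure and registers it
 * at startup, analogous to netdev_register_provider() for the real
 * struct netdev_class. */
static const struct example_netdev_class example_class = {
    "example",
    example_open,
    example_get_etheraddr,
};

int
main(void)
{
    unsigned char mac[6];

    example_class.open("eth0");
    example_class.get_etheraddr("eth0", mac);
    printf("registered example provider \"%s\"\n", example_class.type);
    return 0;
}

The real provider classes have many more members; as the PORTING text notes, each function pointer in the provider headers carries a comment describing its behavior and whether it must be implemented, so a port can start from the required callbacks and fill in optional ones later.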
+ Questions --------- diff --git a/lib/classifier.c b/lib/classifier.c index 71d26e9a2..36e294e29 100644 --- a/lib/classifier.c +++ b/lib/classifier.c @@ -142,10 +142,10 @@ cls_rule_set_tun_id_masked(struct cls_rule *rule, } void -cls_rule_set_in_port(struct cls_rule *rule, uint16_t odp_port) +cls_rule_set_in_port(struct cls_rule *rule, uint16_t ofp_port) { rule->wc.wildcards &= ~FWW_IN_PORT; - rule->flow.in_port = odp_port; + rule->flow.in_port = ofp_port; } void @@ -506,8 +506,7 @@ cls_rule_format(const struct cls_rule *rule, struct ds *s) break; } if (!(w & FWW_IN_PORT)) { - ds_put_format(s, "in_port=%"PRIu16",", - odp_port_to_ofp_port(f->in_port)); + ds_put_format(s, "in_port=%"PRIu16",", f->in_port); } if (wc->vlan_tci_mask) { ovs_be16 vid_mask = wc->vlan_tci_mask & htons(VLAN_VID_MASK); diff --git a/lib/flow.c b/lib/flow.c index 32157b844..ea7746c97 100644 --- a/lib/flow.c +++ b/lib/flow.c @@ -306,7 +306,7 @@ invalid: } -/* Initializes 'flow' members from 'packet', 'tun_id', and 'in_port. +/* Initializes 'flow' members from 'packet', 'tun_id', and 'ofp_in_port'. * Initializes 'packet' header pointers as follows: * * - packet->l2 to the start of the Ethernet header. @@ -322,7 +322,7 @@ invalid: * present and has a correct length, and otherwise NULL. */ int -flow_extract(struct ofpbuf *packet, ovs_be64 tun_id, uint16_t in_port, +flow_extract(struct ofpbuf *packet, ovs_be64 tun_id, uint16_t ofp_in_port, struct flow *flow) { struct ofpbuf b = *packet; @@ -333,7 +333,7 @@ flow_extract(struct ofpbuf *packet, ovs_be64 tun_id, uint16_t in_port, memset(flow, 0, sizeof *flow); flow->tun_id = tun_id; - flow->in_port = in_port; + flow->in_port = ofp_in_port; packet->l2 = b.data; packet->l3 = NULL; diff --git a/lib/flow.h b/lib/flow.h index 60229f58d..f5f965c5b 100644 --- a/lib/flow.h +++ b/lib/flow.h @@ -45,7 +45,7 @@ struct flow { uint32_t regs[FLOW_N_REGS]; /* Registers. */ ovs_be32 nw_src; /* IPv4 source address. */ ovs_be32 nw_dst; /* IPv4 destination address. */ - uint16_t in_port; /* Input switch port. */ + uint16_t in_port; /* OpenFlow port number of input port. */ ovs_be16 vlan_tci; /* If 802.1Q, TCI | VLAN_CFI; otherwise 0. */ ovs_be16 dl_type; /* Ethernet frame type. */ ovs_be16 tp_src; /* TCP/UDP source port. */ diff --git a/lib/nx-match.c b/lib/nx-match.c index 4d2e590e5..345f0d1e6 100644 --- a/lib/nx-match.c +++ b/lib/nx-match.c @@ -176,9 +176,6 @@ parse_nxm_entry(struct cls_rule *rule, const struct nxm_field *f, /* Metadata. */ case NFI_NXM_OF_IN_PORT: flow->in_port = ntohs(get_unaligned_be16(value)); - if (flow->in_port == OFPP_LOCAL) { - flow->in_port = ODPP_LOCAL; - } return 0; /* Ethernet header. */ @@ -739,9 +736,6 @@ nx_put_match(struct ofpbuf *b, const struct cls_rule *cr) /* Metadata. */ if (!(wc & FWW_IN_PORT)) { uint16_t in_port = flow->in_port; - if (in_port == ODPP_LOCAL) { - in_port = OFPP_LOCAL; - } nxm_put_16(b, NXM_OF_IN_PORT, htons(in_port)); } @@ -1272,7 +1266,7 @@ nxm_read_field(const struct nxm_field *src, const struct flow *flow) { switch (src->index) { case NFI_NXM_OF_IN_PORT: - return flow->in_port == ODPP_LOCAL ? 
OFPP_LOCAL : flow->in_port; + return flow->in_port; case NFI_NXM_OF_ETH_DST: return eth_addr_to_uint64(flow->dl_dst); diff --git a/lib/odp-util.c b/lib/odp-util.c index e82006bc7..79f4bfc74 100644 --- a/lib/odp-util.c +++ b/lib/odp-util.c @@ -403,7 +403,8 @@ odp_flow_key_from_flow(struct ofpbuf *buf, const struct flow *flow) nl_msg_put_be64(buf, ODP_KEY_ATTR_TUN_ID, flow->tun_id); } - nl_msg_put_u32(buf, ODP_KEY_ATTR_IN_PORT, flow->in_port); + nl_msg_put_u32(buf, ODP_KEY_ATTR_IN_PORT, + ofp_port_to_odp_port(flow->in_port)); eth_key = nl_msg_put_unspec_uninit(buf, ODP_KEY_ATTR_ETHERNET, sizeof *eth_key); @@ -551,7 +552,7 @@ odp_flow_key_to_flow(const struct nlattr *key, size_t key_len, if (nl_attr_get_u32(nla) >= UINT16_MAX) { return EINVAL; } - flow->in_port = nl_attr_get_u32(nla); + flow->in_port = odp_port_to_ofp_port(nl_attr_get_u32(nla)); break; case TRANSITION(ODP_KEY_ATTR_IN_PORT, ODP_KEY_ATTR_ETHERNET): diff --git a/lib/ofp-parse.c b/lib/ofp-parse.c index 7e9a96531..4fadcf3d6 100644 --- a/lib/ofp-parse.c +++ b/lib/ofp-parse.c @@ -608,9 +608,6 @@ parse_field_value(struct cls_rule *rule, enum field_index index, if (!parse_port_name(value, &port_no)) { port_no = atoi(value); } - if (port_no == OFPP_LOCAL) { - port_no = ODPP_LOCAL; - } cls_rule_set_in_port(rule, port_no); break; diff --git a/lib/ofp-util.c b/lib/ofp-util.c index e500bf55d..97a78739e 100644 --- a/lib/ofp-util.c +++ b/lib/ofp-util.c @@ -150,8 +150,7 @@ ofputil_cls_rule_from_match(const struct ofp_match *match, /* Initialize most of rule->flow. */ rule->flow.nw_src = match->nw_src; rule->flow.nw_dst = match->nw_dst; - rule->flow.in_port = (match->in_port == htons(OFPP_LOCAL) ? ODPP_LOCAL - : ntohs(match->in_port)); + rule->flow.in_port = ntohs(match->in_port); rule->flow.dl_type = ofputil_dl_type_from_openflow(match->dl_type); rule->flow.tp_src = match->tp_src; rule->flow.tp_dst = match->tp_dst; @@ -272,8 +271,7 @@ ofputil_cls_rule_to_match(const struct cls_rule *rule, /* Compose most of the match structure. */ match->wildcards = htonl(ofpfw); - match->in_port = htons(rule->flow.in_port == ODPP_LOCAL ? OFPP_LOCAL - : rule->flow.in_port); + match->in_port = htons(rule->flow.in_port); memcpy(match->dl_src, rule->flow.dl_src, ETH_ADDR_LEN); memcpy(match->dl_dst, rule->flow.dl_dst, ETH_ADDR_LEN); match->dl_type = ofputil_dl_type_to_openflow(rule->flow.dl_type); diff --git a/ofproto/automake.mk b/ofproto/automake.mk index 815355123..0279802e2 100644 --- a/ofproto/automake.mk +++ b/ofproto/automake.mk @@ -20,6 +20,7 @@ ofproto_libofproto_a_SOURCES = \ ofproto/netflow.h \ ofproto/ofproto.c \ ofproto/ofproto.h \ + ofproto/ofproto-dpif.c \ ofproto/ofproto-sflow.c \ ofproto/ofproto-sflow.h \ ofproto/pktbuf.c \ diff --git a/ofproto/connmgr.c b/ofproto/connmgr.c index ef9a61c77..d04641ba5 100644 --- a/ofproto/connmgr.c +++ b/ofproto/connmgr.c @@ -1036,7 +1036,7 @@ schedule_packet_in(struct ofconn *ofconn, const struct dpif_upcall *upcall, /* Figure out the easy parts. */ pin.packet = upcall->packet; - pin.in_port = odp_port_to_ofp_port(flow->in_port); + pin.in_port = flow->in_port; pin.reason = upcall->type == DPIF_UC_MISS ? OFPR_NO_MATCH : OFPR_ACTION; /* Get OpenFlow buffer_id. */ diff --git a/ofproto/ofproto-dpif.c b/ofproto/ofproto-dpif.c new file mode 100644 index 000000000..53d7ca436 --- /dev/null +++ b/ofproto/ofproto-dpif.c @@ -0,0 +1,3888 @@ +/* + * Copyright (c) 2009, 2010, 2011 Nicira Networks. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at: + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include + +#include "ofproto/private.h" + +#include + +#include "autopath.h" +#include "bond.h" +#include "byte-order.h" +#include "connmgr.h" +#include "coverage.h" +#include "cfm.h" +#include "dpif.h" +#include "dynamic-string.h" +#include "fail-open.h" +#include "hmapx.h" +#include "lacp.h" +#include "mac-learning.h" +#include "multipath.h" +#include "netdev.h" +#include "netlink.h" +#include "nx-match.h" +#include "odp-util.h" +#include "ofp-util.h" +#include "ofpbuf.h" +#include "ofp-print.h" +#include "ofproto-sflow.h" +#include "poll-loop.h" +#include "timer.h" +#include "unixctl.h" +#include "vlan-bitmap.h" +#include "vlog.h" + +VLOG_DEFINE_THIS_MODULE(ofproto_dpif); + +COVERAGE_DEFINE(ofproto_dpif_ctlr_action); +COVERAGE_DEFINE(ofproto_dpif_expired); +COVERAGE_DEFINE(ofproto_dpif_no_packet_in); +COVERAGE_DEFINE(ofproto_dpif_xlate); +COVERAGE_DEFINE(facet_changed_rule); +COVERAGE_DEFINE(facet_invalidated); +COVERAGE_DEFINE(facet_revalidate); +COVERAGE_DEFINE(facet_unexpected); + +/* Maximum depth of flow table recursion (due to NXAST_RESUBMIT actions) in a + * flow translation. */ +#define MAX_RESUBMIT_RECURSION 16 + +struct ofport_dpif; +struct ofproto_dpif; + +struct rule_dpif { + struct rule up; + + long long int used; /* Time last used; time created if not used. */ + + /* These statistics: + * + * - Do include packets and bytes from facets that have been deleted or + * whose own statistics have been folded into the rule. + * + * - Do include packets and bytes sent "by hand" that were accounted to + * the rule without any facet being involved (this is a rare corner + * case in rule_execute()). + * + * - Do not include packet or bytes that can be obtained from any facet's + * packet_count or byte_count member or that can be obtained from the + * datapath by, e.g., dpif_flow_get() for any facet. + */ + uint64_t packet_count; /* Number of packets received. */ + uint64_t byte_count; /* Number of bytes received. */ + + struct list facets; /* List of "struct facet"s. */ +}; + +static struct rule_dpif *rule_dpif_cast(const struct rule *rule) +{ + return rule ? CONTAINER_OF(rule, struct rule_dpif, up) : NULL; +} + +static struct rule_dpif *rule_dpif_lookup(struct ofproto_dpif *ofproto, + const struct flow *flow); + +#define MAX_MIRRORS 32 +typedef uint32_t mirror_mask_t; +#define MIRROR_MASK_C(X) UINT32_C(X) +BUILD_ASSERT_DECL(sizeof(mirror_mask_t) * CHAR_BIT >= MAX_MIRRORS); +struct ofmirror { + struct ofproto_dpif *ofproto; /* Owning ofproto. */ + size_t idx; /* In ofproto's "mirrors" array. */ + void *aux; /* Key supplied by ofproto's client. */ + char *name; /* Identifier for log messages. */ + + /* Selection criteria. */ + struct hmapx srcs; /* Contains "struct ofbundle *"s. */ + struct hmapx dsts; /* Contains "struct ofbundle *"s. */ + unsigned long *vlans; /* Bitmap of chosen VLANs, NULL selects all. */ + + /* Output (mutually exclusive). */ + struct ofbundle *out; /* Output port or NULL. 
*/ + int out_vlan; /* Output VLAN or -1. */ +}; + +static void mirror_destroy(struct ofmirror *); + +/* A group of one or more OpenFlow ports. */ +#define OFBUNDLE_FLOOD ((struct ofbundle *) 1) +struct ofbundle { + struct ofproto_dpif *ofproto; /* Owning ofproto. */ + struct hmap_node hmap_node; /* In struct ofproto's "bundles" hmap. */ + void *aux; /* Key supplied by ofproto's client. */ + char *name; /* Identifier for log messages. */ + + /* Configuration. */ + struct list ports; /* Contains "struct ofport"s. */ + int vlan; /* -1=trunk port, else a 12-bit VLAN ID. */ + unsigned long *trunks; /* Bitmap of trunked VLANs, if 'vlan' == -1. + * NULL if all VLANs are trunked. */ + struct lacp *lacp; /* LACP if LACP is enabled, otherwise NULL. */ + struct bond *bond; /* Nonnull iff more than one port. */ + + /* Status. */ + bool floodable; /* True if no port has OFPPC_NO_FLOOD set. */ + + /* Port mirroring info. */ + mirror_mask_t src_mirrors; /* Mirrors triggered when packet received. */ + mirror_mask_t dst_mirrors; /* Mirrors triggered when packet sent. */ + mirror_mask_t mirror_out; /* Mirrors that output to this bundle. */ +}; + +static void bundle_remove(struct ofport *); +static void bundle_destroy(struct ofbundle *); +static void bundle_del_port(struct ofport_dpif *); +static void bundle_run(struct ofbundle *); +static void bundle_wait(struct ofbundle *); + +struct action_xlate_ctx { +/* action_xlate_ctx_init() initializes these members. */ + + /* The ofproto. */ + struct ofproto_dpif *ofproto; + + /* Flow to which the OpenFlow actions apply. xlate_actions() will modify + * this flow when actions change header fields. */ + struct flow flow; + + /* The packet corresponding to 'flow', or a null pointer if we are + * revalidating without a packet to refer to. */ + const struct ofpbuf *packet; + + /* If nonnull, called just before executing a resubmit action. + * + * This is normally null so the client has to set it manually after + * calling action_xlate_ctx_init(). */ + void (*resubmit_hook)(struct action_xlate_ctx *, struct rule_dpif *); + + /* If true, the speciality of 'flow' should be checked before executing + * its actions. If special_cb returns false on 'flow' rendered + * uninstallable and no actions will be executed. */ + bool check_special; + +/* xlate_actions() initializes and uses these members. The client might want + * to look at them after it returns. */ + + struct ofpbuf *odp_actions; /* Datapath actions. */ + tag_type tags; /* Tags associated with OFPP_NORMAL actions. */ + bool may_set_up_flow; /* True ordinarily; false if the actions must + * be reassessed for every packet. */ + uint16_t nf_output_iface; /* Output interface index for NetFlow. */ + +/* xlate_actions() initializes and uses these members, but the client has no + * reason to look at them. */ + + int recurse; /* Recursion level, via xlate_table_action. */ + int last_pop_priority; /* Offset in 'odp_actions' just past most + * recent ODP_ACTION_ATTR_SET_PRIORITY. */ +}; + +static void action_xlate_ctx_init(struct action_xlate_ctx *, + struct ofproto_dpif *, const struct flow *, + const struct ofpbuf *); +static struct ofpbuf *xlate_actions(struct action_xlate_ctx *, + const union ofp_action *in, size_t n_in); + +/* An exact-match instantiation of an OpenFlow flow. */ +struct facet { + long long int used; /* Time last used; time created if not used. */ + + /* These statistics: + * + * - Do include packets and bytes sent "by hand", e.g. with + * dpif_execute(). 
+ * + * - Do include packets and bytes that were obtained from the datapath + * when a flow was deleted (e.g. dpif_flow_del()) or when its + * statistics were reset (e.g. dpif_flow_put() with + * DPIF_FP_ZERO_STATS). + * + * - Do not include any packets or bytes that can currently be obtained + * from the datapath by, e.g., dpif_flow_get(). + */ + uint64_t packet_count; /* Number of packets received. */ + uint64_t byte_count; /* Number of bytes received. */ + + uint64_t dp_packet_count; /* Last known packet count in the datapath. */ + uint64_t dp_byte_count; /* Last known byte count in the datapath. */ + + uint64_t rs_packet_count; /* Packets pushed to resubmit children. */ + uint64_t rs_byte_count; /* Bytes pushed to resubmit children. */ + long long int rs_used; /* Used time pushed to resubmit children. */ + + /* Number of bytes passed to account_cb. This may include bytes that can + * currently obtained from the datapath (thus, it can be greater than + * byte_count). */ + uint64_t accounted_bytes; + + struct hmap_node hmap_node; /* In owning ofproto's 'facets' hmap. */ + struct list list_node; /* In owning rule's 'facets' list. */ + struct rule_dpif *rule; /* Owning rule. */ + struct flow flow; /* Exact-match flow. */ + bool installed; /* Installed in datapath? */ + bool may_install; /* True ordinarily; false if actions must + * be reassessed for every packet. */ + size_t actions_len; /* Number of bytes in actions[]. */ + struct nlattr *actions; /* Datapath actions. */ + tag_type tags; /* Tags. */ + struct netflow_flow nf_flow; /* Per-flow NetFlow tracking data. */ +}; + +static struct facet *facet_create(struct rule_dpif *, const struct flow *, + const struct ofpbuf *packet); +static void facet_remove(struct ofproto_dpif *, struct facet *); +static void facet_free(struct facet *); + +static struct facet *facet_find(struct ofproto_dpif *, const struct flow *); +static struct facet *facet_lookup_valid(struct ofproto_dpif *, + const struct flow *); +static bool facet_revalidate(struct ofproto_dpif *, struct facet *); + +static void facet_execute(struct ofproto_dpif *, struct facet *, + struct ofpbuf *packet); + +static int facet_put__(struct ofproto_dpif *, struct facet *, + const struct nlattr *actions, size_t actions_len, + struct dpif_flow_stats *); +static void facet_install(struct ofproto_dpif *, struct facet *, + bool zero_stats); +static void facet_uninstall(struct ofproto_dpif *, struct facet *); +static void facet_flush_stats(struct ofproto_dpif *, struct facet *); + +static void facet_make_actions(struct ofproto_dpif *, struct facet *, + const struct ofpbuf *packet); +static void facet_update_time(struct ofproto_dpif *, struct facet *, + long long int used); +static void facet_update_stats(struct ofproto_dpif *, struct facet *, + const struct dpif_flow_stats *); +static void facet_push_stats(struct facet *); +static void facet_account(struct ofproto_dpif *, struct facet *, + uint64_t extra_bytes); + +static bool facet_is_controller_flow(struct facet *); + +static void flow_push_stats(const struct rule_dpif *, + struct flow *, uint64_t packets, uint64_t bytes, + long long int used); + +struct ofport_dpif { + struct ofport up; + + uint32_t odp_port; + struct ofbundle *bundle; /* Bundle that contains this port, if any. */ + struct list bundle_node; /* In struct ofbundle's "ports" list. */ + struct cfm *cfm; /* Connectivity Fault Management, if any. */ + tag_type tag; /* Tag associated with this port. 
*/ +}; + +static struct ofport_dpif * +ofport_dpif_cast(const struct ofport *ofport) +{ + assert(ofport->ofproto->ofproto_class == &ofproto_dpif_class); + return ofport ? CONTAINER_OF(ofport, struct ofport_dpif, up) : NULL; +} + +static void port_run(struct ofport_dpif *); +static void port_wait(struct ofport_dpif *); +static int set_cfm(struct ofport *, const struct cfm *, + const uint16_t *remote_mps, size_t n_remote_mps); + +struct ofproto_dpif { + struct ofproto up; + struct dpif *dpif; + int max_ports; + + /* Bridging. */ + struct netflow *netflow; + struct ofproto_sflow *sflow; + struct hmap bundles; /* Contains "struct ofbundle"s. */ + struct mac_learning *ml; + struct ofmirror *mirrors[MAX_MIRRORS]; + bool has_bonded_bundles; + + /* Expiration. */ + struct timer next_expiration; + + /* Facets. */ + struct hmap facets; + bool need_revalidate; + struct tag_set revalidate_set; +}; + +static void ofproto_dpif_unixctl_init(void); + +static struct ofproto_dpif * +ofproto_dpif_cast(const struct ofproto *ofproto) +{ + assert(ofproto->ofproto_class == &ofproto_dpif_class); + return CONTAINER_OF(ofproto, struct ofproto_dpif, up); +} + +static struct ofport_dpif *get_ofp_port(struct ofproto_dpif *, + uint16_t ofp_port); +static struct ofport_dpif *get_odp_port(struct ofproto_dpif *, + uint32_t odp_port); + +/* Packet processing. */ +static void update_learning_table(struct ofproto_dpif *, + const struct flow *, int vlan, + struct ofbundle *); +static bool is_admissible(struct ofproto_dpif *, const struct flow *, + bool have_packet, tag_type *, int *vlanp, + struct ofbundle **in_bundlep); +static void handle_upcall(struct ofproto_dpif *, struct dpif_upcall *); + +/* Flow expiration. */ +static int expire(struct ofproto_dpif *); + +/* Utilities. */ +static int send_packet(struct ofproto_dpif *, + uint32_t odp_port, uint16_t vlan_tci, + const struct ofpbuf *packet); + +/* Global variables. */ +static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + +/* Factory functions. */ + +static void +enumerate_types(struct sset *types) +{ + dp_enumerate_types(types); +} + +static int +enumerate_names(const char *type, struct sset *names) +{ + return dp_enumerate_names(type, names); +} + +static int +del(const char *type, const char *name) +{ + struct dpif *dpif; + int error; + + error = dpif_open(name, type, &dpif); + if (!error) { + error = dpif_delete(dpif); + dpif_close(dpif); + } + return error; +} + +/* Basic life-cycle. 
*/ + +static struct ofproto * +alloc(void) +{ + struct ofproto_dpif *ofproto = xmalloc(sizeof *ofproto); + return &ofproto->up; +} + +static void +dealloc(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + free(ofproto); +} + +static int +construct(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + const char *name = ofproto->up.name; + int error; + int i; + + error = dpif_create_and_open(name, ofproto->up.type, &ofproto->dpif); + if (error) { + VLOG_ERR("failed to open datapath %s: %s", name, strerror(error)); + return error; + } + + ofproto->max_ports = dpif_get_max_ports(ofproto->dpif); + + error = dpif_recv_set_mask(ofproto->dpif, + ((1u << DPIF_UC_MISS) | + (1u << DPIF_UC_ACTION) | + (1u << DPIF_UC_SAMPLE))); + if (error) { + VLOG_ERR("failed to listen on datapath %s: %s", name, strerror(error)); + dpif_close(ofproto->dpif); + return error; + } + dpif_flow_flush(ofproto->dpif); + dpif_recv_purge(ofproto->dpif); + + ofproto->netflow = NULL; + ofproto->sflow = NULL; + hmap_init(&ofproto->bundles); + ofproto->ml = mac_learning_create(); + for (i = 0; i < MAX_MIRRORS; i++) { + ofproto->mirrors[i] = NULL; + } + ofproto->has_bonded_bundles = false; + + timer_set_duration(&ofproto->next_expiration, 1000); + + hmap_init(&ofproto->facets); + ofproto->need_revalidate = false; + tag_set_init(&ofproto->revalidate_set); + + ofproto_dpif_unixctl_init(); + + return 0; +} + +static void +destruct(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + int i; + + for (i = 0; i < MAX_MIRRORS; i++) { + mirror_destroy(ofproto->mirrors[i]); + } + + netflow_destroy(ofproto->netflow); + ofproto_sflow_destroy(ofproto->sflow); + hmap_destroy(&ofproto->bundles); + mac_learning_destroy(ofproto->ml); + + hmap_destroy(&ofproto->facets); + + dpif_close(ofproto->dpif); +} + +static int +run(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct ofport_dpif *ofport; + struct ofbundle *bundle; + int i; + + dpif_run(ofproto->dpif); + + for (i = 0; i < 50; i++) { + struct dpif_upcall packet; + int error; + + error = dpif_recv(ofproto->dpif, &packet); + if (error) { + if (error == ENODEV) { + /* Datapath destroyed. */ + return error; + } + break; + } + + handle_upcall(ofproto, &packet); + } + + if (timer_expired(&ofproto->next_expiration)) { + int delay = expire(ofproto); + timer_set_duration(&ofproto->next_expiration, delay); + } + + if (ofproto->netflow) { + netflow_run(ofproto->netflow); + } + if (ofproto->sflow) { + ofproto_sflow_run(ofproto->sflow); + } + + HMAP_FOR_EACH (ofport, up.hmap_node, &ofproto->up.ports) { + port_run(ofport); + } + HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { + bundle_run(bundle); + } + + /* Now revalidate if there's anything to do. */ + if (ofproto->need_revalidate + || !tag_set_is_empty(&ofproto->revalidate_set)) { + struct tag_set revalidate_set = ofproto->revalidate_set; + bool revalidate_all = ofproto->need_revalidate; + struct facet *facet, *next; + + /* Clear the revalidation flags. 
*/ + tag_set_init(&ofproto->revalidate_set); + ofproto->need_revalidate = false; + + HMAP_FOR_EACH_SAFE (facet, next, hmap_node, &ofproto->facets) { + if (revalidate_all + || tag_set_intersects(&revalidate_set, facet->tags)) { + facet_revalidate(ofproto, facet); + } + } + } + + return 0; +} + +static void +wait(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct ofport_dpif *ofport; + struct ofbundle *bundle; + + dpif_wait(ofproto->dpif); + dpif_recv_wait(ofproto->dpif); + if (ofproto->sflow) { + ofproto_sflow_wait(ofproto->sflow); + } + if (!tag_set_is_empty(&ofproto->revalidate_set)) { + poll_immediate_wake(); + } + HMAP_FOR_EACH (ofport, up.hmap_node, &ofproto->up.ports) { + port_wait(ofport); + } + HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { + bundle_wait(bundle); + } + if (ofproto->need_revalidate) { + /* Shouldn't happen, but if it does just go around again. */ + VLOG_DBG_RL(&rl, "need revalidate in ofproto_wait_cb()"); + poll_immediate_wake(); + } else { + timer_wait(&ofproto->next_expiration); + } +} + +static void +flush(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct facet *facet, *next_facet; + + HMAP_FOR_EACH_SAFE (facet, next_facet, hmap_node, &ofproto->facets) { + /* Mark the facet as not installed so that facet_remove() doesn't + * bother trying to uninstall it. There is no point in uninstalling it + * individually since we are about to blow away all the facets with + * dpif_flow_flush(). */ + facet->installed = false; + facet->dp_packet_count = 0; + facet->dp_byte_count = 0; + facet_remove(ofproto, facet); + } + dpif_flow_flush(ofproto->dpif); +} + +static int +set_netflow(struct ofproto *ofproto_, + const struct netflow_options *netflow_options) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + + if (netflow_options) { + if (!ofproto->netflow) { + ofproto->netflow = netflow_create(); + } + return netflow_set_options(ofproto->netflow, netflow_options); + } else { + netflow_destroy(ofproto->netflow); + ofproto->netflow = NULL; + return 0; + } +} + +static struct ofport * +port_alloc(void) +{ + struct ofport_dpif *port = xmalloc(sizeof *port); + return &port->up; +} + +static void +port_dealloc(struct ofport *port_) +{ + struct ofport_dpif *port = ofport_dpif_cast(port_); + free(port); +} + +static int +port_construct(struct ofport *port_) +{ + struct ofport_dpif *port = ofport_dpif_cast(port_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(port->up.ofproto); + + port->odp_port = ofp_port_to_odp_port(port->up.ofp_port); + port->bundle = NULL; + port->cfm = NULL; + port->tag = tag_create_random(); + + if (ofproto->sflow) { + ofproto_sflow_add_port(ofproto->sflow, port->odp_port, + netdev_get_name(port->up.netdev)); + } + + return 0; +} + +static void +port_destruct(struct ofport *port_) +{ + struct ofport_dpif *port = ofport_dpif_cast(port_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(port->up.ofproto); + + bundle_remove(port_); + set_cfm(port_, NULL, NULL, 0); + if (ofproto->sflow) { + ofproto_sflow_del_port(ofproto->sflow, port->odp_port); + } +} + +static void +port_modified(struct ofport *port_) +{ + struct ofport_dpif *port = ofport_dpif_cast(port_); + + if (port->bundle && port->bundle->bond) { + bond_slave_set_netdev(port->bundle->bond, port, port->up.netdev); + } +} + +static void +port_reconfigured(struct ofport *port_, ovs_be32 old_config) +{ + struct ofport_dpif *port = ofport_dpif_cast(port_); + struct ofproto_dpif *ofproto = 
ofproto_dpif_cast(port->up.ofproto); + ovs_be32 changed = old_config ^ port->up.opp.config; + + if (changed & htonl(OFPPC_NO_RECV | OFPPC_NO_RECV_STP | + OFPPC_NO_FWD | OFPPC_NO_FLOOD)) { + ofproto->need_revalidate = true; + } +} + +static int +set_sflow(struct ofproto *ofproto_, + const struct ofproto_sflow_options *sflow_options) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct ofproto_sflow *os = ofproto->sflow; + if (sflow_options) { + if (!os) { + struct ofport_dpif *ofport; + + os = ofproto->sflow = ofproto_sflow_create(ofproto->dpif); + HMAP_FOR_EACH (ofport, up.hmap_node, &ofproto->up.ports) { + ofproto_sflow_add_port(os, ofport->odp_port, + netdev_get_name(ofport->up.netdev)); + } + } + ofproto_sflow_set_options(os, sflow_options); + } else { + ofproto_sflow_destroy(os); + ofproto->sflow = NULL; + } + return 0; +} + +static int +set_cfm(struct ofport *ofport_, const struct cfm *cfm, + const uint16_t *remote_mps, size_t n_remote_mps) +{ + struct ofport_dpif *ofport = ofport_dpif_cast(ofport_); + int error; + + if (!cfm) { + error = 0; + } else { + if (!ofport->cfm) { + ofport->cfm = cfm_create(); + } + + ofport->cfm->mpid = cfm->mpid; + ofport->cfm->interval = cfm->interval; + memcpy(ofport->cfm->maid, cfm->maid, CCM_MAID_LEN); + + cfm_update_remote_mps(ofport->cfm, remote_mps, n_remote_mps); + + if (cfm_configure(ofport->cfm)) { + return 0; + } + + error = EINVAL; + } + cfm_destroy(ofport->cfm); + ofport->cfm = NULL; + return error; +} + +static int +get_cfm(struct ofport *ofport_, const struct cfm **cfmp) +{ + struct ofport_dpif *ofport = ofport_dpif_cast(ofport_); + *cfmp = ofport->cfm; + return 0; +} + +/* Bundles. */ + +/* Expires all MAC learning entries associated with 'port' and forces ofproto + * to revalidate every flow. */ +static void +bundle_flush_macs(struct ofbundle *bundle) +{ + struct ofproto_dpif *ofproto = bundle->ofproto; + struct mac_learning *ml = ofproto->ml; + struct mac_entry *mac, *next_mac; + + ofproto->need_revalidate = true; + LIST_FOR_EACH_SAFE (mac, next_mac, lru_node, &ml->lrus) { + if (mac->port.p == bundle) { + mac_learning_expire(ml, mac); + } + } +} + +static struct ofbundle * +bundle_lookup(const struct ofproto_dpif *ofproto, void *aux) +{ + struct ofbundle *bundle; + + HMAP_FOR_EACH_IN_BUCKET (bundle, hmap_node, hash_pointer(aux, 0), + &ofproto->bundles) { + if (bundle->aux == aux) { + return bundle; + } + } + return NULL; +} + +/* Looks up each of the 'n_auxes' pointers in 'auxes' as bundles and adds the + * ones that are found to 'bundles'. 
*/ +static void +bundle_lookup_multiple(struct ofproto_dpif *ofproto, + void **auxes, size_t n_auxes, + struct hmapx *bundles) +{ + size_t i; + + hmapx_init(bundles); + for (i = 0; i < n_auxes; i++) { + struct ofbundle *bundle = bundle_lookup(ofproto, auxes[i]); + if (bundle) { + hmapx_add(bundles, bundle); + } + } +} + +static void +bundle_del_port(struct ofport_dpif *port) +{ + struct ofbundle *bundle = port->bundle; + + list_remove(&port->bundle_node); + port->bundle = NULL; + + if (bundle->lacp) { + lacp_slave_unregister(bundle->lacp, port); + } + if (bundle->bond) { + bond_slave_unregister(bundle->bond, port); + } + + bundle->floodable = true; + LIST_FOR_EACH (port, bundle_node, &bundle->ports) { + if (port->up.opp.config & htonl(OFPPC_NO_FLOOD)) { + bundle->floodable = false; + } + } +} + +static bool +bundle_add_port(struct ofbundle *bundle, uint32_t ofp_port, + struct lacp_slave_settings *lacp) +{ + struct ofport_dpif *port; + + port = get_ofp_port(bundle->ofproto, ofp_port); + if (!port) { + return false; + } + + if (port->bundle != bundle) { + if (port->bundle) { + bundle_del_port(port); + } + + port->bundle = bundle; + list_push_back(&bundle->ports, &port->bundle_node); + if (port->up.opp.config & htonl(OFPPC_NO_FLOOD)) { + bundle->floodable = false; + } + } + if (lacp) { + lacp_slave_register(bundle->lacp, port, lacp); + } + + return true; +} + +static void +bundle_destroy(struct ofbundle *bundle) +{ + struct ofproto_dpif *ofproto; + struct ofport_dpif *port, *next_port; + int i; + + if (!bundle) { + return; + } + + ofproto = bundle->ofproto; + for (i = 0; i < MAX_MIRRORS; i++) { + struct ofmirror *m = ofproto->mirrors[i]; + if (m) { + if (m->out == bundle) { + mirror_destroy(m); + } else if (hmapx_find_and_delete(&m->srcs, bundle) + || hmapx_find_and_delete(&m->dsts, bundle)) { + ofproto->need_revalidate = true; + } + } + } + + LIST_FOR_EACH_SAFE (port, next_port, bundle_node, &bundle->ports) { + bundle_del_port(port); + } + + bundle_flush_macs(bundle); + hmap_remove(&ofproto->bundles, &bundle->hmap_node); + free(bundle->name); + free(bundle->trunks); + lacp_destroy(bundle->lacp); + bond_destroy(bundle->bond); + free(bundle); +} + +static int +bundle_set(struct ofproto *ofproto_, void *aux, + const struct ofproto_bundle_settings *s) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + bool need_flush = false; + const unsigned long *trunks; + struct ofport_dpif *port; + struct ofbundle *bundle; + size_t i; + bool ok; + + if (!s) { + bundle_destroy(bundle_lookup(ofproto, aux)); + return 0; + } + + assert(s->n_slaves == 1 || s->bond != NULL); + assert((s->lacp != NULL) == (s->lacp_slaves != NULL)); + + bundle = bundle_lookup(ofproto, aux); + if (!bundle) { + bundle = xmalloc(sizeof *bundle); + + bundle->ofproto = ofproto; + hmap_insert(&ofproto->bundles, &bundle->hmap_node, + hash_pointer(aux, 0)); + bundle->aux = aux; + bundle->name = NULL; + + list_init(&bundle->ports); + bundle->vlan = -1; + bundle->trunks = NULL; + bundle->lacp = NULL; + bundle->bond = NULL; + + bundle->floodable = true; + + bundle->src_mirrors = 0; + bundle->dst_mirrors = 0; + bundle->mirror_out = 0; + } + + if (!bundle->name || strcmp(s->name, bundle->name)) { + free(bundle->name); + bundle->name = xstrdup(s->name); + } + + /* LACP. */ + if (s->lacp) { + if (!bundle->lacp) { + bundle->lacp = lacp_create(); + } + lacp_configure(bundle->lacp, s->lacp); + } else { + lacp_destroy(bundle->lacp); + bundle->lacp = NULL; + } + + /* Update set of ports. 
*/ + ok = true; + for (i = 0; i < s->n_slaves; i++) { + if (!bundle_add_port(bundle, s->slaves[i], + s->lacp ? &s->lacp_slaves[i] : NULL)) { + ok = false; + } + } + if (!ok || list_size(&bundle->ports) != s->n_slaves) { + struct ofport_dpif *next_port; + + LIST_FOR_EACH_SAFE (port, next_port, bundle_node, &bundle->ports) { + for (i = 0; i < s->n_slaves; i++) { + if (s->slaves[i] == odp_port_to_ofp_port(port->odp_port)) { + goto found; + } + } + + bundle_del_port(port); + found: ; + } + } + assert(list_size(&bundle->ports) <= s->n_slaves); + + if (list_is_empty(&bundle->ports)) { + bundle_destroy(bundle); + return EINVAL; + } + + /* Set VLAN tag. */ + if (s->vlan != bundle->vlan) { + bundle->vlan = s->vlan; + need_flush = true; + } + + /* Get trunked VLANs. */ + trunks = s->vlan == -1 ? NULL : s->trunks; + if (!vlan_bitmap_equal(trunks, bundle->trunks)) { + free(bundle->trunks); + bundle->trunks = vlan_bitmap_clone(trunks); + need_flush = true; + } + + /* Bonding. */ + if (!list_is_short(&bundle->ports)) { + bundle->ofproto->has_bonded_bundles = true; + if (bundle->bond) { + if (bond_reconfigure(bundle->bond, s->bond)) { + ofproto->need_revalidate = true; + } + } else { + bundle->bond = bond_create(s->bond); + } + + LIST_FOR_EACH (port, bundle_node, &bundle->ports) { + uint16_t stable_id = (bundle->lacp + ? lacp_slave_get_port_id(bundle->lacp, port) + : port->odp_port); + bond_slave_register(bundle->bond, port, stable_id, + port->up.netdev); + } + } else { + bond_destroy(bundle->bond); + bundle->bond = NULL; + } + + /* If we changed something that would affect MAC learning, un-learn + * everything on this port and force flow revalidation. */ + if (need_flush) { + bundle_flush_macs(bundle); + } + + return 0; +} + +static void +bundle_remove(struct ofport *port_) +{ + struct ofport_dpif *port = ofport_dpif_cast(port_); + struct ofbundle *bundle = port->bundle; + + if (bundle) { + bundle_del_port(port); + if (list_is_empty(&bundle->ports)) { + bundle_destroy(bundle); + } else if (list_is_short(&bundle->ports)) { + bond_destroy(bundle->bond); + bundle->bond = NULL; + } + } +} + +static void +send_pdu_cb(void *port_, const struct lacp_pdu *pdu) +{ + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 10); + struct ofport_dpif *port = port_; + uint8_t ea[ETH_ADDR_LEN]; + int error; + + error = netdev_get_etheraddr(port->up.netdev, ea); + if (!error) { + struct lacp_pdu *packet_pdu; + struct ofpbuf packet; + + ofpbuf_init(&packet, 0); + packet_pdu = eth_compose(&packet, eth_addr_lacp, ea, ETH_TYPE_LACP, + sizeof *packet_pdu); + *packet_pdu = *pdu; + error = netdev_send(port->up.netdev, &packet); + if (error) { + VLOG_WARN_RL(&rl, "port %s: sending LACP PDU on iface %s failed " + "(%s)", port->bundle->name, + netdev_get_name(port->up.netdev), strerror(error)); + } + ofpbuf_uninit(&packet); + } else { + VLOG_ERR_RL(&rl, "port %s: cannot obtain Ethernet address of iface " + "%s (%s)", port->bundle->name, + netdev_get_name(port->up.netdev), strerror(error)); + } +} + +static void +bundle_send_learning_packets(struct ofbundle *bundle) +{ + struct ofproto_dpif *ofproto = bundle->ofproto; + int error, n_packets, n_errors; + struct mac_entry *e; + + error = n_packets = n_errors = 0; + LIST_FOR_EACH (e, lru_node, &ofproto->ml->lrus) { + if (e->port.p != bundle) { + int ret = bond_send_learning_packet(bundle->bond, e->mac, e->vlan); + if (ret) { + error = ret; + n_errors++; + } + n_packets++; + } + } + + if (n_errors) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + 
VLOG_WARN_RL(&rl, "bond %s: %d errors sending %d gratuitous learning " + "packets, last error was: %s", + bundle->name, n_errors, n_packets, strerror(error)); + } else { + VLOG_DBG("bond %s: sent %d gratuitous learning packets", + bundle->name, n_packets); + } +} + +static void +bundle_run(struct ofbundle *bundle) +{ + if (bundle->lacp) { + lacp_run(bundle->lacp, send_pdu_cb); + } + if (bundle->bond) { + struct ofport_dpif *port; + + LIST_FOR_EACH (port, bundle_node, &bundle->ports) { + bool may_enable = lacp_slave_may_enable(bundle->lacp, port); + bond_slave_set_lacp_may_enable(bundle->bond, port, may_enable); + } + + bond_run(bundle->bond, &bundle->ofproto->revalidate_set, + lacp_negotiated(bundle->lacp)); + if (bond_should_send_learning_packets(bundle->bond)) { + bundle_send_learning_packets(bundle); + } + } +} + +static void +bundle_wait(struct ofbundle *bundle) +{ + if (bundle->lacp) { + lacp_wait(bundle->lacp); + } + if (bundle->bond) { + bond_wait(bundle->bond); + } +} + +/* Mirrors. */ + +static int +mirror_scan(struct ofproto_dpif *ofproto) +{ + int idx; + + for (idx = 0; idx < MAX_MIRRORS; idx++) { + if (!ofproto->mirrors[idx]) { + return idx; + } + } + return -1; +} + +static struct ofmirror * +mirror_lookup(struct ofproto_dpif *ofproto, void *aux) +{ + int i; + + for (i = 0; i < MAX_MIRRORS; i++) { + struct ofmirror *mirror = ofproto->mirrors[i]; + if (mirror && mirror->aux == aux) { + return mirror; + } + } + + return NULL; +} + +static int +mirror_set(struct ofproto *ofproto_, void *aux, + const struct ofproto_mirror_settings *s) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + mirror_mask_t mirror_bit; + struct ofbundle *bundle; + struct ofmirror *mirror; + struct ofbundle *out; + struct hmapx srcs; /* Contains "struct ofbundle *"s. */ + struct hmapx dsts; /* Contains "struct ofbundle *"s. */ + int out_vlan; + + mirror = mirror_lookup(ofproto, aux); + if (!s) { + mirror_destroy(mirror); + return 0; + } + if (!mirror) { + int idx; + + idx = mirror_scan(ofproto); + if (idx < 0) { + VLOG_WARN("bridge %s: maximum of %d port mirrors reached, " + "cannot create %s", + ofproto->up.name, MAX_MIRRORS, s->name); + return EFBIG; + } + + mirror = ofproto->mirrors[idx] = xzalloc(sizeof *mirror); + mirror->ofproto = ofproto; + mirror->idx = idx; + mirror->out_vlan = -1; + mirror->name = NULL; + } + + if (!mirror->name || strcmp(s->name, mirror->name)) { + free(mirror->name); + mirror->name = xstrdup(s->name); + } + + /* Get the new configuration. */ + if (s->out_bundle) { + out = bundle_lookup(ofproto, s->out_bundle); + if (!out) { + mirror_destroy(mirror); + return EINVAL; + } + out_vlan = -1; + } else { + out = NULL; + out_vlan = s->out_vlan; + } + bundle_lookup_multiple(ofproto, s->srcs, s->n_srcs, &srcs); + bundle_lookup_multiple(ofproto, s->dsts, s->n_dsts, &dsts); + + /* If the configuration has not changed, do nothing. */ + if (hmapx_equals(&srcs, &mirror->srcs) + && hmapx_equals(&dsts, &mirror->dsts) + && vlan_bitmap_equal(mirror->vlans, s->src_vlans) + && mirror->out == out + && mirror->out_vlan == out_vlan) + { + hmapx_destroy(&srcs); + hmapx_destroy(&dsts); + return 0; + } + + hmapx_swap(&srcs, &mirror->srcs); + hmapx_destroy(&srcs); + + hmapx_swap(&dsts, &mirror->dsts); + hmapx_destroy(&dsts); + + free(mirror->vlans); + mirror->vlans = vlan_bitmap_clone(s->src_vlans); + + mirror->out = out; + mirror->out_vlan = out_vlan; + + /* Update bundles. 
*/ + mirror_bit = MIRROR_MASK_C(1) << mirror->idx; + HMAP_FOR_EACH (bundle, hmap_node, &mirror->ofproto->bundles) { + if (hmapx_contains(&mirror->srcs, bundle)) { + bundle->src_mirrors |= mirror_bit; + } else { + bundle->src_mirrors &= ~mirror_bit; + } + + if (hmapx_contains(&mirror->dsts, bundle)) { + bundle->dst_mirrors |= mirror_bit; + } else { + bundle->dst_mirrors &= ~mirror_bit; + } + + if (mirror->out == bundle) { + bundle->mirror_out |= mirror_bit; + } else { + bundle->mirror_out &= ~mirror_bit; + } + } + + ofproto->need_revalidate = true; + mac_learning_flush(ofproto->ml); + + return 0; +} + +static void +mirror_destroy(struct ofmirror *mirror) +{ + struct ofproto_dpif *ofproto; + mirror_mask_t mirror_bit; + struct ofbundle *bundle; + + if (!mirror) { + return; + } + + ofproto = mirror->ofproto; + ofproto->need_revalidate = true; + mac_learning_flush(ofproto->ml); + + mirror_bit = MIRROR_MASK_C(1) << mirror->idx; + HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { + bundle->src_mirrors &= ~mirror_bit; + bundle->dst_mirrors &= ~mirror_bit; + bundle->mirror_out &= ~mirror_bit; + } + + hmapx_destroy(&mirror->srcs); + hmapx_destroy(&mirror->dsts); + free(mirror->vlans); + + ofproto->mirrors[mirror->idx] = NULL; + free(mirror->name); + free(mirror); +} + +static int +set_flood_vlans(struct ofproto *ofproto_, unsigned long *flood_vlans) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + if (mac_learning_set_flood_vlans(ofproto->ml, flood_vlans)) { + ofproto->need_revalidate = true; + mac_learning_flush(ofproto->ml); + } + return 0; +} + +static bool +is_mirror_output_bundle(struct ofproto *ofproto_, void *aux) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct ofbundle *bundle = bundle_lookup(ofproto, aux); + return bundle && bundle->mirror_out != 0; +} + +/* Ports. 
*/ + +static struct ofport_dpif * +get_ofp_port(struct ofproto_dpif *ofproto, uint16_t ofp_port) +{ + return ofport_dpif_cast(ofproto_get_port(&ofproto->up, ofp_port)); +} + +static struct ofport_dpif * +get_odp_port(struct ofproto_dpif *ofproto, uint32_t odp_port) +{ + return get_ofp_port(ofproto, odp_port_to_ofp_port(odp_port)); +} + +static void +ofproto_port_from_dpif_port(struct ofproto_port *ofproto_port, + struct dpif_port *dpif_port) +{ + ofproto_port->name = dpif_port->name; + ofproto_port->type = dpif_port->type; + ofproto_port->ofp_port = odp_port_to_ofp_port(dpif_port->port_no); +} + +static void +port_run(struct ofport_dpif *ofport) +{ + if (ofport->cfm) { + cfm_run(ofport->cfm); + + if (cfm_should_send_ccm(ofport->cfm)) { + struct ofpbuf packet; + struct ccm *ccm; + + ofpbuf_init(&packet, 0); + ccm = eth_compose(&packet, eth_addr_ccm, ofport->up.opp.hw_addr, + ETH_TYPE_CFM, sizeof *ccm); + cfm_compose_ccm(ofport->cfm, ccm); + send_packet(ofproto_dpif_cast(ofport->up.ofproto), + ofport->odp_port, 0, &packet); + ofpbuf_uninit(&packet); + } + } +} + +static void +port_wait(struct ofport_dpif *ofport) +{ + if (ofport->cfm) { + cfm_wait(ofport->cfm); + } +} + +static int +port_query_by_name(const struct ofproto *ofproto_, const char *devname, + struct ofproto_port *ofproto_port) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct dpif_port dpif_port; + int error; + + error = dpif_port_query_by_name(ofproto->dpif, devname, &dpif_port); + if (!error) { + ofproto_port_from_dpif_port(ofproto_port, &dpif_port); + } + return error; +} + +static int +port_add(struct ofproto *ofproto_, struct netdev *netdev, uint16_t *ofp_portp) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + uint16_t odp_port; + int error; + + error = dpif_port_add(ofproto->dpif, netdev, &odp_port); + if (!error) { + *ofp_portp = odp_port_to_ofp_port(odp_port); + } + return error; +} + +static int +port_del(struct ofproto *ofproto_, uint16_t ofp_port) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + int error; + + error = dpif_port_del(ofproto->dpif, ofp_port_to_odp_port(ofp_port)); + if (!error) { + struct ofport_dpif *ofport = get_ofp_port(ofproto, ofp_port); + if (ofport) { + /* The caller is going to close ofport->up.netdev. If this is a + * bonded port, then the bond is using that netdev, so remove it + * from the bond. The client will need to reconfigure everything + * after deleting ports, so then the slave will get re-added. */ + bundle_remove(&ofport->up); + } + } + return error; +} + +struct port_dump_state { + struct dpif_port_dump dump; + bool done; +}; + +static int +port_dump_start(const struct ofproto *ofproto_, void **statep) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + struct port_dump_state *state; + + *statep = state = xmalloc(sizeof *state); + dpif_port_dump_start(&state->dump, ofproto->dpif); + state->done = false; + return 0; +} + +static int +port_dump_next(const struct ofproto *ofproto_ OVS_UNUSED, void *state_, + struct ofproto_port *port) +{ + struct port_dump_state *state = state_; + struct dpif_port dpif_port; + + if (dpif_port_dump_next(&state->dump, &dpif_port)) { + ofproto_port_from_dpif_port(port, &dpif_port); + return 0; + } else { + int error = dpif_port_dump_done(&state->dump); + state->done = true; + return error ? 
error : EOF; + } +} + +static int +port_dump_done(const struct ofproto *ofproto_ OVS_UNUSED, void *state_) +{ + struct port_dump_state *state = state_; + + if (!state->done) { + dpif_port_dump_done(&state->dump); + } + free(state); + return 0; +} + +static int +port_poll(const struct ofproto *ofproto_, char **devnamep) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + return dpif_port_poll(ofproto->dpif, devnamep); +} + +static void +port_poll_wait(const struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + dpif_port_poll_wait(ofproto->dpif); +} + +static int +port_is_lacp_current(const struct ofport *ofport_) +{ + const struct ofport_dpif *ofport = ofport_dpif_cast(ofport_); + return (ofport->bundle && ofport->bundle->lacp + ? lacp_slave_is_current(ofport->bundle->lacp, ofport) + : -1); +} + +/* Upcall handling. */ + +/* Given 'upcall', of type DPIF_UC_ACTION or DPIF_UC_MISS, sends an + * OFPT_PACKET_IN message to each OpenFlow controller as necessary according to + * their individual configurations. + * + * If 'clone' is true, the caller retains ownership of 'upcall->packet'. + * Otherwise, ownership is transferred to this function. */ +static void +send_packet_in(struct ofproto_dpif *ofproto, struct dpif_upcall *upcall, + const struct flow *flow, bool clone) +{ + struct ofputil_packet_in pin; + + pin.packet = upcall->packet; + pin.in_port = flow->in_port; + pin.reason = upcall->type == DPIF_UC_MISS ? OFPR_NO_MATCH : OFPR_ACTION; + pin.buffer_id = 0; /* not yet known */ + pin.send_len = upcall->userdata; + connmgr_send_packet_in(ofproto->up.connmgr, upcall, flow, + clone ? NULL : upcall->packet); +} + +static bool +process_special(struct ofproto_dpif *ofproto, const struct flow *flow, + const struct ofpbuf *packet) +{ + if (cfm_should_process_flow(flow)) { + struct ofport_dpif *ofport = get_ofp_port(ofproto, flow->in_port); + if (ofport && ofport->cfm) { + cfm_process_heartbeat(ofport->cfm, packet); + } + return true; + } else if (flow->dl_type == htons(ETH_TYPE_LACP)) { + struct ofport_dpif *port = get_ofp_port(ofproto, flow->in_port); + if (port && port->bundle && port->bundle->lacp) { + const struct lacp_pdu *pdu = parse_lacp_packet(packet); + if (pdu) { + lacp_process_pdu(port->bundle->lacp, port, pdu); + } + return true; + } + } + return false; +} + +static void +handle_miss_upcall(struct ofproto_dpif *ofproto, struct dpif_upcall *upcall) +{ + struct facet *facet; + struct flow flow; + + /* Obtain in_port and tun_id, at least. */ + odp_flow_key_to_flow(upcall->key, upcall->key_len, &flow); + + /* Set header pointers in 'flow'. */ + flow_extract(upcall->packet, flow.tun_id, flow.in_port, &flow); + + /* Handle 802.1ag and LACP. */ + if (process_special(ofproto, &flow, upcall->packet)) { + ofpbuf_delete(upcall->packet); + return; + } + + /* Check with in-band control to see if this packet should be sent + * to the local port regardless of the flow table. */ + if (connmgr_msg_in_hook(ofproto->up.connmgr, &flow, upcall->packet)) { + send_packet(ofproto, OFPP_LOCAL, 0, upcall->packet); + } + + facet = facet_lookup_valid(ofproto, &flow); + if (!facet) { + struct rule_dpif *rule = rule_dpif_lookup(ofproto, &flow); + if (!rule) { + /* Don't send a packet-in if OFPPC_NO_PACKET_IN asserted. 
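+ * The input port's OpenFlow configuration is checked below: if the flag is set the packet is dropped (and counted for coverage), otherwise it is delivered to the controller via send_packet_in().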
*/ + struct ofport_dpif *port = get_ofp_port(ofproto, flow.in_port); + if (port) { + if (port->up.opp.config & htonl(OFPPC_NO_PACKET_IN)) { + COVERAGE_INC(ofproto_dpif_no_packet_in); + /* XXX install 'drop' flow entry */ + ofpbuf_delete(upcall->packet); + return; + } + } else { + VLOG_WARN_RL(&rl, "packet-in on unknown port %"PRIu16, + flow.in_port); + } + + send_packet_in(ofproto, upcall, &flow, false); + return; + } + + facet = facet_create(rule, &flow, upcall->packet); + } else if (!facet->may_install) { + /* The facet is not installable, that is, we need to process every + * packet, so process the current packet's actions into 'facet'. */ + facet_make_actions(ofproto, facet, upcall->packet); + } + + if (facet->rule->up.cr.priority == FAIL_OPEN_PRIORITY) { + /* + * Extra-special case for fail-open mode. + * + * We are in fail-open mode and the packet matched the fail-open rule, + * but we are connected to a controller too. We should send the packet + * up to the controller in the hope that it will try to set up a flow + * and thereby allow us to exit fail-open. + * + * See the top-level comment in fail-open.c for more information. + */ + send_packet_in(ofproto, upcall, &flow, true); + } + + facet_execute(ofproto, facet, upcall->packet); + facet_install(ofproto, facet, false); +} + +static void +handle_upcall(struct ofproto_dpif *ofproto, struct dpif_upcall *upcall) +{ + struct flow flow; + + switch (upcall->type) { + case DPIF_UC_ACTION: + COVERAGE_INC(ofproto_dpif_ctlr_action); + odp_flow_key_to_flow(upcall->key, upcall->key_len, &flow); + send_packet_in(ofproto, upcall, &flow, false); + break; + + case DPIF_UC_SAMPLE: + if (ofproto->sflow) { + odp_flow_key_to_flow(upcall->key, upcall->key_len, &flow); + ofproto_sflow_received(ofproto->sflow, upcall, &flow); + } + ofpbuf_delete(upcall->packet); + break; + + case DPIF_UC_MISS: + handle_miss_upcall(ofproto, upcall); + break; + + case DPIF_N_UC_TYPES: + default: + VLOG_WARN_RL(&rl, "upcall has unexpected type %"PRIu32, upcall->type); + break; + } +} + +/* Flow expiration. */ + +static int facet_max_idle(const struct ofproto_dpif *); +static void update_stats(struct ofproto_dpif *); +static void rule_expire(struct rule_dpif *); +static void expire_facets(struct ofproto_dpif *, int dp_max_idle); + +/* This function is called periodically by run(). Its job is to collect + * updates for the flows that have been installed into the datapath, most + * importantly when they last were used, and then use that information to + * expire flows that have not been used recently. + * + * Returns the number of milliseconds after which it should be called again. */ +static int +expire(struct ofproto_dpif *ofproto) +{ + struct rule_dpif *rule, *next_rule; + struct cls_cursor cursor; + int dp_max_idle; + + /* Update stats for each flow in the datapath. */ + update_stats(ofproto); + + /* Expire facets that have been idle too long. */ + dp_max_idle = facet_max_idle(ofproto); + expire_facets(ofproto, dp_max_idle); + + /* Expire OpenFlow flows whose idle_timeout or hard_timeout has passed. */ + cls_cursor_init(&cursor, &ofproto->up.cls, NULL); + CLS_CURSOR_FOR_EACH_SAFE (rule, next_rule, up.cr, &cursor) { + rule_expire(rule); + } + + /* All outstanding data in existing flows has been accounted, so it's a + * good time to do bond rebalancing. 
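+ * Each bundle that has a bond gets a bond_rebalance() call, which may add tags to 'revalidate_set' so that affected flows are re-translated with the new slave assignments.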
*/ + if (ofproto->has_bonded_bundles) { + struct ofbundle *bundle; + + HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { + if (bundle->bond) { + bond_rebalance(bundle->bond, &ofproto->revalidate_set); + } + } + } + + return MIN(dp_max_idle, 1000); +} + +/* Update 'packet_count', 'byte_count', and 'used' members of installed facets. + * + * This function also pushes statistics updates to rules which each facet + * resubmits into. Generally these statistics will be accurate. However, if a + * facet changes the rule it resubmits into at some time in between + * update_stats() runs, it is possible that statistics accrued to the + * old rule will be incorrectly attributed to the new rule. This could be + * avoided by calling update_stats() whenever rules are created or + * deleted. However, the performance impact of making so many calls to the + * datapath do not justify the benefit of having perfectly accurate statistics. + */ +static void +update_stats(struct ofproto_dpif *p) +{ + const struct dpif_flow_stats *stats; + struct dpif_flow_dump dump; + const struct nlattr *key; + size_t key_len; + + dpif_flow_dump_start(&dump, p->dpif); + while (dpif_flow_dump_next(&dump, &key, &key_len, NULL, NULL, &stats)) { + struct facet *facet; + struct flow flow; + + if (odp_flow_key_to_flow(key, key_len, &flow)) { + struct ds s; + + ds_init(&s); + odp_flow_key_format(key, key_len, &s); + VLOG_WARN_RL(&rl, "failed to convert ODP flow key to flow: %s", + ds_cstr(&s)); + ds_destroy(&s); + + continue; + } + facet = facet_find(p, &flow); + + if (facet && facet->installed) { + + if (stats->n_packets >= facet->dp_packet_count) { + uint64_t extra = stats->n_packets - facet->dp_packet_count; + facet->packet_count += extra; + } else { + VLOG_WARN_RL(&rl, "unexpected packet count from the datapath"); + } + + if (stats->n_bytes >= facet->dp_byte_count) { + facet->byte_count += stats->n_bytes - facet->dp_byte_count; + } else { + VLOG_WARN_RL(&rl, "unexpected byte count from datapath"); + } + + facet->dp_packet_count = stats->n_packets; + facet->dp_byte_count = stats->n_bytes; + + facet_update_time(p, facet, stats->used); + facet_account(p, facet, stats->n_bytes); + facet_push_stats(facet); + } else { + /* There's a flow in the datapath that we know nothing about. + * Delete it. */ + COVERAGE_INC(facet_unexpected); + dpif_flow_del(p->dpif, key, key_len, NULL); + } + } + dpif_flow_dump_done(&dump); +} + +/* Calculates and returns the number of milliseconds of idle time after which + * facets should expire from the datapath and we should fold their statistics + * into their parent rules in userspace. */ +static int +facet_max_idle(const struct ofproto_dpif *ofproto) +{ + /* + * Idle time histogram. + * + * Most of the time a switch has a relatively small number of facets. When + * this is the case we might as well keep statistics for all of them in + * userspace and to cache them in the kernel datapath for performance as + * well. + * + * As the number of facets increases, the memory required to maintain + * statistics about them in userspace and in the kernel becomes + * significant. However, with a large number of facets it is likely that + * only a few of them are "heavy hitters" that consume a large amount of + * bandwidth. At this point, only heavy hitters are worth caching in the + * kernel and maintaining in userspaces; other facets we can discard. + * + * The technique used to compute the idle time is to build a histogram with + * N_BUCKETS buckets whose width is BUCKET_WIDTH msecs each. 
Each facet + * that is installed in the kernel gets dropped in the appropriate bucket. + * After the histogram has been built, we compute the cutoff so that only + * the most-recently-used 1% of facets (but at least 1000 flows) are kept + * cached. At least the most-recently-used bucket of facets is kept, so + * actually an arbitrary number of facets can be kept in any given + * expiration run (though the next run will delete most of those unless + * they receive additional data). + * + * This requires a second pass through the facets, in addition to the pass + * made by update_stats(), because the former function never looks + * at uninstallable facets. + */ + enum { BUCKET_WIDTH = ROUND_UP(100, TIME_UPDATE_INTERVAL) }; + enum { N_BUCKETS = 5000 / BUCKET_WIDTH }; + int buckets[N_BUCKETS] = { 0 }; + struct facet *facet; + int total, bucket; + long long int now; + int i; + + total = hmap_count(&ofproto->facets); + if (total <= 1000) { + return N_BUCKETS * BUCKET_WIDTH; + } + + /* Build histogram. */ + now = time_msec(); + HMAP_FOR_EACH (facet, hmap_node, &ofproto->facets) { + long long int idle = now - facet->used; + int bucket = (idle <= 0 ? 0 + : idle >= BUCKET_WIDTH * N_BUCKETS ? N_BUCKETS - 1 + : (unsigned int) idle / BUCKET_WIDTH); + buckets[bucket]++; + } + + /* Find the first bucket whose flows should be expired. */ + for (bucket = 0; bucket < N_BUCKETS; bucket++) { + if (buckets[bucket]) { + int subtotal = 0; + do { + subtotal += buckets[bucket++]; + } while (bucket < N_BUCKETS && subtotal < MAX(1000, total / 100)); + break; + } + } + + if (VLOG_IS_DBG_ENABLED()) { + struct ds s; + + ds_init(&s); + ds_put_cstr(&s, "keep"); + for (i = 0; i < N_BUCKETS; i++) { + if (i == bucket) { + ds_put_cstr(&s, ", drop"); + } + if (buckets[i]) { + ds_put_format(&s, " %d:%d", i * BUCKET_WIDTH, buckets[i]); + } + } + VLOG_INFO("%s: %s (msec:count)", ofproto->up.name, ds_cstr(&s)); + ds_destroy(&s); + } + + return bucket * BUCKET_WIDTH; +} + +static void +facet_active_timeout(struct ofproto_dpif *ofproto, struct facet *facet) +{ + if (ofproto->netflow && !facet_is_controller_flow(facet) && + netflow_active_timeout_expired(ofproto->netflow, &facet->nf_flow)) { + struct ofexpired expired; + + if (facet->installed) { + struct dpif_flow_stats stats; + + facet_put__(ofproto, facet, facet->actions, facet->actions_len, + &stats); + facet_update_stats(ofproto, facet, &stats); + } + + expired.flow = facet->flow; + expired.packet_count = facet->packet_count; + expired.byte_count = facet->byte_count; + expired.used = facet->used; + netflow_expire(ofproto->netflow, &facet->nf_flow, &expired); + } +} + +static void +expire_facets(struct ofproto_dpif *ofproto, int dp_max_idle) +{ + long long int cutoff = time_msec() - dp_max_idle; + struct facet *facet, *next_facet; + + HMAP_FOR_EACH_SAFE (facet, next_facet, hmap_node, &ofproto->facets) { + facet_active_timeout(ofproto, facet); + if (facet->used < cutoff) { + facet_remove(ofproto, facet); + } + } +} + +/* If 'rule' is an OpenFlow rule, that has expired according to OpenFlow rules, + * then delete it entirely. */ +static void +rule_expire(struct rule_dpif *rule) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + struct facet *facet, *next_facet; + long long int now; + uint8_t reason; + + /* Has 'rule' expired? 
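+ * A nonzero hard_timeout expires the rule a fixed interval after its creation; a nonzero idle_timeout applies only once the rule has no facets left and has gone unused for that long.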
*/ + now = time_msec(); + if (rule->up.hard_timeout + && now > rule->up.created + rule->up.hard_timeout * 1000) { + reason = OFPRR_HARD_TIMEOUT; + } else if (rule->up.idle_timeout && list_is_empty(&rule->facets) + && now > rule->used + rule->up.idle_timeout * 1000) { + reason = OFPRR_IDLE_TIMEOUT; + } else { + return; + } + + COVERAGE_INC(ofproto_dpif_expired); + + /* Update stats. (This is a no-op if the rule expired due to an idle + * timeout, because that only happens when the rule has no facets left.) */ + LIST_FOR_EACH_SAFE (facet, next_facet, list_node, &rule->facets) { + facet_remove(ofproto, facet); + } + + /* Get rid of the rule. */ + ofproto_rule_expire(&rule->up, reason); +} + +/* Facets. */ + +/* Creates and returns a new facet owned by 'rule', given a 'flow' and an + * example 'packet' within that flow. + * + * The caller must already have determined that no facet with an identical + * 'flow' exists in 'ofproto' and that 'flow' is the best match for 'rule' in + * the ofproto's classifier table. */ +static struct facet * +facet_create(struct rule_dpif *rule, const struct flow *flow, + const struct ofpbuf *packet) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + struct facet *facet; + + facet = xzalloc(sizeof *facet); + facet->used = time_msec(); + hmap_insert(&ofproto->facets, &facet->hmap_node, flow_hash(flow, 0)); + list_push_back(&rule->facets, &facet->list_node); + facet->rule = rule; + facet->flow = *flow; + netflow_flow_init(&facet->nf_flow); + netflow_flow_update_time(ofproto->netflow, &facet->nf_flow, facet->used); + + facet_make_actions(ofproto, facet, packet); + + return facet; +} + +static void +facet_free(struct facet *facet) +{ + free(facet->actions); + free(facet); +} + +/* Executes, within 'ofproto', the 'n_actions' actions in 'actions' on + * 'packet', which arrived on 'in_port'. + * + * Takes ownership of 'packet'. */ +static bool +execute_odp_actions(struct ofproto_dpif *ofproto, const struct flow *flow, + const struct nlattr *odp_actions, size_t actions_len, + struct ofpbuf *packet) +{ + if (actions_len == NLA_ALIGN(NLA_HDRLEN + sizeof(uint64_t)) + && odp_actions->nla_type == ODP_ACTION_ATTR_CONTROLLER) { + /* As an optimization, avoid a round-trip from userspace to kernel to + * userspace. This also avoids possibly filling up kernel packet + * buffers along the way. */ + struct dpif_upcall upcall; + + upcall.type = DPIF_UC_ACTION; + upcall.packet = packet; + upcall.key = NULL; + upcall.key_len = 0; + upcall.userdata = nl_attr_get_u64(odp_actions); + upcall.sample_pool = 0; + upcall.actions = NULL; + upcall.actions_len = 0; + + send_packet_in(ofproto, &upcall, flow, false); + + return true; + } else { + int error; + + error = dpif_execute(ofproto->dpif, odp_actions, actions_len, packet); + ofpbuf_delete(packet); + return !error; + } +} + +/* Executes the actions indicated by 'facet' on 'packet' and credits 'facet''s + * statistics appropriately. 'packet' must have at least sizeof(struct + * ofp_packet_in) bytes of headroom. + * + * For correct results, 'packet' must actually be in 'facet''s flow; that is, + * applying flow_extract() to 'packet' would yield the same flow as + * 'facet->flow'. + * + * 'facet' must have accurately composed ODP actions; that is, it must not be + * in need of revalidation. + * + * Takes ownership of 'packet'. 
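+ * The packet's byte and packet counts are extracted up front and credited to the facet only if the datapath executes the actions successfully.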
*/ +static void +facet_execute(struct ofproto_dpif *ofproto, struct facet *facet, + struct ofpbuf *packet) +{ + struct dpif_flow_stats stats; + + assert(ofpbuf_headroom(packet) >= sizeof(struct ofp_packet_in)); + + flow_extract_stats(&facet->flow, packet, &stats); + stats.used = time_msec(); + if (execute_odp_actions(ofproto, &facet->flow, + facet->actions, facet->actions_len, packet)) { + facet_update_stats(ofproto, facet, &stats); + } +} + +/* Remove 'facet' from 'ofproto' and free up the associated memory: + * + * - If 'facet' was installed in the datapath, uninstalls it and updates its + * rule's statistics, via facet_uninstall(). + * + * - Removes 'facet' from its rule and from ofproto->facets. + */ +static void +facet_remove(struct ofproto_dpif *ofproto, struct facet *facet) +{ + facet_uninstall(ofproto, facet); + facet_flush_stats(ofproto, facet); + hmap_remove(&ofproto->facets, &facet->hmap_node); + list_remove(&facet->list_node); + facet_free(facet); +} + +/* Composes the ODP actions for 'facet' based on its rule's actions. */ +static void +facet_make_actions(struct ofproto_dpif *p, struct facet *facet, + const struct ofpbuf *packet) +{ + const struct rule_dpif *rule = facet->rule; + struct ofpbuf *odp_actions; + struct action_xlate_ctx ctx; + + action_xlate_ctx_init(&ctx, p, &facet->flow, packet); + odp_actions = xlate_actions(&ctx, rule->up.actions, rule->up.n_actions); + facet->tags = ctx.tags; + facet->may_install = ctx.may_set_up_flow; + facet->nf_flow.output_iface = ctx.nf_output_iface; + + if (facet->actions_len != odp_actions->size + || memcmp(facet->actions, odp_actions->data, odp_actions->size)) { + free(facet->actions); + facet->actions_len = odp_actions->size; + facet->actions = xmemdup(odp_actions->data, odp_actions->size); + } + + ofpbuf_delete(odp_actions); +} + +static int +facet_put__(struct ofproto_dpif *ofproto, struct facet *facet, + const struct nlattr *actions, size_t actions_len, + struct dpif_flow_stats *stats) +{ + struct odputil_keybuf keybuf; + enum dpif_flow_put_flags flags; + struct ofpbuf key; + + flags = DPIF_FP_CREATE | DPIF_FP_MODIFY; + if (stats) { + flags |= DPIF_FP_ZERO_STATS; + facet->dp_packet_count = 0; + facet->dp_byte_count = 0; + } + + ofpbuf_use_stack(&key, &keybuf, sizeof keybuf); + odp_flow_key_from_flow(&key, &facet->flow); + + return dpif_flow_put(ofproto->dpif, flags, key.data, key.size, + actions, actions_len, stats); +} + +/* If 'facet' is installable, inserts or re-inserts it into 'p''s datapath. If + * 'zero_stats' is true, clears any existing statistics from the datapath for + * 'facet'. */ +static void +facet_install(struct ofproto_dpif *p, struct facet *facet, bool zero_stats) +{ + struct dpif_flow_stats stats; + + if (facet->may_install + && !facet_put__(p, facet, facet->actions, facet->actions_len, + zero_stats ? &stats : NULL)) { + facet->installed = true; + } +} + +static void +facet_account(struct ofproto_dpif *ofproto, + struct facet *facet, uint64_t extra_bytes) +{ + uint64_t total_bytes, n_bytes; + struct ofbundle *in_bundle; + const struct nlattr *a; + tag_type dummy = 0; + unsigned int left; + int vlan; + + total_bytes = facet->byte_count + extra_bytes; + if (total_bytes <= facet->accounted_bytes) { + return; + } + n_bytes = total_bytes - facet->accounted_bytes; + facet->accounted_bytes = total_bytes; + + /* Test that 'tags' is nonzero to ensure that only flows that include an + * OFPP_NORMAL action are used for learning and bond slave rebalancing. + * This works because OFPP_NORMAL always sets a nonzero tag value. 
+ * + * Feed information from the active flows back into the learning table to + * ensure that table is always in sync with what is actually flowing + * through the datapath. */ + if (!facet->tags + || !is_admissible(ofproto, &facet->flow, false, &dummy, + &vlan, &in_bundle)) { + return; + } + + update_learning_table(ofproto, &facet->flow, vlan, in_bundle); + + if (!ofproto->has_bonded_bundles) { + return; + } + NL_ATTR_FOR_EACH_UNSAFE (a, left, facet->actions, facet->actions_len) { + if (nl_attr_type(a) == ODP_ACTION_ATTR_OUTPUT) { + struct ofport_dpif *port; + + port = get_odp_port(ofproto, nl_attr_get_u32(a)); + if (port && port->bundle && port->bundle->bond) { + bond_account(port->bundle->bond, &facet->flow, vlan, n_bytes); + } + } + } +} + +/* If 'rule' is installed in the datapath, uninstalls it. */ +static void +facet_uninstall(struct ofproto_dpif *p, struct facet *facet) +{ + if (facet->installed) { + struct odputil_keybuf keybuf; + struct dpif_flow_stats stats; + struct ofpbuf key; + + ofpbuf_use_stack(&key, &keybuf, sizeof keybuf); + odp_flow_key_from_flow(&key, &facet->flow); + + if (!dpif_flow_del(p->dpif, key.data, key.size, &stats)) { + facet_update_stats(p, facet, &stats); + } + facet->installed = false; + facet->dp_packet_count = 0; + facet->dp_byte_count = 0; + } else { + assert(facet->dp_packet_count == 0); + assert(facet->dp_byte_count == 0); + } +} + +/* Returns true if the only action for 'facet' is to send to the controller. + * (We don't report NetFlow expiration messages for such facets because they + * are just part of the control logic for the network, not real traffic). */ +static bool +facet_is_controller_flow(struct facet *facet) +{ + return (facet + && facet->rule->up.n_actions == 1 + && action_outputs_to_port(&facet->rule->up.actions[0], + htons(OFPP_CONTROLLER))); +} + +/* Folds all of 'facet''s statistics into its rule. Also updates the + * accounting ofhook and emits a NetFlow expiration if appropriate. All of + * 'facet''s statistics in the datapath should have been zeroed and folded into + * its packet and byte counts before this function is called. */ +static void +facet_flush_stats(struct ofproto_dpif *ofproto, struct facet *facet) +{ + assert(!facet->dp_byte_count); + assert(!facet->dp_packet_count); + + facet_push_stats(facet); + facet_account(ofproto, facet, 0); + + if (ofproto->netflow && !facet_is_controller_flow(facet)) { + struct ofexpired expired; + expired.flow = facet->flow; + expired.packet_count = facet->packet_count; + expired.byte_count = facet->byte_count; + expired.used = facet->used; + netflow_expire(ofproto->netflow, &facet->nf_flow, &expired); + } + + facet->rule->packet_count += facet->packet_count; + facet->rule->byte_count += facet->byte_count; + + /* Reset counters to prevent double counting if 'facet' ever gets + * reinstalled. */ + facet->packet_count = 0; + facet->byte_count = 0; + facet->rs_packet_count = 0; + facet->rs_byte_count = 0; + facet->accounted_bytes = 0; + + netflow_flow_clear(&facet->nf_flow); +} + +/* Searches 'ofproto''s table of facets for one exactly equal to 'flow'. + * Returns it if found, otherwise a null pointer. + * + * The returned facet might need revalidation; use facet_lookup_valid() + * instead if that is important. 
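+ * (Facets are exact-match, so the lookup hashes the entire flow and compares candidates with flow_equal().)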
*/ +static struct facet * +facet_find(struct ofproto_dpif *ofproto, const struct flow *flow) +{ + struct facet *facet; + + HMAP_FOR_EACH_WITH_HASH (facet, hmap_node, flow_hash(flow, 0), + &ofproto->facets) { + if (flow_equal(flow, &facet->flow)) { + return facet; + } + } + + return NULL; +} + +/* Searches 'ofproto''s table of facets for one exactly equal to 'flow'. + * Returns it if found, otherwise a null pointer. + * + * The returned facet is guaranteed to be valid. */ +static struct facet * +facet_lookup_valid(struct ofproto_dpif *ofproto, const struct flow *flow) +{ + struct facet *facet = facet_find(ofproto, flow); + + /* The facet we found might not be valid, since we could be in need of + * revalidation. If it is not valid, don't return it. */ + if (facet + && ofproto->need_revalidate + && !facet_revalidate(ofproto, facet)) { + COVERAGE_INC(facet_invalidated); + return NULL; + } + + return facet; +} + +/* Re-searches 'ofproto''s classifier for a rule matching 'facet': + * + * - If the rule found is different from 'facet''s current rule, moves + * 'facet' to the new rule and recompiles its actions. + * + * - If the rule found is the same as 'facet''s current rule, leaves 'facet' + * where it is and recompiles its actions anyway. + * + * - If there is none, destroys 'facet'. + * + * Returns true if 'facet' still exists, false if it has been destroyed. */ +static bool +facet_revalidate(struct ofproto_dpif *ofproto, struct facet *facet) +{ + struct action_xlate_ctx ctx; + struct ofpbuf *odp_actions; + struct rule_dpif *new_rule; + bool actions_changed; + + COVERAGE_INC(facet_revalidate); + + /* Determine the new rule. */ + new_rule = rule_dpif_lookup(ofproto, &facet->flow); + if (!new_rule) { + /* No new rule, so delete the facet. */ + facet_remove(ofproto, facet); + return false; + } + + /* Calculate new ODP actions. + * + * We do not modify any 'facet' state yet, because we might need to, e.g., + * emit a NetFlow expiration and, if so, we need to have the old state + * around to properly compose it. */ + action_xlate_ctx_init(&ctx, ofproto, &facet->flow, NULL); + odp_actions = xlate_actions(&ctx, + new_rule->up.actions, new_rule->up.n_actions); + actions_changed = (facet->actions_len != odp_actions->size + || memcmp(facet->actions, odp_actions->data, + facet->actions_len)); + + /* If the ODP actions changed or the installability changed, then we need + * to talk to the datapath. */ + if (actions_changed || ctx.may_set_up_flow != facet->installed) { + if (ctx.may_set_up_flow) { + struct dpif_flow_stats stats; + + facet_put__(ofproto, facet, + odp_actions->data, odp_actions->size, &stats); + facet_update_stats(ofproto, facet, &stats); + } else { + facet_uninstall(ofproto, facet); + } + + /* The datapath flow is gone or has zeroed stats, so push stats out of + * 'facet' into 'rule'. */ + facet_flush_stats(ofproto, facet); + } + + /* Update 'facet' now that we've taken care of all the old state. 
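+ * The newly translated tags, NetFlow output interface, installability, and (if they changed) ODP actions are copied in; if the facet now belongs to a different rule, it is moved onto that rule's facet list and its timestamps are reset.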
*/ + facet->tags = ctx.tags; + facet->nf_flow.output_iface = ctx.nf_output_iface; + facet->may_install = ctx.may_set_up_flow; + if (actions_changed) { + free(facet->actions); + facet->actions_len = odp_actions->size; + facet->actions = xmemdup(odp_actions->data, odp_actions->size); + } + if (facet->rule != new_rule) { + COVERAGE_INC(facet_changed_rule); + list_remove(&facet->list_node); + list_push_back(&new_rule->facets, &facet->list_node); + facet->rule = new_rule; + facet->used = new_rule->up.created; + facet->rs_used = facet->used; + } + + ofpbuf_delete(odp_actions); + + return true; +} + +/* Updates 'facet''s used time. Caller is responsible for calling + * facet_push_stats() to update the flows which 'facet' resubmits into. */ +static void +facet_update_time(struct ofproto_dpif *ofproto, struct facet *facet, + long long int used) +{ + if (used > facet->used) { + facet->used = used; + if (used > facet->rule->used) { + facet->rule->used = used; + } + netflow_flow_update_time(ofproto->netflow, &facet->nf_flow, used); + } +} + +/* Folds the statistics from 'stats' into the counters in 'facet'. + * + * Because of the meaning of a facet's counters, it only makes sense to do this + * if 'stats' are not tracked in the datapath, that is, if 'stats' represents a + * packet that was sent by hand or if it represents statistics that have been + * cleared out of the datapath. */ +static void +facet_update_stats(struct ofproto_dpif *ofproto, struct facet *facet, + const struct dpif_flow_stats *stats) +{ + if (stats->n_packets || stats->used > facet->used) { + facet_update_time(ofproto, facet, stats->used); + facet->packet_count += stats->n_packets; + facet->byte_count += stats->n_bytes; + facet_push_stats(facet); + netflow_flow_update_flags(&facet->nf_flow, stats->tcp_flags); + } +} + +static void +facet_push_stats(struct facet *facet) +{ + uint64_t rs_packets, rs_bytes; + + assert(facet->packet_count >= facet->rs_packet_count); + assert(facet->byte_count >= facet->rs_byte_count); + assert(facet->used >= facet->rs_used); + + rs_packets = facet->packet_count - facet->rs_packet_count; + rs_bytes = facet->byte_count - facet->rs_byte_count; + + if (rs_packets || rs_bytes || facet->used > facet->rs_used) { + facet->rs_packet_count = facet->packet_count; + facet->rs_byte_count = facet->byte_count; + facet->rs_used = facet->used; + + flow_push_stats(facet->rule, &facet->flow, + rs_packets, rs_bytes, facet->used); + } +} + +struct ofproto_push { + struct action_xlate_ctx ctx; + uint64_t packets; + uint64_t bytes; + long long int used; +}; + +static void +push_resubmit(struct action_xlate_ctx *ctx, struct rule_dpif *rule) +{ + struct ofproto_push *push = CONTAINER_OF(ctx, struct ofproto_push, ctx); + + if (rule) { + rule->packet_count += push->packets; + rule->byte_count += push->bytes; + rule->used = MAX(push->used, rule->used); + } +} + +/* Pushes flow statistics to the rules which 'flow' resubmits into given + * 'rule''s actions. */ +static void +flow_push_stats(const struct rule_dpif *rule, + struct flow *flow, uint64_t packets, uint64_t bytes, + long long int used) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + struct ofproto_push push; + + push.packets = packets; + push.bytes = bytes; + push.used = used; + + action_xlate_ctx_init(&push.ctx, ofproto, flow, NULL); + push.ctx.resubmit_hook = push_resubmit; + ofpbuf_delete(xlate_actions(&push.ctx, + rule->up.actions, rule->up.n_actions)); +} + +/* Rules. 
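+ * The functions below implement the ofproto provider's rule hooks: allocation, insertion into the classifier, teardown, statistics, direct packet execution, and action modification.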
*/ + +static struct rule_dpif * +rule_dpif_lookup(struct ofproto_dpif *ofproto, const struct flow *flow) +{ + return rule_dpif_cast(ofproto_rule_lookup(&ofproto->up, flow)); +} + +static struct rule * +rule_alloc(void) +{ + struct rule_dpif *rule = xmalloc(sizeof *rule); + return &rule->up; +} + +static void +rule_dealloc(struct rule *rule_) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + free(rule); +} + +static int +rule_construct(struct rule *rule_) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + struct cls_rule *displaced_rule; + + rule->used = rule->up.created; + rule->packet_count = 0; + rule->byte_count = 0; + list_init(&rule->facets); + + displaced_rule = classifier_insert(&ofproto->up.cls, &rule->up.cr); + if (displaced_rule) { + ofproto_rule_destroy(rule_from_cls_rule(displaced_rule)); + } + ofproto->need_revalidate = true; + + return 0; +} + +static void +rule_destruct(struct rule *rule_) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + struct facet *facet, *next_facet; + + ofproto->need_revalidate = true; + LIST_FOR_EACH_SAFE (facet, next_facet, list_node, &rule->facets) { + facet_revalidate(ofproto, facet); + } +} + +static void +rule_remove(struct rule *rule_) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + + ofproto->need_revalidate = true; + classifier_remove(&ofproto->up.cls, &rule->up.cr); +} + +static void +rule_get_stats(struct rule *rule_, uint64_t *packets, uint64_t *bytes) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + struct facet *facet; + + /* Start from historical data for 'rule' itself that are no longer tracked + * in facets. This counts, for example, facets that have expired. */ + *packets = rule->packet_count; + *bytes = rule->byte_count; + + /* Add any statistics that are tracked by facets. This includes + * statistical data recently updated by ofproto_update_stats() as well as + * stats for packets that were executed "by hand" via dpif_execute(). */ + LIST_FOR_EACH (facet, list_node, &rule->facets) { + *packets += facet->packet_count; + *bytes += facet->byte_count; + } +} + +static void +rule_execute(struct rule *rule_, struct flow *flow, struct ofpbuf *packet) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + struct action_xlate_ctx ctx; + struct ofpbuf *odp_actions; + struct facet *facet; + size_t size; + + /* First look for a related facet. If we find one, account it to that. */ + facet = facet_lookup_valid(ofproto, flow); + if (facet && facet->rule == rule) { + facet_execute(ofproto, facet, packet); + return; + } + + /* Otherwise, if 'rule' is in fact the correct rule for 'packet', then + * create a new facet for it and use that. */ + if (rule_dpif_lookup(ofproto, flow) == rule) { + facet = facet_create(rule, flow, packet); + facet_execute(ofproto, facet, packet); + facet_install(ofproto, facet, true); + return; + } + + /* We can't account anything to a facet. If we were to try, then that + * facet would have a non-matching rule, busting our invariants. 
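+ * Instead, translate the rule's actions directly, execute them on the packet, and credit the statistics to the rule itself (pushing them to any rules it resubmits into).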
*/ + action_xlate_ctx_init(&ctx, ofproto, flow, packet); + odp_actions = xlate_actions(&ctx, rule->up.actions, rule->up.n_actions); + size = packet->size; + if (execute_odp_actions(ofproto, flow, odp_actions->data, + odp_actions->size, packet)) { + rule->used = time_msec(); + rule->packet_count++; + rule->byte_count += size; + flow_push_stats(rule, flow, 1, size, rule->used); + } + ofpbuf_delete(odp_actions); +} + +static int +rule_modify_actions(struct rule *rule_, + const union ofp_action *actions, size_t n_actions) +{ + struct rule_dpif *rule = rule_dpif_cast(rule_); + struct ofproto_dpif *ofproto = ofproto_dpif_cast(rule->up.ofproto); + int error; + + error = validate_actions(actions, n_actions, &rule->up.cr.flow, + ofproto->max_ports); + if (!error) { + ofproto->need_revalidate = true; + } + return error; +} + +/* Sends 'packet' out of port 'odp_port' within 'ofproto'. If 'vlan_tci' is + * zero the packet will not have any 802.1Q hader; if it is nonzero, then the + * packet will be sent with the VLAN TCI specified by 'vlan_tci & ~VLAN_CFI'. + * + * Returns 0 if successful, otherwise a positive errno value. */ +static int +send_packet(struct ofproto_dpif *ofproto, uint32_t odp_port, uint16_t vlan_tci, + const struct ofpbuf *packet) +{ + struct ofpbuf odp_actions; + int error; + + ofpbuf_init(&odp_actions, 32); + if (vlan_tci != 0) { + nl_msg_put_u32(&odp_actions, ODP_ACTION_ATTR_SET_DL_TCI, + ntohs(vlan_tci & ~VLAN_CFI)); + } + nl_msg_put_u32(&odp_actions, ODP_ACTION_ATTR_OUTPUT, odp_port); + error = dpif_execute(ofproto->dpif, odp_actions.data, odp_actions.size, + packet); + ofpbuf_uninit(&odp_actions); + + if (error) { + VLOG_WARN_RL(&rl, "%s: failed to send packet on port %"PRIu32" (%s)", + ofproto->up.name, odp_port, strerror(error)); + } + return error; +} + +/* OpenFlow to ODP action translation. */ + +static void do_xlate_actions(const union ofp_action *in, size_t n_in, + struct action_xlate_ctx *ctx); +static bool xlate_normal(struct action_xlate_ctx *); + +static void +add_output_action(struct action_xlate_ctx *ctx, uint16_t ofp_port) +{ + const struct ofport_dpif *ofport = get_ofp_port(ctx->ofproto, ofp_port); + uint16_t odp_port = ofp_port_to_odp_port(ofp_port); + + if (ofport) { + if (ofport->up.opp.config & htonl(OFPPC_NO_FWD)) { + /* Forwarding disabled on port. */ + return; + } + } else { + /* + * We don't have an ofport record for this port, but it doesn't hurt to + * allow forwarding to it anyhow. Maybe such a port will appear later + * and we're pre-populating the flow table. + */ + } + + nl_msg_put_u32(ctx->odp_actions, ODP_ACTION_ATTR_OUTPUT, odp_port); + ctx->nf_output_iface = ofp_port; +} + +static void +xlate_table_action(struct action_xlate_ctx *ctx, uint16_t in_port) +{ + if (ctx->recurse < MAX_RESUBMIT_RECURSION) { + struct rule_dpif *rule; + uint16_t old_in_port; + + /* Look up a flow with 'in_port' as the input port. Then restore the + * original input port (otherwise OFPP_NORMAL and OFPP_IN_PORT will + * have surprising behavior). 
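+ * Resubmit depth is capped at MAX_RESUBMIT_RECURSION; beyond that the resubmit is ignored and a rate-limited error is logged.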
*/ + old_in_port = ctx->flow.in_port; + ctx->flow.in_port = in_port; + rule = rule_dpif_lookup(ctx->ofproto, &ctx->flow); + ctx->flow.in_port = old_in_port; + + if (ctx->resubmit_hook) { + ctx->resubmit_hook(ctx, rule); + } + + if (rule) { + ctx->recurse++; + do_xlate_actions(rule->up.actions, rule->up.n_actions, ctx); + ctx->recurse--; + } + } else { + static struct vlog_rate_limit recurse_rl = VLOG_RATE_LIMIT_INIT(1, 1); + + VLOG_ERR_RL(&recurse_rl, "NXAST_RESUBMIT recursed over %d times", + MAX_RESUBMIT_RECURSION); + } +} + +static void +flood_packets(struct ofproto_dpif *ofproto, + uint16_t ofp_in_port, ovs_be32 mask, + uint16_t *nf_output_iface, struct ofpbuf *odp_actions) +{ + struct ofport_dpif *ofport; + + HMAP_FOR_EACH (ofport, up.hmap_node, &ofproto->up.ports) { + uint16_t ofp_port = ofport->up.ofp_port; + if (ofp_port != ofp_in_port && !(ofport->up.opp.config & mask)) { + nl_msg_put_u32(odp_actions, ODP_ACTION_ATTR_OUTPUT, + ofport->odp_port); + } + } + *nf_output_iface = NF_OUT_FLOOD; +} + +static void +xlate_output_action__(struct action_xlate_ctx *ctx, + uint16_t port, uint16_t max_len) +{ + uint16_t prev_nf_output_iface = ctx->nf_output_iface; + + ctx->nf_output_iface = NF_OUT_DROP; + + switch (port) { + case OFPP_IN_PORT: + add_output_action(ctx, ctx->flow.in_port); + break; + case OFPP_TABLE: + xlate_table_action(ctx, ctx->flow.in_port); + break; + case OFPP_NORMAL: + xlate_normal(ctx); + break; + case OFPP_FLOOD: + flood_packets(ctx->ofproto, ctx->flow.in_port, htonl(OFPPC_NO_FLOOD), + &ctx->nf_output_iface, ctx->odp_actions); + break; + case OFPP_ALL: + flood_packets(ctx->ofproto, ctx->flow.in_port, htonl(0), + &ctx->nf_output_iface, ctx->odp_actions); + break; + case OFPP_CONTROLLER: + nl_msg_put_u64(ctx->odp_actions, ODP_ACTION_ATTR_CONTROLLER, max_len); + break; + case OFPP_LOCAL: + add_output_action(ctx, OFPP_LOCAL); + break; + default: + if (port != ctx->flow.in_port) { + add_output_action(ctx, port); + } + break; + } + + if (prev_nf_output_iface == NF_OUT_FLOOD) { + ctx->nf_output_iface = NF_OUT_FLOOD; + } else if (ctx->nf_output_iface == NF_OUT_DROP) { + ctx->nf_output_iface = prev_nf_output_iface; + } else if (prev_nf_output_iface != NF_OUT_DROP && + ctx->nf_output_iface != NF_OUT_FLOOD) { + ctx->nf_output_iface = NF_OUT_MULTI; + } +} + +static void +xlate_output_action(struct action_xlate_ctx *ctx, + const struct ofp_action_output *oao) +{ + xlate_output_action__(ctx, ntohs(oao->port), ntohs(oao->max_len)); +} + +/* If the final ODP action in 'ctx' is "pop priority", drop it, as an + * optimization, because we're going to add another action that sets the + * priority immediately after, or because there are no actions following the + * pop. */ +static void +remove_pop_action(struct action_xlate_ctx *ctx) +{ + if (ctx->odp_actions->size == ctx->last_pop_priority) { + ctx->odp_actions->size -= NLA_ALIGN(NLA_HDRLEN); + ctx->last_pop_priority = -1; + } +} + +static void +add_pop_action(struct action_xlate_ctx *ctx) +{ + if (ctx->odp_actions->size != ctx->last_pop_priority) { + nl_msg_put_flag(ctx->odp_actions, ODP_ACTION_ATTR_POP_PRIORITY); + ctx->last_pop_priority = ctx->odp_actions->size; + } +} + +static void +xlate_enqueue_action(struct action_xlate_ctx *ctx, + const struct ofp_action_enqueue *oae) +{ + uint16_t ofp_port, odp_port; + uint32_t priority; + int error; + + error = dpif_queue_to_priority(ctx->ofproto->dpif, ntohl(oae->queue_id), + &priority); + if (error) { + /* Fall back to ordinary output action. 
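+ * (dpif_queue_to_priority() could not map the OpenFlow queue ID to a datapath priority, so the packet is simply output on the requested port.)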
*/ + xlate_output_action__(ctx, ntohs(oae->port), 0); + return; + } + + /* Figure out ODP output port. */ + ofp_port = ntohs(oae->port); + if (ofp_port == OFPP_IN_PORT) { + ofp_port = ctx->flow.in_port; + } + odp_port = ofp_port_to_odp_port(ofp_port); + + /* Add ODP actions. */ + remove_pop_action(ctx); + nl_msg_put_u32(ctx->odp_actions, ODP_ACTION_ATTR_SET_PRIORITY, priority); + add_output_action(ctx, odp_port); + add_pop_action(ctx); + + /* Update NetFlow output port. */ + if (ctx->nf_output_iface == NF_OUT_DROP) { + ctx->nf_output_iface = odp_port; + } else if (ctx->nf_output_iface != NF_OUT_FLOOD) { + ctx->nf_output_iface = NF_OUT_MULTI; + } +} + +static void +xlate_set_queue_action(struct action_xlate_ctx *ctx, + const struct nx_action_set_queue *nasq) +{ + uint32_t priority; + int error; + + error = dpif_queue_to_priority(ctx->ofproto->dpif, ntohl(nasq->queue_id), + &priority); + if (error) { + /* Couldn't translate queue to a priority, so ignore. A warning + * has already been logged. */ + return; + } + + remove_pop_action(ctx); + nl_msg_put_u32(ctx->odp_actions, ODP_ACTION_ATTR_SET_PRIORITY, priority); +} + +static void +xlate_set_dl_tci(struct action_xlate_ctx *ctx) +{ + ovs_be16 tci = ctx->flow.vlan_tci; + if (!(tci & htons(VLAN_CFI))) { + nl_msg_put_flag(ctx->odp_actions, ODP_ACTION_ATTR_STRIP_VLAN); + } else { + nl_msg_put_be16(ctx->odp_actions, ODP_ACTION_ATTR_SET_DL_TCI, + tci & ~htons(VLAN_CFI)); + } +} + +struct xlate_reg_state { + ovs_be16 vlan_tci; + ovs_be64 tun_id; +}; + +static void +save_reg_state(const struct action_xlate_ctx *ctx, + struct xlate_reg_state *state) +{ + state->vlan_tci = ctx->flow.vlan_tci; + state->tun_id = ctx->flow.tun_id; +} + +static void +update_reg_state(struct action_xlate_ctx *ctx, + const struct xlate_reg_state *state) +{ + if (ctx->flow.vlan_tci != state->vlan_tci) { + xlate_set_dl_tci(ctx); + } + if (ctx->flow.tun_id != state->tun_id) { + nl_msg_put_be64(ctx->odp_actions, + ODP_ACTION_ATTR_SET_TUNNEL, ctx->flow.tun_id); + } +} + +static void +xlate_autopath(struct action_xlate_ctx *ctx, + const struct nx_action_autopath *naa) +{ + uint16_t ofp_port = ntohl(naa->id); + struct ofport_dpif *port = get_ofp_port(ctx->ofproto, ofp_port); + + if (!port || !port->bundle) { + ofp_port = OFPP_NONE; + } else if (port->bundle->bond) { + /* Autopath does not support VLAN hashing. 
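+ * The bond slave is therefore chosen with OFP_VLAN_NONE, so the choice does not depend on the packet's VLAN.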
*/ + struct ofport_dpif *slave = bond_choose_output_slave( + port->bundle->bond, &ctx->flow, OFP_VLAN_NONE, &ctx->tags); + if (slave) { + ofp_port = slave->up.ofp_port; + } + } + autopath_execute(naa, &ctx->flow, ofp_port); +} + +static void +xlate_nicira_action(struct action_xlate_ctx *ctx, + const struct nx_action_header *nah) +{ + const struct nx_action_resubmit *nar; + const struct nx_action_set_tunnel *nast; + const struct nx_action_set_queue *nasq; + const struct nx_action_multipath *nam; + const struct nx_action_autopath *naa; + enum nx_action_subtype subtype = ntohs(nah->subtype); + struct xlate_reg_state state; + ovs_be64 tun_id; + + assert(nah->vendor == htonl(NX_VENDOR_ID)); + switch (subtype) { + case NXAST_RESUBMIT: + nar = (const struct nx_action_resubmit *) nah; + xlate_table_action(ctx, ntohs(nar->in_port)); + break; + + case NXAST_SET_TUNNEL: + nast = (const struct nx_action_set_tunnel *) nah; + tun_id = htonll(ntohl(nast->tun_id)); + nl_msg_put_be64(ctx->odp_actions, ODP_ACTION_ATTR_SET_TUNNEL, tun_id); + ctx->flow.tun_id = tun_id; + break; + + case NXAST_DROP_SPOOFED_ARP: + if (ctx->flow.dl_type == htons(ETH_TYPE_ARP)) { + nl_msg_put_flag(ctx->odp_actions, + ODP_ACTION_ATTR_DROP_SPOOFED_ARP); + } + break; + + case NXAST_SET_QUEUE: + nasq = (const struct nx_action_set_queue *) nah; + xlate_set_queue_action(ctx, nasq); + break; + + case NXAST_POP_QUEUE: + add_pop_action(ctx); + break; + + case NXAST_REG_MOVE: + save_reg_state(ctx, &state); + nxm_execute_reg_move((const struct nx_action_reg_move *) nah, + &ctx->flow); + update_reg_state(ctx, &state); + break; + + case NXAST_REG_LOAD: + save_reg_state(ctx, &state); + nxm_execute_reg_load((const struct nx_action_reg_load *) nah, + &ctx->flow); + update_reg_state(ctx, &state); + break; + + case NXAST_NOTE: + /* Nothing to do. */ + break; + + case NXAST_SET_TUNNEL64: + tun_id = ((const struct nx_action_set_tunnel64 *) nah)->tun_id; + nl_msg_put_be64(ctx->odp_actions, ODP_ACTION_ATTR_SET_TUNNEL, tun_id); + ctx->flow.tun_id = tun_id; + break; + + case NXAST_MULTIPATH: + nam = (const struct nx_action_multipath *) nah; + multipath_execute(nam, &ctx->flow); + break; + + case NXAST_AUTOPATH: + naa = (const struct nx_action_autopath *) nah; + xlate_autopath(ctx, naa); + break; + + /* If you add a new action here that modifies flow data, don't forget to + * update the flow key in ctx->flow at the same time. */ + + case NXAST_SNAT__OBSOLETE: + default: + VLOG_DBG_RL(&rl, "unknown Nicira action type %d", (int) subtype); + break; + } +} + +static void +do_xlate_actions(const union ofp_action *in, size_t n_in, + struct action_xlate_ctx *ctx) +{ + const struct ofport_dpif *port; + struct actions_iterator iter; + const union ofp_action *ia; + + port = get_ofp_port(ctx->ofproto, ctx->flow.in_port); + if (port + && port->up.opp.config & htonl(OFPPC_NO_RECV | OFPPC_NO_RECV_STP) && + port->up.opp.config & (eth_addr_equals(ctx->flow.dl_dst, eth_addr_stp) + ? htonl(OFPPC_NO_RECV_STP) + : htonl(OFPPC_NO_RECV))) { + /* Drop this flow. 
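+ * (The input port has OFPPC_NO_RECV set, or OFPPC_NO_RECV_STP for packets addressed to the STP multicast address, so no actions are composed.)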
*/ + return; + } + + for (ia = actions_first(&iter, in, n_in); ia; ia = actions_next(&iter)) { + enum ofp_action_type type = ntohs(ia->type); + const struct ofp_action_dl_addr *oada; + + switch (type) { + case OFPAT_OUTPUT: + xlate_output_action(ctx, &ia->output); + break; + + case OFPAT_SET_VLAN_VID: + ctx->flow.vlan_tci &= ~htons(VLAN_VID_MASK); + ctx->flow.vlan_tci |= ia->vlan_vid.vlan_vid | htons(VLAN_CFI); + xlate_set_dl_tci(ctx); + break; + + case OFPAT_SET_VLAN_PCP: + ctx->flow.vlan_tci &= ~htons(VLAN_PCP_MASK); + ctx->flow.vlan_tci |= htons( + (ia->vlan_pcp.vlan_pcp << VLAN_PCP_SHIFT) | VLAN_CFI); + xlate_set_dl_tci(ctx); + break; + + case OFPAT_STRIP_VLAN: + ctx->flow.vlan_tci = htons(0); + xlate_set_dl_tci(ctx); + break; + + case OFPAT_SET_DL_SRC: + oada = ((struct ofp_action_dl_addr *) ia); + nl_msg_put_unspec(ctx->odp_actions, ODP_ACTION_ATTR_SET_DL_SRC, + oada->dl_addr, ETH_ADDR_LEN); + memcpy(ctx->flow.dl_src, oada->dl_addr, ETH_ADDR_LEN); + break; + + case OFPAT_SET_DL_DST: + oada = ((struct ofp_action_dl_addr *) ia); + nl_msg_put_unspec(ctx->odp_actions, ODP_ACTION_ATTR_SET_DL_DST, + oada->dl_addr, ETH_ADDR_LEN); + memcpy(ctx->flow.dl_dst, oada->dl_addr, ETH_ADDR_LEN); + break; + + case OFPAT_SET_NW_SRC: + nl_msg_put_be32(ctx->odp_actions, ODP_ACTION_ATTR_SET_NW_SRC, + ia->nw_addr.nw_addr); + ctx->flow.nw_src = ia->nw_addr.nw_addr; + break; + + case OFPAT_SET_NW_DST: + nl_msg_put_be32(ctx->odp_actions, ODP_ACTION_ATTR_SET_NW_DST, + ia->nw_addr.nw_addr); + ctx->flow.nw_dst = ia->nw_addr.nw_addr; + break; + + case OFPAT_SET_NW_TOS: + nl_msg_put_u8(ctx->odp_actions, ODP_ACTION_ATTR_SET_NW_TOS, + ia->nw_tos.nw_tos); + ctx->flow.nw_tos = ia->nw_tos.nw_tos; + break; + + case OFPAT_SET_TP_SRC: + nl_msg_put_be16(ctx->odp_actions, ODP_ACTION_ATTR_SET_TP_SRC, + ia->tp_port.tp_port); + ctx->flow.tp_src = ia->tp_port.tp_port; + break; + + case OFPAT_SET_TP_DST: + nl_msg_put_be16(ctx->odp_actions, ODP_ACTION_ATTR_SET_TP_DST, + ia->tp_port.tp_port); + ctx->flow.tp_dst = ia->tp_port.tp_port; + break; + + case OFPAT_VENDOR: + xlate_nicira_action(ctx, (const struct nx_action_header *) ia); + break; + + case OFPAT_ENQUEUE: + xlate_enqueue_action(ctx, (const struct ofp_action_enqueue *) ia); + break; + + default: + VLOG_DBG_RL(&rl, "unknown action type %d", (int) type); + break; + } + } +} + +static void +action_xlate_ctx_init(struct action_xlate_ctx *ctx, + struct ofproto_dpif *ofproto, const struct flow *flow, + const struct ofpbuf *packet) +{ + ctx->ofproto = ofproto; + ctx->flow = *flow; + ctx->packet = packet; + ctx->resubmit_hook = NULL; + ctx->check_special = true; +} + +static struct ofpbuf * +xlate_actions(struct action_xlate_ctx *ctx, + const union ofp_action *in, size_t n_in) +{ + COVERAGE_INC(ofproto_dpif_xlate); + + ctx->odp_actions = ofpbuf_new(512); + ctx->tags = 0; + ctx->may_set_up_flow = true; + ctx->nf_output_iface = NF_OUT_DROP; + ctx->recurse = 0; + ctx->last_pop_priority = -1; + + if (ctx->check_special + && process_special(ctx->ofproto, &ctx->flow, ctx->packet)) { + ctx->may_set_up_flow = false; + } else { + do_xlate_actions(in, n_in, ctx); + } + + remove_pop_action(ctx); + + /* Check with in-band control to see if we're allowed to set up this + * flow. */ + if (!connmgr_may_set_up_flow(ctx->ofproto->up.connmgr, &ctx->flow, + ctx->odp_actions->data, + ctx->odp_actions->size)) { + ctx->may_set_up_flow = false; + } + + return ctx->odp_actions; +} + +/* OFPP_NORMAL implementation. 
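+ * The code below implements the OFPP_NORMAL (learning switch) action: it decides whether a flow is admissible, learns source MACs, computes the set of output ports and VLANs ("struct dst"), and adds mirror destinations.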
*/ + +struct dst { + struct ofport_dpif *port; + uint16_t vlan; +}; + +struct dst_set { + struct dst builtin[32]; + struct dst *dsts; + size_t n, allocated; +}; + +static void dst_set_init(struct dst_set *); +static void dst_set_add(struct dst_set *, const struct dst *); +static void dst_set_free(struct dst_set *); + +static struct ofport_dpif *ofbundle_get_a_port(const struct ofbundle *); + +static bool +set_dst(struct action_xlate_ctx *ctx, struct dst *dst, + const struct ofbundle *in_bundle, const struct ofbundle *out_bundle) +{ + dst->vlan = (out_bundle->vlan >= 0 ? OFP_VLAN_NONE + : in_bundle->vlan >= 0 ? in_bundle->vlan + : ctx->flow.vlan_tci == 0 ? OFP_VLAN_NONE + : vlan_tci_to_vid(ctx->flow.vlan_tci)); + + dst->port = (!out_bundle->bond + ? ofbundle_get_a_port(out_bundle) + : bond_choose_output_slave(out_bundle->bond, &ctx->flow, + dst->vlan, &ctx->tags)); + + return dst->port != NULL; +} + +static int +mirror_mask_ffs(mirror_mask_t mask) +{ + BUILD_ASSERT_DECL(sizeof(unsigned int) >= sizeof(mask)); + return ffs(mask); +} + +static void +dst_set_init(struct dst_set *set) +{ + set->dsts = set->builtin; + set->n = 0; + set->allocated = ARRAY_SIZE(set->builtin); +} + +static void +dst_set_add(struct dst_set *set, const struct dst *dst) +{ + if (set->n >= set->allocated) { + size_t new_allocated; + struct dst *new_dsts; + + new_allocated = set->allocated * 2; + new_dsts = xmalloc(new_allocated * sizeof *new_dsts); + memcpy(new_dsts, set->dsts, set->n * sizeof *new_dsts); + + dst_set_free(set); + + set->dsts = new_dsts; + set->allocated = new_allocated; + } + set->dsts[set->n++] = *dst; +} + +static void +dst_set_free(struct dst_set *set) +{ + if (set->dsts != set->builtin) { + free(set->dsts); + } +} + +static bool +dst_is_duplicate(const struct dst_set *set, const struct dst *test) +{ + size_t i; + for (i = 0; i < set->n; i++) { + if (set->dsts[i].vlan == test->vlan + && set->dsts[i].port == test->port) { + return true; + } + } + return false; +} + +static bool +ofbundle_trunks_vlan(const struct ofbundle *bundle, uint16_t vlan) +{ + return bundle->vlan < 0 && vlan_bitmap_contains(bundle->trunks, vlan); +} + +static bool +ofbundle_includes_vlan(const struct ofbundle *bundle, uint16_t vlan) +{ + return vlan == bundle->vlan || ofbundle_trunks_vlan(bundle, vlan); +} + +/* Returns an arbitrary interface within 'bundle'. 
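+ * (Currently the port at the front of the bundle's port list.)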
*/ +static struct ofport_dpif * +ofbundle_get_a_port(const struct ofbundle *bundle) +{ + return CONTAINER_OF(list_front(&bundle->ports), + struct ofport_dpif, bundle_node); +} + +static void +compose_dsts(struct action_xlate_ctx *ctx, uint16_t vlan, + const struct ofbundle *in_bundle, + const struct ofbundle *out_bundle, struct dst_set *set) +{ + struct dst dst; + + if (out_bundle == OFBUNDLE_FLOOD) { + struct ofbundle *bundle; + + HMAP_FOR_EACH (bundle, hmap_node, &ctx->ofproto->bundles) { + if (bundle != in_bundle + && ofbundle_includes_vlan(bundle, vlan) + && bundle->floodable + && !bundle->mirror_out + && set_dst(ctx, &dst, in_bundle, bundle)) { + dst_set_add(set, &dst); + } + } + ctx->nf_output_iface = NF_OUT_FLOOD; + } else if (out_bundle && set_dst(ctx, &dst, in_bundle, out_bundle)) { + dst_set_add(set, &dst); + ctx->nf_output_iface = dst.port->odp_port; + } +} + +static bool +vlan_is_mirrored(const struct ofmirror *m, int vlan) +{ + return vlan_bitmap_contains(m->vlans, vlan); +} + +static void +compose_mirror_dsts(struct action_xlate_ctx *ctx, + uint16_t vlan, const struct ofbundle *in_bundle, + struct dst_set *set) +{ + struct ofproto_dpif *ofproto = ctx->ofproto; + mirror_mask_t mirrors; + int flow_vlan; + size_t i; + + mirrors = in_bundle->src_mirrors; + for (i = 0; i < set->n; i++) { + mirrors |= set->dsts[i].port->bundle->dst_mirrors; + } + + if (!mirrors) { + return; + } + + flow_vlan = vlan_tci_to_vid(ctx->flow.vlan_tci); + if (flow_vlan == 0) { + flow_vlan = OFP_VLAN_NONE; + } + + while (mirrors) { + struct ofmirror *m = ofproto->mirrors[mirror_mask_ffs(mirrors) - 1]; + if (vlan_is_mirrored(m, vlan)) { + struct dst dst; + + if (m->out) { + if (set_dst(ctx, &dst, in_bundle, m->out) + && !dst_is_duplicate(set, &dst)) { + dst_set_add(set, &dst); + } + } else { + struct ofbundle *bundle; + + HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { + if (ofbundle_includes_vlan(bundle, m->out_vlan) + && set_dst(ctx, &dst, in_bundle, bundle)) + { + if (bundle->vlan < 0) { + dst.vlan = m->out_vlan; + } + if (dst_is_duplicate(set, &dst)) { + continue; + } + + /* Use the vlan tag on the original flow instead of + * the one passed in the vlan parameter. This ensures + * that we compare the vlan from before any implicit + * tagging tags place. This is necessary because + * dst->vlan is the final vlan, after removing implicit + * tags. */ + if (bundle == in_bundle && dst.vlan == flow_vlan) { + /* Don't send out input port on same VLAN. */ + continue; + } + dst_set_add(set, &dst); + } + } + } + } + mirrors &= mirrors - 1; + } +} + +static void +compose_actions(struct action_xlate_ctx *ctx, uint16_t vlan, + const struct ofbundle *in_bundle, + const struct ofbundle *out_bundle) +{ + uint16_t initial_vlan, cur_vlan; + const struct dst *dst; + struct dst_set set; + + dst_set_init(&set); + compose_dsts(ctx, vlan, in_bundle, out_bundle, &set); + compose_mirror_dsts(ctx, vlan, in_bundle, &set); + + /* Output all the packets we can without having to change the VLAN. */ + initial_vlan = vlan_tci_to_vid(ctx->flow.vlan_tci); + if (initial_vlan == 0) { + initial_vlan = OFP_VLAN_NONE; + } + for (dst = set.dsts; dst < &set.dsts[set.n]; dst++) { + if (dst->vlan != initial_vlan) { + continue; + } + nl_msg_put_u32(ctx->odp_actions, + ODP_ACTION_ATTR_OUTPUT, dst->port->odp_port); + } + + /* Then output the rest. 
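+ * Whenever the required VLAN differs from that of the previous destination, a set-dl-tci action (or strip-vlan, for untagged output) is emitted before the output action.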
*/ + cur_vlan = initial_vlan; + for (dst = set.dsts; dst < &set.dsts[set.n]; dst++) { + if (dst->vlan == initial_vlan) { + continue; + } + if (dst->vlan != cur_vlan) { + if (dst->vlan == OFP_VLAN_NONE) { + nl_msg_put_flag(ctx->odp_actions, ODP_ACTION_ATTR_STRIP_VLAN); + } else { + ovs_be16 tci; + tci = htons(dst->vlan & VLAN_VID_MASK); + tci |= ctx->flow.vlan_tci & htons(VLAN_PCP_MASK); + nl_msg_put_be16(ctx->odp_actions, + ODP_ACTION_ATTR_SET_DL_TCI, tci); + } + cur_vlan = dst->vlan; + } + nl_msg_put_u32(ctx->odp_actions, + ODP_ACTION_ATTR_OUTPUT, dst->port->odp_port); + } + + dst_set_free(&set); +} + +/* Returns the effective vlan of a packet, taking into account both the + * 802.1Q header and implicitly tagged ports. A value of 0 indicates that + * the packet is untagged and -1 indicates it has an invalid header and + * should be dropped. */ +static int +flow_get_vlan(struct ofproto_dpif *ofproto, const struct flow *flow, + struct ofbundle *in_bundle, bool have_packet) +{ + int vlan = vlan_tci_to_vid(flow->vlan_tci); + if (in_bundle->vlan >= 0) { + if (vlan) { + if (have_packet) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "bridge %s: dropping VLAN %d tagged " + "packet received on port %s configured with " + "implicit VLAN %"PRIu16, + ofproto->up.name, vlan, + in_bundle->name, in_bundle->vlan); + } + return -1; + } + vlan = in_bundle->vlan; + } else { + if (!ofbundle_includes_vlan(in_bundle, vlan)) { + if (have_packet) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "bridge %s: dropping VLAN %d tagged " + "packet received on port %s not configured for " + "trunking VLAN %d", + ofproto->up.name, vlan, in_bundle->name, vlan); + } + return -1; + } + } + + return vlan; +} + +/* A VM broadcasts a gratuitous ARP to indicate that it has resumed after + * migration. Older Citrix-patched Linux DomU used gratuitous ARP replies to + * indicate this; newer upstream kernels use gratuitous ARP requests. */ +static bool +is_gratuitous_arp(const struct flow *flow) +{ + return (flow->dl_type == htons(ETH_TYPE_ARP) + && eth_addr_is_broadcast(flow->dl_dst) + && (flow->nw_proto == ARP_OP_REPLY + || (flow->nw_proto == ARP_OP_REQUEST + && flow->nw_src == flow->nw_dst))); +} + +static void +update_learning_table(struct ofproto_dpif *ofproto, + const struct flow *flow, int vlan, + struct ofbundle *in_bundle) +{ + struct mac_entry *mac; + + if (!mac_learning_may_learn(ofproto->ml, flow->dl_src, vlan)) { + return; + } + + mac = mac_learning_insert(ofproto->ml, flow->dl_src, vlan); + if (is_gratuitous_arp(flow)) { + /* We don't want to learn from gratuitous ARP packets that are + * reflected back over bond slaves so we lock the learning table. */ + if (!in_bundle->bond) { + mac_entry_set_grat_arp_lock(mac); + } else if (mac_entry_is_grat_arp_locked(mac)) { + return; + } + } + + if (mac_entry_is_new(mac) || mac->port.p != in_bundle) { + /* The log messages here could actually be useful in debugging, + * so keep the rate limit relatively high. */ + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(30, 300); + VLOG_DBG_RL(&rl, "bridge %s: learned that "ETH_ADDR_FMT" is " + "on port %s in VLAN %d", + ofproto->up.name, ETH_ADDR_ARGS(flow->dl_src), + in_bundle->name, vlan); + + mac->port.p = in_bundle; + tag_set_add(&ofproto->revalidate_set, + mac_learning_changed(ofproto->ml, mac)); + } +} + +/* Determines whether packets in 'flow' within 'br' should be forwarded or + * dropped. 
Returns true if they may be forwarded, false if they should be + * dropped. + * + * If 'have_packet' is true, it indicates that the caller is processing a + * received packet. If 'have_packet' is false, then the caller is just + * revalidating an existing flow because configuration has changed. Either + * way, 'have_packet' only affects logging (there is no point in logging errors + * during revalidation). + * + * Sets '*in_portp' to the input port. This will be a null pointer if + * flow->in_port does not designate a known input port (in which case + * is_admissible() returns false). + * + * When returning true, sets '*vlanp' to the effective VLAN of the input + * packet, as returned by flow_get_vlan(). + * + * May also add tags to '*tags', although the current implementation only does + * so in one special case. + */ +static bool +is_admissible(struct ofproto_dpif *ofproto, const struct flow *flow, + bool have_packet, + tag_type *tags, int *vlanp, struct ofbundle **in_bundlep) +{ + struct ofport_dpif *in_port; + struct ofbundle *in_bundle; + int vlan; + + /* Find the port and bundle for the received packet. */ + in_port = get_ofp_port(ofproto, flow->in_port); + *in_bundlep = in_bundle = in_port->bundle; + if (!in_port || !in_bundle) { + /* No interface? Something fishy... */ + if (have_packet) { + /* Odd. A few possible reasons here: + * + * - We deleted a port but there are still a few packets queued up + * from it. + * + * - Someone externally added a port (e.g. "ovs-dpctl add-if") that + * we don't know about. + * + * - Packet arrived on the local port but the local port is not + * part of a bundle. + */ + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + + VLOG_WARN_RL(&rl, "bridge %s: received packet on unknown " + "port %"PRIu16, + ofproto->up.name, flow->in_port); + } + return false; + } + *vlanp = vlan = flow_get_vlan(ofproto, flow, in_bundle, have_packet); + if (vlan < 0) { + return false; + } + + /* Drop frames for reserved multicast addresses. */ + if (eth_addr_is_reserved(flow->dl_dst)) { + return false; + } + + /* Drop frames on bundles reserved for mirroring. */ + if (in_bundle->mirror_out) { + if (have_packet) { + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_WARN_RL(&rl, "bridge %s: dropping packet received on port " + "%s, which is reserved exclusively for mirroring", + ofproto->up.name, in_bundle->name); + } + return false; + } + + if (in_bundle->bond) { + struct mac_entry *mac; + + switch (bond_check_admissibility(in_bundle->bond, in_port, + flow->dl_dst, tags)) { + case BV_ACCEPT: + break; + + case BV_DROP: + return false; + + case BV_DROP_IF_MOVED: + mac = mac_learning_lookup(ofproto->ml, flow->dl_src, vlan, NULL); + if (mac && mac->port.p != in_bundle && + (!is_gratuitous_arp(flow) + || mac_entry_is_grat_arp_locked(mac))) { + return false; + } + break; + } + } + + return true; +} + +/* If the composed actions may be applied to any packet in the given 'flow', + * returns true. Otherwise, the actions should only be applied to 'packet', or + * not at all, if 'packet' was NULL. */ +static bool +xlate_normal(struct action_xlate_ctx *ctx) +{ + struct ofbundle *in_bundle; + struct ofbundle *out_bundle; + struct mac_entry *mac; + int vlan; + + /* Check whether we should drop packets in this flow. */ + if (!is_admissible(ctx->ofproto, &ctx->flow, ctx->packet != NULL, + &ctx->tags, &vlan, &in_bundle)) { + out_bundle = NULL; + goto done; + } + + /* Learn source MAC (but don't try to learn from revalidation). 
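 * (ctx->packet is null when this translation is only revalidating an existing
 * flow, so in that case there is no packet to learn from.)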
*/ + if (ctx->packet) { + update_learning_table(ctx->ofproto, &ctx->flow, vlan, in_bundle); + } + + /* Determine output bundle. */ + mac = mac_learning_lookup(ctx->ofproto->ml, ctx->flow.dl_dst, vlan, + &ctx->tags); + if (mac) { + out_bundle = mac->port.p; + } else if (!ctx->packet && !eth_addr_is_multicast(ctx->flow.dl_dst)) { + /* If we are revalidating but don't have a learning entry then eject + * the flow. Installing a flow that floods packets opens up a window + * of time where we could learn from a packet reflected on a bond and + * blackhole packets before the learning table is updated to reflect + * the correct port. */ + return false; + } else { + out_bundle = OFBUNDLE_FLOOD; + } + + /* Don't send packets out their input bundles. */ + if (in_bundle == out_bundle) { + out_bundle = NULL; + } + +done: + if (in_bundle) { + compose_actions(ctx, vlan, in_bundle, out_bundle); + } + + return true; +} + +static bool +get_drop_frags(struct ofproto *ofproto_) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + bool drop_frags; + + dpif_get_drop_frags(ofproto->dpif, &drop_frags); + return drop_frags; +} + +static void +set_drop_frags(struct ofproto *ofproto_, bool drop_frags) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + + dpif_set_drop_frags(ofproto->dpif, drop_frags); +} + +static int +packet_out(struct ofproto *ofproto_, struct ofpbuf *packet, + const struct flow *flow, + const union ofp_action *ofp_actions, size_t n_ofp_actions) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + int error; + + error = validate_actions(ofp_actions, n_ofp_actions, flow, + ofproto->max_ports); + if (!error) { + struct action_xlate_ctx ctx; + struct ofpbuf *odp_actions; + + action_xlate_ctx_init(&ctx, ofproto, flow, packet); + odp_actions = xlate_actions(&ctx, ofp_actions, n_ofp_actions); + dpif_execute(ofproto->dpif, odp_actions->data, odp_actions->size, + packet); + ofpbuf_delete(odp_actions); + } + return error; +} + +static void +get_netflow_ids(const struct ofproto *ofproto_, + uint8_t *engine_type, uint8_t *engine_id) +{ + struct ofproto_dpif *ofproto = ofproto_dpif_cast(ofproto_); + + dpif_get_netflow_ids(ofproto->dpif, engine_type, engine_id); +} + +static struct ofproto_dpif * +ofproto_dpif_lookup(const char *name) +{ + struct ofproto *ofproto = ofproto_lookup(name); + return (ofproto && ofproto->ofproto_class == &ofproto_dpif_class + ? 
ofproto_dpif_cast(ofproto) + : NULL); +} + +static void +ofproto_unixctl_fdb_show(struct unixctl_conn *conn, + const char *args, void *aux OVS_UNUSED) +{ + struct ds ds = DS_EMPTY_INITIALIZER; + const struct ofproto_dpif *ofproto; + const struct mac_entry *e; + + ofproto = ofproto_dpif_lookup(args); + if (!ofproto) { + unixctl_command_reply(conn, 501, "no such bridge"); + return; + } + + ds_put_cstr(&ds, " port VLAN MAC Age\n"); + LIST_FOR_EACH (e, lru_node, &ofproto->ml->lrus) { + struct ofbundle *bundle = e->port.p; + ds_put_format(&ds, "%5d %4d "ETH_ADDR_FMT" %3d\n", + ofbundle_get_a_port(bundle)->odp_port, + e->vlan, ETH_ADDR_ARGS(e->mac), mac_entry_age(e)); + } + unixctl_command_reply(conn, 200, ds_cstr(&ds)); + ds_destroy(&ds); +} + +struct ofproto_trace { + struct action_xlate_ctx ctx; + struct flow flow; + struct ds *result; +}; + +static void +trace_format_rule(struct ds *result, int level, const struct rule *rule) +{ + ds_put_char_multiple(result, '\t', level); + if (!rule) { + ds_put_cstr(result, "No match\n"); + return; + } + + ds_put_format(result, "Rule: cookie=%#"PRIx64" ", + ntohll(rule->flow_cookie)); + cls_rule_format(&rule->cr, result); + ds_put_char(result, '\n'); + + ds_put_char_multiple(result, '\t', level); + ds_put_cstr(result, "OpenFlow "); + ofp_print_actions(result, (const struct ofp_action_header *) rule->actions, + rule->n_actions * sizeof *rule->actions); + ds_put_char(result, '\n'); +} + +static void +trace_format_flow(struct ds *result, int level, const char *title, + struct ofproto_trace *trace) +{ + ds_put_char_multiple(result, '\t', level); + ds_put_format(result, "%s: ", title); + if (flow_equal(&trace->ctx.flow, &trace->flow)) { + ds_put_cstr(result, "unchanged"); + } else { + flow_format(result, &trace->ctx.flow); + trace->flow = trace->ctx.flow; + } + ds_put_char(result, '\n'); +} + +static void +trace_resubmit(struct action_xlate_ctx *ctx, struct rule_dpif *rule) +{ + struct ofproto_trace *trace = CONTAINER_OF(ctx, struct ofproto_trace, ctx); + struct ds *result = trace->result; + + ds_put_char(result, '\n'); + trace_format_flow(result, ctx->recurse + 1, "Resubmitted flow", trace); + trace_format_rule(result, ctx->recurse + 1, &rule->up); +} + +static void +ofproto_unixctl_trace(struct unixctl_conn *conn, const char *args_, + void *aux OVS_UNUSED) +{ + char *dpname, *in_port_s, *tun_id_s, *packet_s; + char *args = xstrdup(args_); + char *save_ptr = NULL; + struct ofproto_dpif *ofproto; + struct ofpbuf packet; + struct rule_dpif *rule; + struct ds result; + struct flow flow; + uint16_t in_port; + ovs_be64 tun_id; + char *s; + + ofpbuf_init(&packet, strlen(args) / 2); + ds_init(&result); + + dpname = strtok_r(args, " ", &save_ptr); + tun_id_s = strtok_r(NULL, " ", &save_ptr); + in_port_s = strtok_r(NULL, " ", &save_ptr); + packet_s = strtok_r(NULL, "", &save_ptr); /* Get entire rest of line. 
*/ + if (!dpname || !in_port_s || !packet_s) { + unixctl_command_reply(conn, 501, "Bad command syntax"); + goto exit; + } + + ofproto = ofproto_dpif_lookup(dpname); + if (!ofproto) { + unixctl_command_reply(conn, 501, "Unknown ofproto (use ofproto/list " + "for help)"); + goto exit; + } + + tun_id = htonll(strtoull(tun_id_s, NULL, 0)); + in_port = ofp_port_to_odp_port(atoi(in_port_s)); + + packet_s = ofpbuf_put_hex(&packet, packet_s, NULL); + packet_s += strspn(packet_s, " "); + if (*packet_s != '\0') { + unixctl_command_reply(conn, 501, "Trailing garbage in command"); + goto exit; + } + if (packet.size < ETH_HEADER_LEN) { + unixctl_command_reply(conn, 501, "Packet data too short for Ethernet"); + goto exit; + } + + ds_put_cstr(&result, "Packet: "); + s = ofp_packet_to_string(packet.data, packet.size, packet.size); + ds_put_cstr(&result, s); + free(s); + + flow_extract(&packet, tun_id, in_port, &flow); + ds_put_cstr(&result, "Flow: "); + flow_format(&result, &flow); + ds_put_char(&result, '\n'); + + rule = rule_dpif_lookup(ofproto, &flow); + trace_format_rule(&result, 0, &rule->up); + if (rule) { + struct ofproto_trace trace; + struct ofpbuf *odp_actions; + + trace.result = &result; + trace.flow = flow; + action_xlate_ctx_init(&trace.ctx, ofproto, &flow, &packet); + trace.ctx.resubmit_hook = trace_resubmit; + odp_actions = xlate_actions(&trace.ctx, + rule->up.actions, rule->up.n_actions); + + ds_put_char(&result, '\n'); + trace_format_flow(&result, 0, "Final flow", &trace); + ds_put_cstr(&result, "Datapath actions: "); + format_odp_actions(&result, odp_actions->data, odp_actions->size); + ofpbuf_delete(odp_actions); + } + + unixctl_command_reply(conn, 200, ds_cstr(&result)); + +exit: + ds_destroy(&result); + ofpbuf_uninit(&packet); + free(args); +} + +static void +ofproto_dpif_unixctl_init(void) +{ + static bool registered; + if (registered) { + return; + } + registered = true; + + unixctl_command_register("ofproto/trace", ofproto_unixctl_trace, NULL); + unixctl_command_register("fdb/show", ofproto_unixctl_fdb_show, NULL); +} + +const struct ofproto_class ofproto_dpif_class = { + enumerate_types, + enumerate_names, + del, + alloc, + construct, + destruct, + dealloc, + run, + wait, + flush, + port_alloc, + port_construct, + port_destruct, + port_dealloc, + port_modified, + port_reconfigured, + port_query_by_name, + port_add, + port_del, + port_dump_start, + port_dump_next, + port_dump_done, + port_poll, + port_poll_wait, + port_is_lacp_current, + rule_alloc, + rule_construct, + rule_destruct, + rule_dealloc, + rule_remove, + rule_get_stats, + rule_execute, + rule_modify_actions, + get_drop_frags, + set_drop_frags, + packet_out, + set_netflow, + get_netflow_ids, + set_sflow, + set_cfm, + get_cfm, + bundle_set, + bundle_remove, + mirror_set, + set_flood_vlans, + is_mirror_output_bundle, +}; diff --git a/ofproto/ofproto.c b/ofproto/ofproto.c index dadc546b2..8ec5a124e 100644 --- a/ofproto/ofproto.c +++ b/ofproto/ofproto.c @@ -32,7 +32,6 @@ #include "classifier.h" #include "connmgr.h" #include "coverage.h" -#include "dpif.h" #include "dynamic-string.h" #include "fail-open.h" #include "hash.h" @@ -74,13 +73,10 @@ VLOG_DEFINE_THIS_MODULE(ofproto); -COVERAGE_DEFINE(facet_changed_rule); -COVERAGE_DEFINE(facet_revalidate); COVERAGE_DEFINE(odp_overflow); COVERAGE_DEFINE(ofproto_agg_request); COVERAGE_DEFINE(ofproto_costly_flags); COVERAGE_DEFINE(ofproto_ctlr_action); -COVERAGE_DEFINE(ofproto_del_rule); COVERAGE_DEFINE(ofproto_error); COVERAGE_DEFINE(ofproto_expiration); 
COVERAGE_DEFINE(ofproto_expired); @@ -98,323 +94,137 @@ COVERAGE_DEFINE(ofproto_unexpected_rule); COVERAGE_DEFINE(ofproto_uninstallable); COVERAGE_DEFINE(ofproto_update_port); -/* Maximum depth of flow table recursion (due to NXAST_RESUBMIT actions) in a - * flow translation. */ -#define MAX_RESUBMIT_RECURSION 16 - -struct rule; - -#define MAX_MIRRORS 32 -typedef uint32_t mirror_mask_t; -#define MIRROR_MASK_C(X) UINT32_C(X) -BUILD_ASSERT_DECL(sizeof(mirror_mask_t) * CHAR_BIT >= MAX_MIRRORS); -struct ofmirror { - struct ofproto *ofproto; /* Owning ofproto. */ - size_t idx; /* In ofproto's "mirrors" array. */ - void *aux; /* Key supplied by ofproto's client. */ - char *name; /* Identifier for log messages. */ - - /* Selection criteria. */ - struct hmapx srcs; /* Contains "struct ofbundle *"s. */ - struct hmapx dsts; /* Contains "struct ofbundle *"s. */ - unsigned long *vlans; /* Bitmap of chosen VLANs, NULL selects all. */ - - /* Output (mutually exclusive). */ - struct ofbundle *out; /* Output port or NULL. */ - int out_vlan; /* Output VLAN or -1. */ -}; - -static void ofproto_mirror_destroy(struct ofmirror *); - -/* A group of one or more OpenFlow ports. */ -#define OFBUNDLE_FLOOD ((struct ofbundle *) 1) -struct ofbundle { - struct ofproto *ofproto; /* Owning ofproto. */ - struct hmap_node hmap_node; /* In struct ofproto's "bundles" hmap. */ - void *aux; /* Key supplied by ofproto's client. */ - char *name; /* Identifier for log messages. */ - - /* Configuration. */ - struct list ports; /* Contains "struct ofport"s. */ - int vlan; /* -1=trunk port, else a 12-bit VLAN ID. */ - unsigned long *trunks; /* Bitmap of trunked VLANs, if 'vlan' == -1. - * NULL if all VLANs are trunked. */ - struct lacp *lacp; /* LACP if LACP is enabled, otherwise NULL. */ - struct bond *bond; /* Bonding setup if more than one port, - * otherwise NULL. */ - - /* Status. */ - bool floodable; /* True if no port has OFPPC_NO_FLOOD set. */ - - /* Port mirroring info. */ - mirror_mask_t src_mirrors; /* Mirrors triggered when packet received. */ - mirror_mask_t dst_mirrors; /* Mirrors triggered when packet sent. */ - mirror_mask_t mirror_out; /* Mirrors that output to this bundle. */ -}; - -/* An OpenFlow port. */ -struct ofport { - struct ofproto *ofproto; /* Owning ofproto. */ - struct hmap_node hmap_node; /* In struct ofproto's "ports" hmap. */ - struct netdev *netdev; - struct ofp_phy_port opp; - uint16_t odp_port; - - /* Bridging. */ - struct ofbundle *bundle; /* Bundle that contains this port, if any. */ - struct list bundle_node; /* In struct ofbundle's "ports" list. */ - struct cfm *cfm; /* Connectivity Fault Management, if any. */ - tag_type tag; /* Tag associated with this port. */ -}; +static void ofport_destroy__(struct ofport *); +static void ofport_destroy(struct ofport *); -static void ofport_free(struct ofport *); -static void ofport_run(struct ofport *); -static void ofport_wait(struct ofport *); +static int rule_create(struct ofproto *, const struct cls_rule *, + const union ofp_action *, size_t n_actions, + uint16_t idle_timeout, uint16_t hard_timeout, + ovs_be64 flow_cookie, bool send_flow_removed, + struct rule **rulep); -struct action_xlate_ctx { -/* action_xlate_ctx_init() initializes these members. */ - - /* The ofproto. */ - struct ofproto *ofproto; - - /* Flow to which the OpenFlow actions apply. xlate_actions() will modify - * this flow when actions change header fields. 
*/ - struct flow flow; +static uint64_t pick_datapath_id(const struct ofproto *); +static uint64_t pick_fallback_dpid(void); - /* The packet corresponding to 'flow', or a null pointer if we are - * revalidating without a packet to refer to. */ - const struct ofpbuf *packet; +static void ofproto_destroy__(struct ofproto *); +static void ofproto_flush_flows__(struct ofproto *); - /* If nonnull, called just before executing a resubmit action. - * - * This is normally null so the client has to set it manually after - * calling action_xlate_ctx_init(). */ - void (*resubmit_hook)(struct action_xlate_ctx *, struct rule *); +static void ofproto_rule_destroy__(struct rule *); +static void ofproto_rule_send_removed(struct rule *, uint8_t reason); +static void ofproto_rule_remove(struct rule *); - /* If true, the speciality of 'flow' should be checked before executing - * its actions. If special_cb returns false on 'flow' rendered - * uninstallable and no actions will be executed. */ - bool check_special; +static void handle_openflow(struct ofconn *, struct ofpbuf *); -/* xlate_actions() initializes and uses these members. The client might want - * to look at them after it returns. */ +static void update_port(struct ofproto *, const char *devname); +static int init_ports(struct ofproto *); +static void reinit_ports(struct ofproto *); - struct ofpbuf *odp_actions; /* Datapath actions. */ - tag_type tags; /* Tags associated with OFPP_NORMAL actions. */ - bool may_set_up_flow; /* True ordinarily; false if the actions must - * be reassessed for every packet. */ - uint16_t nf_output_iface; /* Output interface index for NetFlow. */ +static void ofproto_unixctl_init(void); -/* xlate_actions() initializes and uses these members, but the client has no - * reason to look at them. */ +/* All registered ofproto classes, in probe order. */ +static const struct ofproto_class **ofproto_classes; +static size_t n_ofproto_classes; +static size_t allocated_ofproto_classes; - int recurse; /* Recursion level, via xlate_table_action. */ - int last_pop_priority; /* Offset in 'odp_actions' just past most - * recent ODP_ACTION_ATTR_SET_PRIORITY. */ -}; +/* Map from dpif name to struct ofproto, for use by unixctl commands. */ +static struct hmap all_ofprotos = HMAP_INITIALIZER(&all_ofprotos); -static void action_xlate_ctx_init(struct action_xlate_ctx *, - struct ofproto *, const struct flow *, - const struct ofpbuf *); -static struct ofpbuf *xlate_actions(struct action_xlate_ctx *, - const union ofp_action *in, size_t n_in); - -/* An OpenFlow flow. */ -struct rule { - long long int used; /* Time last used; time created if not used. */ - long long int created; /* Creation time. */ - - /* These statistics: - * - * - Do include packets and bytes from facets that have been deleted or - * whose own statistics have been folded into the rule. - * - * - Do include packets and bytes sent "by hand" that were accounted to - * the rule without any facet being involved (this is a rare corner - * case in rule_execute()). - * - * - Do not include packet or bytes that can be obtained from any facet's - * packet_count or byte_count member or that can be obtained from the - * datapath by, e.g., dpif_flow_get() for any facet. - */ - uint64_t packet_count; /* Number of packets received. */ - uint64_t byte_count; /* Number of bytes received. */ - - ovs_be64 flow_cookie; /* Controller-issued identifier. */ - - struct cls_rule cr; /* In owning ofproto's classifier. */ - uint16_t idle_timeout; /* In seconds from time of last use. 
*/ - uint16_t hard_timeout; /* In seconds from time of creation. */ - bool send_flow_removed; /* Send a flow removed message? */ - int n_actions; /* Number of elements in actions[]. */ - union ofp_action *actions; /* OpenFlow actions. */ - struct list facets; /* List of "struct facet"s. */ -}; +static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); -static struct rule *rule_from_cls_rule(const struct cls_rule *); -static bool rule_is_hidden(const struct rule *); - -static struct rule *rule_create(const struct cls_rule *, - const union ofp_action *, size_t n_actions, - uint16_t idle_timeout, uint16_t hard_timeout, - ovs_be64 flow_cookie, bool send_flow_removed); -static void rule_destroy(struct ofproto *, struct rule *); -static void rule_free(struct rule *); - -static struct rule *rule_lookup(struct ofproto *, const struct flow *); -static void rule_insert(struct ofproto *, struct rule *); -static void rule_remove(struct ofproto *, struct rule *); - -static void rule_send_removed(struct ofproto *, struct rule *, uint8_t reason); -static void rule_get_stats(const struct rule *, uint64_t *packets, - uint64_t *bytes); - -/* An exact-match instantiation of an OpenFlow flow. */ -struct facet { - long long int used; /* Time last used; time created if not used. */ - - /* These statistics: - * - * - Do include packets and bytes sent "by hand", e.g. with - * dpif_execute(). - * - * - Do include packets and bytes that were obtained from the datapath - * when a flow was deleted (e.g. dpif_flow_del()) or when its - * statistics were reset (e.g. dpif_flow_put() with - * DPIF_FP_ZERO_STATS). - * - * - Do not include any packets or bytes that can currently be obtained - * from the datapath by, e.g., dpif_flow_get(). - */ - uint64_t packet_count; /* Number of packets received. */ - uint64_t byte_count; /* Number of bytes received. */ - - uint64_t dp_packet_count; /* Last known packet count in the datapath. */ - uint64_t dp_byte_count; /* Last known byte count in the datapath. */ - - uint64_t rs_packet_count; /* Packets pushed to resubmit children. */ - uint64_t rs_byte_count; /* Bytes pushed to resubmit children. */ - long long int rs_used; /* Used time pushed to resubmit children. */ - - /* Number of bytes passed to account_cb. This may include bytes that can - * currently obtained from the datapath (thus, it can be greater than - * byte_count). */ - uint64_t accounted_bytes; - - struct hmap_node hmap_node; /* In owning ofproto's 'facets' hmap. */ - struct list list_node; /* In owning rule's 'facets' list. */ - struct rule *rule; /* Owning rule. */ - struct flow flow; /* Exact-match flow. */ - bool installed; /* Installed in datapath? */ - bool may_install; /* True ordinarily; false if actions must - * be reassessed for every packet. */ - size_t actions_len; /* Number of bytes in actions[]. */ - struct nlattr *actions; /* Datapath actions. */ - tag_type tags; /* Tags. */ - struct netflow_flow nf_flow; /* Per-flow NetFlow tracking data. 
*/ -}; +static void +ofproto_initialize(void) +{ + static bool inited; -static struct facet *facet_create(struct ofproto *, struct rule *, - const struct flow *, - const struct ofpbuf *packet); -static void facet_remove(struct ofproto *, struct facet *); -static void facet_free(struct facet *); - -static struct facet *facet_lookup_valid(struct ofproto *, const struct flow *); -static bool facet_revalidate(struct ofproto *, struct facet *); - -static void facet_install(struct ofproto *, struct facet *, bool zero_stats); -static void facet_uninstall(struct ofproto *, struct facet *); -static void facet_flush_stats(struct ofproto *, struct facet *); - -static void facet_make_actions(struct ofproto *, struct facet *, - const struct ofpbuf *packet); -static void facet_update_stats(struct ofproto *, struct facet *, - const struct dpif_flow_stats *); -static void facet_push_stats(struct ofproto *, struct facet *); - -static void send_packet_in(struct ofproto *, struct dpif_upcall *, - const struct flow *, bool clone); - -struct ofproto { - char *name; /* Datapath name. */ - struct hmap_node hmap_node; /* In global 'all_ofprotos' hmap. */ - - /* Settings. */ - uint64_t datapath_id; /* Datapath ID. */ - uint64_t fallback_dpid; /* Datapath ID if no better choice found. */ - char *mfr_desc; /* Manufacturer. */ - char *hw_desc; /* Hardware. */ - char *sw_desc; /* Software version. */ - char *serial_desc; /* Serial number. */ - char *dp_desc; /* Datapath description. */ - - /* Datapath. */ - struct dpif *dpif; - struct netdev_monitor *netdev_monitor; - struct hmap ports; /* Contains "struct ofport"s. */ - struct shash port_by_name; - uint32_t max_ports; - - /* Bridging. */ - struct netflow *netflow; - struct ofproto_sflow *sflow; - struct hmap bundles; /* Contains "struct ofbundle"s. */ - struct mac_learning *ml; - struct ofmirror *mirrors[MAX_MIRRORS]; - bool has_bonded_bundles; - - /* Flow table. */ - struct classifier cls; - struct timer next_expiration; - - /* Facets. */ - struct hmap facets; - bool need_revalidate; - struct tag_set revalidate_set; - - /* OpenFlow connections. */ - struct connmgr *connmgr; -}; + if (!inited) { + inited = true; + ofproto_class_register(&ofproto_dpif_class); + } +} -/* Map from dpif name to struct ofproto, for use by unixctl commands. */ -static struct hmap all_ofprotos = HMAP_INITIALIZER(&all_ofprotos); +/* 'type' should be a normalized datapath type, as returned by + * ofproto_normalize_type(). Returns the corresponding ofproto_class + * structure, or a null pointer if there is none registered for 'type'. 
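 *
 * For example, with only ofproto_dpif_class registered (its enumerate_types()
 * callback typically reports "system" and "netdev"), looking up "system"
 * returns &ofproto_dpif_class, while an unrecognized type logs a warning and
 * yields a null pointer.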
*/ +static const struct ofproto_class * +ofproto_class_find__(const char *type) +{ + size_t i; -static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); + ofproto_initialize(); + for (i = 0; i < n_ofproto_classes; i++) { + const struct ofproto_class *class = ofproto_classes[i]; + struct sset types; + bool found; -static uint64_t pick_datapath_id(const struct ofproto *); -static uint64_t pick_fallback_dpid(void); + sset_init(&types); + class->enumerate_types(&types); + found = sset_contains(&types, type); + sset_destroy(&types); -static void ofproto_flush_flows__(struct ofproto *); -static int ofproto_expire(struct ofproto *); -static void flow_push_stats(struct ofproto *, const struct rule *, - struct flow *, uint64_t packets, uint64_t bytes, - long long int used); + if (found) { + return class; + } + } + VLOG_WARN("unknown datapath type %s", type); + return NULL; +} -static void handle_upcall(struct ofproto *, struct dpif_upcall *); +/* Registers a new ofproto class. After successful registration, new ofprotos + * of that type can be created using ofproto_create(). */ +int +ofproto_class_register(const struct ofproto_class *new_class) +{ + size_t i; -static void handle_openflow(struct ofconn *, struct ofpbuf *); + for (i = 0; i < n_ofproto_classes; i++) { + if (ofproto_classes[i] == new_class) { + return EEXIST; + } + } -static struct ofport *get_port(const struct ofproto *, uint16_t odp_port); -static void update_port(struct ofproto *, const char *devname); -static int init_ports(struct ofproto *); -static void reinit_ports(struct ofproto *); + if (n_ofproto_classes >= allocated_ofproto_classes) { + ofproto_classes = x2nrealloc(ofproto_classes, + &allocated_ofproto_classes, + sizeof *ofproto_classes); + } + ofproto_classes[n_ofproto_classes++] = new_class; + return 0; +} -static void update_learning_table(struct ofproto *, - const struct flow *, int vlan, - struct ofbundle *); -static bool is_admissible(struct ofproto *, const struct flow *, - bool have_packet, tag_type *, int *vlanp, - struct ofbundle **in_bundlep); +/* Unregisters a datapath provider. 'type' must have been previously + * registered and not currently be in use by any ofprotos. After + * unregistration new datapaths of that type cannot be opened using + * ofproto_create(). */ +int +ofproto_class_unregister(const struct ofproto_class *class) +{ + size_t i; -static void ofproto_unixctl_init(void); + for (i = 0; i < n_ofproto_classes; i++) { + if (ofproto_classes[i] == class) { + for (i++; i < n_ofproto_classes; i++) { + ofproto_classes[i - 1] = ofproto_classes[i]; + } + n_ofproto_classes--; + return 0; + } + } + VLOG_WARN("attempted to unregister an ofproto class that is not " + "registered"); + return EAFNOSUPPORT; +} /* Clears 'types' and enumerates all registered ofproto types into it. The * caller must first initialize the sset. */ void ofproto_enumerate_types(struct sset *types) { - dp_enumerate_types(types); + size_t i; + + ofproto_initialize(); + for (i = 0; i < n_ofproto_classes; i++) { + ofproto_classes[i]->enumerate_types(types); + } } /* Returns the fully spelled out name for the given ofproto 'type'. @@ -424,7 +234,7 @@ ofproto_enumerate_types(struct sset *types) const char * ofproto_normalize_type(const char *type) { - return dpif_normalize_type(type); + return type && type[0] ? 
type : "system"; } /* Clears 'names' and enumerates the names of all known created ofprotos with @@ -436,98 +246,71 @@ ofproto_normalize_type(const char *type) int ofproto_enumerate_names(const char *type, struct sset *names) { - return dp_enumerate_names(type, names); -} + const struct ofproto_class *class = ofproto_class_find__(type); + return class ? class->enumerate_names(type, names) : EAFNOSUPPORT; + } int -ofproto_create(const char *datapath, const char *datapath_type, +ofproto_create(const char *datapath_name, const char *datapath_type, struct ofproto **ofprotop) { - char local_name[IF_NAMESIZE]; - struct ofproto *p; - struct dpif *dpif; + const struct ofproto_class *class; + struct ofproto *ofproto; int error; - int i; *ofprotop = NULL; + ofproto_initialize(); ofproto_unixctl_init(); - /* Connect to datapath and start listening for messages. */ - error = dpif_create_and_open(datapath, datapath_type, &dpif); - if (error) { - VLOG_ERR("failed to open datapath %s: %s", datapath, strerror(error)); - return error; - } - error = dpif_recv_set_mask(dpif, - ((1u << DPIF_UC_MISS) | - (1u << DPIF_UC_ACTION) | - (1u << DPIF_UC_SAMPLE))); - if (error) { - VLOG_ERR("failed to listen on datapath %s: %s", - datapath, strerror(error)); - dpif_close(dpif); - return error; + datapath_type = ofproto_normalize_type(datapath_type); + class = ofproto_class_find__(datapath_type); + if (!class) { + VLOG_WARN("could not create datapath %s of unknown type %s", + datapath_name, datapath_type); + return EAFNOSUPPORT; } - dpif_flow_flush(dpif); - dpif_recv_purge(dpif); - error = dpif_port_get_name(dpif, ODPP_LOCAL, - local_name, sizeof local_name); + ofproto = class->alloc(); + if (!ofproto) { + VLOG_ERR("failed to allocate datapath %s of type %s", + datapath_name, datapath_type); + return ENOMEM; + } + + /* Initialize. */ + memset(ofproto, 0, sizeof *ofproto); + ofproto->ofproto_class = class; + ofproto->name = xstrdup(datapath_name); + ofproto->type = xstrdup(datapath_type); + hmap_insert(&all_ofprotos, &ofproto->hmap_node, + hash_string(ofproto->name, 0)); + ofproto->datapath_id = 0; + ofproto->fallback_dpid = pick_fallback_dpid(); + ofproto->mfr_desc = xstrdup(DEFAULT_MFR_DESC); + ofproto->hw_desc = xstrdup(DEFAULT_HW_DESC); + ofproto->sw_desc = xstrdup(DEFAULT_SW_DESC); + ofproto->serial_desc = xstrdup(DEFAULT_SERIAL_DESC); + ofproto->dp_desc = xstrdup(DEFAULT_DP_DESC); + ofproto->netdev_monitor = netdev_monitor_create(); + hmap_init(&ofproto->ports); + shash_init(&ofproto->port_by_name); + classifier_init(&ofproto->cls); + ofproto->connmgr = connmgr_create(ofproto, datapath_name, datapath_name); + + error = ofproto->ofproto_class->construct(ofproto); if (error) { - VLOG_ERR("%s: cannot get name of datapath local port (%s)", - datapath, strerror(error)); + VLOG_ERR("failed to open datapath %s: %s", + datapath_name, strerror(error)); + ofproto_destroy__(ofproto); return error; } - /* Initialize settings. */ - p = xzalloc(sizeof *p); - p->name = xstrdup(dpif_name(dpif)); - hmap_insert(&all_ofprotos, &p->hmap_node, hash_string(p->name, 0)); - p->fallback_dpid = pick_fallback_dpid(); - p->datapath_id = p->fallback_dpid; - p->mfr_desc = xstrdup(DEFAULT_MFR_DESC); - p->hw_desc = xstrdup(DEFAULT_HW_DESC); - p->sw_desc = xstrdup(DEFAULT_SW_DESC); - p->serial_desc = xstrdup(DEFAULT_SERIAL_DESC); - p->dp_desc = xstrdup(DEFAULT_DP_DESC); - - /* Initialize datapath. 
*/ - p->dpif = dpif; - p->netdev_monitor = netdev_monitor_create(); - hmap_init(&p->ports); - shash_init(&p->port_by_name); - p->max_ports = dpif_get_max_ports(dpif); - - /* Initialize bridging. */ - p->netflow = NULL; - p->sflow = NULL; - hmap_init(&p->bundles); - p->ml = mac_learning_create(); - for (i = 0; i < MAX_MIRRORS; i++) { - p->mirrors[i] = NULL; - } - p->has_bonded_bundles = false; - - /* Initialize flow table. */ - classifier_init(&p->cls); - timer_set_duration(&p->next_expiration, 1000); - - /* Initialize facet table. */ - hmap_init(&p->facets); - p->need_revalidate = false; - tag_set_init(&p->revalidate_set); - - /* Pick final datapath ID. */ - p->datapath_id = pick_datapath_id(p); - VLOG_INFO("using datapath ID %016"PRIx64, p->datapath_id); - - /* Initialize OpenFlow connections. */ - p->connmgr = connmgr_create(p, datapath, local_name); - - init_ports(p); + ofproto->datapath_id = pick_datapath_id(ofproto); + VLOG_INFO("using datapath ID %016"PRIx64, ofproto->datapath_id); + init_ports(ofproto); - *ofprotop = p; + *ofprotop = ofproto; return 0; } @@ -647,37 +430,29 @@ int ofproto_set_netflow(struct ofproto *ofproto, const struct netflow_options *nf_options) { - if (nf_options && !sset_is_empty(&nf_options->collectors)) { - if (!ofproto->netflow) { - ofproto->netflow = netflow_create(); - } - return netflow_set_options(ofproto->netflow, nf_options); + if (nf_options && sset_is_empty(&nf_options->collectors)) { + nf_options = NULL; + } + + if (ofproto->ofproto_class->set_netflow) { + return ofproto->ofproto_class->set_netflow(ofproto, nf_options); } else { - netflow_destroy(ofproto->netflow); - ofproto->netflow = NULL; - return 0; + return nf_options ? EOPNOTSUPP : 0; } } -void +int ofproto_set_sflow(struct ofproto *ofproto, const struct ofproto_sflow_options *oso) { - struct ofproto_sflow *os = ofproto->sflow; - if (oso) { - if (!os) { - struct ofport *ofport; + if (oso && sset_is_empty(&oso->targets)) { + oso = NULL; + } - os = ofproto->sflow = ofproto_sflow_create(ofproto->dpif); - HMAP_FOR_EACH (ofport, hmap_node, &ofproto->ports) { - ofproto_sflow_add_port(os, ofport->odp_port, - netdev_get_name(ofport->netdev)); - } - } - ofproto_sflow_set_options(os, oso); + if (ofproto->ofproto_class->set_sflow) { + return ofproto->ofproto_class->set_sflow(ofproto, oso); } else { - ofproto_sflow_destroy(os); - ofproto->sflow = NULL; + return oso ? 
EOPNOTSUPP : 0; } } @@ -687,10 +462,9 @@ ofproto_set_sflow(struct ofproto *ofproto, void ofproto_port_clear_cfm(struct ofproto *ofproto, uint16_t ofp_port) { - struct ofport *ofport = get_port(ofproto, ofp_port_to_odp_port(ofp_port)); - if (ofport && ofport->cfm){ - cfm_destroy(ofport->cfm); - ofport->cfm = NULL; + struct ofport *ofport = ofproto_get_port(ofproto, ofp_port); + if (ofport && ofproto->ofproto_class->set_cfm) { + ofproto->ofproto_class->set_cfm(ofport, NULL, NULL, 0); } } @@ -706,30 +480,23 @@ ofproto_port_set_cfm(struct ofproto *ofproto, uint16_t ofp_port, const uint16_t *remote_mps, size_t n_remote_mps) { struct ofport *ofport; + int error; - ofport = get_port(ofproto, ofp_port_to_odp_port(ofp_port)); + ofport = ofproto_get_port(ofproto, ofp_port); if (!ofport) { VLOG_WARN("%s: cannot configure CFM on nonexistent port %"PRIu16, ofproto->name, ofp_port); return; } - if (!ofport->cfm) { - ofport->cfm = cfm_create(); - } - - ofport->cfm->mpid = cfm->mpid; - ofport->cfm->interval = cfm->interval; - memcpy(ofport->cfm->maid, cfm->maid, CCM_MAID_LEN); - - cfm_update_remote_mps(ofport->cfm, remote_mps, n_remote_mps); - - if (!cfm_configure(ofport->cfm)) { - VLOG_WARN("%s: CFM configuration on port %"PRIu16" (%s) failed", - ofproto->name, ofp_port, - netdev_get_name(ofport->netdev)); - cfm_destroy(ofport->cfm); - ofport->cfm = NULL; + error = (ofproto->ofproto_class->set_cfm + ? ofproto->ofproto_class->set_cfm(ofport, cfm, + remote_mps, n_remote_mps) + : EOPNOTSUPP); + if (error) { + VLOG_WARN("%s: CFM configuration on port %"PRIu16" (%s) failed (%s)", + ofproto->name, ofp_port, netdev_get_name(ofport->netdev), + strerror(error)); } } @@ -740,8 +507,13 @@ ofproto_port_set_cfm(struct ofproto *ofproto, uint16_t ofp_port, const struct cfm * ofproto_port_get_cfm(struct ofproto *ofproto, uint16_t ofp_port) { - struct ofport *ofport = get_port(ofproto, ofp_port_to_odp_port(ofp_port)); - return ofport ? ofport->cfm : NULL; + struct ofport *ofport; + const struct cfm *cfm; + + ofport = ofproto_get_port(ofproto, ofp_port); + return (ofport + && ofproto->ofproto_class->get_cfm + && !ofproto->ofproto_class->get_cfm(ofport, &cfm)) ? cfm : NULL; } /* Checks the status of LACP negotiation for 'ofp_port' within ofproto. @@ -751,766 +523,210 @@ ofproto_port_get_cfm(struct ofproto *ofproto, uint16_t ofp_port) int ofproto_port_is_lacp_current(struct ofproto *ofproto, uint16_t ofp_port) { - struct ofport *ofport = get_port(ofproto, ofp_port_to_odp_port(ofp_port)); - return (ofport && ofport->bundle && ofport->bundle->lacp - ? lacp_slave_is_current(ofport->bundle->lacp, ofport) + struct ofport *ofport = ofproto_get_port(ofproto, ofp_port); + return (ofport && ofproto->ofproto_class->port_is_lacp_current + ? ofproto->ofproto_class->port_is_lacp_current(ofport) : -1); } /* Bundles. */ -/* Expires all MAC learning entries associated with 'port' and forces ofproto - * to revalidate every flow. */ -static void -ofproto_bundle_flush_macs(struct ofbundle *bundle) +/* Registers a "bundle" associated with client data pointer 'aux' in 'ofproto'. + * A bundle is the same concept as a Port in OVSDB, that is, it consists of one + * or more "slave" devices (Interfaces, in OVSDB) along with a VLAN + * configuration plus, if there is more than one slave, a bonding + * configuration. + * + * If 'aux' is already registered then this function updates its configuration + * to 's'. Otherwise, this function registers a new bundle. + * + * Bundles only affect the NXAST_AUTOPATH action and output to the OFPP_NORMAL + * port. 
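 *
 * A minimal caller sketch (illustrative only; 'port_cfg' and 'slave_ofp_port'
 * stand in for the client's own key and port number, and the remaining
 * settings are left at their defaults):
 *
 *     struct ofproto_bundle_settings s;
 *
 *     memset(&s, 0, sizeof s);
 *     s.name = "eth0";
 *     s.slaves = &slave_ofp_port;      (a single slave, so no bond needed)
 *     s.n_slaves = 1;
 *     s.vlan = 10;                     (implicitly tagged access port)
 *     ofproto_bundle_register(ofproto, port_cfg, &s);
 *     ...
 *     ofproto_bundle_unregister(ofproto, port_cfg);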
*/ +int +ofproto_bundle_register(struct ofproto *ofproto, void *aux, + const struct ofproto_bundle_settings *s) { - struct ofproto *ofproto = bundle->ofproto; - struct mac_learning *ml = ofproto->ml; - struct mac_entry *mac, *next_mac; - - ofproto->need_revalidate = true; - LIST_FOR_EACH_SAFE (mac, next_mac, lru_node, &ml->lrus) { - if (mac->port.p == bundle) { - mac_learning_expire(ml, mac); - } - } + return (ofproto->ofproto_class->bundle_set + ? ofproto->ofproto_class->bundle_set(ofproto, aux, s) + : EOPNOTSUPP); } -static struct ofbundle * -ofproto_bundle_lookup(const struct ofproto *ofproto, void *aux) +/* Unregisters the bundle registered on 'ofproto' with auxiliary data 'aux'. + * If no such bundle has been registered, this has no effect. */ +int +ofproto_bundle_unregister(struct ofproto *ofproto, void *aux) { - struct ofbundle *bundle; - - HMAP_FOR_EACH_IN_BUCKET (bundle, hmap_node, hash_pointer(aux, 0), - &ofproto->bundles) { - if (bundle->aux == aux) { - return bundle; - } - } - return NULL; + return ofproto_bundle_register(ofproto, aux, NULL); } -/* Looks up each of the 'n_auxes' pointers in 'auxes' as bundles and adds the - * ones that are found to 'bundles'. */ -static void -ofproto_bundle_lookup_multiple(struct ofproto *ofproto, - void **auxes, size_t n_auxes, - struct hmapx *bundles) + +/* Registers a mirror associated with client data pointer 'aux' in 'ofproto'. + * If 'aux' is already registered then this function updates its configuration + * to 's'. Otherwise, this function registers a new mirror. + * + * Mirrors affect only the treatment of packets output to the OFPP_NORMAL + * port. */ +int +ofproto_mirror_register(struct ofproto *ofproto, void *aux, + const struct ofproto_mirror_settings *s) { - size_t i; - - hmapx_init(bundles); - for (i = 0; i < n_auxes; i++) { - struct ofbundle *bundle = ofproto_bundle_lookup(ofproto, auxes[i]); - if (bundle) { - hmapx_add(bundles, bundle); - } - } + return (ofproto->ofproto_class->mirror_set + ? ofproto->ofproto_class->mirror_set(ofproto, aux, s) + : EOPNOTSUPP); } -static void -ofproto_bundle_del_port(struct ofport *port) +/* Unregisters the mirror registered on 'ofproto' with auxiliary data 'aux'. + * If no mirror has been registered, this has no effect. */ +int +ofproto_mirror_unregister(struct ofproto *ofproto, void *aux) { - struct ofbundle *bundle = port->bundle; - - list_remove(&port->bundle_node); - port->bundle = NULL; - - if (bundle->lacp) { - lacp_slave_unregister(bundle->lacp, port); - } - if (bundle->bond) { - bond_slave_unregister(bundle->bond, port); - } - - bundle->floodable = true; - LIST_FOR_EACH (port, bundle_node, &bundle->ports) { - if (port->opp.config & htonl(OFPPC_NO_FLOOD)) { - bundle->floodable = false; - } - } + return ofproto_mirror_register(ofproto, aux, NULL); } -static bool -ofproto_bundle_add_port(struct ofbundle *bundle, uint32_t ofp_port, - struct lacp_slave_settings *lacp) +/* Configures the VLANs whose bits are set to 1 in 'flood_vlans' as VLANs on + * which all packets are flooded, instead of using MAC learning. If + * 'flood_vlans' is NULL, then MAC learning applies to all VLANs. + * + * Flood VLANs affect only the treatment of packets output to the OFPP_NORMAL + * port. 
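 *
 * For example (a sketch, assuming the bitmap helpers in lib/bitmap.h):
 *
 *     unsigned long *flood_vlans = bitmap_allocate(4096);
 *
 *     bitmap_set1(flood_vlans, 42);    (flood all traffic on VLAN 42)
 *     ofproto_set_flood_vlans(ofproto, flood_vlans);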
*/ +int +ofproto_set_flood_vlans(struct ofproto *ofproto, unsigned long *flood_vlans) { - struct ofport *port; - - port = get_port(bundle->ofproto, ofp_port_to_odp_port(ofp_port)); - if (!port) { - return false; - } - - if (port->bundle != bundle) { - if (port->bundle) { - ofproto_bundle_del_port(port); - } - - port->bundle = bundle; - list_push_back(&bundle->ports, &port->bundle_node); - if (port->opp.config & htonl(OFPPC_NO_FLOOD)) { - bundle->floodable = false; - } - } - - if (lacp) { - lacp_slave_register(bundle->lacp, port, lacp); - } + return (ofproto->ofproto_class->set_flood_vlans + ? ofproto->ofproto_class->set_flood_vlans(ofproto, flood_vlans) + : EOPNOTSUPP); +} - return true; +/* Returns true if 'aux' is a registered bundle that is currently in use as the + * output for a mirror. */ +bool +ofproto_is_mirror_output_bundle(struct ofproto *ofproto, void *aux) +{ + return (ofproto->ofproto_class->is_mirror_output_bundle + ? ofproto->ofproto_class->is_mirror_output_bundle(ofproto, aux) + : false); +} + +bool +ofproto_has_snoops(const struct ofproto *ofproto) +{ + return connmgr_has_snoops(ofproto->connmgr); } void -ofproto_bundle_register(struct ofproto *ofproto, void *aux, - const struct ofproto_bundle_settings *s) +ofproto_get_snoops(const struct ofproto *ofproto, struct sset *snoops) { - bool need_flush = false; - const unsigned long *trunks; - struct ofbundle *bundle; - struct ofport *port; - size_t i; - bool ok; - - assert(s->n_slaves == 1 || s->bond != NULL); - assert((s->lacp != NULL) == (s->lacp_slaves != NULL)); - - bundle = ofproto_bundle_lookup(ofproto, aux); - if (!bundle) { - bundle = xmalloc(sizeof *bundle); - - bundle->ofproto = ofproto; - hmap_insert(&ofproto->bundles, &bundle->hmap_node, - hash_pointer(aux, 0)); - bundle->aux = aux; - bundle->name = NULL; - - list_init(&bundle->ports); - bundle->vlan = -1; - bundle->trunks = NULL; - bundle->bond = NULL; - bundle->lacp = NULL; - - bundle->floodable = true; - - bundle->src_mirrors = 0; - bundle->dst_mirrors = 0; - bundle->mirror_out = 0; - } + connmgr_get_snoops(ofproto->connmgr, snoops); +} - if (!bundle->name || strcmp(s->name, bundle->name)) { - free(bundle->name); - bundle->name = xstrdup(s->name); - } +static void +ofproto_destroy__(struct ofproto *ofproto) +{ + connmgr_destroy(ofproto->connmgr); - /* LACP. */ - if (s->lacp) { - if (!bundle->lacp) { - bundle->lacp = lacp_create(); - } - lacp_configure(bundle->lacp, s->lacp); - } else { - lacp_destroy(bundle->lacp); - bundle->lacp = NULL; - } + hmap_remove(&all_ofprotos, &ofproto->hmap_node); + free(ofproto->name); + free(ofproto->mfr_desc); + free(ofproto->hw_desc); + free(ofproto->sw_desc); + free(ofproto->serial_desc); + free(ofproto->dp_desc); + netdev_monitor_destroy(ofproto->netdev_monitor); + hmap_destroy(&ofproto->ports); + shash_destroy(&ofproto->port_by_name); + classifier_destroy(&ofproto->cls); - /* Update set of ports. */ - ok = true; - for (i = 0; i < s->n_slaves; i++) { - if (!ofproto_bundle_add_port(bundle, s->slaves[i], - s->lacp ? 
&s->lacp_slaves[i] : NULL)) { - ok = false; - } - } - if (!ok || list_size(&bundle->ports) != s->n_slaves) { - struct ofport *next_port; - - LIST_FOR_EACH_SAFE (port, next_port, bundle_node, &bundle->ports) { - for (i = 0; i < s->n_slaves; i++) { - if (s->slaves[i] == odp_port_to_ofp_port(port->odp_port)) { - goto found; - } - } + ofproto->ofproto_class->dealloc(ofproto); +} - ofproto_bundle_del_port(port); - found: ; - } - } - assert(list_size(&bundle->ports) <= s->n_slaves); +void +ofproto_destroy(struct ofproto *p) +{ + struct ofport *ofport, *next_ofport; - if (list_is_empty(&bundle->ports)) { - ofproto_bundle_unregister(ofproto, aux); + if (!p) { return; } - /* Set VLAN tag. */ - if (s->vlan != bundle->vlan) { - bundle->vlan = s->vlan; - need_flush = true; - } - - /* Get trunked VLANs. */ - trunks = s->vlan == -1 ? NULL : s->trunks; - if (!vlan_bitmap_equal(trunks, bundle->trunks)) { - free(bundle->trunks); - bundle->trunks = vlan_bitmap_clone(trunks); - need_flush = true; + ofproto_flush_flows__(p); + HMAP_FOR_EACH_SAFE (ofport, next_ofport, hmap_node, &p->ports) { + hmap_remove(&p->ports, &ofport->hmap_node); + ofport_destroy(ofport); } - /* Bonding. */ - if (!list_is_short(&bundle->ports)) { - bundle->ofproto->has_bonded_bundles = true; - if (bundle->bond) { - if (bond_reconfigure(bundle->bond, s->bond)) { - ofproto->need_revalidate = true; - } - } else { - bundle->bond = bond_create(s->bond); - } + p->ofproto_class->destruct(p); + ofproto_destroy__(p); +} - LIST_FOR_EACH (port, bundle_node, &bundle->ports) { - uint16_t stable_id = (bundle->lacp - ? lacp_slave_get_port_id(bundle->lacp, port) - : port->odp_port); - bond_slave_register(bundle->bond, port, stable_id, port->netdev); - } - } else { - bond_destroy(bundle->bond); - bundle->bond = NULL; - } +/* Destroys the datapath with the respective 'name' and 'type'. With the Linux + * kernel datapath, for example, this destroys the datapath in the kernel, and + * with the netdev-based datapath, it tears down the data structures that + * represent the datapath. + * + * The datapath should not be currently open as an ofproto. */ +int +ofproto_delete(const char *name, const char *type) +{ + const struct ofproto_class *class = ofproto_class_find__(type); + return (!class ? EAFNOSUPPORT + : !class->del ? EACCES + : class->del(type, name)); +} - /* If we changed something that would affect MAC learning, un-learn - * everything on this port and force flow revalidation. */ - if (need_flush) { - ofproto_bundle_flush_macs(bundle); +static void +process_port_change(struct ofproto *ofproto, int error, char *devname) +{ + if (error == ENOBUFS) { + reinit_ports(ofproto); + } else if (!error) { + update_port(ofproto, devname); + free(devname); } } -static void -ofproto_bundle_destroy(struct ofbundle *bundle) +int +ofproto_run(struct ofproto *p) { - struct ofproto *ofproto; - struct ofport *port, *next_port; - int i; + char *devname; + int error; - if (!bundle) { - return; + error = p->ofproto_class->run(p); + if (error == ENODEV) { + /* Someone destroyed the datapath behind our back. The caller + * better destroy us and give up, because we're just going to + * spin from here on out. 
*/ + static struct vlog_rate_limit rl2 = VLOG_RATE_LIMIT_INIT(1, 5); + VLOG_ERR_RL(&rl2, "%s: datapath was destroyed externally", + p->name); + return ENODEV; } - ofproto = bundle->ofproto; - for (i = 0; i < MAX_MIRRORS; i++) { - struct ofmirror *m = ofproto->mirrors[i]; - if (m) { - if (m->out == bundle) { - ofproto_mirror_destroy(m); - } else if (hmapx_find_and_delete(&m->srcs, bundle) - || hmapx_find_and_delete(&m->dsts, bundle)) { - ofproto->need_revalidate = true; - } - } + while ((error = p->ofproto_class->port_poll(p, &devname)) != EAGAIN) { + process_port_change(p, error, devname); } - - LIST_FOR_EACH_SAFE (port, next_port, bundle_node, &bundle->ports) { - ofproto_bundle_del_port(port); + while ((error = netdev_monitor_poll(p->netdev_monitor, + &devname)) != EAGAIN) { + process_port_change(p, error, devname); } - ofproto_bundle_flush_macs(bundle); - hmap_remove(&ofproto->bundles, &bundle->hmap_node); - free(bundle->name); - free(bundle->trunks); - bond_destroy(bundle->bond); - lacp_destroy(bundle->lacp); - free(bundle); + connmgr_run(p->connmgr, handle_openflow); + + return 0; } void -ofproto_bundle_unregister(struct ofproto *ofproto, void *aux) +ofproto_wait(struct ofproto *p) { - ofproto_bundle_destroy(ofproto_bundle_lookup(ofproto, aux)); + p->ofproto_class->wait(p); + p->ofproto_class->port_poll_wait(p); + netdev_monitor_poll_wait(p->netdev_monitor); + connmgr_wait(p->connmgr); } -static void -send_pdu_cb(void *port_, const struct lacp_pdu *pdu) +bool +ofproto_is_alive(const struct ofproto *p) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 10); - struct ofport *port = port_; - uint8_t ea[ETH_ADDR_LEN]; - int error; - - error = netdev_get_etheraddr(port->netdev, ea); - if (!error) { - struct lacp_pdu *packet_pdu; - struct ofpbuf packet; - - ofpbuf_init(&packet, 0); - packet_pdu = eth_compose(&packet, eth_addr_lacp, ea, ETH_TYPE_LACP, - sizeof *packet_pdu); - *packet_pdu = *pdu; - error = netdev_send(port->netdev, &packet); - if (error) { - VLOG_WARN_RL(&rl, "port %s: sending LACP PDU on iface %s failed " - "(%s)", port->bundle->name, - netdev_get_name(port->netdev), strerror(error)); - } - ofpbuf_uninit(&packet); - } else { - VLOG_ERR_RL(&rl, "port %s: cannot obtain Ethernet address of iface " - "%s (%s)", port->bundle->name, - netdev_get_name(port->netdev), strerror(error)); - } -} - -static void -ofproto_bundle_send_learning_packets(struct ofbundle *bundle) -{ - struct ofproto *ofproto = bundle->ofproto; - int error, n_packets, n_errors; - struct mac_entry *e; - - error = n_packets = n_errors = 0; - LIST_FOR_EACH (e, lru_node, &ofproto->ml->lrus) { - if (e->port.p != bundle) { - int ret = bond_send_learning_packet(bundle->bond, e->mac, e->vlan); - if (ret) { - error = ret; - n_errors++; - } - n_packets++; - } - } - - if (n_errors) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "bond %s: %d errors sending %d gratuitous learning " - "packets, last error was: %s", - bundle->name, n_errors, n_packets, strerror(error)); - } else { - VLOG_DBG("bond %s: sent %d gratuitous learning packets", - bundle->name, n_packets); - } -} - -static void -ofproto_bundle_run(struct ofbundle *bundle) -{ - if (bundle->lacp) { - lacp_run(bundle->lacp, send_pdu_cb); - } - if (bundle->bond) { - struct ofport *port; - - LIST_FOR_EACH (port, bundle_node, &bundle->ports) { - bool may_enable = lacp_slave_may_enable(bundle->lacp, port); - bond_slave_set_lacp_may_enable(bundle->bond, port, may_enable); - } - - bond_run(bundle->bond, 
&bundle->ofproto->revalidate_set, - lacp_negotiated(bundle->lacp)); - if (bond_should_send_learning_packets(bundle->bond)) { - ofproto_bundle_send_learning_packets(bundle); - } - } -} - -static void -ofproto_bundle_wait(struct ofbundle *bundle) -{ - if (bundle->lacp) { - lacp_wait(bundle->lacp); - } - if (bundle->bond) { - bond_wait(bundle->bond); - } -} - -static int -ofproto_mirror_scan(struct ofproto *ofproto) -{ - int idx; - - for (idx = 0; idx < MAX_MIRRORS; idx++) { - if (!ofproto->mirrors[idx]) { - return idx; - } - } - return -1; -} - -static struct ofmirror * -ofproto_mirror_lookup(struct ofproto *ofproto, void *aux) -{ - int i; - - for (i = 0; i < MAX_MIRRORS; i++) { - struct ofmirror *mirror = ofproto->mirrors[i]; - if (mirror && mirror->aux == aux) { - return mirror; - } - } - - return NULL; -} - -void -ofproto_mirror_register(struct ofproto *ofproto, void *aux, - const struct ofproto_mirror_settings *s) -{ - mirror_mask_t mirror_bit; - struct ofbundle *bundle; - struct ofmirror *mirror; - struct ofbundle *out; - struct hmapx srcs; /* Contains "struct ofbundle *"s. */ - struct hmapx dsts; /* Contains "struct ofbundle *"s. */ - int out_vlan; - - mirror = ofproto_mirror_lookup(ofproto, aux); - if (!mirror) { - int idx; - - idx = ofproto_mirror_scan(ofproto); - if (idx < 0) { - VLOG_WARN("bridge %s: maximum of %d port mirrors reached, " - "cannot create %s", - ofproto->name, MAX_MIRRORS, s->name); - return; - } - - mirror = ofproto->mirrors[idx] = xzalloc(sizeof *mirror); - mirror->ofproto = ofproto; - mirror->idx = idx; - mirror->out_vlan = -1; - mirror->name = NULL; - } - - if (!mirror->name || strcmp(s->name, mirror->name)) { - free(mirror->name); - mirror->name = xstrdup(s->name); - } - - /* Get the new configuration. */ - if (s->out_bundle) { - out = ofproto_bundle_lookup(ofproto, s->out_bundle); - if (!out) { - ofproto_mirror_destroy(mirror); - return; - } - out_vlan = -1; - } else { - out = NULL; - out_vlan = s->out_vlan; - } - ofproto_bundle_lookup_multiple(ofproto, s->srcs, s->n_srcs, &srcs); - ofproto_bundle_lookup_multiple(ofproto, s->dsts, s->n_dsts, &dsts); - - /* If the configuration has not changed, do nothing. */ - if (hmapx_equals(&srcs, &mirror->srcs) - && hmapx_equals(&dsts, &mirror->dsts) - && vlan_bitmap_equal(mirror->vlans, s->src_vlans) - && mirror->out == out - && mirror->out_vlan == out_vlan) - { - hmapx_destroy(&srcs); - hmapx_destroy(&dsts); - return; - } - - hmapx_swap(&srcs, &mirror->srcs); - hmapx_destroy(&srcs); - - hmapx_swap(&dsts, &mirror->dsts); - hmapx_destroy(&dsts); - - free(mirror->vlans); - mirror->vlans = vlan_bitmap_clone(s->src_vlans); - - mirror->out = out; - mirror->out_vlan = out_vlan; - - /* Update bundles. 
*/ - mirror_bit = MIRROR_MASK_C(1) << mirror->idx; - HMAP_FOR_EACH (bundle, hmap_node, &mirror->ofproto->bundles) { - if (hmapx_contains(&mirror->srcs, bundle)) { - bundle->src_mirrors |= mirror_bit; - } else { - bundle->src_mirrors &= ~mirror_bit; - } - - if (hmapx_contains(&mirror->dsts, bundle)) { - bundle->dst_mirrors |= mirror_bit; - } else { - bundle->dst_mirrors &= ~mirror_bit; - } - - if (mirror->out == bundle) { - bundle->mirror_out |= mirror_bit; - } else { - bundle->mirror_out &= ~mirror_bit; - } - } - - ofproto->need_revalidate = true; - mac_learning_flush(ofproto->ml); -} - -static void -ofproto_mirror_destroy(struct ofmirror *mirror) -{ - mirror_mask_t mirror_bit; - struct ofbundle *bundle; - struct ofproto *ofproto; - - if (!mirror) { - return; - } - - ofproto = mirror->ofproto; - ofproto->need_revalidate = true; - mac_learning_flush(ofproto->ml); - - mirror_bit = MIRROR_MASK_C(1) << mirror->idx; - HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { - bundle->src_mirrors &= ~mirror_bit; - bundle->dst_mirrors &= ~mirror_bit; - bundle->mirror_out &= ~mirror_bit; - } - - hmapx_destroy(&mirror->srcs); - hmapx_destroy(&mirror->dsts); - free(mirror->vlans); - - ofproto->mirrors[mirror->idx] = NULL; - free(mirror->name); - free(mirror); -} - -void -ofproto_mirror_unregister(struct ofproto *ofproto, void *aux) -{ - ofproto_mirror_destroy(ofproto_mirror_lookup(ofproto, aux)); -} - -void -ofproto_set_flood_vlans(struct ofproto *ofproto, unsigned long *flood_vlans) -{ - if (mac_learning_set_flood_vlans(ofproto->ml, flood_vlans)) { - ofproto->need_revalidate = true; - mac_learning_flush(ofproto->ml); - } -} - -bool -ofproto_is_mirror_output_bundle(struct ofproto *ofproto, void *aux) -{ - struct ofbundle *bundle = ofproto_bundle_lookup(ofproto, aux); - return bundle && bundle->mirror_out != 0; -} - -bool -ofproto_has_snoops(const struct ofproto *ofproto) -{ - return connmgr_has_snoops(ofproto->connmgr); -} - -void -ofproto_get_snoops(const struct ofproto *ofproto, struct sset *snoops) -{ - connmgr_get_snoops(ofproto->connmgr, snoops); -} - -void -ofproto_destroy(struct ofproto *p) -{ - struct ofport *ofport, *next_ofport; - int i; - - if (!p) { - return; - } - - hmap_remove(&all_ofprotos, &p->hmap_node); - - for (i = 0; i < MAX_MIRRORS; i++) { - ofproto_mirror_destroy(p->mirrors[i]); - } - ofproto_flush_flows__(p); - connmgr_destroy(p->connmgr); - classifier_destroy(&p->cls); - hmap_destroy(&p->facets); - - dpif_close(p->dpif); - - netdev_monitor_destroy(p->netdev_monitor); - HMAP_FOR_EACH_SAFE (ofport, next_ofport, hmap_node, &p->ports) { - hmap_remove(&p->ports, &ofport->hmap_node); - ofport_free(ofport); - } - shash_destroy(&p->port_by_name); - - netflow_destroy(p->netflow); - ofproto_sflow_destroy(p->sflow); - - free(p->mfr_desc); - free(p->hw_desc); - free(p->sw_desc); - free(p->serial_desc); - free(p->dp_desc); - - hmap_destroy(&p->ports); - - free(p->name); - free(p); -} - -/* Destroys the datapath with the respective 'name' and 'type'. With the Linux - * kernel datapath, for example, this destroys the datapath in the kernel, and - * with the netdev-based datapath, it tears down the data structures that - * represent the datapath. - * - * The datapath should not be currently open as an ofproto. 
*/ -int -ofproto_delete(const char *name, const char *type) -{ - struct dpif *dpif; - int error; - - error = dpif_open(name, type, &dpif); - if (!error) { - error = dpif_delete(dpif); - dpif_close(dpif); - } - return error; -} - -static void -process_port_change(struct ofproto *ofproto, int error, char *devname) -{ - if (error == ENOBUFS) { - reinit_ports(ofproto); - } else if (!error) { - update_port(ofproto, devname); - free(devname); - } -} - -int -ofproto_run(struct ofproto *p) -{ - struct ofbundle *bundle; - struct ofport *ofport; - char *devname; - int error; - int i; - - dpif_run(p->dpif); - - for (i = 0; i < 50; i++) { - struct dpif_upcall packet; - - error = dpif_recv(p->dpif, &packet); - if (error) { - if (error == ENODEV) { - /* Someone destroyed the datapath behind our back. The caller - * better destroy us and give up, because we're just going to - * spin from here on out. */ - static struct vlog_rate_limit rl2 = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_ERR_RL(&rl2, "%s: datapath was destroyed externally", - p->name); - return ENODEV; - } - break; - } - - handle_upcall(p, &packet); - } - - while ((error = dpif_port_poll(p->dpif, &devname)) != EAGAIN) { - process_port_change(p, error, devname); - } - while ((error = netdev_monitor_poll(p->netdev_monitor, - &devname)) != EAGAIN) { - process_port_change(p, error, devname); - } - - HMAP_FOR_EACH (ofport, hmap_node, &p->ports) { - ofport_run(ofport); - } - - HMAP_FOR_EACH (bundle, hmap_node, &p->bundles) { - ofproto_bundle_run(bundle); - } - - connmgr_run(p->connmgr, handle_openflow); - - if (timer_expired(&p->next_expiration)) { - int delay = ofproto_expire(p); - timer_set_duration(&p->next_expiration, delay); - COVERAGE_INC(ofproto_expiration); - } - - if (p->netflow) { - netflow_run(p->netflow); - } - if (p->sflow) { - ofproto_sflow_run(p->sflow); - } - - /* Now revalidate if there's anything to do. */ - if (p->need_revalidate || !tag_set_is_empty(&p->revalidate_set)) { - struct tag_set revalidate_set = p->revalidate_set; - bool revalidate_all = p->need_revalidate; - struct facet *facet, *next; - - /* Clear the revalidation flags. */ - tag_set_init(&p->revalidate_set); - p->need_revalidate = false; - - HMAP_FOR_EACH_SAFE (facet, next, hmap_node, &p->facets) { - if (revalidate_all - || tag_set_intersects(&revalidate_set, facet->tags)) { - facet_revalidate(p, facet); - } - } - } - - return 0; -} - -void -ofproto_wait(struct ofproto *p) -{ - struct ofbundle *bundle; - struct ofport *ofport; - - dpif_wait(p->dpif); - HMAP_FOR_EACH (ofport, hmap_node, &p->ports) { - ofport_wait(ofport); - } - HMAP_FOR_EACH (bundle, hmap_node, &p->bundles) { - ofproto_bundle_wait(bundle); - } - dpif_recv_wait(p->dpif); - dpif_port_poll_wait(p->dpif); - netdev_monitor_poll_wait(p->netdev_monitor); - if (p->sflow) { - ofproto_sflow_wait(p->sflow); - } - if (!tag_set_is_empty(&p->revalidate_set)) { - poll_immediate_wake(); - } - if (p->need_revalidate) { - /* Shouldn't happen, but if it does just go around again. */ - VLOG_DBG_RL(&rl, "need revalidate in ofproto_wait_cb()"); - poll_immediate_wake(); - } else { - timer_wait(&p->next_expiration); - } - connmgr_wait(p->connmgr); -} - -bool -ofproto_is_alive(const struct ofproto *p) -{ - return connmgr_has_controllers(p->connmgr); + return connmgr_has_controllers(p->connmgr); } void @@ -1556,20 +772,6 @@ ofproto_port_destroy(struct ofproto_port *ofproto_port) free(ofproto_port->type); } -/* Converts a dpif_port into an ofproto_port. 
- * - * This only makes a shallow copy, so make sure that the dpif_port doesn't get - * freed while the ofproto_port is still in use. You can choose to free the - * ofproto_port instead of the dpif_port. */ -static void -ofproto_port_from_dpif_port(struct ofproto_port *ofproto_port, - struct dpif_port *dpif_port) -{ - ofproto_port->name = dpif_port->name; - ofproto_port->type = dpif_port->type; - ofproto_port->ofp_port = odp_port_to_ofp_port(dpif_port->port_no); -} - /* Initializes 'dump' to begin dumping the ports in an ofproto. * * This function provides no status indication. An error status for the entire @@ -1580,10 +782,9 @@ void ofproto_port_dump_start(struct ofproto_port_dump *dump, const struct ofproto *ofproto) { - struct dpif_port_dump *dpif_dump; - - dump->state = dpif_dump = xmalloc(sizeof *dpif_dump); - dpif_port_dump_start(dpif_dump, ofproto->dpif); + dump->ofproto = ofproto; + dump->error = ofproto->ofproto_class->port_dump_start(ofproto, + &dump->state); } /* Attempts to retrieve another port from 'dump', which must have been created @@ -1601,15 +802,19 @@ bool ofproto_port_dump_next(struct ofproto_port_dump *dump, struct ofproto_port *port) { - struct dpif_port_dump *dpif_dump = dump->state; - struct dpif_port dpif_port; - bool ok; + const struct ofproto *ofproto = dump->ofproto; + + if (dump->error) { + return false; + } - ok = dpif_port_dump_next(dpif_dump, &dpif_port); - if (ok) { - ofproto_port_from_dpif_port(port, &dpif_port); + dump->error = ofproto->ofproto_class->port_dump_next(ofproto, dump->state, + port); + if (dump->error) { + ofproto->ofproto_class->port_dump_done(ofproto, dump->state); + return false; } - return ok; + return true; } /* Completes port table dump operation 'dump', which must have been created @@ -1618,10 +823,12 @@ ofproto_port_dump_next(struct ofproto_port_dump *dump, int ofproto_port_dump_done(struct ofproto_port_dump *dump) { - struct dpif_port_dump *dpif_dump = dump->state; - int error = dpif_port_dump_done(dpif_dump); - free(dpif_dump); - return error; + const struct ofproto *ofproto = dump->ofproto; + if (!dump->error) { + dump->error = ofproto->ofproto_class->port_dump_done(ofproto, + dump->state); + } + return dump->error == EOF ? 0 : dump->error; } /* Attempts to add 'netdev' as a port on 'ofproto'. If successful, returns 0 @@ -1632,15 +839,15 @@ int ofproto_port_add(struct ofproto *ofproto, struct netdev *netdev, uint16_t *ofp_portp) { - uint16_t odp_port; + uint16_t ofp_port; int error; - error = dpif_port_add(ofproto->dpif, netdev, &odp_port); + error = ofproto->ofproto_class->port_add(ofproto, netdev, &ofp_port); if (!error) { update_port(ofproto, netdev_get_name(netdev)); } if (ofp_portp) { - *ofp_portp = error ? OFPP_NONE : odp_port_to_ofp_port(odp_port); + *ofp_portp = error ? OFPP_NONE : ofp_port; } return error; } @@ -1649,18 +856,17 @@ ofproto_port_add(struct ofproto *ofproto, struct netdev *netdev, * initializes '*port' appropriately; on failure, returns a positive errno * value. * - * The caller owns the data in 'port' and must free it with + * The caller owns the data in 'ofproto_port' and must free it with * ofproto_port_destroy() when it is no longer needed. 
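
The three dump functions above are the expansion of the OFPROTO_PORT_FOR_EACH macro used later in this patch (in reinit_ports() and init_ports()). As a rough usage sketch, an open-coded caller looks like the following; log_all_ports() is a hypothetical helper, not part of this commit, and any dump error surfaces from ofproto_port_dump_done():

    static void
    log_all_ports(const struct ofproto *ofproto)
    {
        struct ofproto_port_dump dump;
        struct ofproto_port ofproto_port;
        int error;

        ofproto_port_dump_start(&dump, ofproto);
        while (ofproto_port_dump_next(&dump, &ofproto_port)) {
            VLOG_INFO("port %s (OpenFlow port %"PRIu16", type %s)",
                      ofproto_port.name, ofproto_port.ofp_port,
                      ofproto_port.type);
        }
        error = ofproto_port_dump_done(&dump);
        if (error) {
            VLOG_WARN("port dump failed (%s)", strerror(error));
        }
    }
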
*/ int ofproto_port_query_by_name(const struct ofproto *ofproto, const char *devname, struct ofproto_port *port) { - struct dpif_port dpif_port; int error; - error = dpif_port_query_by_name(ofproto->dpif, devname, &dpif_port); - if (!error) { - ofproto_port_from_dpif_port(port, &dpif_port); + error = ofproto->ofproto_class->port_query_by_name(ofproto, devname, port); + if (error) { + memset(port, 0, sizeof *port); } return error; } @@ -1670,12 +876,11 @@ ofproto_port_query_by_name(const struct ofproto *ofproto, const char *devname, int ofproto_port_del(struct ofproto *ofproto, uint16_t ofp_port) { - uint32_t odp_port = ofp_port_to_odp_port(ofp_port); - struct ofport *ofport = get_port(ofproto, odp_port); + struct ofport *ofport = ofproto_get_port(ofproto, ofp_port); const char *name = ofport ? netdev_get_name(ofport->netdev) : ""; int error; - error = dpif_port_del(ofproto->dpif, odp_port); + error = ofproto->ofproto_class->port_del(ofproto, ofp_port); if (!error && ofport) { /* 'name' is the netdev's name and update_port() is going to close the * netdev. Just in case update_port() refers to 'name' after it @@ -1688,36 +893,6 @@ ofproto_port_del(struct ofproto *ofproto, uint16_t ofp_port) return error; } -/* Sends 'packet' out of port 'port_no' within 'p'. If 'vlan_tci' is zero the - * packet will not have any 802.1Q hader; if it is nonzero, then the packet - * will be sent with the VLAN TCI specified by 'vlan_tci & ~VLAN_CFI'. - * - * Returns 0 if successful, otherwise a positive errno value. */ -static int -ofproto_send_packet(struct ofproto *ofproto, - uint32_t port_no, uint16_t vlan_tci, - const struct ofpbuf *packet) -{ - struct ofpbuf odp_actions; - int error; - - ofpbuf_init(&odp_actions, 32); - if (vlan_tci != 0) { - nl_msg_put_u32(&odp_actions, ODP_ACTION_ATTR_SET_DL_TCI, - ntohs(vlan_tci & ~VLAN_CFI)); - } - nl_msg_put_u32(&odp_actions, ODP_ACTION_ATTR_OUTPUT, port_no); - error = dpif_execute(ofproto->dpif, odp_actions.data, odp_actions.size, - packet); - ofpbuf_uninit(&odp_actions); - - if (error) { - VLOG_WARN_RL(&rl, "%s: failed to send packet on port %"PRIu32" (%s)", - ofproto->name, port_no, strerror(error)); - } - return error; -} - /* Adds a flow to the OpenFlow flow table in 'p' that matches 'cls_rule' and * performs the 'n_actions' actions in 'actions'. The new flow will not * timeout. @@ -1732,8 +907,7 @@ ofproto_add_flow(struct ofproto *p, const struct cls_rule *cls_rule, const union ofp_action *actions, size_t n_actions) { struct rule *rule; - rule = rule_create(cls_rule, actions, n_actions, 0, 0, 0, false); - rule_insert(p, rule); + rule_create(p, cls_rule, actions, n_actions, 0, 0, 0, false, &rule); } void @@ -1744,36 +918,26 @@ ofproto_delete_flow(struct ofproto *ofproto, const struct cls_rule *target) rule = rule_from_cls_rule(classifier_find_rule_exactly(&ofproto->cls, target)); if (rule) { - rule_remove(ofproto, rule); + ofproto_rule_remove(rule); } } static void ofproto_flush_flows__(struct ofproto *ofproto) { - struct facet *facet, *next_facet; struct rule *rule, *next_rule; struct cls_cursor cursor; COVERAGE_INC(ofproto_flush); - HMAP_FOR_EACH_SAFE (facet, next_facet, hmap_node, &ofproto->facets) { - /* Mark the facet as not installed so that facet_remove() doesn't - * bother trying to uninstall it. There is no point in uninstalling it - * individually since we are about to blow away all the facets with - * dpif_flow_flush(). 
*/ - facet->installed = false; - facet->dp_packet_count = 0; - facet->dp_byte_count = 0; - facet_remove(ofproto, facet); + if (ofproto->ofproto_class->flush) { + ofproto->ofproto_class->flush(ofproto); } cls_cursor_init(&cursor, &ofproto->cls, NULL); CLS_CURSOR_FOR_EACH_SAFE (rule, next_rule, cr, &cursor) { - rule_remove(ofproto, rule); + ofproto_rule_remove(rule); } - - dpif_flow_flush(ofproto->dpif); } void @@ -1786,10 +950,10 @@ ofproto_flush_flows(struct ofproto *ofproto) static void reinit_ports(struct ofproto *p) { - struct dpif_port_dump dump; + struct ofproto_port_dump dump; struct sset devnames; struct ofport *ofport; - struct dpif_port dpif_port; + struct ofproto_port ofproto_port; const char *devname; COVERAGE_INC(ofproto_reinit_ports); @@ -1798,8 +962,8 @@ reinit_ports(struct ofproto *p) HMAP_FOR_EACH (ofport, hmap_node, &p->ports) { sset_add(&devnames, netdev_get_name(ofport->netdev)); } - DPIF_PORT_FOR_EACH (&dpif_port, &dump, p->dpif) { - sset_add(&devnames, dpif_port.name); + OFPROTO_PORT_FOR_EACH (&ofproto_port, &dump, p) { + sset_add(&devnames, ofproto_port.name); } SSET_FOR_EACH (devname, &devnames) { @@ -1808,10 +972,10 @@ reinit_ports(struct ofproto *p) sset_destroy(&devnames); } -/* Opens and returns a netdev for 'dpif_port', or a null pointer if the netdev - * cannot be opened. On success, also fills in 'opp'. */ +/* Opens and returns a netdev for 'ofproto_port', or a null pointer if the + * netdev cannot be opened. On success, also fills in 'opp'. */ static struct netdev * -ofport_open(const struct dpif_port *dpif_port, struct ofp_phy_port *opp) +ofport_open(const struct ofproto_port *ofproto_port, struct ofp_phy_port *opp) { uint32_t curr, advertised, supported, peer; struct netdev_options netdev_options; @@ -1820,25 +984,25 @@ ofport_open(const struct dpif_port *dpif_port, struct ofp_phy_port *opp) int error; memset(&netdev_options, 0, sizeof netdev_options); - netdev_options.name = dpif_port->name; - netdev_options.type = dpif_port->type; + netdev_options.name = ofproto_port->name; + netdev_options.type = ofproto_port->type; netdev_options.ethertype = NETDEV_ETH_TYPE_NONE; error = netdev_open(&netdev_options, &netdev); if (error) { VLOG_WARN_RL(&rl, "ignoring port %s (%"PRIu16") because netdev %s " "cannot be opened (%s)", - dpif_port->name, dpif_port->port_no, - dpif_port->name, strerror(error)); + ofproto_port->name, ofproto_port->ofp_port, + ofproto_port->name, strerror(error)); return NULL; } netdev_get_flags(netdev, &flags); netdev_get_features(netdev, &curr, &advertised, &supported, &peer); - opp->port_no = htons(odp_port_to_ofp_port(dpif_port->port_no)); + opp->port_no = htons(ofproto_port->ofp_port); netdev_get_etheraddr(netdev, opp->hw_addr); - ovs_strzcpy(opp->name, dpif_port->name, sizeof opp->name); + ovs_strzcpy(opp->name, ofproto_port->name, sizeof opp->name); opp->config = flags & NETDEV_UP ? 0 : htonl(OFPPC_PORT_DOWN); opp->state = netdev_get_carrier(netdev) ? 0 : htonl(OFPPS_LINK_DOWN); opp->curr = htonl(curr); @@ -1849,27 +1013,11 @@ ofport_open(const struct dpif_port *dpif_port, struct ofp_phy_port *opp) return netdev; } +/* Returns true if most fields of 'a' and 'b' are equal. Differences in name, + * port number, and 'config' bits other than OFPPC_PORT_DOWN are + * disregarded. 
*/ static bool -ofport_conflicts(const struct ofproto *p, const struct dpif_port *dpif_port) -{ - if (get_port(p, dpif_port->port_no)) { - VLOG_WARN_RL(&rl, "ignoring duplicate port %"PRIu16" in datapath", - dpif_port->port_no); - return true; - } else if (shash_find(&p->port_by_name, dpif_port->name)) { - VLOG_WARN_RL(&rl, "ignoring duplicate device %s in datapath", - dpif_port->name); - return true; - } else { - return false; - } -} - -/* Returns true if most fields of 'a' and 'b' are equal. Differences in name, - * port number, and 'config' bits other than OFPPC_PORT_DOWN are - * disregarded. */ -static bool -ofport_equal(const struct ofp_phy_port *a, const struct ofp_phy_port *b) +ofport_equal(const struct ofp_phy_port *a, const struct ofp_phy_port *b) { BUILD_ASSERT_DECL(sizeof *a == 48); /* Detect ofp_phy_port changes. */ return (!memcmp(a->hw_addr, b->hw_addr, sizeof a->hw_addr) @@ -1890,25 +1038,39 @@ ofport_install(struct ofproto *p, { const char *netdev_name = netdev_get_name(netdev); struct ofport *ofport; - - connmgr_send_port_status(p->connmgr, opp, OFPPR_ADD); + int error; /* Create ofport. */ - ofport = xmalloc(sizeof *ofport); + ofport = p->ofproto_class->port_alloc(); + if (!ofport) { + error = ENOMEM; + goto error; + } ofport->ofproto = p; ofport->netdev = netdev; ofport->opp = *opp; - ofport->odp_port = ofp_port_to_odp_port(ntohs(opp->port_no)); - ofport->bundle = NULL; - ofport->cfm = NULL; - ofport->tag = tag_create_random(); + ofport->ofp_port = ntohs(opp->port_no); /* Add port to 'p'. */ netdev_monitor_add(p->netdev_monitor, ofport->netdev); - hmap_insert(&p->ports, &ofport->hmap_node, hash_int(ofport->odp_port, 0)); + hmap_insert(&p->ports, &ofport->hmap_node, hash_int(ofport->ofp_port, 0)); shash_add(&p->port_by_name, netdev_name, ofport); - if (p->sflow) { - ofproto_sflow_add_port(p->sflow, ofport->odp_port, netdev_name); + + /* Let the ofproto_class initialize its private data. */ + error = p->ofproto_class->port_construct(ofport); + if (error) { + goto error; + } + connmgr_send_port_status(p->connmgr, opp, OFPPR_ADD); + return; + +error: + VLOG_WARN_RL(&rl, "%s: could not add port %s (%s)", + p->name, netdev_name, strerror(error)); + if (ofport) { + ofport_destroy__(ofport); + } else { + netdev_close(netdev); } } @@ -1918,7 +1080,7 @@ ofport_remove(struct ofport *ofport) { connmgr_send_port_status(ofport->ofproto->connmgr, &ofport->opp, OFPPR_DELETE); - ofport_free(ofport); + ofport_destroy(ofport); } /* If 'ofproto' contains an ofport named 'name', removes it from 'ofproto' and @@ -1937,15 +1099,8 @@ ofport_remove_with_name(struct ofproto *ofproto, const char *name) * Does not handle a name or port number change. The caller must implement * such a change as a delete followed by an add. 
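
ofport_install() above now delegates allocation and initialization of each port to the ofproto class, and ofport_destroy()/ofport_destroy__() a little further down undo those steps in reverse. One way a provider might fill in its side of that contract is sketched below; struct my_port, the my_port_*() names, and the 'created' member are illustrative only, with just the alloc/construct/destruct/dealloc split taken from this patch:

    struct my_port {
        struct ofport up;          /* Generic ofproto per-port state. */
        long long int created;     /* Example provider-specific member. */
    };

    static struct ofport *
    my_port_alloc(void)
    {
        struct my_port *port = xmalloc(sizeof *port);
        return &port->up;
    }

    static int
    my_port_construct(struct ofport *port_)
    {
        struct my_port *port = CONTAINER_OF(port_, struct my_port, up);

        /* 'port->up' (netdev, ofp_port, opp) was filled in by
         * ofport_install() before this hook runs. */
        port->created = time_msec();
        return 0;
    }

    static void
    my_port_destruct(struct ofport *port_ OVS_UNUSED)
    {
        /* Release provider-specific resources here. */
    }

    static void
    my_port_dealloc(struct ofport *port_)
    {
        struct my_port *port = CONTAINER_OF(port_, struct my_port, up);

        free(port);
    }
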
*/ static void -ofport_modified(struct ofport *port, - struct netdev *netdev, struct ofp_phy_port *opp) +ofport_modified(struct ofport *port, struct ofp_phy_port *opp) { - struct ofproto *ofproto = port->ofproto; - - if (port->bundle && port->bundle->bond) { - bond_slave_set_netdev(port->bundle->bond, port, netdev); - } - memcpy(port->opp.hw_addr, opp->hw_addr, ETH_ADDR_LEN); port->opp.config = ((port->opp.config & ~htonl(OFPPC_PORT_DOWN)) | (opp->config & htonl(OFPPC_PORT_DOWN))); @@ -1955,101 +1110,55 @@ ofport_modified(struct ofport *port, port->opp.supported = opp->supported; port->opp.peer = opp->peer; - netdev_monitor_remove(ofproto->netdev_monitor, port->netdev); - netdev_monitor_add(ofproto->netdev_monitor, netdev); - - netdev_close(port->netdev); - port->netdev = netdev; - - connmgr_send_port_status(ofproto->connmgr, &port->opp, OFPPR_MODIFY); + connmgr_send_port_status(port->ofproto->connmgr, &port->opp, OFPPR_MODIFY); } -static void -ofport_run(struct ofport *ofport) +void +ofproto_port_unregister(struct ofproto *ofproto, uint16_t ofp_port) { - if (ofport->cfm) { - cfm_run(ofport->cfm); - - if (cfm_should_send_ccm(ofport->cfm)) { - struct ofpbuf packet; - struct ccm *ccm; - - ofpbuf_init(&packet, 0); - ccm = eth_compose(&packet, eth_addr_ccm, ofport->opp.hw_addr, - ETH_TYPE_CFM, sizeof *ccm); - cfm_compose_ccm(ofport->cfm, ccm); - ofproto_send_packet(ofport->ofproto, ofport->odp_port, 0, &packet); - ofpbuf_uninit(&packet); + struct ofport *port = ofproto_get_port(ofproto, ofp_port); + if (port) { + if (port->ofproto->ofproto_class->set_cfm) { + port->ofproto->ofproto_class->set_cfm(port, NULL, NULL, 0); + } + if (port->ofproto->ofproto_class->bundle_remove) { + port->ofproto->ofproto_class->bundle_remove(port); } } } static void -ofport_wait(struct ofport *ofport) -{ - if (ofport->cfm) { - cfm_wait(ofport->cfm); - } -} - -static void -ofport_unregister(struct ofport *port) +ofport_destroy__(struct ofport *port) { - struct ofbundle *bundle = port->bundle; - - if (bundle) { - ofproto_bundle_del_port(port); - if (list_is_empty(&bundle->ports)) { - ofproto_bundle_destroy(bundle); - } else if (list_is_short(&bundle->ports)) { - bond_destroy(bundle->bond); - bundle->bond = NULL; - } - } + struct ofproto *ofproto = port->ofproto; + const char *name = netdev_get_name(port->netdev); - cfm_destroy(port->cfm); - port->cfm = NULL; -} + netdev_monitor_remove(ofproto->netdev_monitor, port->netdev); + hmap_remove(&ofproto->ports, &port->hmap_node); + shash_delete(&ofproto->port_by_name, + shash_find(&ofproto->port_by_name, name)); -void -ofproto_port_unregister(struct ofproto *ofproto, uint16_t ofp_port) -{ - struct ofport *port = get_port(ofproto, ofp_port_to_odp_port(ofp_port)); - if (port) { - ofport_unregister(port); - } + netdev_close(port->netdev); + ofproto->ofproto_class->port_dealloc(port); } static void -ofport_free(struct ofport *port) +ofport_destroy(struct ofport *port) { if (port) { - struct ofproto *ofproto = port->ofproto; - const char *name = netdev_get_name(port->netdev); - - ofport_unregister(port); - - netdev_monitor_remove(ofproto->netdev_monitor, port->netdev); - hmap_remove(&ofproto->ports, &port->hmap_node); - shash_delete(&ofproto->port_by_name, - shash_find(&ofproto->port_by_name, name)); - if (ofproto->sflow) { - ofproto_sflow_del_port(ofproto->sflow, port->odp_port); - } - - netdev_close(port->netdev); - free(port); - } + port->ofproto->ofproto_class->port_destruct(port); + ofport_destroy__(port); + } } -static struct ofport * -get_port(const struct ofproto 
*ofproto, uint16_t odp_port) +struct ofport * +ofproto_get_port(const struct ofproto *ofproto, uint16_t ofp_port) { struct ofport *port; HMAP_FOR_EACH_IN_BUCKET (port, hmap_node, - hash_int(odp_port, 0), &ofproto->ports) { - if (port->odp_port == odp_port) { + hash_int(ofp_port, 0), &ofproto->ports) { + if (port->ofp_port == ofp_port) { return port; } } @@ -2059,7 +1168,7 @@ get_port(const struct ofproto *ofproto, uint16_t odp_port) static void update_port(struct ofproto *ofproto, const char *name) { - struct dpif_port dpif_port; + struct ofproto_port ofproto_port; struct ofp_phy_port opp; struct netdev *netdev; struct ofport *port; @@ -2067,17 +1176,26 @@ update_port(struct ofproto *ofproto, const char *name) COVERAGE_INC(ofproto_update_port); /* Fetch 'name''s location and properties from the datapath. */ - netdev = (!dpif_port_query_by_name(ofproto->dpif, name, &dpif_port) - ? ofport_open(&dpif_port, &opp) + netdev = (!ofproto_port_query_by_name(ofproto, name, &ofproto_port) + ? ofport_open(&ofproto_port, &opp) : NULL); if (netdev) { - port = get_port(ofproto, dpif_port.port_no); + port = ofproto_get_port(ofproto, ofproto_port.ofp_port); if (port && !strcmp(netdev_get_name(port->netdev), name)) { /* 'name' hasn't changed location. Any properties changed? */ if (!ofport_equal(&port->opp, &opp)) { - ofport_modified(port, netdev, &opp); - } else { - netdev_close(netdev); + ofport_modified(port, &opp); + } + + /* Install the newly opened netdev in case it has changed. */ + netdev_monitor_remove(ofproto->netdev_monitor, port->netdev); + netdev_monitor_add(ofproto->netdev_monitor, netdev); + + netdev_close(port->netdev); + port->netdev = netdev; + + if (port->ofproto->ofproto_class->port_modified) { + port->ofproto->ofproto_class->port_modified(port); } } else { /* If 'port' is nonnull then its name differs from 'name' and thus @@ -2093,21 +1211,28 @@ update_port(struct ofproto *ofproto, const char *name) /* Any port named 'name' is gone now. */ ofport_remove_with_name(ofproto, name); } - dpif_port_destroy(&dpif_port); + ofproto_port_destroy(&ofproto_port); } static int init_ports(struct ofproto *p) { - struct dpif_port_dump dump; - struct dpif_port dpif_port; - - DPIF_PORT_FOR_EACH (&dpif_port, &dump, p->dpif) { - if (!ofport_conflicts(p, &dpif_port)) { + struct ofproto_port_dump dump; + struct ofproto_port ofproto_port; + + OFPROTO_PORT_FOR_EACH (&ofproto_port, &dump, p) { + uint16_t ofp_port = ofproto_port.ofp_port; + if (ofproto_get_port(p, ofp_port)) { + VLOG_WARN_RL(&rl, "ignoring duplicate port %"PRIu16" in datapath", + ofp_port); + } else if (shash_find(&p->port_by_name, ofproto_port.name)) { + VLOG_WARN_RL(&rl, "ignoring duplicate device %s in datapath", + ofproto_port.name); + } else { struct ofp_phy_port opp; struct netdev *netdev; - netdev = ofport_open(&dpif_port, &opp); + netdev = ofport_open(&ofproto_port, &opp); if (netdev) { ofport_install(p, netdev, &opp); } @@ -2117,68 +1242,71 @@ init_ports(struct ofproto *p) return 0; } -/* Returns true if 'rule' should be hidden from the controller. - * - * Rules with priority higher than UINT16_MAX are set up by ofproto itself - * (e.g. by in-band control) and are intentionally hidden from the - * controller. */ -static bool -rule_is_hidden(const struct rule *rule) -{ - return rule->cr.priority > UINT16_MAX; -} - -/* Creates and returns a new rule initialized as specified. - * - * The caller is responsible for inserting the rule into the classifier (with - * rule_insert()). 
*/ -static struct rule * -rule_create(const struct cls_rule *cls_rule, +/* Creates a new rule initialized as specified, inserts it into 'ofproto''s + * flow table, and stores the new rule into '*rulep'. Returns 0 on success, + * otherwise a positive errno value or OpenFlow error code. */ +static int +rule_create(struct ofproto *ofproto, const struct cls_rule *cls_rule, const union ofp_action *actions, size_t n_actions, uint16_t idle_timeout, uint16_t hard_timeout, - ovs_be64 flow_cookie, bool send_flow_removed) + ovs_be64 flow_cookie, bool send_flow_removed, + struct rule **rulep) { - struct rule *rule = xzalloc(sizeof *rule); + struct rule *rule; + int error; + + rule = ofproto->ofproto_class->rule_alloc(); + if (!rule) { + error = ENOMEM; + goto error; + } + + rule->ofproto = ofproto; rule->cr = *cls_rule; + rule->flow_cookie = flow_cookie; + rule->created = time_msec(); rule->idle_timeout = idle_timeout; rule->hard_timeout = hard_timeout; - rule->flow_cookie = flow_cookie; - rule->used = rule->created = time_msec(); rule->send_flow_removed = send_flow_removed; - list_init(&rule->facets); if (n_actions > 0) { - rule->n_actions = n_actions; rule->actions = xmemdup(actions, n_actions * sizeof *actions); + } else { + rule->actions = NULL; } + rule->n_actions = n_actions; - return rule; -} + error = ofproto->ofproto_class->rule_construct(rule); + if (error) { + ofproto_rule_destroy__(rule); + goto error; + } -static struct rule * -rule_from_cls_rule(const struct cls_rule *cls_rule) -{ - return cls_rule ? CONTAINER_OF(cls_rule, struct rule, cr) : NULL; + *rulep = rule; + return 0; + +error: + VLOG_WARN_RL(&rl, "%s: failed to create rule (%s)", + ofproto->name, strerror(error)); + *rulep = NULL; + return error; } static void -rule_free(struct rule *rule) +ofproto_rule_destroy__(struct rule *rule) { free(rule->actions); - free(rule); + rule->ofproto->ofproto_class->rule_dealloc(rule); } /* Destroys 'rule' and iterates through all of its facets and revalidates them, * destroying any that no longer has a rule (which is probably all of them). * * The caller must have already removed 'rule' from the classifier. */ -static void -rule_destroy(struct ofproto *ofproto, struct rule *rule) +void +ofproto_rule_destroy(struct rule *rule) { - struct facet *facet, *next_facet; - LIST_FOR_EACH_SAFE (facet, next_facet, list_node, &rule->facets) { - facet_revalidate(ofproto, facet); - } - rule_free(rule); + rule->ofproto->ofproto_class->rule_destruct(rule); + ofproto_rule_destroy__(rule); } /* Returns true if 'rule' has an OpenFlow OFPAT_OUTPUT or OFPAT_ENQUEUE action @@ -2202,69 +1330,10 @@ rule_has_out_port(const struct rule *rule, ovs_be16 out_port) return false; } -/* Executes, within 'ofproto', the 'n_actions' actions in 'actions' on - * 'packet', which arrived on 'in_port'. - * - * Takes ownership of 'packet'. */ -static bool -execute_odp_actions(struct ofproto *ofproto, const struct flow *flow, - const struct nlattr *odp_actions, size_t actions_len, - struct ofpbuf *packet) -{ - if (actions_len == NLA_ALIGN(NLA_HDRLEN + sizeof(uint64_t)) - && odp_actions->nla_type == ODP_ACTION_ATTR_CONTROLLER) { - /* As an optimization, avoid a round-trip from userspace to kernel to - * userspace. This also avoids possibly filling up kernel packet - * buffers along the way. 
*/ - struct dpif_upcall upcall; - - upcall.type = DPIF_UC_ACTION; - upcall.packet = packet; - upcall.key = NULL; - upcall.key_len = 0; - upcall.userdata = nl_attr_get_u64(odp_actions); - upcall.sample_pool = 0; - upcall.actions = NULL; - upcall.actions_len = 0; - - send_packet_in(ofproto, &upcall, flow, false); - - return true; - } else { - int error; - - error = dpif_execute(ofproto->dpif, odp_actions, actions_len, packet); - ofpbuf_delete(packet); - return !error; - } -} - -/* Executes the actions indicated by 'facet' on 'packet' and credits 'facet''s - * statistics appropriately. 'packet' must have at least sizeof(struct - * ofp_packet_in) bytes of headroom. - * - * For correct results, 'packet' must actually be in 'facet''s flow; that is, - * applying flow_extract() to 'packet' would yield the same flow as - * 'facet->flow'. - * - * 'facet' must have accurately composed ODP actions; that is, it must not be - * in need of revalidation. - * - * Takes ownership of 'packet'. */ -static void -facet_execute(struct ofproto *ofproto, struct facet *facet, - struct ofpbuf *packet) +struct rule * +ofproto_rule_lookup(struct ofproto *ofproto, const struct flow *flow) { - struct dpif_flow_stats stats; - - assert(ofpbuf_headroom(packet) >= sizeof(struct ofp_packet_in)); - - flow_extract_stats(&facet->flow, packet, &stats); - stats.used = time_msec(); - if (execute_odp_actions(ofproto, &facet->flow, - facet->actions, facet->actions_len, packet)) { - facet_update_stats(ofproto, facet, &stats); - } + return rule_from_cls_rule(classifier_lookup(&ofproto->cls, flow)); } /* Executes the actions indicated by 'rule' on 'packet' and credits 'rule''s @@ -2276,931 +1345,39 @@ facet_execute(struct ofproto *ofproto, struct facet *facet, * * Takes ownership of 'packet'. */ static void -rule_execute(struct ofproto *ofproto, struct rule *rule, uint16_t in_port, - struct ofpbuf *packet) +rule_execute(struct rule *rule, uint16_t in_port, struct ofpbuf *packet) { - struct action_xlate_ctx ctx; - struct ofpbuf *odp_actions; - struct facet *facet; struct flow flow; - size_t size; - - assert(ofpbuf_headroom(packet) >= sizeof(struct ofp_packet_in)); - - flow_extract(packet, 0, in_port, &flow); - - /* First look for a related facet. If we find one, account it to that. */ - facet = facet_lookup_valid(ofproto, &flow); - if (facet && facet->rule == rule) { - facet_execute(ofproto, facet, packet); - return; - } - - /* Otherwise, if 'rule' is in fact the correct rule for 'packet', then - * create a new facet for it and use that. */ - if (rule_lookup(ofproto, &flow) == rule) { - facet = facet_create(ofproto, rule, &flow, packet); - facet_execute(ofproto, facet, packet); - facet_install(ofproto, facet, true); - return; - } - - /* We can't account anything to a facet. If we were to try, then that - * facet would have a non-matching rule, busting our invariants. */ - action_xlate_ctx_init(&ctx, ofproto, &flow, packet); - odp_actions = xlate_actions(&ctx, rule->actions, rule->n_actions); - size = packet->size; - if (execute_odp_actions(ofproto, &flow, odp_actions->data, - odp_actions->size, packet)) { - rule->used = time_msec(); - rule->packet_count++; - rule->byte_count += size; - flow_push_stats(ofproto, rule, &flow, 1, size, rule->used); - } - ofpbuf_delete(odp_actions); -} - -/* Inserts 'rule' into 'p''s flow table. 
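
rule_create(), ofproto_rule_destroy(), and the new rule_execute() above funnel all flow-table work through the same kind of hooks. A matching provider-side sketch follows; as before, struct my_rule and the my_rule_*() names are placeholders, and the exact hook prototypes are only inferred from the call sites in this file:

    struct my_rule {
        struct rule up;            /* Generic ofproto rule state. */
        uint64_t packet_count;     /* Example provider-specific counter. */
    };

    static struct rule *
    my_rule_alloc(void)
    {
        struct my_rule *rule = xzalloc(sizeof *rule);
        return &rule->up;
    }

    static int
    my_rule_construct(struct rule *rule_ OVS_UNUSED)
    {
        /* Set up whatever datapath state is needed to match and forward
         * packets for this rule.  A nonzero return value makes rule_create()
         * destroy the half-built rule and log a warning. */
        return 0;
    }

    static void
    my_rule_execute(struct rule *rule_, const struct flow *flow OVS_UNUSED,
                    struct ofpbuf *packet)
    {
        struct my_rule *rule = CONTAINER_OF(rule_, struct my_rule, up);

        rule->packet_count++;
        /* Apply rule_->actions to 'packet', then free it; the generic
         * rule_execute() above has already extracted 'flow' from 'packet'
         * and passes ownership of 'packet' to this hook. */
        ofpbuf_delete(packet);
    }

The rule_destruct/rule_dealloc pair would mirror the port hooks sketched earlier.
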
*/ -static void -rule_insert(struct ofproto *p, struct rule *rule) -{ - struct rule *displaced_rule; - - displaced_rule = rule_from_cls_rule(classifier_insert(&p->cls, &rule->cr)); - if (displaced_rule) { - rule_destroy(p, displaced_rule); - } - p->need_revalidate = true; -} - -/* Creates and returns a new facet within 'ofproto' owned by 'rule', given a - * 'flow' and an example 'packet' within that flow. - * - * The caller must already have determined that no facet with an identical - * 'flow' exists in 'ofproto' and that 'flow' is the best match for 'rule' in - * 'ofproto''s classifier table. */ -static struct facet * -facet_create(struct ofproto *ofproto, struct rule *rule, - const struct flow *flow, const struct ofpbuf *packet) -{ - struct facet *facet; - - facet = xzalloc(sizeof *facet); - facet->used = time_msec(); - hmap_insert(&ofproto->facets, &facet->hmap_node, flow_hash(flow, 0)); - list_push_back(&rule->facets, &facet->list_node); - facet->rule = rule; - facet->flow = *flow; - netflow_flow_init(&facet->nf_flow); - netflow_flow_update_time(ofproto->netflow, &facet->nf_flow, facet->used); - - facet_make_actions(ofproto, facet, packet); - - return facet; -} - -static void -facet_free(struct facet *facet) -{ - free(facet->actions); - free(facet); -} - -/* Remove 'rule' from 'ofproto' and free up the associated memory: - * - * - Removes 'rule' from the classifier. - * - * - If 'rule' has facets, revalidates them (and possibly uninstalls and - * destroys them), via rule_destroy(). - */ -static void -rule_remove(struct ofproto *ofproto, struct rule *rule) -{ - COVERAGE_INC(ofproto_del_rule); - ofproto->need_revalidate = true; - classifier_remove(&ofproto->cls, &rule->cr); - rule_destroy(ofproto, rule); -} - -/* Remove 'facet' from 'ofproto' and free up the associated memory: - * - * - If 'facet' was installed in the datapath, uninstalls it and updates its - * rule's statistics, via facet_uninstall(). - * - * - Removes 'facet' from its rule and from ofproto->facets. - */ -static void -facet_remove(struct ofproto *ofproto, struct facet *facet) -{ - facet_uninstall(ofproto, facet); - facet_flush_stats(ofproto, facet); - hmap_remove(&ofproto->facets, &facet->hmap_node); - list_remove(&facet->list_node); - facet_free(facet); -} - -/* Composes the ODP actions for 'facet' based on its rule's actions. 
*/ -static void -facet_make_actions(struct ofproto *p, struct facet *facet, - const struct ofpbuf *packet) -{ - const struct rule *rule = facet->rule; - struct ofpbuf *odp_actions; - struct action_xlate_ctx ctx; - - action_xlate_ctx_init(&ctx, p, &facet->flow, packet); - odp_actions = xlate_actions(&ctx, rule->actions, rule->n_actions); - facet->tags = ctx.tags; - facet->may_install = ctx.may_set_up_flow; - facet->nf_flow.output_iface = ctx.nf_output_iface; - - if (facet->actions_len != odp_actions->size - || memcmp(facet->actions, odp_actions->data, odp_actions->size)) { - free(facet->actions); - facet->actions_len = odp_actions->size; - facet->actions = xmemdup(odp_actions->data, odp_actions->size); - } - ofpbuf_delete(odp_actions); -} - -static int -facet_put__(struct ofproto *ofproto, struct facet *facet, - const struct nlattr *actions, size_t actions_len, - struct dpif_flow_stats *stats) -{ - struct odputil_keybuf keybuf; - enum dpif_flow_put_flags flags; - struct ofpbuf key; - - flags = DPIF_FP_CREATE | DPIF_FP_MODIFY; - if (stats) { - flags |= DPIF_FP_ZERO_STATS; - facet->dp_packet_count = 0; - facet->dp_byte_count = 0; - } - - ofpbuf_use_stack(&key, &keybuf, sizeof keybuf); - odp_flow_key_from_flow(&key, &facet->flow); - - return dpif_flow_put(ofproto->dpif, flags, key.data, key.size, - actions, actions_len, stats); -} - -/* If 'facet' is installable, inserts or re-inserts it into 'p''s datapath. If - * 'zero_stats' is true, clears any existing statistics from the datapath for - * 'facet'. */ -static void -facet_install(struct ofproto *p, struct facet *facet, bool zero_stats) -{ - struct dpif_flow_stats stats; - - if (facet->may_install - && !facet_put__(p, facet, facet->actions, facet->actions_len, - zero_stats ? &stats : NULL)) { - facet->installed = true; - } -} - -static void -facet_account(struct ofproto *ofproto, - struct facet *facet, uint64_t extra_bytes) -{ - uint64_t total_bytes, n_bytes; - struct ofbundle *in_bundle; - const struct nlattr *a; - tag_type dummy = 0; - unsigned int left; - int vlan; - - total_bytes = facet->byte_count + extra_bytes; - if (total_bytes <= facet->accounted_bytes) { - return; - } - n_bytes = total_bytes - facet->accounted_bytes; - facet->accounted_bytes = total_bytes; - - /* Test that 'tags' is nonzero to ensure that only flows that include an - * OFPP_NORMAL action are used for learning and bond slave rebalancing. - * This works because OFPP_NORMAL always sets a nonzero tag value. - * - * Feed information from the active flows back into the learning table to - * ensure that table is always in sync with what is actually flowing - * through the datapath. */ - if (!facet->tags - || !is_admissible(ofproto, &facet->flow, false, &dummy, - &vlan, &in_bundle)) { - return; - } - - update_learning_table(ofproto, &facet->flow, vlan, in_bundle); - - if (!ofproto->has_bonded_bundles) { - return; - } - NL_ATTR_FOR_EACH_UNSAFE (a, left, facet->actions, facet->actions_len) { - if (nl_attr_type(a) == ODP_ACTION_ATTR_OUTPUT) { - struct ofport *port = get_port(ofproto, nl_attr_get_u32(a)); - if (port && port->bundle && port->bundle->bond) { - bond_account(port->bundle->bond, &facet->flow, vlan, n_bytes); - } - } - } -} - -/* If 'rule' is installed in the datapath, uninstalls it. 
*/ -static void -facet_uninstall(struct ofproto *p, struct facet *facet) -{ - if (facet->installed) { - struct odputil_keybuf keybuf; - struct dpif_flow_stats stats; - struct ofpbuf key; - - ofpbuf_use_stack(&key, &keybuf, sizeof keybuf); - odp_flow_key_from_flow(&key, &facet->flow); - - if (!dpif_flow_del(p->dpif, key.data, key.size, &stats)) { - facet_update_stats(p, facet, &stats); - } - facet->installed = false; - facet->dp_packet_count = 0; - facet->dp_byte_count = 0; - } else { - assert(facet->dp_packet_count == 0); - assert(facet->dp_byte_count == 0); - } -} - -/* Returns true if the only action for 'facet' is to send to the controller. - * (We don't report NetFlow expiration messages for such facets because they - * are just part of the control logic for the network, not real traffic). */ -static bool -facet_is_controller_flow(struct facet *facet) -{ - return (facet - && facet->rule->n_actions == 1 - && action_outputs_to_port(&facet->rule->actions[0], - htons(OFPP_CONTROLLER))); -} - -/* Folds all of 'facet''s statistics into its rule. Also updates the - * accounting ofhook and emits a NetFlow expiration if appropriate. All of - * 'facet''s statistics in the datapath should have been zeroed and folded into - * its packet and byte counts before this function is called. */ -static void -facet_flush_stats(struct ofproto *ofproto, struct facet *facet) -{ - assert(!facet->dp_byte_count); - assert(!facet->dp_packet_count); - - facet_push_stats(ofproto, facet); - facet_account(ofproto, facet, 0); - - if (ofproto->netflow && !facet_is_controller_flow(facet)) { - struct ofexpired expired; - expired.flow = facet->flow; - expired.packet_count = facet->packet_count; - expired.byte_count = facet->byte_count; - expired.used = facet->used; - netflow_expire(ofproto->netflow, &facet->nf_flow, &expired); - } - - facet->rule->packet_count += facet->packet_count; - facet->rule->byte_count += facet->byte_count; - - /* Reset counters to prevent double counting if 'facet' ever gets - * reinstalled. */ - facet->packet_count = 0; - facet->byte_count = 0; - facet->rs_packet_count = 0; - facet->rs_byte_count = 0; - facet->accounted_bytes = 0; - - netflow_flow_clear(&facet->nf_flow); -} - -/* Searches 'ofproto''s table of facets for one exactly equal to 'flow'. - * Returns it if found, otherwise a null pointer. - * - * The returned facet might need revalidation; use facet_lookup_valid() - * instead if that is important. */ -static struct facet * -facet_find(struct ofproto *ofproto, const struct flow *flow) -{ - struct facet *facet; - - HMAP_FOR_EACH_WITH_HASH (facet, hmap_node, flow_hash(flow, 0), - &ofproto->facets) { - if (flow_equal(flow, &facet->flow)) { - return facet; - } - } - - return NULL; -} - -/* Searches 'ofproto''s table of facets for one exactly equal to 'flow'. - * Returns it if found, otherwise a null pointer. - * - * The returned facet is guaranteed to be valid. */ -static struct facet * -facet_lookup_valid(struct ofproto *ofproto, const struct flow *flow) -{ - struct facet *facet = facet_find(ofproto, flow); - - /* The facet we found might not be valid, since we could be in need of - * revalidation. If it is not valid, don't return it. 
*/ - if (facet - && ofproto->need_revalidate - && !facet_revalidate(ofproto, facet)) { - COVERAGE_INC(ofproto_invalidated); - return NULL; - } - - return facet; -} - -/* Re-searches 'ofproto''s classifier for a rule matching 'facet': - * - * - If the rule found is different from 'facet''s current rule, moves - * 'facet' to the new rule and recompiles its actions. - * - * - If the rule found is the same as 'facet''s current rule, leaves 'facet' - * where it is and recompiles its actions anyway. - * - * - If there is none, destroys 'facet'. - * - * Returns true if 'facet' still exists, false if it has been destroyed. */ -static bool -facet_revalidate(struct ofproto *ofproto, struct facet *facet) -{ - struct action_xlate_ctx ctx; - struct ofpbuf *odp_actions; - struct rule *new_rule; - bool actions_changed; - - COVERAGE_INC(facet_revalidate); - - /* Determine the new rule. */ - new_rule = rule_lookup(ofproto, &facet->flow); - if (!new_rule) { - /* No new rule, so delete the facet. */ - facet_remove(ofproto, facet); - return false; - } - - /* Calculate new ODP actions. - * - * We do not modify any 'facet' state yet, because we might need to, e.g., - * emit a NetFlow expiration and, if so, we need to have the old state - * around to properly compose it. */ - action_xlate_ctx_init(&ctx, ofproto, &facet->flow, NULL); - odp_actions = xlate_actions(&ctx, new_rule->actions, new_rule->n_actions); - actions_changed = (facet->actions_len != odp_actions->size - || memcmp(facet->actions, odp_actions->data, - facet->actions_len)); - - /* If the ODP actions changed or the installability changed, then we need - * to talk to the datapath. */ - if (actions_changed || ctx.may_set_up_flow != facet->installed) { - if (ctx.may_set_up_flow) { - struct dpif_flow_stats stats; - - facet_put__(ofproto, facet, - odp_actions->data, odp_actions->size, &stats); - facet_update_stats(ofproto, facet, &stats); - } else { - facet_uninstall(ofproto, facet); - } - - /* The datapath flow is gone or has zeroed stats, so push stats out of - * 'facet' into 'rule'. */ - facet_flush_stats(ofproto, facet); - } - - /* Update 'facet' now that we've taken care of all the old state. */ - facet->tags = ctx.tags; - facet->nf_flow.output_iface = ctx.nf_output_iface; - facet->may_install = ctx.may_set_up_flow; - if (actions_changed) { - free(facet->actions); - facet->actions_len = odp_actions->size; - facet->actions = xmemdup(odp_actions->data, odp_actions->size); - } - if (facet->rule != new_rule) { - COVERAGE_INC(facet_changed_rule); - list_remove(&facet->list_node); - list_push_back(&new_rule->facets, &facet->list_node); - facet->rule = new_rule; - facet->used = new_rule->created; - facet->rs_used = facet->used; - } - - ofpbuf_delete(odp_actions); - - return true; -} - -/* Bridge packet processing functions. */ - -struct dst { - struct ofport *port; - uint16_t vlan; -}; - -struct dst_set { - struct dst builtin[32]; - struct dst *dsts; - size_t n, allocated; -}; - -static void dst_set_init(struct dst_set *); -static void dst_set_add(struct dst_set *, const struct dst *); -static void dst_set_free(struct dst_set *); - -static struct ofport *ofbundle_get_a_port(const struct ofbundle *); - -static bool -set_dst(struct action_xlate_ctx *ctx, struct dst *dst, - const struct ofbundle *in_bundle, const struct ofbundle *out_bundle) -{ - dst->vlan = (out_bundle->vlan >= 0 ? OFP_VLAN_NONE - : in_bundle->vlan >= 0 ? in_bundle->vlan - : ctx->flow.vlan_tci == 0 ? 
OFP_VLAN_NONE - : vlan_tci_to_vid(ctx->flow.vlan_tci)); - - dst->port = (!out_bundle->bond - ? ofbundle_get_a_port(out_bundle) - : bond_choose_output_slave(out_bundle->bond, &ctx->flow, - dst->vlan, &ctx->tags)); - - return dst->port != NULL; -} - -static int -mirror_mask_ffs(mirror_mask_t mask) -{ - BUILD_ASSERT_DECL(sizeof(unsigned int) >= sizeof(mask)); - return ffs(mask); -} - -static void -dst_set_init(struct dst_set *set) -{ - set->dsts = set->builtin; - set->n = 0; - set->allocated = ARRAY_SIZE(set->builtin); -} - -static void -dst_set_add(struct dst_set *set, const struct dst *dst) -{ - if (set->n >= set->allocated) { - size_t new_allocated; - struct dst *new_dsts; - - new_allocated = set->allocated * 2; - new_dsts = xmalloc(new_allocated * sizeof *new_dsts); - memcpy(new_dsts, set->dsts, set->n * sizeof *new_dsts); - - dst_set_free(set); - - set->dsts = new_dsts; - set->allocated = new_allocated; - } - set->dsts[set->n++] = *dst; -} - -static void -dst_set_free(struct dst_set *set) -{ - if (set->dsts != set->builtin) { - free(set->dsts); - } -} - -static bool -dst_is_duplicate(const struct dst_set *set, const struct dst *test) -{ - size_t i; - for (i = 0; i < set->n; i++) { - if (set->dsts[i].vlan == test->vlan - && set->dsts[i].port == test->port) { - return true; - } - } - return false; -} - -static bool -ofbundle_trunks_vlan(const struct ofbundle *bundle, uint16_t vlan) -{ - return bundle->vlan < 0 && vlan_bitmap_contains(bundle->trunks, vlan); -} - -static bool -ofbundle_includes_vlan(const struct ofbundle *bundle, uint16_t vlan) -{ - return vlan == bundle->vlan || ofbundle_trunks_vlan(bundle, vlan); -} - -/* Returns an arbitrary interface within 'bundle'. */ -static struct ofport * -ofbundle_get_a_port(const struct ofbundle *bundle) -{ - return CONTAINER_OF(list_front(&bundle->ports), - struct ofport, bundle_node); -} - -static void -compose_dsts(struct action_xlate_ctx *ctx, uint16_t vlan, - const struct ofbundle *in_bundle, - const struct ofbundle *out_bundle, struct dst_set *set) -{ - struct dst dst; - - if (out_bundle == OFBUNDLE_FLOOD) { - struct ofbundle *bundle; - - HMAP_FOR_EACH (bundle, hmap_node, &ctx->ofproto->bundles) { - if (bundle != in_bundle - && ofbundle_includes_vlan(bundle, vlan) - && bundle->floodable - && !bundle->mirror_out - && set_dst(ctx, &dst, in_bundle, bundle)) { - dst_set_add(set, &dst); - } - } - ctx->nf_output_iface = NF_OUT_FLOOD; - } else if (out_bundle && set_dst(ctx, &dst, in_bundle, out_bundle)) { - dst_set_add(set, &dst); - ctx->nf_output_iface = dst.port->odp_port; - } -} - -static bool -vlan_is_mirrored(const struct ofmirror *m, int vlan) -{ - return vlan_bitmap_contains(m->vlans, vlan); -} - -static void -compose_mirror_dsts(struct action_xlate_ctx *ctx, - uint16_t vlan, const struct ofbundle *in_bundle, - struct dst_set *set) -{ - struct ofproto *ofproto = ctx->ofproto; - mirror_mask_t mirrors; - int flow_vlan; - size_t i; - - mirrors = in_bundle->src_mirrors; - for (i = 0; i < set->n; i++) { - mirrors |= set->dsts[i].port->bundle->dst_mirrors; - } - - if (!mirrors) { - return; - } - - flow_vlan = vlan_tci_to_vid(ctx->flow.vlan_tci); - if (flow_vlan == 0) { - flow_vlan = OFP_VLAN_NONE; - } - - while (mirrors) { - struct ofmirror *m = ofproto->mirrors[mirror_mask_ffs(mirrors) - 1]; - if (vlan_is_mirrored(m, vlan)) { - struct dst dst; - - if (m->out) { - if (set_dst(ctx, &dst, in_bundle, m->out) - && !dst_is_duplicate(set, &dst)) { - dst_set_add(set, &dst); - } - } else { - struct ofbundle *bundle; - - HMAP_FOR_EACH (bundle, hmap_node, 
&ofproto->bundles) { - if (ofbundle_includes_vlan(bundle, m->out_vlan) - && set_dst(ctx, &dst, in_bundle, bundle)) - { - if (bundle->vlan < 0) { - dst.vlan = m->out_vlan; - } - if (dst_is_duplicate(set, &dst)) { - continue; - } - - /* Use the vlan tag on the original flow instead of - * the one passed in the vlan parameter. This ensures - * that we compare the vlan from before any implicit - * tagging tags place. This is necessary because - * dst->vlan is the final vlan, after removing implicit - * tags. */ - if (bundle == in_bundle && dst.vlan == flow_vlan) { - /* Don't send out input port on same VLAN. */ - continue; - } - dst_set_add(set, &dst); - } - } - } - } - mirrors &= mirrors - 1; - } -} - -static void -compose_actions(struct action_xlate_ctx *ctx, uint16_t vlan, - const struct ofbundle *in_bundle, - const struct ofbundle *out_bundle) -{ - uint16_t initial_vlan, cur_vlan; - const struct dst *dst; - struct dst_set set; - - dst_set_init(&set); - compose_dsts(ctx, vlan, in_bundle, out_bundle, &set); - compose_mirror_dsts(ctx, vlan, in_bundle, &set); - - /* Output all the packets we can without having to change the VLAN. */ - initial_vlan = vlan_tci_to_vid(ctx->flow.vlan_tci); - if (initial_vlan == 0) { - initial_vlan = OFP_VLAN_NONE; - } - for (dst = set.dsts; dst < &set.dsts[set.n]; dst++) { - if (dst->vlan != initial_vlan) { - continue; - } - nl_msg_put_u32(ctx->odp_actions, - ODP_ACTION_ATTR_OUTPUT, dst->port->odp_port); - } - - /* Then output the rest. */ - cur_vlan = initial_vlan; - for (dst = set.dsts; dst < &set.dsts[set.n]; dst++) { - if (dst->vlan == initial_vlan) { - continue; - } - if (dst->vlan != cur_vlan) { - if (dst->vlan == OFP_VLAN_NONE) { - nl_msg_put_flag(ctx->odp_actions, ODP_ACTION_ATTR_STRIP_VLAN); - } else { - ovs_be16 tci; - tci = htons(dst->vlan & VLAN_VID_MASK); - tci |= ctx->flow.vlan_tci & htons(VLAN_PCP_MASK); - nl_msg_put_be16(ctx->odp_actions, - ODP_ACTION_ATTR_SET_DL_TCI, tci); - } - cur_vlan = dst->vlan; - } - nl_msg_put_u32(ctx->odp_actions, - ODP_ACTION_ATTR_OUTPUT, dst->port->odp_port); - } - - dst_set_free(&set); -} - -/* Returns the effective vlan of a packet, taking into account both the - * 802.1Q header and implicitly tagged ports. A value of 0 indicates that - * the packet is untagged and -1 indicates it has an invalid header and - * should be dropped. */ -static int -flow_get_vlan(struct ofproto *ofproto, const struct flow *flow, - struct ofbundle *in_bundle, bool have_packet) -{ - int vlan = vlan_tci_to_vid(flow->vlan_tci); - if (in_bundle->vlan >= 0) { - if (vlan) { - if (have_packet) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "bridge %s: dropping VLAN %d tagged " - "packet received on port %s configured with " - "implicit VLAN %"PRIu16, - ofproto->name, vlan, - in_bundle->name, in_bundle->vlan); - } - return -1; - } - vlan = in_bundle->vlan; - } else { - if (!ofbundle_includes_vlan(in_bundle, vlan)) { - if (have_packet) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "bridge %s: dropping VLAN %d tagged " - "packet received on port %s not configured for " - "trunking VLAN %d", - ofproto->name, vlan, in_bundle->name, vlan); - } - return -1; - } - } - - return vlan; -} - -/* A VM broadcasts a gratuitous ARP to indicate that it has resumed after - * migration. Older Citrix-patched Linux DomU used gratuitous ARP replies to - * indicate this; newer upstream kernels use gratuitous ARP requests. 
*/ -static bool -is_gratuitous_arp(const struct flow *flow) -{ - return (flow->dl_type == htons(ETH_TYPE_ARP) - && eth_addr_is_broadcast(flow->dl_dst) - && (flow->nw_proto == ARP_OP_REPLY - || (flow->nw_proto == ARP_OP_REQUEST - && flow->nw_src == flow->nw_dst))); -} - -static void -update_learning_table(struct ofproto *ofproto, - const struct flow *flow, int vlan, - struct ofbundle *in_bundle) -{ - struct mac_entry *mac; - - if (!mac_learning_may_learn(ofproto->ml, flow->dl_src, vlan)) { - return; - } - - mac = mac_learning_insert(ofproto->ml, flow->dl_src, vlan); - if (is_gratuitous_arp(flow)) { - /* We don't want to learn from gratuitous ARP packets that are - * reflected back over bond slaves so we lock the learning table. */ - if (!in_bundle->bond) { - mac_entry_set_grat_arp_lock(mac); - } else if (mac_entry_is_grat_arp_locked(mac)) { - return; - } - } - - if (mac_entry_is_new(mac) || mac->port.p != in_bundle) { - /* The log messages here could actually be useful in debugging, - * so keep the rate limit relatively high. */ - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(30, 300); - VLOG_DBG_RL(&rl, "bridge %s: learned that "ETH_ADDR_FMT" is " - "on port %s in VLAN %d", - ofproto->name, ETH_ADDR_ARGS(flow->dl_src), - in_bundle->name, vlan); - - mac->port.p = in_bundle; - tag_set_add(&ofproto->revalidate_set, - mac_learning_changed(ofproto->ml, mac)); - } -} - -/* Determines whether packets in 'flow' within 'br' should be forwarded or - * dropped. Returns true if they may be forwarded, false if they should be - * dropped. - * - * If 'have_packet' is true, it indicates that the caller is processing a - * received packet. If 'have_packet' is false, then the caller is just - * revalidating an existing flow because configuration has changed. Either - * way, 'have_packet' only affects logging (there is no point in logging errors - * during revalidation). - * - * Sets '*in_portp' to the input port. This will be a null pointer if - * flow->in_port does not designate a known input port (in which case - * is_admissible() returns false). - * - * When returning true, sets '*vlanp' to the effective VLAN of the input - * packet, as returned by flow_get_vlan(). - * - * May also add tags to '*tags', although the current implementation only does - * so in one special case. - */ -static bool -is_admissible(struct ofproto *ofproto, const struct flow *flow, - bool have_packet, - tag_type *tags, int *vlanp, struct ofbundle **in_bundlep) -{ - struct ofport *in_port; - struct ofbundle *in_bundle; - int vlan; - - /* Find the port and bundle for the received packet. */ - in_port = get_port(ofproto, flow->in_port); - *in_bundlep = in_bundle = in_port->bundle; - if (!in_port || !in_bundle) { - /* No interface? Something fishy... */ - if (have_packet) { - /* Odd. A few possible reasons here: - * - * - We deleted a port but there are still a few packets queued up - * from it. - * - * - Someone externally added a port (e.g. "ovs-dpctl add-if") that - * we don't know about. - * - * - Packet arrived on the local port but the local port is not - * part of a bundle. - */ - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - - VLOG_WARN_RL(&rl, "bridge %s: received packet on unknown " - "port %"PRIu16, - ofproto->name, flow->in_port); - } - return false; - } - *vlanp = vlan = flow_get_vlan(ofproto, flow, in_bundle, have_packet); - if (vlan < 0) { - return false; - } - - /* Drop frames for reserved multicast addresses. 
*/ - if (eth_addr_is_reserved(flow->dl_dst)) { - return false; - } - - /* Drop frames on bundles reserved for mirroring. */ - if (in_bundle->mirror_out) { - if (have_packet) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5); - VLOG_WARN_RL(&rl, "bridge %s: dropping packet received on port " - "%s, which is reserved exclusively for mirroring", - ofproto->name, in_bundle->name); - } - return false; - } - - if (in_bundle->bond) { - struct mac_entry *mac; - - switch (bond_check_admissibility(in_bundle->bond, in_port, - flow->dl_dst, tags)) { - case BV_ACCEPT: - break; - - case BV_DROP: - return false; - - case BV_DROP_IF_MOVED: - mac = mac_learning_lookup(ofproto->ml, flow->dl_src, vlan, NULL); - if (mac && mac->port.p != in_bundle && - (!is_gratuitous_arp(flow) - || mac_entry_is_grat_arp_locked(mac))) { - return false; - } - break; - } - } - - return true; -} - -/* If the composed actions may be applied to any packet in the given 'flow', - * returns true. Otherwise, the actions should only be applied to 'packet', or - * not at all, if 'packet' was NULL. */ -static bool -xlate_normal(struct action_xlate_ctx *ctx) -{ - struct ofbundle *in_bundle; - struct ofbundle *out_bundle; - struct mac_entry *mac; - int vlan; - - /* Check whether we should drop packets in this flow. */ - if (!is_admissible(ctx->ofproto, &ctx->flow, ctx->packet != NULL, - &ctx->tags, &vlan, &in_bundle)) { - out_bundle = NULL; - goto done; - } - - /* Learn source MAC (but don't try to learn from revalidation). */ - if (ctx->packet) { - update_learning_table(ctx->ofproto, &ctx->flow, vlan, in_bundle); - } - - /* Determine output bundle. */ - mac = mac_learning_lookup(ctx->ofproto->ml, ctx->flow.dl_dst, vlan, - &ctx->tags); - if (mac) { - out_bundle = mac->port.p; - } else if (!ctx->packet && !eth_addr_is_multicast(ctx->flow.dl_dst)) { - /* If we are revalidating but don't have a learning entry then eject - * the flow. Installing a flow that floods packets opens up a window - * of time where we could learn from a packet reflected on a bond and - * blackhole packets before the learning table is updated to reflect - * the correct port. */ - return false; - } else { - out_bundle = OFBUNDLE_FLOOD; - } + assert(ofpbuf_headroom(packet) >= sizeof(struct ofp_packet_in)); - /* Don't send packets out their input bundles. */ - if (in_bundle == out_bundle) { - out_bundle = NULL; - } + flow_extract(packet, 0, in_port, &flow); + rule->ofproto->ofproto_class->rule_execute(rule, &flow, packet); +} -done: - if (in_bundle) { - compose_actions(ctx, vlan, in_bundle, out_bundle); - } +/* Remove 'rule' from 'ofproto' and free up the associated memory: + * + * - Removes 'rule' from the classifier. + * + * - If 'rule' has facets, revalidates them (and possibly uninstalls and + * destroys them), via rule_destroy(). + */ +void +ofproto_rule_remove(struct rule *rule) +{ + rule->ofproto->ofproto_class->rule_remove(rule); + ofproto_rule_destroy(rule); +} - return true; +/* Returns true if 'rule' should be hidden from the controller. + * + * Rules with priority higher than UINT16_MAX are set up by ofproto itself + * (e.g. by in-band control) and are intentionally hidden from the + * controller. */ +static bool +rule_is_hidden(const struct rule *rule) +{ + return rule->cr.priority > UINT16_MAX; } static void @@ -3266,7 +1443,7 @@ handle_get_config_request(struct ofconn *ofconn, const struct ofp_header *oh) bool drop_frags; /* Figure out flags. 
*/ - dpif_get_drop_frags(ofproto->dpif, &drop_frags); + drop_frags = ofproto->ofproto_class->get_drop_frags(ofproto); flags = drop_frags ? OFPC_FRAG_DROP : OFPC_FRAG_NORMAL; /* Send reply. */ @@ -3288,10 +1465,10 @@ handle_set_config(struct ofconn *ofconn, const struct ofp_switch_config *osc) && ofconn_get_role(ofconn) != NX_ROLE_SLAVE) { switch (flags & OFPC_FRAG_MASK) { case OFPC_FRAG_NORMAL: - dpif_set_drop_frags(ofproto->dpif, false); + ofproto->ofproto_class->set_drop_frags(ofproto, false); break; case OFPC_FRAG_DROP: - dpif_set_drop_frags(ofproto->dpif, true); + ofproto->ofproto_class->set_drop_frags(ofproto, true); break; default: VLOG_WARN_RL(&rl, "requested bad fragment mode (flags=%"PRIx16")", @@ -3305,538 +1482,6 @@ handle_set_config(struct ofconn *ofconn, const struct ofp_switch_config *osc) return 0; } -static void do_xlate_actions(const union ofp_action *in, size_t n_in, - struct action_xlate_ctx *ctx); - -static void -add_output_action(struct action_xlate_ctx *ctx, uint16_t port) -{ - const struct ofport *ofport = get_port(ctx->ofproto, port); - - if (ofport) { - if (ofport->opp.config & htonl(OFPPC_NO_FWD)) { - /* Forwarding disabled on port. */ - return; - } - } else { - /* - * We don't have an ofport record for this port, but it doesn't hurt to - * allow forwarding to it anyhow. Maybe such a port will appear later - * and we're pre-populating the flow table. - */ - } - - nl_msg_put_u32(ctx->odp_actions, ODP_ACTION_ATTR_OUTPUT, port); - ctx->nf_output_iface = port; -} - -static struct rule * -rule_lookup(struct ofproto *ofproto, const struct flow *flow) -{ - return rule_from_cls_rule(classifier_lookup(&ofproto->cls, flow)); -} - -static void -xlate_table_action(struct action_xlate_ctx *ctx, uint16_t in_port) -{ - if (ctx->recurse < MAX_RESUBMIT_RECURSION) { - uint16_t old_in_port; - struct rule *rule; - - /* Look up a flow with 'in_port' as the input port. Then restore the - * original input port (otherwise OFPP_NORMAL and OFPP_IN_PORT will - * have surprising behavior). 
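
The handle_get_config_request() and handle_set_config() hunks above replace direct dpif_get_drop_frags()/dpif_set_drop_frags() calls with ofproto class hooks. A provider that keeps the old behavior could implement those hooks as thin wrappers over the same dpif calls this patch removes; everything named my_* below is illustrative, and the hook prototypes are inferred from the call sites:

    struct my_ofproto {
        struct ofproto up;
        struct dpif *dpif;
    };

    static struct my_ofproto *
    my_ofproto_cast(const struct ofproto *ofproto)
    {
        return CONTAINER_OF(ofproto, struct my_ofproto, up);
    }

    static bool
    my_get_drop_frags(struct ofproto *ofproto_)
    {
        struct my_ofproto *ofproto = my_ofproto_cast(ofproto_);
        bool drop_frags;

        dpif_get_drop_frags(ofproto->dpif, &drop_frags);
        return drop_frags;
    }

    static void
    my_set_drop_frags(struct ofproto *ofproto_, bool drop_frags)
    {
        struct my_ofproto *ofproto = my_ofproto_cast(ofproto_);

        dpif_set_drop_frags(ofproto->dpif, drop_frags);
    }

A complete provider would collect these, together with the port and rule hooks sketched earlier, into its struct ofproto_class.
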
*/ - old_in_port = ctx->flow.in_port; - ctx->flow.in_port = in_port; - rule = rule_lookup(ctx->ofproto, &ctx->flow); - ctx->flow.in_port = old_in_port; - - if (ctx->resubmit_hook) { - ctx->resubmit_hook(ctx, rule); - } - - if (rule) { - ctx->recurse++; - do_xlate_actions(rule->actions, rule->n_actions, ctx); - ctx->recurse--; - } - } else { - static struct vlog_rate_limit recurse_rl = VLOG_RATE_LIMIT_INIT(1, 1); - - VLOG_ERR_RL(&recurse_rl, "NXAST_RESUBMIT recursed over %d times", - MAX_RESUBMIT_RECURSION); - } -} - -static void -flood_packets(struct ofproto *ofproto, uint16_t odp_in_port, ovs_be32 mask, - uint16_t *nf_output_iface, struct ofpbuf *odp_actions) -{ - struct ofport *ofport; - - HMAP_FOR_EACH (ofport, hmap_node, &ofproto->ports) { - uint16_t odp_port = ofport->odp_port; - if (odp_port != odp_in_port && !(ofport->opp.config & mask)) { - nl_msg_put_u32(odp_actions, ODP_ACTION_ATTR_OUTPUT, odp_port); - } - } - *nf_output_iface = NF_OUT_FLOOD; -} - -static void -xlate_output_action__(struct action_xlate_ctx *ctx, - uint16_t port, uint16_t max_len) -{ - uint16_t odp_port; - uint16_t prev_nf_output_iface = ctx->nf_output_iface; - - ctx->nf_output_iface = NF_OUT_DROP; - - switch (port) { - case OFPP_IN_PORT: - add_output_action(ctx, ctx->flow.in_port); - break; - case OFPP_TABLE: - xlate_table_action(ctx, ctx->flow.in_port); - break; - case OFPP_NORMAL: - xlate_normal(ctx); - break; - case OFPP_FLOOD: - flood_packets(ctx->ofproto, ctx->flow.in_port, htonl(OFPPC_NO_FLOOD), - &ctx->nf_output_iface, ctx->odp_actions); - break; - case OFPP_ALL: - flood_packets(ctx->ofproto, ctx->flow.in_port, htonl(0), - &ctx->nf_output_iface, ctx->odp_actions); - break; - case OFPP_CONTROLLER: - nl_msg_put_u64(ctx->odp_actions, ODP_ACTION_ATTR_CONTROLLER, max_len); - break; - case OFPP_LOCAL: - add_output_action(ctx, ODPP_LOCAL); - break; - default: - odp_port = ofp_port_to_odp_port(port); - if (odp_port != ctx->flow.in_port) { - add_output_action(ctx, odp_port); - } - break; - } - - if (prev_nf_output_iface == NF_OUT_FLOOD) { - ctx->nf_output_iface = NF_OUT_FLOOD; - } else if (ctx->nf_output_iface == NF_OUT_DROP) { - ctx->nf_output_iface = prev_nf_output_iface; - } else if (prev_nf_output_iface != NF_OUT_DROP && - ctx->nf_output_iface != NF_OUT_FLOOD) { - ctx->nf_output_iface = NF_OUT_MULTI; - } -} - -static void -xlate_output_action(struct action_xlate_ctx *ctx, - const struct ofp_action_output *oao) -{ - xlate_output_action__(ctx, ntohs(oao->port), ntohs(oao->max_len)); -} - -/* If the final ODP action in 'ctx' is "pop priority", drop it, as an - * optimization, because we're going to add another action that sets the - * priority immediately after, or because there are no actions following the - * pop. */ -static void -remove_pop_action(struct action_xlate_ctx *ctx) -{ - if (ctx->odp_actions->size == ctx->last_pop_priority) { - ctx->odp_actions->size -= NLA_ALIGN(NLA_HDRLEN); - ctx->last_pop_priority = -1; - } -} - -static void -add_pop_action(struct action_xlate_ctx *ctx) -{ - if (ctx->odp_actions->size != ctx->last_pop_priority) { - nl_msg_put_flag(ctx->odp_actions, ODP_ACTION_ATTR_POP_PRIORITY); - ctx->last_pop_priority = ctx->odp_actions->size; - } -} - -static void -xlate_enqueue_action(struct action_xlate_ctx *ctx, - const struct ofp_action_enqueue *oae) -{ - uint16_t ofp_port, odp_port; - uint32_t priority; - int error; - - error = dpif_queue_to_priority(ctx->ofproto->dpif, ntohl(oae->queue_id), - &priority); - if (error) { - /* Fall back to ordinary output action. 
*/ - xlate_output_action__(ctx, ntohs(oae->port), 0); - return; - } - - /* Figure out ODP output port. */ - ofp_port = ntohs(oae->port); - if (ofp_port != OFPP_IN_PORT) { - odp_port = ofp_port_to_odp_port(ofp_port); - } else { - odp_port = ctx->flow.in_port; - } - - /* Add ODP actions. */ - remove_pop_action(ctx); - nl_msg_put_u32(ctx->odp_actions, ODP_ACTION_ATTR_SET_PRIORITY, priority); - add_output_action(ctx, odp_port); - add_pop_action(ctx); - - /* Update NetFlow output port. */ - if (ctx->nf_output_iface == NF_OUT_DROP) { - ctx->nf_output_iface = odp_port; - } else if (ctx->nf_output_iface != NF_OUT_FLOOD) { - ctx->nf_output_iface = NF_OUT_MULTI; - } -} - -static void -xlate_set_queue_action(struct action_xlate_ctx *ctx, - const struct nx_action_set_queue *nasq) -{ - uint32_t priority; - int error; - - error = dpif_queue_to_priority(ctx->ofproto->dpif, ntohl(nasq->queue_id), - &priority); - if (error) { - /* Couldn't translate queue to a priority, so ignore. A warning - * has already been logged. */ - return; - } - - remove_pop_action(ctx); - nl_msg_put_u32(ctx->odp_actions, ODP_ACTION_ATTR_SET_PRIORITY, priority); -} - -static void -xlate_set_dl_tci(struct action_xlate_ctx *ctx) -{ - ovs_be16 tci = ctx->flow.vlan_tci; - if (!(tci & htons(VLAN_CFI))) { - nl_msg_put_flag(ctx->odp_actions, ODP_ACTION_ATTR_STRIP_VLAN); - } else { - nl_msg_put_be16(ctx->odp_actions, ODP_ACTION_ATTR_SET_DL_TCI, - tci & ~htons(VLAN_CFI)); - } -} - -struct xlate_reg_state { - ovs_be16 vlan_tci; - ovs_be64 tun_id; -}; - -static void -save_reg_state(const struct action_xlate_ctx *ctx, - struct xlate_reg_state *state) -{ - state->vlan_tci = ctx->flow.vlan_tci; - state->tun_id = ctx->flow.tun_id; -} - -static void -update_reg_state(struct action_xlate_ctx *ctx, - const struct xlate_reg_state *state) -{ - if (ctx->flow.vlan_tci != state->vlan_tci) { - xlate_set_dl_tci(ctx); - } - if (ctx->flow.tun_id != state->tun_id) { - nl_msg_put_be64(ctx->odp_actions, - ODP_ACTION_ATTR_SET_TUNNEL, ctx->flow.tun_id); - } -} - -static void -xlate_autopath(struct action_xlate_ctx *ctx, - const struct nx_action_autopath *naa) -{ - uint16_t ofp_port = ntohl(naa->id); - struct ofport *port; - - port = get_port(ctx->ofproto, ofp_port_to_odp_port(ofp_port)); - if (!port || !port->bundle) { - ofp_port = OFPP_NONE; - } else if (port->bundle->bond) { - /* Autopath does not support VLAN hashing. 
*/ - struct ofport *slave = bond_choose_output_slave( - port->bundle->bond, &ctx->flow, OFP_VLAN_NONE, &ctx->tags); - if (slave) { - ofp_port = odp_port_to_ofp_port(slave->odp_port); - } - } - autopath_execute(naa, &ctx->flow, ofp_port); -} - -static void -xlate_nicira_action(struct action_xlate_ctx *ctx, - const struct nx_action_header *nah) -{ - const struct nx_action_resubmit *nar; - const struct nx_action_set_tunnel *nast; - const struct nx_action_set_queue *nasq; - const struct nx_action_multipath *nam; - const struct nx_action_autopath *naa; - enum nx_action_subtype subtype = ntohs(nah->subtype); - struct xlate_reg_state state; - ovs_be64 tun_id; - - assert(nah->vendor == htonl(NX_VENDOR_ID)); - switch (subtype) { - case NXAST_RESUBMIT: - nar = (const struct nx_action_resubmit *) nah; - xlate_table_action(ctx, ofp_port_to_odp_port(ntohs(nar->in_port))); - break; - - case NXAST_SET_TUNNEL: - nast = (const struct nx_action_set_tunnel *) nah; - tun_id = htonll(ntohl(nast->tun_id)); - nl_msg_put_be64(ctx->odp_actions, ODP_ACTION_ATTR_SET_TUNNEL, tun_id); - ctx->flow.tun_id = tun_id; - break; - - case NXAST_DROP_SPOOFED_ARP: - if (ctx->flow.dl_type == htons(ETH_TYPE_ARP)) { - nl_msg_put_flag(ctx->odp_actions, - ODP_ACTION_ATTR_DROP_SPOOFED_ARP); - } - break; - - case NXAST_SET_QUEUE: - nasq = (const struct nx_action_set_queue *) nah; - xlate_set_queue_action(ctx, nasq); - break; - - case NXAST_POP_QUEUE: - add_pop_action(ctx); - break; - - case NXAST_REG_MOVE: - save_reg_state(ctx, &state); - nxm_execute_reg_move((const struct nx_action_reg_move *) nah, - &ctx->flow); - update_reg_state(ctx, &state); - break; - - case NXAST_REG_LOAD: - save_reg_state(ctx, &state); - nxm_execute_reg_load((const struct nx_action_reg_load *) nah, - &ctx->flow); - update_reg_state(ctx, &state); - break; - - case NXAST_NOTE: - /* Nothing to do. */ - break; - - case NXAST_SET_TUNNEL64: - tun_id = ((const struct nx_action_set_tunnel64 *) nah)->tun_id; - nl_msg_put_be64(ctx->odp_actions, ODP_ACTION_ATTR_SET_TUNNEL, tun_id); - ctx->flow.tun_id = tun_id; - break; - - case NXAST_MULTIPATH: - nam = (const struct nx_action_multipath *) nah; - multipath_execute(nam, &ctx->flow); - break; - - case NXAST_AUTOPATH: - naa = (const struct nx_action_autopath *) nah; - xlate_autopath(ctx, naa); - break; - - /* If you add a new action here that modifies flow data, don't forget to - * update the flow key in ctx->flow at the same time. */ - - case NXAST_SNAT__OBSOLETE: - default: - VLOG_DBG_RL(&rl, "unknown Nicira action type %d", (int) subtype); - break; - } -} - -static void -do_xlate_actions(const union ofp_action *in, size_t n_in, - struct action_xlate_ctx *ctx) -{ - struct actions_iterator iter; - const union ofp_action *ia; - const struct ofport *port; - - port = get_port(ctx->ofproto, ctx->flow.in_port); - if (port && port->opp.config & htonl(OFPPC_NO_RECV | OFPPC_NO_RECV_STP) && - port->opp.config & (eth_addr_equals(ctx->flow.dl_dst, eth_addr_stp) - ? htonl(OFPPC_NO_RECV_STP) - : htonl(OFPPC_NO_RECV))) { - /* Drop this flow. 
*/ - return; - } - - for (ia = actions_first(&iter, in, n_in); ia; ia = actions_next(&iter)) { - enum ofp_action_type type = ntohs(ia->type); - const struct ofp_action_dl_addr *oada; - - switch (type) { - case OFPAT_OUTPUT: - xlate_output_action(ctx, &ia->output); - break; - - case OFPAT_SET_VLAN_VID: - ctx->flow.vlan_tci &= ~htons(VLAN_VID_MASK); - ctx->flow.vlan_tci |= ia->vlan_vid.vlan_vid | htons(VLAN_CFI); - xlate_set_dl_tci(ctx); - break; - - case OFPAT_SET_VLAN_PCP: - ctx->flow.vlan_tci &= ~htons(VLAN_PCP_MASK); - ctx->flow.vlan_tci |= htons( - (ia->vlan_pcp.vlan_pcp << VLAN_PCP_SHIFT) | VLAN_CFI); - xlate_set_dl_tci(ctx); - break; - - case OFPAT_STRIP_VLAN: - ctx->flow.vlan_tci = htons(0); - xlate_set_dl_tci(ctx); - break; - - case OFPAT_SET_DL_SRC: - oada = ((struct ofp_action_dl_addr *) ia); - nl_msg_put_unspec(ctx->odp_actions, ODP_ACTION_ATTR_SET_DL_SRC, - oada->dl_addr, ETH_ADDR_LEN); - memcpy(ctx->flow.dl_src, oada->dl_addr, ETH_ADDR_LEN); - break; - - case OFPAT_SET_DL_DST: - oada = ((struct ofp_action_dl_addr *) ia); - nl_msg_put_unspec(ctx->odp_actions, ODP_ACTION_ATTR_SET_DL_DST, - oada->dl_addr, ETH_ADDR_LEN); - memcpy(ctx->flow.dl_dst, oada->dl_addr, ETH_ADDR_LEN); - break; - - case OFPAT_SET_NW_SRC: - nl_msg_put_be32(ctx->odp_actions, ODP_ACTION_ATTR_SET_NW_SRC, - ia->nw_addr.nw_addr); - ctx->flow.nw_src = ia->nw_addr.nw_addr; - break; - - case OFPAT_SET_NW_DST: - nl_msg_put_be32(ctx->odp_actions, ODP_ACTION_ATTR_SET_NW_DST, - ia->nw_addr.nw_addr); - ctx->flow.nw_dst = ia->nw_addr.nw_addr; - break; - - case OFPAT_SET_NW_TOS: - nl_msg_put_u8(ctx->odp_actions, ODP_ACTION_ATTR_SET_NW_TOS, - ia->nw_tos.nw_tos); - ctx->flow.nw_tos = ia->nw_tos.nw_tos; - break; - - case OFPAT_SET_TP_SRC: - nl_msg_put_be16(ctx->odp_actions, ODP_ACTION_ATTR_SET_TP_SRC, - ia->tp_port.tp_port); - ctx->flow.tp_src = ia->tp_port.tp_port; - break; - - case OFPAT_SET_TP_DST: - nl_msg_put_be16(ctx->odp_actions, ODP_ACTION_ATTR_SET_TP_DST, - ia->tp_port.tp_port); - ctx->flow.tp_dst = ia->tp_port.tp_port; - break; - - case OFPAT_VENDOR: - xlate_nicira_action(ctx, (const struct nx_action_header *) ia); - break; - - case OFPAT_ENQUEUE: - xlate_enqueue_action(ctx, (const struct ofp_action_enqueue *) ia); - break; - - default: - VLOG_DBG_RL(&rl, "unknown action type %d", (int) type); - break; - } - } -} - -static void -action_xlate_ctx_init(struct action_xlate_ctx *ctx, - struct ofproto *ofproto, const struct flow *flow, - const struct ofpbuf *packet) -{ - ctx->ofproto = ofproto; - ctx->flow = *flow; - ctx->packet = packet; - ctx->resubmit_hook = NULL; - ctx->check_special = true; -} - -static bool -ofproto_process_special(struct ofproto *ofproto, const struct flow *flow, - const struct ofpbuf *packet) -{ - if (cfm_should_process_flow(flow)) { - struct ofport *ofport = get_port(ofproto, flow->in_port); - if (ofport && ofport->cfm) { - cfm_process_heartbeat(ofport->cfm, packet); - } - return true; - } else if (flow->dl_type == htons(ETH_TYPE_LACP)) { - struct ofport *port = get_port(ofproto, flow->in_port); - if (port && port->bundle && port->bundle->lacp) { - const struct lacp_pdu *pdu = parse_lacp_packet(packet); - if (pdu) { - lacp_process_pdu(port->bundle->lacp, port, pdu); - } - return true; - } - } - return false; -} - -static struct ofpbuf * -xlate_actions(struct action_xlate_ctx *ctx, - const union ofp_action *in, size_t n_in) -{ - COVERAGE_INC(ofproto_ofp2odp); - - ctx->odp_actions = ofpbuf_new(512); - ctx->tags = 0; - ctx->may_set_up_flow = true; - ctx->nf_output_iface = NF_OUT_DROP; - 
ctx->recurse = 0; - ctx->last_pop_priority = -1; - - if (ctx->check_special - && ofproto_process_special(ctx->ofproto, &ctx->flow, ctx->packet)) { - ctx->may_set_up_flow = false; - } else { - do_xlate_actions(in, n_in, ctx); - } - - remove_pop_action(ctx); - - /* Check with in-band control to see if we're allowed to set up this - * flow. */ - if (!connmgr_may_set_up_flow(ctx->ofproto->connmgr, &ctx->flow, - ctx->odp_actions->data, - ctx->odp_actions->size)) { - ctx->may_set_up_flow = false; - } - - return ctx->odp_actions; -} - /* Checks whether 'ofconn' is a slave controller. If so, returns an OpenFlow * error message code (composed with ofp_mkerr()) for the caller to propagate * upward. Otherwise, returns 0. @@ -3864,8 +1509,6 @@ handle_packet_out(struct ofconn *ofconn, const struct ofp_header *oh) struct ofp_packet_out *opo; struct ofpbuf payload, *buffer; union ofp_action *ofp_actions; - struct action_xlate_ctx ctx; - struct ofpbuf *odp_actions; struct ofpbuf request; struct flow flow; size_t n_ofp_actions; @@ -3903,29 +1546,20 @@ handle_packet_out(struct ofconn *ofconn, const struct ofp_header *oh) buffer = NULL; } - /* Extract flow, check actions. */ - flow_extract(&payload, 0, ofp_port_to_odp_port(ntohs(opo->in_port)), - &flow); - error = validate_actions(ofp_actions, n_ofp_actions, &flow, p->max_ports); - if (error) { - goto exit; - } - - /* Send. */ - action_xlate_ctx_init(&ctx, p, &flow, &payload); - odp_actions = xlate_actions(&ctx, ofp_actions, n_ofp_actions); - dpif_execute(p->dpif, odp_actions->data, odp_actions->size, &payload); - ofpbuf_delete(odp_actions); - -exit: + /* Send out packet. */ + flow_extract(&payload, 0, ntohs(opo->in_port), &flow); + error = p->ofproto_class->packet_out(p, &payload, &flow, + ofp_actions, n_ofp_actions); ofpbuf_delete(buffer); - return 0; + + return error; } static void -update_port_config(struct ofproto *p, struct ofport *port, - ovs_be32 config, ovs_be32 mask) +update_port_config(struct ofport *port, ovs_be32 config, ovs_be32 mask) { + ovs_be32 old_config = port->opp.config; + mask &= config ^ port->opp.config; if (mask & htonl(OFPPC_PORT_DOWN)) { if (config & htonl(OFPPC_PORT_DOWN)) { @@ -3934,16 +1568,12 @@ update_port_config(struct ofproto *p, struct ofport *port, netdev_turn_flags_on(port->netdev, NETDEV_UP, true); } } -#define REVALIDATE_BITS (OFPPC_NO_RECV | OFPPC_NO_RECV_STP | \ - OFPPC_NO_FWD | OFPPC_NO_FLOOD) - if (mask & htonl(REVALIDATE_BITS)) { - COVERAGE_INC(ofproto_costly_flags); - port->opp.config ^= mask & htonl(REVALIDATE_BITS); - p->need_revalidate = true; - } -#undef REVALIDATE_BITS - if (mask & htonl(OFPPC_NO_PACKET_IN)) { - port->opp.config ^= htonl(OFPPC_NO_PACKET_IN); + + port->opp.config ^= mask & (htonl(OFPPC_NO_RECV | OFPPC_NO_RECV_STP | + OFPPC_NO_FLOOD | OFPPC_NO_FWD | + OFPPC_NO_PACKET_IN)); + if (port->opp.config != old_config) { + port->ofproto->ofproto_class->port_reconfigured(port, old_config); } } @@ -3960,13 +1590,13 @@ handle_port_mod(struct ofconn *ofconn, const struct ofp_header *oh) return error; } - port = get_port(p, ofp_port_to_odp_port(ntohs(opm->port_no))); + port = ofproto_get_port(p, ntohs(opm->port_no)); if (!port) { return ofp_mkerr(OFPET_PORT_MOD_FAILED, OFPPMFC_BAD_PORT); } else if (memcmp(port->opp.hw_addr, opm->hw_addr, OFP_ETH_ALEN)) { return ofp_mkerr(OFPET_PORT_MOD_FAILED, OFPPMFC_BAD_HW_ADDR); } else { - update_port_config(p, port, opm->config, opm->mask); + update_port_config(port, opm->config, opm->mask); if (opm->advertise) { netdev_set_advertisements(port->netdev, 
ntohl(opm->advertise)); } @@ -4132,7 +1762,7 @@ handle_port_stats_request(struct ofconn *ofconn, const struct ofp_header *oh) msg = start_ofp_stats_reply(oh, sizeof *ops * 16); if (psr->port_no != htons(OFPP_NONE)) { - port = get_port(p, ofp_port_to_odp_port(ntohs(psr->port_no))); + port = ofproto_get_port(p, ntohs(psr->port_no)); if (port) { append_port_stat(port, ofconn, &msg); } @@ -4168,6 +1798,7 @@ static void put_ofp_flow_stats(struct ofconn *ofconn, struct rule *rule, ovs_be16 out_port, struct ofpbuf **replyp) { + struct ofproto *ofproto = ofconn_get_ofproto(ofconn); struct ofp_flow_stats *ofs; uint64_t packet_count, byte_count; ovs_be64 cookie; @@ -4180,7 +1811,7 @@ put_ofp_flow_stats(struct ofconn *ofconn, struct rule *rule, act_len = sizeof *rule->actions * rule->n_actions; len = offsetof(struct ofp_flow_stats, actions) + act_len; - rule_get_stats(rule, &packet_count, &byte_count); + ofproto->ofproto_class->rule_get_stats(rule, &packet_count, &byte_count); ofs = append_ofp_stats_reply(len, ofconn, replyp); ofs->length = htons(len); @@ -4255,7 +1886,8 @@ put_nx_flow_stats(struct ofconn *ofconn, struct rule *rule, return; } - rule_get_stats(rule, &packet_count, &byte_count); + rule->ofproto->ofproto_class->rule_get_stats(rule, + &packet_count, &byte_count); act_len = sizeof *rule->actions * rule->n_actions; @@ -4325,11 +1957,11 @@ flow_stats_ds(struct rule *rule, struct ds *results) uint64_t packet_count, byte_count; size_t act_len = sizeof *rule->actions * rule->n_actions; - rule_get_stats(rule, &packet_count, &byte_count); + rule->ofproto->ofproto_class->rule_get_stats(rule, + &packet_count, &byte_count); ds_put_format(results, "duration=%llds, ", (time_msec() - rule->created) / 1000); - ds_put_format(results, "idle=%.3fs, ", (time_msec() - rule->used) / 1000.0); ds_put_format(results, "priority=%u, ", rule->cr.priority); ds_put_format(results, "n_packets=%"PRIu64", ", packet_count); ds_put_format(results, "n_bytes=%"PRIu64", ", byte_count); @@ -4363,7 +1995,7 @@ void ofproto_get_netflow_ids(const struct ofproto *ofproto, uint8_t *engine_type, uint8_t *engine_id) { - dpif_get_netflow_ids(ofproto->dpif, engine_type, engine_id); + ofproto->ofproto_class->get_netflow_ids(ofproto, engine_type, engine_id); } static void @@ -4387,7 +2019,8 @@ query_aggregate_stats(struct ofproto *ofproto, struct cls_rule *target, uint64_t packet_count; uint64_t byte_count; - rule_get_stats(rule, &packet_count, &byte_count); + ofproto->ofproto_class->rule_get_stats(rule, &packet_count, + &byte_count); total_packets += packet_count; total_bytes += byte_count; @@ -4531,8 +2164,8 @@ handle_queue_stats_request(struct ofconn *ofconn, const struct ofp_header *oh) HMAP_FOR_EACH (port, hmap_node, &ofproto->ports) { handle_queue_stats_for_port(port, queue_id, &cbdata); } - } else if (port_no < ofproto->max_ports) { - port = get_port(ofproto, ofp_port_to_odp_port(port_no)); + } else if (port_no < OFPP_MAX) { + port = ofproto_get_port(ofproto, port_no); if (port) { handle_queue_stats_for_port(port, queue_id, &cbdata); } @@ -4545,99 +2178,6 @@ handle_queue_stats_request(struct ofconn *ofconn, const struct ofp_header *oh) return 0; } -/* Updates 'facet''s used time. Caller is responsible for calling - * facet_push_stats() to update the flows which 'facet' resubmits into. 
*/ -static void -facet_update_time(struct ofproto *ofproto, struct facet *facet, - long long int used) -{ - if (used > facet->used) { - facet->used = used; - if (used > facet->rule->used) { - facet->rule->used = used; - } - netflow_flow_update_time(ofproto->netflow, &facet->nf_flow, used); - } -} - -/* Folds the statistics from 'stats' into the counters in 'facet'. - * - * Because of the meaning of a facet's counters, it only makes sense to do this - * if 'stats' are not tracked in the datapath, that is, if 'stats' represents a - * packet that was sent by hand or if it represents statistics that have been - * cleared out of the datapath. */ -static void -facet_update_stats(struct ofproto *ofproto, struct facet *facet, - const struct dpif_flow_stats *stats) -{ - if (stats->n_packets || stats->used > facet->used) { - facet_update_time(ofproto, facet, stats->used); - facet->packet_count += stats->n_packets; - facet->byte_count += stats->n_bytes; - facet_push_stats(ofproto, facet); - netflow_flow_update_flags(&facet->nf_flow, stats->tcp_flags); - } -} - -static void -facet_push_stats(struct ofproto *ofproto, struct facet *facet) -{ - uint64_t rs_packets, rs_bytes; - - assert(facet->packet_count >= facet->rs_packet_count); - assert(facet->byte_count >= facet->rs_byte_count); - assert(facet->used >= facet->rs_used); - - rs_packets = facet->packet_count - facet->rs_packet_count; - rs_bytes = facet->byte_count - facet->rs_byte_count; - - if (rs_packets || rs_bytes || facet->used > facet->rs_used) { - facet->rs_packet_count = facet->packet_count; - facet->rs_byte_count = facet->byte_count; - facet->rs_used = facet->used; - - flow_push_stats(ofproto, facet->rule, &facet->flow, - rs_packets, rs_bytes, facet->used); - } -} - -struct ofproto_push { - struct action_xlate_ctx ctx; - uint64_t packets; - uint64_t bytes; - long long int used; -}; - -static void -push_resubmit(struct action_xlate_ctx *ctx, struct rule *rule) -{ - struct ofproto_push *push = CONTAINER_OF(ctx, struct ofproto_push, ctx); - - if (rule) { - rule->packet_count += push->packets; - rule->byte_count += push->bytes; - rule->used = MAX(push->used, rule->used); - } -} - -/* Pushes flow statistics to the rules which 'flow' resubmits into given - * 'rule''s actions. */ -static void -flow_push_stats(struct ofproto *ofproto, const struct rule *rule, - struct flow *flow, uint64_t packets, uint64_t bytes, - long long int used) -{ - struct ofproto_push push; - - push.packets = packets; - push.bytes = bytes; - push.used = used; - - action_xlate_ctx_init(&push.ctx, ofproto, flow, NULL); - push.ctx.resubmit_hook = push_resubmit; - ofpbuf_delete(xlate_actions(&push.ctx, rule->actions, rule->n_actions)); -} - /* Implements OFPFC_ADD and the cases for OFPFC_MODIFY and OFPFC_MODIFY_STRICT * in which no matching flow already exists in the flow table. 
* @@ -4654,30 +2194,27 @@ add_flow(struct ofconn *ofconn, struct flow_mod *fm) struct ofpbuf *packet; struct rule *rule; uint16_t in_port; + int buf_err; int error; if (fm->flags & OFPFF_CHECK_OVERLAP && classifier_rule_overlaps(&p->cls, &fm->cr)) { return ofp_mkerr(OFPET_FLOW_MOD_FAILED, OFPFMFC_OVERLAP); } - - error = 0; - if (fm->buffer_id != UINT32_MAX) { - error = ofconn_pktbuf_retrieve(ofconn, fm->buffer_id, - &packet, &in_port); - } else { - packet = NULL; - in_port = UINT16_MAX; + + buf_err = ofconn_pktbuf_retrieve(ofconn, fm->buffer_id, &packet, &in_port); + error = rule_create(p, &fm->cr, fm->actions, fm->n_actions, + fm->idle_timeout, fm->hard_timeout, fm->cookie, + fm->flags & OFPFF_SEND_FLOW_REM, &rule); + if (error) { + ofpbuf_delete(packet); + return error; } - rule = rule_create(&fm->cr, fm->actions, fm->n_actions, - fm->idle_timeout, fm->hard_timeout, fm->cookie, - fm->flags & OFPFF_SEND_FLOW_REM); - rule_insert(p, rule); if (packet) { - rule_execute(p, rule, in_port, packet); + rule_execute(rule, in_port, packet); } - return error; + return buf_err; } static struct rule * @@ -4690,7 +2227,6 @@ static int send_buffered_packet(struct ofconn *ofconn, struct rule *rule, uint32_t buffer_id) { - struct ofproto *ofproto = ofconn_get_ofproto(ofconn); struct ofpbuf *packet; uint16_t in_port; int error; @@ -4704,7 +2240,7 @@ send_buffered_packet(struct ofconn *ofconn, return error; } - rule_execute(ofproto, rule, in_port, packet); + rule_execute(rule, in_port, packet); return 0; } @@ -4717,8 +2253,7 @@ struct modify_flows_cbdata { struct rule *match; }; -static int modify_flow(struct ofproto *, const struct flow_mod *, - struct rule *); +static int modify_flow(const struct flow_mod *, struct rule *); /* Implements OFPFC_MODIFY. Returns 0 on success or an OpenFlow error code as * encoded by ofp_mkerr() on failure. @@ -4732,16 +2267,24 @@ modify_flows_loose(struct ofconn *ofconn, struct flow_mod *fm) struct rule *match = NULL; struct cls_cursor cursor; struct rule *rule; + int error; + error = 0; cls_cursor_init(&cursor, &p->cls, &fm->cr); CLS_CURSOR_FOR_EACH (rule, cr, &cursor) { if (!rule_is_hidden(rule)) { - match = rule; - modify_flow(p, fm, rule); + int retval = modify_flow(fm, rule); + if (!retval) { + match = rule; + } else { + error = retval; + } } } - if (match) { + if (error) { + return error; + } else if (match) { /* This credits the packet to whichever flow happened to match last. * That's weird. Maybe we should do a lookup for the flow that * actually matches the packet? Who knows. */ @@ -4763,44 +2306,52 @@ modify_flow_strict(struct ofconn *ofconn, struct flow_mod *fm) struct ofproto *p = ofconn_get_ofproto(ofconn); struct rule *rule = find_flow_strict(p, fm); if (rule && !rule_is_hidden(rule)) { - modify_flow(p, fm, rule); - return send_buffered_packet(ofconn, rule, fm->buffer_id); + int error = modify_flow(fm, rule); + if (!error) { + error = send_buffered_packet(ofconn, rule, fm->buffer_id); + } + return error; } else { return add_flow(ofconn, fm); } } /* Implements core of OFPFC_MODIFY and OFPFC_MODIFY_STRICT where 'rule' has - * been identified as a flow in 'p''s flow table to be modified, by changing - * the rule's actions to match those in 'ofm' (which is followed by 'n_actions' - * ofp_action[] structures). */ + * been identified as a flow to be modified, by changing the rule's actions to + * match those in 'ofm' (which is followed by 'n_actions' ofp_action[] + * structures). 
*/ static int -modify_flow(struct ofproto *p, const struct flow_mod *fm, struct rule *rule) +modify_flow(const struct flow_mod *fm, struct rule *rule) { size_t actions_len = fm->n_actions * sizeof *rule->actions; + int error; - rule->flow_cookie = fm->cookie; - - /* If the actions are the same, do nothing. */ if (fm->n_actions == rule->n_actions && (!fm->n_actions || !memcmp(fm->actions, rule->actions, actions_len))) { - return 0; + error = 0; + } else { + error = rule->ofproto->ofproto_class->rule_modify_actions( + rule, fm->actions, fm->n_actions); + if (!error) { + free(rule->actions); + rule->actions = (fm->n_actions + ? xmemdup(fm->actions, actions_len) + : NULL); + rule->n_actions = fm->n_actions; + } } - /* Replace actions. */ - free(rule->actions); - rule->actions = fm->n_actions ? xmemdup(fm->actions, actions_len) : NULL; - rule->n_actions = fm->n_actions; - - p->need_revalidate = true; + if (!error) { + rule->flow_cookie = fm->cookie; + } - return 0; + return error; } /* OFPFC_DELETE implementation. */ -static void delete_flow(struct ofproto *, struct rule *, ovs_be16 out_port); +static void delete_flow(struct rule *, ovs_be16 out_port); /* Implements OFPFC_DELETE. */ static void @@ -4811,7 +2362,7 @@ delete_flows_loose(struct ofproto *p, const struct flow_mod *fm) cls_cursor_init(&cursor, &p->cls, &fm->cr); CLS_CURSOR_FOR_EACH_SAFE (rule, next_rule, cr, &cursor) { - delete_flow(p, rule, htons(fm->out_port)); + delete_flow(rule, htons(fm->out_port)); } } @@ -4821,7 +2372,7 @@ delete_flow_strict(struct ofproto *p, struct flow_mod *fm) { struct rule *rule = find_flow_strict(p, fm); if (rule) { - delete_flow(p, rule, htons(fm->out_port)); + delete_flow(rule, htons(fm->out_port)); } } @@ -4834,7 +2385,7 @@ delete_flow_strict(struct ofproto *p, struct flow_mod *fm) * 'out_port' is htons(OFPP_NONE) or if 'rule' actually outputs to the * specified 'out_port'. */ static void -delete_flow(struct ofproto *p, struct rule *rule, ovs_be16 out_port) +delete_flow(struct rule *rule, ovs_be16 out_port) { if (rule_is_hidden(rule)) { return; @@ -4844,8 +2395,42 @@ delete_flow(struct ofproto *p, struct rule *rule, ovs_be16 out_port) return; } - rule_send_removed(p, rule, OFPRR_DELETE); - rule_remove(p, rule); + ofproto_rule_send_removed(rule, OFPRR_DELETE); + ofproto_rule_remove(rule); +} + +static void +ofproto_rule_send_removed(struct rule *rule, uint8_t reason) +{ + struct ofputil_flow_removed fr; + + if (rule_is_hidden(rule) || !rule->send_flow_removed) { + return; + } + + fr.rule = rule->cr; + fr.cookie = rule->flow_cookie; + fr.reason = reason; + calc_flow_duration__(rule->created, &fr.duration_sec, &fr.duration_nsec); + fr.idle_timeout = rule->idle_timeout; + rule->ofproto->ofproto_class->rule_get_stats(rule, &fr.packet_count, + &fr.byte_count); + + connmgr_send_flow_removed(rule->ofproto->connmgr, &fr); +} + +/* Sends an OpenFlow "flow removed" message with the given 'reason' (either + * OFPRR_HARD_TIMEOUT or OFPRR_IDLE_TIMEOUT), and then removes 'rule' from its + * ofproto. + * + * ofproto implementation ->run() functions should use this function to expire + * OpenFlow flows. 
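+ *
+ * For example, a provider's ->run() function might expire a rule whose hard
+ * timeout has passed roughly like this (a sketch only; when and how the
+ * provider scans its rules is up to the implementation):
+ *
+ *     long long int now = time_msec();
+ *     if (rule->hard_timeout
+ *         && now > rule->created + rule->hard_timeout * 1000LL) {
+ *         ofproto_rule_expire(rule, OFPRR_HARD_TIMEOUT);
+ *     }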
*/ +void +ofproto_rule_expire(struct rule *rule, uint8_t reason) +{ + assert(reason == OFPRR_HARD_TIMEOUT || reason == OFPRR_IDLE_TIMEOUT); + ofproto_rule_send_removed(rule, reason); + ofproto_rule_remove(rule); } static int @@ -4873,12 +2458,6 @@ handle_flow_mod(struct ofconn *ofconn, const struct ofp_header *oh) return ofp_mkerr(OFPET_FLOW_MOD_FAILED, OFPFMFC_ALL_TABLES_FULL); } - error = validate_actions(fm.actions, fm.n_actions, - &fm.cr.flow, p->max_ports); - if (error) { - return error; - } - switch (fm.command) { case OFPFC_ADD: return add_flow(ofconn, &fm); @@ -5103,458 +2682,12 @@ handle_openflow(struct ofconn *ofconn, struct ofpbuf *ofp_msg) COVERAGE_INC(ofproto_recv_openflow); } -static void -handle_miss_upcall(struct ofproto *p, struct dpif_upcall *upcall) -{ - struct facet *facet; - struct flow flow; - - /* Obtain in_port and tun_id, at least. */ - odp_flow_key_to_flow(upcall->key, upcall->key_len, &flow); - - /* Set header pointers in 'flow'. */ - flow_extract(upcall->packet, flow.tun_id, flow.in_port, &flow); - - /* Handle 802.1ag and LACP. */ - if (ofproto_process_special(p, &flow, upcall->packet)) { - ofpbuf_delete(upcall->packet); - return; - } - - /* Check with in-band control to see if this packet should be sent - * to the local port regardless of the flow table. */ - if (connmgr_msg_in_hook(p->connmgr, &flow, upcall->packet)) { - ofproto_send_packet(p, ODPP_LOCAL, 0, upcall->packet); - } - - facet = facet_lookup_valid(p, &flow); - if (!facet) { - struct rule *rule = rule_lookup(p, &flow); - if (!rule) { - /* Don't send a packet-in if OFPPC_NO_PACKET_IN asserted. */ - struct ofport *port = get_port(p, flow.in_port); - if (port) { - if (port->opp.config & htonl(OFPPC_NO_PACKET_IN)) { - COVERAGE_INC(ofproto_no_packet_in); - /* XXX install 'drop' flow entry */ - ofpbuf_delete(upcall->packet); - return; - } - } else { - VLOG_WARN_RL(&rl, "packet-in on unknown port %"PRIu16, - flow.in_port); - } - - COVERAGE_INC(ofproto_packet_in); - send_packet_in(p, upcall, &flow, false); - return; - } - - facet = facet_create(p, rule, &flow, upcall->packet); - } else if (!facet->may_install) { - /* The facet is not installable, that is, we need to process every - * packet, so process the current packet's actions into 'facet'. */ - facet_make_actions(p, facet, upcall->packet); - } - - if (facet->rule->cr.priority == FAIL_OPEN_PRIORITY) { - /* - * Extra-special case for fail-open mode. - * - * We are in fail-open mode and the packet matched the fail-open rule, - * but we are connected to a controller too. We should send the packet - * up to the controller in the hope that it will try to set up a flow - * and thereby allow us to exit fail-open. - * - * See the top-level comment in fail-open.c for more information. 
- */ - send_packet_in(p, upcall, &flow, true); - } - - facet_execute(p, facet, upcall->packet); - facet_install(p, facet, false); -} - -static void -handle_upcall(struct ofproto *p, struct dpif_upcall *upcall) -{ - struct flow flow; - - switch (upcall->type) { - case DPIF_UC_ACTION: - COVERAGE_INC(ofproto_ctlr_action); - odp_flow_key_to_flow(upcall->key, upcall->key_len, &flow); - send_packet_in(p, upcall, &flow, false); - break; - - case DPIF_UC_SAMPLE: - if (p->sflow) { - odp_flow_key_to_flow(upcall->key, upcall->key_len, &flow); - ofproto_sflow_received(p->sflow, upcall, &flow); - } - ofpbuf_delete(upcall->packet); - break; - - case DPIF_UC_MISS: - handle_miss_upcall(p, upcall); - break; - - case DPIF_N_UC_TYPES: - default: - VLOG_WARN_RL(&rl, "upcall has unexpected type %"PRIu32, upcall->type); - break; - } -} - -/* Flow expiration. */ - -static int ofproto_dp_max_idle(const struct ofproto *); -static void ofproto_update_stats(struct ofproto *); -static void rule_expire(struct ofproto *, struct rule *); -static void ofproto_expire_facets(struct ofproto *, int dp_max_idle); - -/* This function is called periodically by ofproto_run(). Its job is to - * collect updates for the flows that have been installed into the datapath, - * most importantly when they last were used, and then use that information to - * expire flows that have not been used recently. - * - * Returns the number of milliseconds after which it should be called again. */ -static int -ofproto_expire(struct ofproto *ofproto) -{ - struct rule *rule, *next_rule; - struct cls_cursor cursor; - int dp_max_idle; - - /* Update stats for each flow in the datapath. */ - ofproto_update_stats(ofproto); - - /* Expire facets that have been idle too long. */ - dp_max_idle = ofproto_dp_max_idle(ofproto); - ofproto_expire_facets(ofproto, dp_max_idle); - - /* Expire OpenFlow flows whose idle_timeout or hard_timeout has passed. */ - cls_cursor_init(&cursor, &ofproto->cls, NULL); - CLS_CURSOR_FOR_EACH_SAFE (rule, next_rule, cr, &cursor) { - rule_expire(ofproto, rule); - } - - /* All outstanding data in existing flows has been accounted, so it's a - * good time to do bond rebalancing. */ - if (ofproto->has_bonded_bundles) { - struct ofbundle *bundle; - - HMAP_FOR_EACH (bundle, hmap_node, &ofproto->bundles) { - if (bundle->bond) { - bond_rebalance(bundle->bond, &ofproto->revalidate_set); - } - } - } - - return MIN(dp_max_idle, 1000); -} - -/* Update 'packet_count', 'byte_count', and 'used' members of installed facets. - * - * This function also pushes statistics updates to rules which each facet - * resubmits into. Generally these statistics will be accurate. However, if a - * facet changes the rule it resubmits into at some time in between - * ofproto_update_stats() runs, it is possible that statistics accrued to the - * old rule will be incorrectly attributed to the new rule. This could be - * avoided by calling ofproto_update_stats() whenever rules are created or - * deleted. However, the performance impact of making so many calls to the - * datapath do not justify the benefit of having perfectly accurate statistics. 
- */ -static void -ofproto_update_stats(struct ofproto *p) -{ - const struct dpif_flow_stats *stats; - struct dpif_flow_dump dump; - const struct nlattr *key; - size_t key_len; - - dpif_flow_dump_start(&dump, p->dpif); - while (dpif_flow_dump_next(&dump, &key, &key_len, NULL, NULL, &stats)) { - struct facet *facet; - struct flow flow; - - if (odp_flow_key_to_flow(key, key_len, &flow)) { - struct ds s; - - ds_init(&s); - odp_flow_key_format(key, key_len, &s); - VLOG_WARN_RL(&rl, "failed to convert ODP flow key to flow: %s", - ds_cstr(&s)); - ds_destroy(&s); - - continue; - } - facet = facet_find(p, &flow); - - if (facet && facet->installed) { - - if (stats->n_packets >= facet->dp_packet_count) { - facet->packet_count += stats->n_packets - facet->dp_packet_count; - } else { - VLOG_WARN_RL(&rl, "unexpected packet count from the datapath"); - } - - if (stats->n_bytes >= facet->dp_byte_count) { - facet->byte_count += stats->n_bytes - facet->dp_byte_count; - } else { - VLOG_WARN_RL(&rl, "unexpected byte count from datapath"); - } - - facet->dp_packet_count = stats->n_packets; - facet->dp_byte_count = stats->n_bytes; - - facet_update_time(p, facet, stats->used); - facet_account(p, facet, stats->n_bytes); - facet_push_stats(p, facet); - } else { - /* There's a flow in the datapath that we know nothing about. - * Delete it. */ - COVERAGE_INC(ofproto_unexpected_rule); - dpif_flow_del(p->dpif, key, key_len, NULL); - } - } - dpif_flow_dump_done(&dump); -} - -/* Calculates and returns the number of milliseconds of idle time after which - * facets should expire from the datapath and we should fold their statistics - * into their parent rules in userspace. */ -static int -ofproto_dp_max_idle(const struct ofproto *ofproto) -{ - /* - * Idle time histogram. - * - * Most of the time a switch has a relatively small number of facets. When - * this is the case we might as well keep statistics for all of them in - * userspace and to cache them in the kernel datapath for performance as - * well. - * - * As the number of facets increases, the memory required to maintain - * statistics about them in userspace and in the kernel becomes - * significant. However, with a large number of facets it is likely that - * only a few of them are "heavy hitters" that consume a large amount of - * bandwidth. At this point, only heavy hitters are worth caching in the - * kernel and maintaining in userspaces; other facets we can discard. - * - * The technique used to compute the idle time is to build a histogram with - * N_BUCKETS buckets whose width is BUCKET_WIDTH msecs each. Each facet - * that is installed in the kernel gets dropped in the appropriate bucket. - * After the histogram has been built, we compute the cutoff so that only - * the most-recently-used 1% of facets (but at least 1000 flows) are kept - * cached. At least the most-recently-used bucket of facets is kept, so - * actually an arbitrary number of facets can be kept in any given - * expiration run (though the next run will delete most of those unless - * they receive additional data). - * - * This requires a second pass through the facets, in addition to the pass - * made by ofproto_update_stats(), because the former function never looks - * at uninstallable facets. 
- */ - enum { BUCKET_WIDTH = ROUND_UP(100, TIME_UPDATE_INTERVAL) }; - enum { N_BUCKETS = 5000 / BUCKET_WIDTH }; - int buckets[N_BUCKETS] = { 0 }; - struct facet *facet; - int total, bucket; - long long int now; - int i; - - total = hmap_count(&ofproto->facets); - if (total <= 1000) { - return N_BUCKETS * BUCKET_WIDTH; - } - - /* Build histogram. */ - now = time_msec(); - HMAP_FOR_EACH (facet, hmap_node, &ofproto->facets) { - long long int idle = now - facet->used; - int bucket = (idle <= 0 ? 0 - : idle >= BUCKET_WIDTH * N_BUCKETS ? N_BUCKETS - 1 - : (unsigned int) idle / BUCKET_WIDTH); - buckets[bucket]++; - } - - /* Find the first bucket whose flows should be expired. */ - for (bucket = 0; bucket < N_BUCKETS; bucket++) { - if (buckets[bucket]) { - int subtotal = 0; - do { - subtotal += buckets[bucket++]; - } while (bucket < N_BUCKETS && subtotal < MAX(1000, total / 100)); - break; - } - } - - if (VLOG_IS_DBG_ENABLED()) { - struct ds s; - - ds_init(&s); - ds_put_cstr(&s, "keep"); - for (i = 0; i < N_BUCKETS; i++) { - if (i == bucket) { - ds_put_cstr(&s, ", drop"); - } - if (buckets[i]) { - ds_put_format(&s, " %d:%d", i * BUCKET_WIDTH, buckets[i]); - } - } - VLOG_INFO("%s: %s (msec:count)", ofproto->name, ds_cstr(&s)); - ds_destroy(&s); - } - - return bucket * BUCKET_WIDTH; -} - -static void -facet_active_timeout(struct ofproto *ofproto, struct facet *facet) -{ - if (ofproto->netflow && !facet_is_controller_flow(facet) && - netflow_active_timeout_expired(ofproto->netflow, &facet->nf_flow)) { - struct ofexpired expired; - - if (facet->installed) { - struct dpif_flow_stats stats; - - facet_put__(ofproto, facet, facet->actions, facet->actions_len, - &stats); - facet_update_stats(ofproto, facet, &stats); - } - - expired.flow = facet->flow; - expired.packet_count = facet->packet_count; - expired.byte_count = facet->byte_count; - expired.used = facet->used; - netflow_expire(ofproto->netflow, &facet->nf_flow, &expired); - } -} - -static void -ofproto_expire_facets(struct ofproto *ofproto, int dp_max_idle) -{ - long long int cutoff = time_msec() - dp_max_idle; - struct facet *facet, *next_facet; - - HMAP_FOR_EACH_SAFE (facet, next_facet, hmap_node, &ofproto->facets) { - facet_active_timeout(ofproto, facet); - if (facet->used < cutoff) { - facet_remove(ofproto, facet); - } - } -} - -/* If 'rule' is an OpenFlow rule, that has expired according to OpenFlow rules, - * then delete it entirely. */ -static void -rule_expire(struct ofproto *ofproto, struct rule *rule) -{ - struct facet *facet, *next_facet; - long long int now; - uint8_t reason; - - /* Has 'rule' expired? */ - now = time_msec(); - if (rule->hard_timeout - && now > rule->created + rule->hard_timeout * 1000) { - reason = OFPRR_HARD_TIMEOUT; - } else if (rule->idle_timeout && list_is_empty(&rule->facets) - && now >rule->used + rule->idle_timeout * 1000) { - reason = OFPRR_IDLE_TIMEOUT; - } else { - return; - } - - COVERAGE_INC(ofproto_expired); - - /* Update stats. (This is a no-op if the rule expired due to an idle - * timeout, because that only happens when the rule has no facets left.) */ - LIST_FOR_EACH_SAFE (facet, next_facet, list_node, &rule->facets) { - facet_remove(ofproto, facet); - } - - /* Get rid of the rule. 
*/ - if (!rule_is_hidden(rule)) { - rule_send_removed(ofproto, rule, reason); - } - rule_remove(ofproto, rule); -} - -static void -rule_send_removed(struct ofproto *p, struct rule *rule, uint8_t reason) -{ - struct ofputil_flow_removed fr; - - if (!rule->send_flow_removed) { - return; - } - - fr.rule = rule->cr; - fr.cookie = rule->flow_cookie; - fr.reason = reason; - calc_flow_duration__(rule->created, &fr.duration_sec, &fr.duration_nsec); - fr.idle_timeout = rule->idle_timeout; - fr.packet_count = rule->packet_count; - fr.byte_count = rule->byte_count; - - connmgr_send_flow_removed(p->connmgr, &fr); -} - -/* Obtains statistics for 'rule' and stores them in '*packets' and '*bytes'. - * The returned statistics include statistics for all of 'rule''s facets. */ -static void -rule_get_stats(const struct rule *rule, uint64_t *packets, uint64_t *bytes) -{ - uint64_t p, b; - struct facet *facet; - - /* Start from historical data for 'rule' itself that are no longer tracked - * in facets. This counts, for example, facets that have expired. */ - p = rule->packet_count; - b = rule->byte_count; - - /* Add any statistics that are tracked by facets. This includes - * statistical data recently updated by ofproto_update_stats() as well as - * stats for packets that were executed "by hand" via dpif_execute(). */ - LIST_FOR_EACH (facet, list_node, &rule->facets) { - p += facet->packet_count; - b += facet->byte_count; - } - - *packets = p; - *bytes = b; -} - -/* Given 'upcall', of type DPIF_UC_ACTION or DPIF_UC_MISS, sends an - * OFPT_PACKET_IN message to each OpenFlow controller as necessary according to - * their individual configurations. - * - * If 'clone' is true, the caller retains ownership of 'upcall->packet'. - * Otherwise, ownership is transferred to this function. */ -static void -send_packet_in(struct ofproto *ofproto, struct dpif_upcall *upcall, - const struct flow *flow, bool clone) -{ - struct ofputil_packet_in pin; - - pin.packet = upcall->packet; - pin.in_port = odp_port_to_ofp_port(flow->in_port); - pin.reason = upcall->type == DPIF_UC_MISS ? OFPR_NO_MATCH : OFPR_ACTION; - pin.buffer_id = 0; /* not yet known */ - pin.send_len = upcall->userdata; - connmgr_send_packet_in(ofproto->connmgr, upcall, flow, - clone ? NULL : upcall->packet); -} - static uint64_t pick_datapath_id(const struct ofproto *ofproto) { const struct ofport *port; - port = get_port(ofproto, ODPP_LOCAL); + port = ofproto_get_port(ofproto, OFPP_LOCAL); if (port) { uint8_t ea[ETH_ADDR_LEN]; int error; @@ -5577,7 +2710,9 @@ pick_fallback_dpid(void) return eth_addr_to_uint64(ea); } -static struct ofproto * +/* unixctl commands. 
*/ + +struct ofproto * ofproto_lookup(const char *name) { struct ofproto *ofproto; @@ -5606,171 +2741,6 @@ ofproto_unixctl_list(struct unixctl_conn *conn, const char *arg OVS_UNUSED, ds_destroy(&results); } -struct ofproto_trace { - struct action_xlate_ctx ctx; - struct flow flow; - struct ds *result; -}; - -static void -trace_format_rule(struct ds *result, int level, const struct rule *rule) -{ - ds_put_char_multiple(result, '\t', level); - if (!rule) { - ds_put_cstr(result, "No match\n"); - return; - } - - ds_put_format(result, "Rule: cookie=%#"PRIx64" ", - ntohll(rule->flow_cookie)); - cls_rule_format(&rule->cr, result); - ds_put_char(result, '\n'); - - ds_put_char_multiple(result, '\t', level); - ds_put_cstr(result, "OpenFlow "); - ofp_print_actions(result, (const struct ofp_action_header *) rule->actions, - rule->n_actions * sizeof *rule->actions); - ds_put_char(result, '\n'); -} - -static void -trace_format_flow(struct ds *result, int level, const char *title, - struct ofproto_trace *trace) -{ - ds_put_char_multiple(result, '\t', level); - ds_put_format(result, "%s: ", title); - if (flow_equal(&trace->ctx.flow, &trace->flow)) { - ds_put_cstr(result, "unchanged"); - } else { - flow_format(result, &trace->ctx.flow); - trace->flow = trace->ctx.flow; - } - ds_put_char(result, '\n'); -} - -static void -trace_resubmit(struct action_xlate_ctx *ctx, struct rule *rule) -{ - struct ofproto_trace *trace = CONTAINER_OF(ctx, struct ofproto_trace, ctx); - struct ds *result = trace->result; - - ds_put_char(result, '\n'); - trace_format_flow(result, ctx->recurse + 1, "Resubmitted flow", trace); - trace_format_rule(result, ctx->recurse + 1, rule); -} - -static void -ofproto_unixctl_trace(struct unixctl_conn *conn, const char *args_, - void *aux OVS_UNUSED) -{ - char *dpname, *in_port_s, *tun_id_s, *packet_s; - char *args = xstrdup(args_); - char *save_ptr = NULL; - struct ofproto *ofproto; - struct ofpbuf packet; - struct rule *rule; - struct ds result; - struct flow flow; - uint16_t in_port; - ovs_be64 tun_id; - char *s; - - ofpbuf_init(&packet, strlen(args) / 2); - ds_init(&result); - - dpname = strtok_r(args, " ", &save_ptr); - tun_id_s = strtok_r(NULL, " ", &save_ptr); - in_port_s = strtok_r(NULL, " ", &save_ptr); - packet_s = strtok_r(NULL, "", &save_ptr); /* Get entire rest of line. 
*/ - if (!dpname || !in_port_s || !packet_s) { - unixctl_command_reply(conn, 501, "Bad command syntax"); - goto exit; - } - - ofproto = ofproto_lookup(dpname); - if (!ofproto) { - unixctl_command_reply(conn, 501, "Unknown ofproto (use ofproto/list " - "for help)"); - goto exit; - } - - tun_id = htonll(strtoull(tun_id_s, NULL, 0)); - in_port = ofp_port_to_odp_port(atoi(in_port_s)); - - packet_s = ofpbuf_put_hex(&packet, packet_s, NULL); - packet_s += strspn(packet_s, " "); - if (*packet_s != '\0') { - unixctl_command_reply(conn, 501, "Trailing garbage in command"); - goto exit; - } - if (packet.size < ETH_HEADER_LEN) { - unixctl_command_reply(conn, 501, "Packet data too short for Ethernet"); - goto exit; - } - - ds_put_cstr(&result, "Packet: "); - s = ofp_packet_to_string(packet.data, packet.size, packet.size); - ds_put_cstr(&result, s); - free(s); - - flow_extract(&packet, tun_id, in_port, &flow); - ds_put_cstr(&result, "Flow: "); - flow_format(&result, &flow); - ds_put_char(&result, '\n'); - - rule = rule_lookup(ofproto, &flow); - trace_format_rule(&result, 0, rule); - if (rule) { - struct ofproto_trace trace; - struct ofpbuf *odp_actions; - - trace.result = &result; - trace.flow = flow; - action_xlate_ctx_init(&trace.ctx, ofproto, &flow, &packet); - trace.ctx.resubmit_hook = trace_resubmit; - odp_actions = xlate_actions(&trace.ctx, - rule->actions, rule->n_actions); - - ds_put_char(&result, '\n'); - trace_format_flow(&result, 0, "Final flow", &trace); - ds_put_cstr(&result, "Datapath actions: "); - format_odp_actions(&result, odp_actions->data, odp_actions->size); - ofpbuf_delete(odp_actions); - } - - unixctl_command_reply(conn, 200, ds_cstr(&result)); - -exit: - ds_destroy(&result); - ofpbuf_uninit(&packet); - free(args); -} - -static void -ofproto_unixctl_fdb_show(struct unixctl_conn *conn, - const char *args, void *aux OVS_UNUSED) -{ - struct ds ds = DS_EMPTY_INITIALIZER; - const struct ofproto *ofproto; - const struct mac_entry *e; - - ofproto = ofproto_lookup(args); - if (!ofproto) { - unixctl_command_reply(conn, 501, "no such bridge"); - return; - } - - ds_put_cstr(&ds, " port VLAN MAC Age\n"); - LIST_FOR_EACH (e, lru_node, &ofproto->ml->lrus) { - struct ofbundle *bundle = e->port.p; - ds_put_format(&ds, "%5d %4d "ETH_ADDR_FMT" %3d\n", - ofbundle_get_a_port(bundle)->odp_port, - e->vlan, ETH_ADDR_ARGS(e->mac), mac_entry_age(e)); - } - unixctl_command_reply(conn, 200, ds_cstr(&ds)); - ds_destroy(&ds); -} - static void ofproto_unixctl_init(void) { @@ -5781,6 +2751,4 @@ ofproto_unixctl_init(void) registered = true; unixctl_command_register("ofproto/list", ofproto_unixctl_list, NULL); - unixctl_command_register("ofproto/trace", ofproto_unixctl_trace, NULL); - unixctl_command_register("fdb/show", ofproto_unixctl_fdb_show, NULL); } diff --git a/ofproto/ofproto.h b/ofproto/ofproto.h index 183e58865..ed8d45b3f 100644 --- a/ofproto/ofproto.h +++ b/ofproto/ofproto.h @@ -163,7 +163,7 @@ void ofproto_set_desc(struct ofproto *, int ofproto_set_snoops(struct ofproto *, const struct sset *snoops); int ofproto_set_netflow(struct ofproto *, const struct netflow_options *nf_options); -void ofproto_set_sflow(struct ofproto *, const struct ofproto_sflow_options *); +int ofproto_set_sflow(struct ofproto *, const struct ofproto_sflow_options *); /* Configuration of ports. */ @@ -192,9 +192,9 @@ struct ofproto_bundle_settings { struct lacp_slave_settings *lacp_slaves; /* Array of n_slaves elements. 
*/ }; -void ofproto_bundle_register(struct ofproto *, void *aux, - const struct ofproto_bundle_settings *); -void ofproto_bundle_unregister(struct ofproto *, void *aux); +int ofproto_bundle_register(struct ofproto *, void *aux, + const struct ofproto_bundle_settings *); +int ofproto_bundle_unregister(struct ofproto *, void *aux); /* Configuration of mirrors. */ struct ofproto_mirror_settings { @@ -217,11 +217,11 @@ struct ofproto_mirror_settings { uint16_t out_vlan; /* Output VLAN, only if out_bundle is NULL. */ }; -void ofproto_mirror_register(struct ofproto *, void *aux, - const struct ofproto_mirror_settings *); -void ofproto_mirror_unregister(struct ofproto *, void *aux); +int ofproto_mirror_register(struct ofproto *, void *aux, + const struct ofproto_mirror_settings *); +int ofproto_mirror_unregister(struct ofproto *, void *aux); -void ofproto_set_flood_vlans(struct ofproto *, unsigned long *flood_vlans); +int ofproto_set_flood_vlans(struct ofproto *, unsigned long *flood_vlans); bool ofproto_is_mirror_output_bundle(struct ofproto *, void *aux); /* Configuration querying. */ diff --git a/ofproto/pktbuf.c b/ofproto/pktbuf.c index b8698021d..02c590cf6 100644 --- a/ofproto/pktbuf.c +++ b/ofproto/pktbuf.c @@ -1,5 +1,5 @@ /* - * Copyright (c) 2008, 2009, 2010 Nicira Networks. + * Copyright (c) 2008, 2009, 2010, 2011 Nicira Networks. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. @@ -173,6 +173,11 @@ pktbuf_retrieve(struct pktbuf *pb, uint32_t id, struct ofpbuf **bufferp, struct packet *p; int error; + if (id == UINT32_MAX) { + error = 0; + goto error; + } + if (!pb) { VLOG_WARN_RL(&rl, "attempt to send buffered packet via connection " "without buffers"); @@ -204,6 +209,7 @@ pktbuf_retrieve(struct pktbuf *pb, uint32_t id, struct ofpbuf **bufferp, "if the switch was recently in fail-open mode)", id); error = 0; } +error: *bufferp = NULL; *in_port = UINT16_MAX; return error; diff --git a/ofproto/private.h b/ofproto/private.h index f75152e2a..cadd19e85 100644 --- a/ofproto/private.h +++ b/ofproto/private.h @@ -20,6 +20,492 @@ /* Definitions for use within ofproto. */ #include "ofproto/ofproto.h" +#include "classifier.h" +#include "list.h" +#include "shash.h" +#include "timeval.h" + +/* An OpenFlow switch. + * + * With few exceptions, ofproto implementations may look at these fields but + * should not modify them. */ +struct ofproto { + const struct ofproto_class *ofproto_class; + char *type; /* Datapath type. */ + char *name; /* Datapath name. */ + struct hmap_node hmap_node; /* In global 'all_ofprotos' hmap. */ + + /* Settings. */ + uint64_t fallback_dpid; /* Datapath ID if no better choice found. */ + uint64_t datapath_id; /* Datapath ID. */ + char *mfr_desc; /* Manufacturer. */ + char *hw_desc; /* Hardware. */ + char *sw_desc; /* Software version. */ + char *serial_desc; /* Serial number. */ + char *dp_desc; /* Datapath description. */ + + /* Datapath. */ + struct netdev_monitor *netdev_monitor; + struct hmap ports; /* Contains "struct ofport"s. */ + struct shash port_by_name; + + /* Flow table. */ + struct classifier cls; /* Contains "struct rule"s. */ + + /* OpenFlow connections. */ + struct connmgr *connmgr; +}; + +struct ofproto *ofproto_lookup(const char *name); +struct ofport *ofproto_get_port(const struct ofproto *, uint16_t ofp_port); + +/* An OpenFlow port within a "struct ofproto". + * + * With few exceptions, ofproto implementations may look at these fields but + * should not modify them. 
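+ *
+ * An ofproto implementation normally embeds this structure at the start of a
+ * larger, implementation-specific structure and navigates between the two
+ * with CONTAINER_OF, e.g. (a hypothetical sketch; "my_ofport" and
+ * "hw_port_no" are illustrations, not part of this interface):
+ *
+ *     struct my_ofport {
+ *         struct ofport up;
+ *         uint16_t hw_port_no;
+ *     };
+ *
+ *     static struct my_ofport *
+ *     my_ofport_cast(const struct ofport *ofport)
+ *     {
+ *         return CONTAINER_OF(ofport, struct my_ofport, up);
+ *     }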
*/ +struct ofport { + struct ofproto *ofproto; /* The ofproto that contains this port. */ + struct hmap_node hmap_node; /* In struct ofproto's "ports" hmap. */ + struct netdev *netdev; + struct ofp_phy_port opp; + uint16_t ofp_port; /* OpenFlow port number. */ +}; + +/* An OpenFlow flow within a "struct ofproto". + * + * With few exceptions, ofproto implementations may look at these fields but + * should not modify them. */ +struct rule { + struct ofproto *ofproto; /* The ofproto that contains this rule. */ + struct cls_rule cr; /* In owning ofproto's classifier. */ + + ovs_be64 flow_cookie; /* Controller-issued identifier. */ + + long long int created; /* Creation time. */ + uint16_t idle_timeout; /* In seconds from time of last use. */ + uint16_t hard_timeout; /* In seconds from time of creation. */ + bool send_flow_removed; /* Send a flow removed message? */ + + union ofp_action *actions; /* OpenFlow actions. */ + int n_actions; /* Number of elements in actions[]. */ +}; + +static inline struct rule * +rule_from_cls_rule(const struct cls_rule *cls_rule) +{ + return cls_rule ? CONTAINER_OF(cls_rule, struct rule, cr) : NULL; +} + +struct rule *ofproto_rule_lookup(struct ofproto *, const struct flow *); +void ofproto_rule_expire(struct rule *, uint8_t reason); +void ofproto_rule_destroy(struct rule *); + +/* ofproto class structure, to be defined by each ofproto implementation. + * + * + * Data Structures + * =============== + * + * These functions work primarily with three different kinds of data + * structures: + * + * - "struct ofproto", which represents an OpenFlow switch. + * + * - "struct ofport", which represents a port within an ofproto. + * + * - "struct rule", which represents an OpenFlow flow within an ofproto. + * + * Each of these data structures contains all of the implementation-independent + * generic state for the respective concept, called the "base" state. None of + * them contains any extra space for ofproto implementations to use. Instead, + * each implementation is expected to declare its own data structure that + * contains an instance of the generic data structure plus additional + * implementation-specific members, called the "derived" state. The + * implementation can use casts or (preferably) the CONTAINER_OF macro to + * obtain access to derived state given only a pointer to the embedded generic + * data structure. + * + * + * Life Cycle + * ========== + * + * Four stylized functions accompany each of these data structures: + * + * "alloc" "construct" "destruct" "dealloc" + * ------------ ---------------- --------------- -------------- + * ofproto ->alloc ->construct ->destruct ->dealloc + * ofport ->port_alloc ->port_construct ->port_destruct ->port_dealloc + * rule ->rule_alloc ->rule_construct ->rule_destruct ->rule_dealloc + * + * Any instance of a given data structure goes through the following life + * cycle: + * + * 1. The client calls the "alloc" function to obtain raw memory. If "alloc" + * fails, skip all the other steps. + * + * 2. The client initializes all of the data structure's base state. If this + * fails, skip to step 7. + * + * 3. The client calls the "construct" function. The implementation + * initializes derived state. It may refer to the already-initialized + * base state. If "construct" fails, skip to step 6. + * + * 4. The data structure is now initialized and in use. + * + * 5. When the data structure is no longer needed, the client calls the + * "destruct" function. The implementation uninitializes derived state. 
+ * The base state has not been uninitialized yet, so the implementation + * may still refer to it. + * + * 6. The client uninitializes all of the data structure's base state. + * + * 7. The client calls the "dealloc" to free the raw memory. The + * implementation must not refer to base or derived state in the data + * structure, because it has already been uninitialized. + * + * Each "alloc" function allocates and returns a new instance of the respective + * data structure. The "alloc" function is not given any information about the + * use of the new data structure, so it cannot perform much initialization. + * Its purpose is just to ensure that the new data structure has enough room + * for base and derived state. It may return a null pointer if memory is not + * available, in which case none of the other functions is called. + * + * Each "construct" function initializes derived state in its respective data + * structure. When "construct" is called, all of the base state has already + * been initialized, so the "construct" function may refer to it. The + * "construct" function is allowed to fail, in which case the client calls the + * "dealloc" function (but not the "destruct" function). + * + * Each "destruct" function uninitializes and frees derived state in its + * respective data structure. When "destruct" is called, the base state has + * not yet been uninitialized, so the "destruct" function may refer to it. The + * "destruct" function is not allowed to fail. + * + * Each "dealloc" function frees raw memory that was allocated by the the + * "alloc" function. The memory's base and derived members might not have ever + * been initialized (but if "construct" returned successfully, then it has been + * "destruct"ed already). The "dealloc" function is not allowed to fail. + * + * + * Conventions + * =========== + * + * Most of these functions return 0 if they are successful or a positive error + * code on failure. Depending on the function, valid error codes are either + * errno values or OpenFlow error codes constructed with ofp_mkerr(). + * + * Most of these functions are expected to execute synchronously, that is, to + * block as necessary to obtain a result. Thus, these functions may return + * EAGAIN (or EWOULDBLOCK or EINPROGRESS) only where the function descriptions + * explicitly say those errors are a possibility. We may relax this + * requirement in the future if and when we encounter performance problems. */ +struct ofproto_class { +/* ## ----------------- ## */ +/* ## Factory Functions ## */ +/* ## ----------------- ## */ + + void (*enumerate_types)(struct sset *types); + int (*enumerate_names)(const char *type, struct sset *names); + int (*del)(const char *type, const char *name); + +/* ## --------------------------- ## */ +/* ## Top-Level ofproto Functions ## */ +/* ## --------------------------- ## */ + + /* Life-cycle functions for an "ofproto" (see "Life Cycle" above). + * + * ->construct() should not modify any base members of the ofproto, even + * though it may be tempting in a few cases. In particular, the client + * will initialize the ofproto's 'ports' member after construction is + * complete. An ofproto's flow table should be initially empty, so + * ->construct() should delete flows from the underlying datapath, if + * necessary, rather than populating the ofproto's 'cls'. + * + * Only one ofproto instance needs to be supported for any given datapath. 
+struct ofproto_class {
+/* ## ----------------- ## */
+/* ## Factory Functions ## */
+/* ## ----------------- ## */
+
+    void (*enumerate_types)(struct sset *types);
+    int (*enumerate_names)(const char *type, struct sset *names);
+    int (*del)(const char *type, const char *name);
+
+/* ## --------------------------- ## */
+/* ## Top-Level ofproto Functions ## */
+/* ## --------------------------- ## */
+
+    /* Life-cycle functions for an "ofproto" (see "Life Cycle" above).
+     *
+     * ->construct() should not modify any base members of the ofproto, even
+     * though it may be tempting in a few cases. In particular, the client
+     * will initialize the ofproto's 'ports' member after construction is
+     * complete. An ofproto's flow table should be initially empty, so
+     * ->construct() should delete flows from the underlying datapath, if
+     * necessary, rather than populating the ofproto's 'cls'.
+     *
+     * Only one ofproto instance needs to be supported for any given datapath.
+     * If a datapath is already open as part of one "ofproto", then another
+     * attempt to "construct" the same datapath as part of another ofproto is
+     * allowed to fail with an error. */
+    struct ofproto *(*alloc)(void);
+    int (*construct)(struct ofproto *ofproto);
+    void (*destruct)(struct ofproto *ofproto);
+    void (*dealloc)(struct ofproto *ofproto);
+
+    /* Performs any periodic activity required by 'ofproto'. It should:
+     *
+     *   - Call connmgr_send_packet_in() for each received packet that missed
+     *     in the OpenFlow flow table or that had an OFPP_CONTROLLER output
+     *     action.
+     *
+     *   - Call ofproto_rule_expire() for each OpenFlow flow that has reached
+     *     its hard_timeout or idle_timeout, to expire the flow.
+     */
+    int (*run)(struct ofproto *ofproto);
+
+    /* Causes the poll loop to wake up when 'ofproto''s 'run' function needs to
+     * be called, e.g. by calling the timer or fd waiting functions in
+     * poll-loop.h. */
+    void (*wait)(struct ofproto *ofproto);
+
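The run()/wait() pair follows the usual Open vSwitch poll-loop convention:
run() performs a bounded amount of work without blocking, and wait() registers
the events that should wake the process up so that run() is called again.
Continuing the hypothetical "foo" provider sketched earlier (the foo_device_*
calls are again invented), an implementation might look roughly like this,
using the waiting helpers from lib/poll-loop.h:

    static int
    foo_run(struct ofproto *ofproto_)
    {
        struct ofproto_foo *foo = ofproto_foo_cast(ofproto_);

        /* Nonblocking work only: e.g. pull packets that missed in the flow
         * table and hand them to connmgr_send_packet_in(), and call
         * ofproto_rule_expire() on flows whose timeouts have elapsed. */
        foo_device_run(foo->dev);
        return 0;
    }

    static void
    foo_wait(struct ofproto *ofproto_)
    {
        struct ofproto_foo *foo = ofproto_foo_cast(ofproto_);

        if (foo_device_has_pending_work(foo->dev)) {
            poll_immediate_wake();              /* Call run() again ASAP. */
        } else {
            poll_fd_wait(foo_device_fd(foo->dev), POLLIN);  /* Wake on input. */
        }
    }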
+    /* Every "struct rule" in 'ofproto' is about to be deleted, one by one.
+     * This function may prepare for that, for example by clearing state in
+     * advance. It should *not* actually delete any "struct rule"s from
+     * 'ofproto', only prepare for it.
+     *
+     * This function is optional; it's really just for optimization in case
+     * it's cheaper to delete all the flows from your hardware in a single pass
+     * than to do it one by one. */
+    void (*flush)(struct ofproto *ofproto);
+
+/* ## ---------------- ## */
+/* ## ofport Functions ## */
+/* ## ---------------- ## */
+
+    /* Life-cycle functions for a "struct ofport" (see "Life Cycle" above).
+     *
+     * ->port_construct() should not modify any base members of the ofport.
+     *
+     * ofports are managed by the base ofproto code. The ofproto
+     * implementation should only create and destroy them in response to calls
+     * to these functions. The base ofproto code will create and destroy
+     * ofports in the following situations:
+     *
+     *   - Just after the ->construct() function is called, the base ofproto
+     *     iterates over all of the implementation's ports, using
+     *     ->port_dump_start() and related functions, and constructs an ofport
+     *     for each dumped port.
+     *
+     *   - If ->port_poll() reports that a specific port has changed, then the
+     *     base ofproto will query that port with ->port_query_by_name() and
+     *     construct or destruct ofports as necessary to reflect the updated
+     *     set of ports.
+     *
+     *   - If ->port_poll() returns ENOBUFS to report an unspecified port set
+     *     change, then the base ofproto will iterate over all of the
+     *     implementation's ports, in the same way as at ofproto
+     *     initialization, and construct and destruct ofports to reflect all of
+     *     the changes.
+     */
+    struct ofport *(*port_alloc)(void);
+    int (*port_construct)(struct ofport *ofport);
+    void (*port_destruct)(struct ofport *ofport);
+    void (*port_dealloc)(struct ofport *ofport);
+
+    /* Called after 'ofport->netdev' is replaced by a new netdev object. If
+     * the ofproto implementation uses the ofport's netdev internally, then it
+     * should switch to using the new one. The old one has been closed.
+     *
+     * An ofproto implementation that doesn't need to do anything in this
+     * function may use a null pointer. */
+    void (*port_modified)(struct ofport *ofport);
+
+    /* Called after an OpenFlow OFPT_PORT_MOD request changes a port's
+     * configuration. 'ofport->opp.config' contains the new configuration.
+     * 'old_config' contains the previous configuration.
+     *
+     * The caller implements OFPPC_PORT_DOWN using netdev functions to turn
+     * NETDEV_UP on and off, so this function doesn't have to do anything for
+     * that bit (and it won't be called if that is the only bit that
+     * changes). */
+    void (*port_reconfigured)(struct ofport *ofport, ovs_be32 old_config);
+
+    /* Looks up a port named 'devname' in 'ofproto'. On success, initializes
+     * '*port' appropriately.
+     *
+     * The caller owns the data in 'port' and must free it with
+     * ofproto_port_destroy() when it is no longer needed. */
+    int (*port_query_by_name)(const struct ofproto *ofproto,
+                              const char *devname, struct ofproto_port *port);
+
+    /* Attempts to add 'netdev' as a port on 'ofproto'. If successful, sets
+     * '*ofp_portp' to the new port's port number. */
+    int (*port_add)(struct ofproto *ofproto, struct netdev *netdev,
+                    uint16_t *ofp_portp);
+
+    /* Deletes port number 'ofp_port' from the datapath for 'ofproto'. */
+    int (*port_del)(struct ofproto *ofproto, uint16_t ofp_port);
+
+    /* Attempts to begin dumping the ports in 'ofproto'. On success, returns 0
+     * and initializes '*statep' with any data needed for iteration. On
+     * failure, returns a positive errno value. */
+    int (*port_dump_start)(const struct ofproto *ofproto, void **statep);
+
+    /* Attempts to retrieve another port from 'ofproto' for 'state', which was
+     * initialized by a successful call to the 'port_dump_start' function for
+     * 'ofproto'. On success, stores a new ofproto_port into 'port' and
+     * returns 0. Returns EOF if the end of the port table has been reached,
+     * or a positive errno value on error. This function will not be called
+     * again for a given iteration once it returns nonzero (but the
+     * 'port_dump_done' function will be called afterward).
+     *
+     * The ofproto provider retains ownership of the data stored in 'port'. It
+     * must remain valid until at least the next call to 'port_dump_next' or
+     * 'port_dump_done' for 'state'. */
+    int (*port_dump_next)(const struct ofproto *ofproto, void *state,
+                          struct ofproto_port *port);
+
+    /* Releases resources from 'ofproto' for 'state', which was initialized by
+     * a successful call to the 'port_dump_start' function for 'ofproto'. */
+    int (*port_dump_done)(const struct ofproto *ofproto, void *state);
+
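port_dump_start(), port_dump_next(), and port_dump_done() together form a
begin/next/done iteration protocol. The following sketch (the dump_all_ports()
name is invented) shows how a caller is expected to drive it, following the
ownership and EOF rules documented above:

    static void
    dump_all_ports(const struct ofproto_class *class,
                   const struct ofproto *ofproto)
    {
        struct ofproto_port ofproto_port;
        void *state;
        int error;

        error = class->port_dump_start(ofproto, &state);
        if (error) {
            return;
        }
        while (!(error = class->port_dump_next(ofproto, state, &ofproto_port))) {
            /* 'ofproto_port' remains valid only until the next call to
             * port_dump_next() or port_dump_done(), so use it or copy it
             * here. */
        }
        /* 'error' is now EOF at the end of the table, else a positive errno. */
        class->port_dump_done(ofproto, state);
    }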
+    /* Polls for changes in the set of ports in 'ofproto'. If the set of ports
+     * in 'ofproto' has changed, then this function should do one of the
+     * following:
+     *
+     *   - Preferably: store the name of the device that was added to or
+     *     deleted from 'ofproto' in '*devnamep' and return 0. The caller is
+     *     responsible for freeing '*devnamep' (with free()) when it no longer
+     *     needs it.
+     *
+     *   - Alternatively: return ENOBUFS, without indicating the device that
+     *     was added or deleted.
+     *
+     * Occasional 'false positives', in which the function returns 0 while
+     * indicating a device that was not actually added or deleted or returns
+     * ENOBUFS without any change, are acceptable.
+     *
+     * The purpose of 'port_poll' is to let 'ofproto' know about changes made
+     * externally to the 'ofproto' object, e.g. by a system administrator via
+     * ovs-dpctl. Therefore, it's OK, and even preferable, for port_poll() to
+     * not report changes made through calls to 'port_add' or 'port_del' on the
+     * same 'ofproto' object. (But it's OK for it to report them too, just
+     * slightly less efficient.)
+     *
+     * If the set of ports in 'ofproto' has not changed, returns EAGAIN. May
+     * also return other positive errno values to indicate that something has
+     * gone wrong. */
+    int (*port_poll)(const struct ofproto *ofproto, char **devnamep);
+
+    /* Arranges for the poll loop to wake up when 'port_poll' will return a
+     * value other than EAGAIN. */
+    void (*port_poll_wait)(const struct ofproto *ofproto);
+
+    int (*port_is_lacp_current)(const struct ofport *port);
+
+    struct rule *(*rule_alloc)(void);
+    int (*rule_construct)(struct rule *rule);
+    void (*rule_destruct)(struct rule *rule);
+    void (*rule_dealloc)(struct rule *rule);
+
+    void (*rule_remove)(struct rule *rule);
+
+    void (*rule_get_stats)(struct rule *rule, uint64_t *packet_count,
+                           uint64_t *byte_count);
+
+    void (*rule_execute)(struct rule *rule, struct flow *flow,
+                         struct ofpbuf *packet);
+
+    int (*rule_modify_actions)(struct rule *rule,
+                               const union ofp_action *actions, size_t n);
+
+    bool (*get_drop_frags)(struct ofproto *ofproto);
+    void (*set_drop_frags)(struct ofproto *ofproto, bool drop_frags);
+
+    int (*packet_out)(struct ofproto *ofproto, struct ofpbuf *packet,
+                      const struct flow *flow,
+                      const union ofp_action *actions,
+                      size_t n_actions);
+
+    /* Configures NetFlow on 'ofproto' according to the options in
+     * 'netflow_options', or turns off NetFlow if 'netflow_options' is NULL.
+     *
+     * EOPNOTSUPP as a return value indicates that 'ofproto' does not support
+     * NetFlow, as does a null pointer. */
+    int (*set_netflow)(struct ofproto *ofproto,
+                       const struct netflow_options *netflow_options);
+
+    void (*get_netflow_ids)(const struct ofproto *ofproto,
+                            uint8_t *engine_type, uint8_t *engine_id);
+
+    /* Configures sFlow on 'ofproto' according to the options in
+     * 'sflow_options', or turns off sFlow if 'sflow_options' is NULL.
+     *
+     * EOPNOTSUPP as a return value indicates that 'ofproto' does not support
+     * sFlow, as does a null pointer. */
+    int (*set_sflow)(struct ofproto *ofproto,
+                     const struct ofproto_sflow_options *sflow_options);
+
+    /* Configures connectivity fault management on 'ofport'.
+     *
+     * If 'cfm' is nonnull, takes basic configuration from the configuration
+     * members in 'cfm', and the set of remote maintenance points from the
+     * 'n_remote_mps' elements in 'remote_mps'. Ignores the statistics members
+     * of 'cfm'.
+     *
+     * If 'cfm' is null, removes any connectivity fault management
+     * configuration from 'ofport'.
+     *
+     * EOPNOTSUPP as a return value indicates that this ofproto_class does not
+     * support CFM, as does a null pointer. */
+    int (*set_cfm)(struct ofport *ofport, const struct cfm *cfm,
+                   const uint16_t *remote_mps, size_t n_remote_mps);
+
+    /* Stores the connectivity fault management object associated with 'ofport'
+     * in '*cfmp'. Stores a null pointer in '*cfmp' if CFM is not configured
+     * on 'ofport'. The caller must not modify or destroy the returned object.
+     *
+     * This function may be NULL if this ofproto_class does not support CFM. */
+    int (*get_cfm)(struct ofport *ofport, const struct cfm **cfmp);
+
+    /* If 's' is nonnull, this function registers a "bundle" associated with
+     * client data pointer 'aux' in 'ofproto'. A bundle is the same concept as
+     * a Port in OVSDB, that is, it consists of one or more "slave" devices
+     * (Interfaces, in OVSDB) along with VLAN and LACP configuration and, if
+     * there is more than one slave, a bonding configuration. If 'aux' is
+     * already registered then this function updates its configuration to 's'.
+     * Otherwise, this function registers a new bundle.
+     *
+     * If 's' is NULL, this function unregisters the bundle registered on
+     * 'ofproto' associated with client data pointer 'aux'.
+     * If no such bundle has been registered, this has no effect.
+     *
+     * This function affects only the behavior of the NXAST_AUTOPATH action and
+     * output to the OFPP_NORMAL port. An implementation that does not support
+     * it at all may set it to NULL or return EOPNOTSUPP. An implementation
+     * that supports only a subset of the functionality should implement what
+     * it can and return 0. */
+    int (*bundle_set)(struct ofproto *ofproto, void *aux,
+                      const struct ofproto_bundle_settings *s);
+
+    /* If 'port' is part of any bundle, removes it from that bundle. If the
+     * bundle now has no ports, deletes the bundle. If the bundle now has only
+     * one port, deconfigures the bundle's bonding configuration. */
+    void (*bundle_remove)(struct ofport *ofport);
+
+    /* If 's' is nonnull, this function registers a mirror associated with
+     * client data pointer 'aux' in 'ofproto'. A mirror is the same concept as
+     * a Mirror in OVSDB. If 'aux' is already registered then this function
+     * updates its configuration to 's'. Otherwise, this function registers a
+     * new mirror.
+     *
+     * If 's' is NULL, this function unregisters the mirror registered on
+     * 'ofproto' associated with client data pointer 'aux'. If no such mirror
+     * has been registered, this has no effect.
+     *
+     * This function affects only the behavior of the OFPP_NORMAL action. An
+     * implementation that does not support it at all may set it to NULL or
+     * return EOPNOTSUPP. An implementation that supports only a subset of the
+     * functionality should implement what it can and return 0. */
+    int (*mirror_set)(struct ofproto *ofproto, void *aux,
+                      const struct ofproto_mirror_settings *s);
+
+    /* Configures the VLANs whose bits are set to 1 in 'flood_vlans' as VLANs
+     * on which all packets are flooded, instead of using MAC learning. If
+     * 'flood_vlans' is NULL, then MAC learning applies to all VLANs.
+     *
+     * This function affects only the behavior of the OFPP_NORMAL action. An
+     * implementation that does not support it may set it to NULL or return
+     * EOPNOTSUPP. */
+    int (*set_flood_vlans)(struct ofproto *ofproto,
+                           unsigned long *flood_vlans);
+
+    /* Returns true if 'aux' is a registered bundle that is currently in use as
+     * the output for a mirror. */
+    bool (*is_mirror_output_bundle)(struct ofproto *ofproto, void *aux);
+};
+
+extern const struct ofproto_class ofproto_dpif_class;
+
+int ofproto_class_register(const struct ofproto_class *);
+int ofproto_class_unregister(const struct ofproto_class *);
 
 void ofproto_add_flow(struct ofproto *, const struct cls_rule *,
                       const union ofp_action *, size_t n_actions);
diff --git a/tests/ofproto-macros.at b/tests/ofproto-macros.at
index 06f9b65f6..d5ff2ad6a 100644
--- a/tests/ofproto-macros.at
+++ b/tests/ofproto-macros.at
@@ -7,7 +7,7 @@ m4_define([OFPROTO_START],
 trap 'kill `cat ovs-openflowd.pid`' 0
 AT_CAPTURE_FILE([ovs-openflowd.log])
 AT_CHECK(
-  [ovs-openflowd --detach --pidfile --enable-dummy --log-file dummy@br0 none --datapath-id=fedcba9876543210 $1],
+  [ovs-openflowd --detach --pidfile --enable-dummy --log-file --fail=closed dummy@br0 none --datapath-id=fedcba9876543210 $1],
   [0], [ignore], [ignore])
 ])
diff --git a/tests/ofproto.at b/tests/ofproto.at
index fc7ff57ef..9587c9780 100644
--- a/tests/ofproto.at
+++ b/tests/ofproto.at
@@ -52,8 +52,8 @@ dnl Tests for a bug in which ofproto ignored tun_id in tun_id_from_cookie
 dnl flow_mod commands.
 AT_CHECK([ovs-ofctl add-flow -F tun_id_from_cookie br0 tun_id=1,actions=mod_vlan_vid:4])
 AT_CHECK([ovs-ofctl dump-flows br0 | STRIP_XIDS | STRIP_DURATION | sort], [0], [dnl
+cookie=0x0, duration=?s, table_id=0, n_packets=0, n_bytes=0, in_port=0 actions=output:1
 cookie=0x0, duration=?s, table_id=0, n_packets=0, n_bytes=0, in_port=1 actions=output:0
-cookie=0x0, duration=?s, table_id=0, n_packets=0, n_bytes=0, in_port=65534 actions=output:1
 cookie=0x100000000, duration=?s, table_id=0, n_packets=0, n_bytes=0, tun_id=0x1 actions=mod_vlan_vid:4
 NXST_FLOW reply:
 ])
diff --git a/tests/ovs-ofctl.at b/tests/ovs-ofctl.at
index 466ade6b5..ae3f70b28 100644
--- a/tests/ovs-ofctl.at
+++ b/tests/ovs-ofctl.at
@@ -41,17 +41,17 @@ normalization changed ofp_match, details:
  pre: wildcards= 0x3820f8 in_port=65534 dl_src=00:0a:e4:25:6b:b0 dl_dst=00:00:00:00:00:00 dl_vlan= 9 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
 post: wildcards= 0x3ffff8 in_port=65534 dl_src=00:0a:e4:25:6b:b0 dl_dst=00:00:00:00:00:00 dl_vlan= 9 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
 normalization changed ofp_match, details:
- pre: wildcards= 0x3820ff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
-post: wildcards= 0x3fffff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+ pre: wildcards= 0x3820ff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+post: wildcards= 0x3fffff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
 normalization changed ofp_match, details:
- pre: wildcards= 0x3820ff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
-post: wildcards= 0x3fffff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+ pre: wildcards= 0x3820ff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+post: wildcards= 0x3fffff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
 normalization changed ofp_match, details:
- pre: wildcards= 0x3820ff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
-post: wildcards= 0x3fffff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+ pre: wildcards= 0x3820ff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+post: wildcards= 0x3fffff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
 normalization changed ofp_match, details:
- pre: wildcards= 0x23820ff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
-post: wildcards= 0x23fffff in_port=65534 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+ pre: wildcards= 0x23820ff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
+post: wildcards= 0x23fffff in_port= 0 dl_src=00:00:00:00:00:00 dl_dst=00:00:00:00:00:00 dl_vlan= 0 dl_vlan_pcp= 0 dl_type= 0 nw_tos= 0 nw_proto= 0 nw_src= 0 nw_dst= 0 tp_src= 0 tp_dst= 0
 ])
 
 AT_CLEANUP
@@ -315,7 +315,7 @@ AT_CHECK([ovs-ofctl parse-nx-match < nx-match.txt], [0], [dnl
 # in port
-NXM_OF_IN_PORT(fffe)
+NXM_OF_IN_PORT(0000)
 NXM_OF_IN_PORT(fffe)
 
 # eth dst
diff --git a/tests/test-classifier.c b/tests/test-classifier.c
index 9af8aacba..bb75dba1a 100644
--- a/tests/test-classifier.c
+++ b/tests/test-classifier.c
@@ -251,7 +251,7 @@ static ovs_be32 nw_dst_values[] = { CONSTANT_HTONL(0xc0a80002),
 static ovs_be64 tun_id_values[] = { 0,
     CONSTANT_HTONLL(UINT64_C(0xfedcba9876543210)) };
-static uint16_t in_port_values[] = { 1, ODPP_LOCAL };
+static uint16_t in_port_values[] = { 1, OFPP_LOCAL };
 static ovs_be16 vlan_tci_values[] = { CONSTANT_HTONS(101), CONSTANT_HTONS(0) };
 static ovs_be16 dl_type_values[]
     = { CONSTANT_HTONS(ETH_TYPE_IP), CONSTANT_HTONS(ETH_TYPE_ARP) };
diff --git a/vswitchd/bridge.c b/vswitchd/bridge.c
index e23ee6fcf..72bb28bf7 100644
--- a/vswitchd/bridge.c
+++ b/vswitchd/bridge.c
@@ -796,7 +796,6 @@ bridge_del_ofproto_ports(struct bridge *br)
                      br->name, name, strerror(error));
         }
         if (iface) {
-            ofproto_port_unregister(br->ofproto, ofproto_port.ofp_port);
             netdev_close(iface->netdev);
             iface->netdev = NULL;
         }
@@ -2672,8 +2671,6 @@ static bool
 mirror_configure(struct mirror *m, const struct ovsrec_mirror *cfg)
 {
     struct ofproto_mirror_settings s;
-    struct port *out_port;
-    struct port *port;
 
     /* Set name. */
     if (strcmp(cfg->name, m->name)) {
@@ -2685,7 +2682,7 @@ mirror_configure(struct mirror *m, const struct ovsrec_mirror *cfg)
     /* Get output port or VLAN. */
     if (cfg->output_port) {
         s.out_bundle = port_lookup(m->bridge, cfg->output_port->name);
-        if (!out_port) {
+        if (!s.out_bundle) {
             VLOG_ERR("bridge %s: mirror %s outputs to port not on bridge",
                      m->bridge->name, m->name);
             return false;
@@ -2711,6 +2708,7 @@ mirror_configure(struct mirror *m, const struct ovsrec_mirror *cfg)
     if (cfg->select_all) {
         size_t n_ports = hmap_count(&m->bridge->ports);
         void **ports = xmalloc(n_ports * sizeof *ports);
+        struct port *port;
         size_t i;
 
         i = 0;