1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd">
6 <title>Boot Manager Technical Documentation</title>
9 <firstname>Aaron</firstname>
11 <surname>Klingaman</surname>
13 <email>alk@cs.princeton.edu</email>
17 <orgname>Princeton University</orgname>
22 <revnumber>1.0</revnumber>
24 <date>March 15, 2005</date>
26 <authorinitials>AK</authorinitials>
29 <para>Initial draft.</para>
34 <revnumber>1.1</revnumber>
36 <date>May 31, 2005</date>
38 <authorinitials>AK</authorinitials>
41 <para>Updated post implementation and deployment.</para>
48 <title>Components</title>
50 <para>The entire Boot Manager system consists of several components that
are designed to work together to provide the functionality outlined in the
52 Boot Manager PDN <citation>1</citation>. These consist of:</para>
56 <para>A set of API calls available at PlanetLab Central</para>
60 <para>A package to be run in the boot cd environment on nodes</para>
64 <para>An appropriate user interface allowing administrators to add
65 nodes and create node configuration files</para>
<para>The previous implementation of the software responsible for
installing and booting nodes consisted of a set of boot scripts that the
boot cd would download and run, depending on the node's current boot
state. Only the script necessary for the current state would be
downloaded, and the logic determining which script a node was sent
existed on the boot server in the form of PHP scripts. However, the
intention with the new Boot Manager system is to send the same boot
manager back for all nodes, in all boot states, each time the node starts.
The boot manager then runs and determines which operations to perform
on the node, based on the current boot state. All state-based logic for
the node boot, install, debug, and reconfigure operations is contained in
one place; there is no longer any boot state specific logic at PLC.</para>
84 <title>API Calls</title>
86 <para>Most of the API calls available as part of the PlanetLab Central API
87 are intended to be run by users, and thus authentication for these calls
88 is done with the user's email address and password. However, the API calls
89 described below will be run by the nodes themselves, so a new
90 authentication mechanism is required.</para>
93 <title>Authentication</title>
95 <para>As is done with other PLC API calls, the first parameter to all
96 Boot Manager related calls will be an authentication structure,
97 consisting of these named fields:</para>
101 <para>AuthMethod</para>
<para>The authentication method; only 'hmac' is currently
supported.</para>
<para>The node id, contained in the configuration file.</para>
116 <para>The node's primary IP address. This will be checked with the
117 node_id against PLC records.</para>
<para>The authentication string, which depends on the method. For the
'hmac' method, this is a hash of the call created with the HMAC
algorithm from the parameters of the call and the key contained in the
configuration file.</para>
<para>Authentication is successful if PLC is able to create the same hash
from the values using its own copy of the node key. If the hash values
do not match, then either the keys do not match or the values of the
call were modified in transmission and the node cannot be
authenticated.</para>
<para>Both the BootManager and the authentication software at PLC must
agree on a method for creating the hash values for each call. This hash
is essentially a fingerprint of the method call, and is created by the
following procedure:</para>
<para>Take the value of every part of each parameter, except the
authentication structure, and convert them to strings. For arrays,
each element is used. For dictionaries, not only are the values of
all the items used, but also the keys themselves. Embedded types
(arrays or dictionaries inside arrays or dictionaries, etc.) also
have all their values extracted.</para>
152 <para>Alphabetically sort all the parameters.</para>
156 <para>Concatenate them into a single string.</para>
160 <para>Prepend the string with the method name.</para>
<para>The resulting string is fed into the HMAC algorithm with the node
key, and the resulting hash value is used in the authentication
structure.</para>
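<para>To make this procedure concrete, below is a minimal sketch of the
hash creation in Python. The choice of SHA-1 as the HMAC digest is an
assumption made for illustration; the BootManager and PLC must agree on
the actual digest used.</para>

<programlisting>import hmac
import hashlib

def collect_values(arg, out):
    # Recursively extract every value as a string; for dictionaries,
    # the keys are extracted as well as the values.
    if isinstance(arg, (list, tuple)):
        for item in arg:
            collect_values(item, out)
    elif isinstance(arg, dict):
        for key, value in arg.items():
            collect_values(key, out)
            collect_values(value, out)
    else:
        out.append(str(arg))

def create_auth_hash(method_name, params, node_key):
    # params excludes the authentication structure itself
    values = []
    for param in params:
        collect_values(param, values)
    values.sort()                            # alphabetically sort
    message = method_name + "".join(values)  # prepend the method name
    return hmac.new(node_key.encode(), message.encode(),
                    hashlib.sha1).hexdigest()</programlisting>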
<para>This authentication method makes a number of assumptions, detailed
below.</para>
<para>All calls made to PLC are done over SSL, so the details of the
authentication structure cannot be viewed by third parties. If, in the
future, non-SSL based calls are desired, a sequence number or some
other value making each call unique would be required to
prevent replay attacks. In fact, the current use of SSL negates the
need to create and send hashes at all; technically, the key itself
could be sent directly to PLC, assuming the connection is made to an
HTTPS server with a third party signed SSL certificate.</para>
188 <title>PLC API Calls</title>
<para>Full, up-to-date technical documentation of these functions can be
found in the PlanetLab API documentation. They are listed here for
completeness.</para>
196 <para>BootUpdateNode( authentication, update_values )</para>
198 <para>Update a node record, including its boot state, primary
199 network, or ssh host key.</para>
203 <para>BootCheckAuthentication( authentication )</para>
205 <para>Simply check to see if the node is recognized by the system
206 and is authorized.</para>
210 <para>BootGetNodeDetails( authentication )</para>
212 <para>Return details about a node, including its state, what
213 networks the PLC database has configured for the node, and what the
214 model of the node is.</para>
218 <para>BootNotifyOwners( authentication, message, include_pi,
219 include_tech, include_support )</para>
221 <para>Notify someone about an event that happened on the machine,
222 and optionally include the site PIs, technical contacts, and
223 PlanetLab Support.</para>
227 <para>BootUpdateNodeHardware( authentication, pci_entries )</para>
<para>Send the set of hardware this node has, and update the record
at PLC.</para>
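<para>As an illustration, a node might invoke one of these calls over
XML-RPC as sketched below. The server URL and the field names in the
authentication structure (other than AuthMethod) are assumptions made
for the example, which reuses create_auth_hash from the sketch
above.</para>

<programlisting>import xmlrpc.client

node_key = "79efbe871722771675de604a227db8386bc6ef482a4b74"

# Hypothetical field names; only AuthMethod is specified above.
auth = {
    "AuthMethod": "hmac",
    "node_id": 121,
    "node_ip": "128.112.139.71",
    "value": create_auth_hash("BootGetNodeDetails", [], node_key),
}

# Assumed server URL, for illustration only.
server = xmlrpc.client.ServerProxy("https://www.planet-lab.org/PLCAPI/")
details = server.BootGetNodeDetails(auth)</programlisting>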
237 <title>Core Package</title>
239 <para>The Boot Manager core package, which is run on the nodes and
240 contacts the Boot API as necessary, is responsible for the following major
241 functional units:</para>
245 <para>Installing the node operating system</para>
<para>Putting a node into a debug state so administrators can track
down problems</para>
254 <para>Reconfiguring an already installed node to reflect new hardware,
255 or changed network settings</para>
259 <para>Booting an already installed node</para>
264 <title>Boot States</title>
266 <para>Each node always has one of four possible boot states.</para>
<para>Install. This boot state corresponds to a new node that has not
yet been installed, but a record of it does exist. When the boot
manager starts, and the node is in this state, the user is prompted
to continue with the installation. The intention here is to prevent
a non-PlanetLab machine (like a user's desktop machine) from being
inadvertently wiped and installed with the PlanetLab node
software.</para>
<para>Reinstall. In this state, a node will reinstall the node
software, erasing anything that might have been on the disk
before.</para>
<para>Boot. This state corresponds with nodes that have successfully
installed, and can be chain booted to the runtime node
kernel.</para>
300 <para>Debug. Regardless of whether or not a machine has been
301 installed, this state sets up a node to be debugged by
302 administrators.</para>
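<para>A minimal sketch of how the boot manager might dispatch on these
states is shown below. The state strings 'inst' and 'reinstall' appear
later in this document; 'boot' and 'dbg' are assumed names for the
remaining two states, and the helper functions are placeholders.</para>

<programlisting>def install_node():
    pass  # placeholder: partition disks, install the node software

def chain_boot_node():
    pass  # placeholder: chain boot the installed runtime kernel

def enter_debug_mode():
    pass  # placeholder: set the node up for administrator debugging

def confirm_install_on_console():
    prompt = "Install PlanetLab on this machine? (yes/no) "
    return input(prompt).strip().lower() == "yes"

def run(boot_state):
    if boot_state == "inst":
        # new node: require explicit confirmation before wiping disks
        if confirm_install_on_console():
            install_node()
    elif boot_state == "reinstall":
        install_node()     # no confirmation; erases the disk
    elif boot_state == "boot":
        chain_boot_node()  # node already installed successfully
    else:                  # "dbg" or any unrecognized state
        enter_debug_mode()</programlisting>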
308 <title>Flow Chart</title>
<para>Below is a high-level flow chart of the boot manager, from the
311 time it is executed to when it exits.</para>
314 <title>Boot Manager Flow Chart</title>
318 <imagedata align="left" fileref="boot-manager-flowchart.png"
326 <title>Boot CD Environment</title>
<para>The boot manager needs to be able to operate under all currently
supported boot cds. The new 3.0 cd contains software the current 2.x cds
do not contain, including the Logical Volume Manager (LVM) client tools,
RPM, and YUM, among other packages. Given this requirement, the boot
manager will need to download as necessary the extra support files it
needs to run. Depending on the size of these files, they may only be
downloaded by specific steps in the flow chart in figure 1, and thus are
not downloaded until they are needed.</para>
339 <title>Node Configuration Files</title>
341 <para>To remain compatible with 2.x boot cds, the format and existing
342 contents of the configuration files for the nodes will not change. There
343 will be, however, the addition of three fields:</para>
347 <para>NET_DEVICE</para>
<para>If present, use the device with the specified mac address to
contact PLC. The network on this device will be set up. If not
present, the device represented by 'eth0' will be used.</para>
355 <para>NODE_KEY</para>
357 <para>The unique, per-node key to be used during authentication and
358 identity verification. This is a fixed length, random value that is
359 only known to the node and PLC.</para>
365 <para>The PLC assigned node identifier.</para>
<para>An example of a configuration file for a dhcp networked
machine:</para>
372 <programlisting>IP_METHOD="dhcp"
373 HOST_NAME="planetlab-1"
374 DOMAIN_NAME="cs.princeton.edu"
375 NET_DEVICE="00:06:5B:EC:33:BB"
376 NODE_KEY="79efbe871722771675de604a227db8386bc6ef482a4b74"
377 NODE_ID="121"</programlisting>
379 <para>An example of a configuration file for the same machine, only with
380 a statically assigned network address:</para>
382 <programlisting>IP_METHOD="static"
383 IP_ADDRESS="128.112.139.71"
384 IP_GATEWAY="128.112.139.65"
385 IP_NETMASK="255.255.255.192"
IP_NETADDR="128.112.139.64"
387 IP_BROADCASTADDR="128.112.139.127"
388 IP_DNS1="128.112.136.10"
389 IP_DNS2="128.112.136.12"
390 HOST_NAME="planetlab-1"
391 DOMAIN_NAME="cs.princeton.edu"
392 NET_DEVICE="00:06:5B:EC:33:BB"
393 NODE_KEY="79efbe871722771675de604a227db8386bc6ef482a4b74"
394 NODE_ID="121"</programlisting>
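<para>These files use a simple KEY="value" format, so a small parser
along the lines of the following sketch is enough for the boot manager
to read them:</para>

<programlisting>def parse_node_config(path):
    # Parse KEY="value" lines into a dictionary, ignoring blank lines.
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip().strip('"')
    return config

config = parse_node_config("plnode.txt")
print(config["NODE_ID"], config["IP_METHOD"])</programlisting>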
<para>Existing 2.x boot cds will look for the configuration files only
on a floppy disk, and the file must be named 'planet.cnf'. The new 3.x
boot cds, however, will initially look for a file named 'plnode.txt' on
either a floppy disk, or burned onto the cd itself. Alternatively, it
will fall back to looking for the original file name, 'planet.cnf'. This
initial file reading is performed by the boot cd itself to bring the
node's network online, so it can download and execute the Boot
Manager.</para>
405 <para>However, the Boot Manager will also need to identify the location
406 of and read in the file, so it can get the extra fields not initially
407 used to bring the network online (node_key and node_id). Below is the
408 search order that the boot manager will use to locate a file.</para>
410 <para>Configuration file location search order:<informaltable>
414 <entry>File name</entry>
416 <entry>Floppy drive</entry>
418 <entry>Flash devices</entry>
420 <entry>CDRom, in /usr/boot</entry>
422 <entry>CDRom, in /usr</entry>
<entry>plnode.txt</entry>
438 <entry>planet.cnf</entry>
450 </informaltable></para>
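<para>One plausible reading of that search order, expressed as a Python
sketch; the mount point paths are assumptions, since the actual devices
must first be detected and mounted:</para>

<programlisting>import os

# (directory, filename) pairs, in the search order of the table above.
SEARCH_ORDER = [
    ("/mnt/floppy",         "plnode.txt"),
    ("/mnt/flash",          "plnode.txt"),
    ("/mnt/cdrom/usr/boot", "plnode.txt"),
    ("/mnt/cdrom/usr",      "plnode.txt"),
    ("/mnt/floppy",         "planet.cnf"),
]

def locate_node_config():
    # Return the first configuration file found, or None.
    for directory, filename in SEARCH_ORDER:
        path = os.path.join(directory, filename)
        if os.path.isfile(path):
            return path
    return None</programlisting>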
457 <title>User Interface for Node Management</title>
460 <title>Adding Nodes</title>
<para>New nodes are added to the system explicitly by either a PI or a
tech contact, either directly through the API calls, or by using the
appropriate interfaces on the website. As nodes are added, their
hostname, network configuration method (dhcp or static), and any static
settings are required to be entered. Regardless of the network
configuration method, an IP address is required. When the node is brought
online, the records at PLC will be updated with any remaining
information.</para>
<para>After a node is added, the user has the option of creating a
configuration file for that node. The contents of the file are created
automatically, and the user is prompted to
download and save the file. This file contains only the primary network
interface information (necessary to contact PLC), the node id, and the
node key.</para>
477 <para>The default boot state of a new node is 'inst', which requires the
478 user to confirm the installation at the node, by typing yes on the
479 console. If this is not desired, as is the case with nodes in a
co-location site, or for a large number of nodes being set up at the same
481 time, the administrator can change the node state, after the entry is in
482 the PLC records, from 'inst' to 'reinstall'. This will bypass the
483 confirmation screen, and proceed directly to reinstall the machine (even
484 if it already had a node installation on it).</para>
488 <title>Updating Node Network Settings</title>
<para>If the primary network address of the node must be updated (for
example, if the node is moved to a new network), then two steps must be
performed to successfully complete the move:</para>
496 <para>The node network will need to be updated at PLC, either
497 through the API directly or via the website.</para>
<para>The floppy file must either be regenerated and put into the
machine, or the existing floppy file updated to match the new
settings.</para>
<para>If the node IP address on the floppy does not match the record at
PLC, then the node will not boot until they do match, as authentication
will fail. The intention here is to prevent a malicious user from taking
the floppy disk, altering the network settings, and trying to bring up a
new machine with the new settings.</para>
<para>On the other hand, if a non-primary network address needs to be
updated, then simply updating the record in the configuration file will
suffice. The boot manager, at next restart, will reconfigure the
machine, and update the PLC records to match the configuration
file.</para>
520 <title>Removing Nodes</title>
522 <para>Nodes are removed from the system by:</para>
526 <para>Deleting the record of the node at PLC</para>
530 <para>Shutting down the machine.</para>
534 <para>Once this is done, even if the machine attempts to come back
535 online, it cannot be authorized with PLC and will not boot.</para>
540 <title>BootManager Configuration</title>
<para>All run-time configuration options for the BootManager exist in a
single file named 'configuration'. These values are described
below.</para>
548 <para><literal>VERSION</literal></para>
550 <para>The current BootManager version. During install, written out to
551 /etc/planetlab/install_version</para>
555 <para><literal>BOOT_API_SERVER</literal></para>
<para>The full URL of the Boot API server to contact and make API
calls against.</para>
562 <para><literal>TEMP_PATH</literal></para>
<para>A writable path on the boot cd we can use for temporary
storage.</para>
569 <para><literal>SYSIMG_PATH</literal></para>
<para>The path where we will mount the node logical volumes during any
step that requires access to the disks.</para>
576 <para><literal>NONCE_FILE</literal></para>
<para>The location of the nonce value that is sent to PLC when the
initial script to run is requested. Used only as a replacement for
node_key when one does not exist (see Backward Compatibility for more
details).</para>
585 <para><literal>PLCONF_DIR</literal></para>
<para>The path in which PlanetLab node configuration files will be
created during install</para>
592 <para><literal>SUPPORT_FILE_DIR</literal></para>
594 <para>A path on the boot server where per-step additional files may be
595 located. For example, the packages that include the tools to allow
596 older 2.x version boot cds to partition disks with LVM.</para>
600 <para><literal>ROOT_SIZE</literal></para>
602 <para>During install, this sets the size of the node root partition.
603 It must be large enough to house all the node operational software. It
does not store any user/slice files. Include a 'G' suffix in this value,
indicating gigabytes.</para>
609 <para><literal>SWAP_SIZE</literal></para>
<para>How much swap to configure the node with during install. Include
a 'G' suffix in this value, indicating gigabytes.</para>
616 <para><literal>SKIP_HARDWARE_REQUIREMENT_CHECK</literal></para>
618 <para>Whether or not to skip any of the hardware requirement checks,
619 including total disk and memory size constraints.</para>
623 <para><literal>MINIMUM_MEMORY</literal></para>
625 <para>How much memory is required by a running PlanetLab node. If a
626 machine contains less physical memory than this value, the install
627 will not proceed.</para>
631 <para><literal>MINIMUM_DISK_SIZE</literal></para>
<para>The size of the smallest disk we are willing to attempt to use
during the install, in gigabytes. Do not include any suffixes.</para>
638 <para><literal>TOTAL_MINIMUM_DISK_SIZE</literal></para>
<para>The total size of all usable disks must be at least this size, in
gigabytes. Do not include any suffixes.</para>
645 <para><literal>INSTALL_LANGS</literal></para>
<para>Which language support to install. This value is used by RPM,
and is used in writing /etc/rpm/macros before any RPMs are
installed.</para>
653 <para><literal>NUM_AUTH_FAILURES_BEFORE_DEBUG</literal></para>
<para>How many authentication failures the BootManager is willing to
accept for any set of calls, before stopping and putting the node into
debug mode.</para>
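<para>For illustration, a BootManager 'configuration' file built from
these values might look like the following. Every value shown is an
assumption, except NUM_AUTH_FAILURES_BEFORE_DEBUG, which matches the
two-failure behavior described under Common Scenarios.</para>

<programlisting>VERSION=3.0
BOOT_API_SERVER=https://www.planet-lab.org/PLCAPI/
TEMP_PATH=/tmp
SYSIMG_PATH=/tmp/mnt/sysimg
NONCE_FILE=/tmp/nonce
PLCONF_DIR=/etc/planetlab
SUPPORT_FILE_DIR=/boot/support
ROOT_SIZE=7G
SWAP_SIZE=1G
SKIP_HARDWARE_REQUIREMENT_CHECK=0
MINIMUM_MEMORY=512
MINIMUM_DISK_SIZE=17
TOTAL_MINIMUM_DISK_SIZE=40
INSTALL_LANGS=en_US
NUM_AUTH_FAILURES_BEFORE_DEBUG=2</programlisting>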
663 <title>Installer Hardware Detection</title>
<para>When a node is being installed, the Boot Manager must identify which
hardware the machine has that is applicable to a running node, and
configure the node properly so it can boot successfully post-install. The
general procedure for doing so is outlined in this section. It is
implemented in the <filename>systeminfo.py</filename> file.</para>
<para>The process for identifying which kernel modules need to be loaded
is as follows:</para>
<para>Create a lookup table of all modules, and which PCI ids
correspond to each module.</para>
<para>For each PCI device on the system, look up its module in the
table.</para>
<para>If a module is found, put it into one of two categories of
modules, either network module or scsi module, based on the PCI device
class.</para>
692 <para>For each network module, write out an 'eth<index>' entry
693 in the modprobe.conf configuration file.</para>
697 <para>For each scsi module, write out a
698 'scsi_hostadapter<index>' entry in the modprobe.conf
699 configuration file.</para>
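<para>A condensed sketch of steps 3 through 5 follows. The construction
of the lookup table and the PCI scan (steps 1 and 2) are represented
here only by their inputs, and the 'alias' lines follow the
modprobe.conf conventions of 2.6-era kernels.</para>

<programlisting>def write_modprobe_conf(pci_table, pci_devices, path):
    # pci_table maps a PCI id to a (module, device_class) pair;
    # pci_devices lists the PCI ids found on this system.
    network, scsi = [], []
    for pci_id in pci_devices:
        entry = pci_table.get(pci_id)
        if entry is None:
            continue
        module, device_class = entry
        if device_class == "network":
            network.append(module)
        elif device_class == "scsi":
            scsi.append(module)
    with open(path, "w") as f:
        for index, module in enumerate(network):
            f.write("alias eth%d %s\n" % (index, module))
        for index, module in enumerate(scsi):
            f.write("alias scsi_hostadapter%d %s\n" % (index, module))</programlisting>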
<para>This process is fairly straightforward, and is simplified by the
704 fact that we currently do not need support for USB, sound, or video
705 devices when the node is fully running. The boot cd itself uses a similar
706 process, but includes USB devices. Consult the boot cd technical
707 documentation for more information.</para>
<para>The creation of the PCI id to kernel module lookup table uses three
different sources of information, and merges them together into a single
711 table for easier lookups. With these three sources of information, a
712 fairly comprehensive lookup table can be generated for the devices that
713 PlanetLab nodes need to have configured. They include:</para>
<para>The installed <filename>/usr/share/hwdata/pcitable</filename>
file</para>
<para>Created at the time the hwdata rpm was built, this file contains
mappings of PCI ids to devices for a large number of devices. It is
not necessarily complete, and doesn't take into account the modules
that are actually available in the built PlanetLab kernel, which is a
subset of the full set available (again, PlanetLab nodes do not have a
use for sound or video drivers, which thus are not typically
built).</para>
<para>From the built kernel, the <filename>modules.pcimap</filename>
from the <filename>/lib/modules/<kernelversion>/</filename>
directory</para>
<para>This file is generated at the time the kernel is installed, and
pulls the PCI ids out of each module, listing the devices each module
supports. Not all modules list every device they support, and some
contain wild cards (that match any device of a single
manufacturer).</para>
<para>From the built kernel, the <filename>modules.dep</filename> from
the <filename>/lib/modules/<kernelversion>/</filename>
directory</para>
746 <para>This file is also generated at the time the kernel is installed,
747 but lists the dependencies between various modules. It is used to
748 generate a list of modules that are actually available.</para>
752 <para>It should be noted here that SATA (Serial ATA) devices have been
753 known to exist with both a PCI SCSI device class, and with a PCI IDE
device class. Under Linux 2.6 kernels, all SATA modules need to be listed
755 in modprobe.conf under 'scsi_hostadapter' lines. This case is handled in
756 the hardware loading scripts by making the assumption that if an IDE
757 device matches a loadable module, it should be put in the modprobe.conf
758 file, as 'real' IDE drivers are all currently built into the kernel, and
759 do not need to be loaded. SATA devices that have a PCI SCSI device class
760 are easily identified.</para>
<para>It is essential that the modprobe.conf configuration file contain
the correct drivers for the disks on the system, if they are present, as
the initrd (initial ramdisk) created during kernel installation, which
is responsible for booting the system, uses this file to identify which
drivers to include in it. A failure to do this typically results in
a kernel panic at boot with a 'no init found' message.</para>
771 <title>Backward Compatibility</title>
<para>Given the large number of nodes in PlanetLab, and the lack of direct
physical access to them, the process of updating all configuration files
to include the new node id and node key will take a fairly significant
amount of time. Rather than delay deployment of the Boot Manager until all
machines are updated, alternative methods for acquiring these values are
used for existing nodes.</para>
<para>First, the node id. For any machine already part of PlanetLab, there
exists a record of its IP address and MAC address in PlanetLab Central. To
get the node_id value, if it is not located in the configuration file, the
BootManager uses a standard HTTP POST request to a known php page on the
boot server, sending the IP and MAC address of the node. This php page
queries the PLC database, and returns a node_id if the node is part of
PlanetLab, -1 otherwise.</para>
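<para>A sketch of this fallback lookup is shown below; the page name
'getnodeid.php' and the form field names are hypothetical:</para>

<programlisting>import urllib.parse
import urllib.request

def lookup_node_id(boot_server, ip_address, mac_address):
    # POST the node's IP and MAC address; the page returns the
    # node_id, or -1 if the node is not part of PlanetLab.
    data = urllib.parse.urlencode({"ip": ip_address,
                                   "mac": mac_address}).encode()
    url = "https://" + boot_server + "/getnodeid.php"
    with urllib.request.urlopen(url, data) as response:
        return int(response.read())</programlisting>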
788 <para>Second, the node key. All Boot CDs currently in use, at the time
789 they request a script from PLC to run, send in the request a randomly
790 generated value called a boot_nonce, usually 32 bytes or larger. During
791 normal BootManager operation, this value is ignored. However, in the
absence of a node key, we can use this value. Although it is not as secure
793 as a typical node key (because it is not distributed through external
794 mechanisms, but is generated by the node itself), it can be used if we
795 validate that the IP address of the node making the request matches the
796 PLC record. This means that nodes behind firewalls can no longer be
797 allowed in this situation.</para>
801 <title>Common Scenarios</title>
<para>Below are common scenarios that the boot manager might encounter
that fall outside the documented procedures for handling nodes.
A full description of how they will be handled follows each.</para>
<para>A configuration file from a previously installed and functioning
node is copied or moved to another machine, and the network settings
are updated on it (but the key and node_id are left the same).</para>
<para>Since the authentication for a node consists of matching not
only the node id, but also the primary node IP, this step will fail, and
the node will not allow the boot manager to be run. Instead, the new
node must be created at PLC first, and a network configuration file
for it must be generated, with its own node key.</para>
<para>After a node is installed and running, the administrators
mistakenly remove the cd and media containing the configuration
file.</para>
<para>The node installer clears all boot records from the disk, so the
node will not boot. Typically, the bios will report no operating
system found.</para>
831 <para>A new network configuration file is generated on the website,
832 but is not put on the node.</para>
<para>Creating a new network configuration file through the PLC
interfaces will generate a new node key, effectively invalidating the
old configuration file (still in use by the machine). The next time
the node reboots and attempts to authenticate with PLC, it will
fail. After two consecutive authentication failures, the node will
automatically put itself into debug mode. In this case, regardless of
which API function failed to authenticate, the
software at PLC will automatically notify the PlanetLab
administrators, as well as the contacts at the site, if the node was
able to be identified (usually through its IP address or node_id, by
searching PLC records).</para>
853 <title>The PlanetLab Boot Manager</title>
855 <date>January 14, 2005</date>
858 <firstname>Aaron</firstname>
860 <surname>Klingaman</surname>