- <para>The previous implementation of the software responsible for
- installing and booting nodes consisted of a set of boot scripts that the
- boot cd would run, depending on the node's current boot state. The logic
- behind which script the node was sent to the node existed on the boot
- server in the form of PHP scripts. However, the intention with the new
- Boot Manager system is to send the same boot manager back for all nodes,
- in all boot states, each time the node starts. Then, the boot manager will
- run and detiremine which operations to perform on the node, based on the
- current boot state. There is no longer any boot state specific logic at
- PLC.</para>
+ <para>The intention with the BootManager system is to send the same script
+ to all nodes (consisting of the core BootManager code), each time the node
+ starts. Then, the BootManager will run and determine which operations to
+ perform on the node, based on its state of installation. All state-based
+ logic for the node boot, install, debug, and reconfigure operations is
+ contained in one place; there is no boot state specific logic located on
+ the MA servers.</para>
+ </section>
+
+ <section>
+ <title>Source Code</title>
+
+ <para>All BootManager source code is located in the repository
+ 'bootmanager' on the PlanetLab CVS system. For information on how to
+ access CVS, consult the PlanetLab website. Unless otherwise noted, all
+ file references refer to this repository.</para>
+ </section>
+
+ <section>
+ <title>Management Authority Node Fields</title>
+
+ <para>The following MA database fields are directly applicable to the
+ BootManager operation, and to the node-related API calls (detailed
+ below).</para>
+
+ <section>
+ <title>node_id</title>
+
+ <para>An integer unique identifier for a specific node.</para>
+ </section>
+
+ <section>
+ <title>node_key</title>
+
+ <para>This is a per-node, unique value that forms the basis of the node
+ authentication mechanism detailed below. When a new node record is added
+ to the MA by a principal, it is automatically assigned a new, random
+ key, which is distributed to the node out of band. This shared secret is
+ then used for node authentication. The contents of node_key are
+ generated using this command:</para>
+
+ <para><programlisting>openssl rand -base64 32</programlisting></para>
+
+ <para>Any = (equals) characters are removed from the string.</para>
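+ <para>As an illustrative sketch (assuming a POSIX shell with the openssl
+ and tr utilities available), the full generation, including the removal
+ of = characters, can be expressed as a single pipeline:</para>

```shell
# Generate 32 random bytes, base64-encode them, and strip any
# '=' padding characters, as described above. The resulting
# string is suitable for use as a node_key value.
openssl rand -base64 32 | tr -d '='
```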
+ </section>
+
+ <section>
+ <title>boot_state</title>
+
+ <para>Each node always has one of four possible boot states, stored as a
+ string, referred to as boot_state. These are:</para>
+
+ <orderedlist>
+ <listitem>
+ <para>'inst'</para>
+
+ <para>Install. This boot state corresponds to a new node that has not
+ yet been installed, but a record of it does exist. When the
+ BootManager starts and the node is in this state, the user is
+ prompted to continue with the installation. The intention here is to
+ prevent a non-PlanetLab machine (such as a user's desktop machine) from
+ being inadvertently wiped and installed with the PlanetLab node
+ software. This is the default state for new nodes.</para>
+ </listitem>
+
+ <listitem>
+ <para>'rins'</para>
+
+ <para>Reinstall. In this state, a node will reinstall the node
+ software, erasing anything that might have been on the disk
+ before.</para>
+ </listitem>
+
+ <listitem>
+ <para>'boot'</para>
+
+ <para>Boot to bring a node online. This state corresponds to nodes
+ that have successfully installed, and can be chain booted to the
+ runtime node kernel.</para>
+ </listitem>
+
+ <listitem>
+ <para>'dbg'</para>
+
+ <para>Debug. Regardless of whether or not a machine has been
+ installed, this state sets up a node to be debugged by
+ administrators. In debug mode, no node software is running, and the
+ node can be accessed remotely by administrators.</para>
+ </listitem>
+ </orderedlist>
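+ <para>As a hypothetical sketch only (the BootManager itself is not a
+ shell script, and the function and action strings below are invented for
+ illustration), the dispatch on these four boot_state values can be
+ pictured as a simple case statement:</para>

```shell
# Hypothetical dispatch over the four boot_state values described above.
# The action descriptions are illustrative, not BootManager output.
dispatch_boot_state() {
    case "$1" in
        inst) echo "prompt user, then install" ;;
        rins) echo "wipe disk and reinstall" ;;
        boot) echo "chain boot the runtime node kernel" ;;
        dbg)  echo "enter debug mode for administrators" ;;
        *)    echo "unknown boot_state: $1" >&2; return 1 ;;
    esac
}
```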
+ </section>