--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
+"@DOCBOOK-43@" [
+ <!ENTITY Variables SYSTEM "plc_variables.xml">
+]>
+<article>
+ <articleinfo>
+ <title>MyPLC User's Guide</title>
+
+ <authorgroup>
+    <author> <firstname>Mark</firstname> <surname>Huang</surname> </author>
+    <author> <firstname>Thierry</firstname> <surname>Parmentelat</surname> </author>
+ </authorgroup>
+
+ <affiliation>
+ <orgname>Princeton University</orgname>
+ </affiliation>
+
+ <abstract>
+ <para>This document describes the design, installation, and
+ administration of MyPLC, a complete PlanetLab Central (PLC)
+ portable installation. This document assumes advanced
+ knowledge of the PlanetLab architecture and Linux system
+ administration.</para>
+ </abstract>
+
+ <revhistory>
+ <revision>
+ <revnumber>1.0</revnumber>
+ <date>April 7, 2006</date>
+ <authorinitials>MLH</authorinitials>
+ <revdescription><para>Initial draft.</para></revdescription>
+ </revision>
+ <revision>
+ <revnumber>1.1</revnumber>
+ <date>July 19, 2006</date>
+ <authorinitials>MLH</authorinitials>
+ <revdescription><para>Add development environment.</para></revdescription>
+ </revision>
+ <revision>
+ <revnumber>1.2</revnumber>
+ <date>August 18, 2006</date>
+ <authorinitials>TPT</authorinitials>
+ <revdescription>
+ <para>Review section on configuration and introduce <command>plc-config-tty</command>.</para>
+ <para>Present implementation details last.</para>
+ </revdescription>
+ </revision>
+ <revision>
+ <revnumber>1.3</revnumber>
+ <date>May 9, 2008</date>
+ <authorinitials>TPT</authorinitials>
+ <revdescription>
+ <para>
+	  Review for 4.2: focus on the new packaging <emphasis>myplc-native</emphasis>.
+ </para>
+ <para>
+ Removed deprecated <emphasis>myplc-devel</emphasis>.
+ </para>
+ </revdescription>
+ </revision>
+ </revhistory>
+ </articleinfo>
+
+ <section>
+ <title>Overview</title>
+
+ <para>MyPLC is a complete PlanetLab Central (PLC) portable
+ installation. The default installation consists of a web server, an
+ XML-RPC API server, a boot server, and a database server: the core
+ components of PLC. The installation is customized through an
+ easy-to-use graphical interface. All PLC services are started up
+ and shut down through a single script installed on the host
+ system.</para>
+
+ <figure id="Architecture">
+ <title>MyPLC architecture</title>
+ <mediaobject>
+ <imageobject>
+ <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
+ </imageobject>
+ <imageobject>
+ <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
+ </imageobject>
+ <textobject>
+ <phrase>MyPLC architecture</phrase>
+ </textobject>
+ </mediaobject>
+ </figure>
+ </section>
+
+
+ <section> <title> Historical Notes</title>
+
+ <para> This document focuses on the new packaging named
+ <emphasis>myplc-native</emphasis> as introduced in the 4.2 release
+ of PlanetLab. </para>
+
+  <para> The former chroot-based packaging, known simply as
+  <emphasis>myplc</emphasis>, may still be present in this release,
+  but its use is no longer recommended. </para>
+
+ <para> With 4.2, the general architecture of the build system has
+ drastically changed as well. Rather than providing a static chroot
+ image for building the software, that was formerly known as
+ <emphasis>myplc-devel</emphasis>, the current paradigm is to
+ create a fresh vserver and to rely on yum to install all needed
+ development tools. More details on how to set up such an
+ environment can be found at <ulink
+ url="http://svn.planet-lab.org/wiki/VserverCentos" /> that
+ describes how to turn a CentOS5 box into a vserver-capable host
+ system.
+ </para>
+
+ </section>
+
+ <section id="Requirements"> <title> Requirements </title>
+
+  <para> The recommended way to deploy MyPLC relies on
+  <emphasis>vserver</emphasis>. Here again, please refer to <ulink
+  url="http://svn.planet-lab.org/wiki/VserverCentos" /> for how to set
+  up such an environment. As of PlanetLab 4.2, the recommended Linux
+  distribution for the host is CentOS5, because publicly available
+  resources allow a smooth setup. </para>
+
+  <para> As of PlanetLab 4.2, the current focus is on Fedora 8. This
+  means that you should create a fresh Fedora 8 vserver on your
+  vserver-capable CentOS box, and perform all subsequent installations
+  from that vserver as described below. Although you might find builds
+  for other Linux distributions, new users are advised to use this
+  particular variant.
+  </para>
+
+ <para> It is also possible to perform these installations from a
+ fresh Fedora 8 installation. However, having a vserver-capable box
+ instead provides much more flexibility and is thus recommended, in
+ particular in terms of future upgrades of the system. </para>
+
+  <para> In addition, there have been numerous reports that SELinux
+  must be turned off to run MyPLC, at least in former releases. This
+  is already part of the instructions for setting up vserver, but
+  please keep it in mind if you plan on running MyPLC on a dedicated
+  Fedora 8 box.</para>
+
+  <para> Last, check your firewall configuration: the
+  <emphasis>http</emphasis> and <emphasis>https</emphasis> ports must
+  be open so that the installation accepts connections from the
+  managed nodes and from the users' desktops, and you will probably
+  want <emphasis>ssh</emphasis> open as well. </para>
+
+ </section>
+
+ <section id="Installation">
+ <title>Installing and using MyPLC</title>
+
+ <section>
+    <title>Locating a build</title>
+ <para>The following locations are entry points for locating the
+ build you plan on using.</para>
+ <itemizedlist>
+ <listitem> <para> <ulink url="http://build.planet-lab.org/" />
+ is maintained by the PlanetLab team at Princeton
+ University. </para>
+ </listitem>
+ <listitem> <para> <ulink url="http://build.one-lab.org/" /> is
+ maintained by the OneLab team at INRIA. </para>
+ </listitem>
+ </itemizedlist>
+ <para> There are currently two so-called PlanetLab
+ Distributions known as <emphasis>planetlab</emphasis> and
+ <emphasis>onelab</emphasis>. planet-lab.org generally builds
+ only the <emphasis>planetlab</emphasis> flavour, while both
+ flavours are generally available at one-lab.org.</para>
+ </section>
+
+ <section>
+ <title> Note on IP addressing</title>
+    <para> Once you have located the build you want to use, it is
+    strongly advised to assign the vserver a unique IP address rather
+    than sharing the hosting box's address. To that end, the typical
+    command for creating such a vserver is:</para>
+
+ <example><title>Creating the vserver</title>
+ <programlisting><![CDATA[# vtest-init-vserver.sh -p linux32 myvserver \
+ http://build.one-lab.org/4.2/planetlab-4.2-rc2.1lab-f8-32/RPMS \
+ -- --netdev eth0 --interface 138.96.250.134 --hostname myvserver.inria.fr]]>
+ </programlisting> </example>
+
+ <para>
+ In this example, we have chosen to use a planetlab flavour,
+ based on rc2.1lab, for i386 (this is what the final 32 stands
+ for).
+ </para>
+
+ </section>
+
+ <section>
+ <title>Setting up yum </title>
+
+ <para> If you do not use the convenience script mentioned above, you need to
+ create an entry in your yum configuration:</para>
+
+ <example><title>Setting up yum repository </title>
+ <programlisting>
+ <![CDATA[[myvserver] # cd /etc/yum.repos.d
+[myvserver] # cat > myplc.repo
+[myplc]
+name=MyPLC
+baseurl=http://build.one-lab.org/4.2/planetlab-4.2-rc2.1lab-f8-32/RPMS
+enabled=1
+gpgcheck=0
+^D
+[myvserver] #]]>
+ </programlisting>
+ </example>
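+
+  <para> You can then check that <command>yum</command> sees the new
+  repository and the <command>myplc-native</command> package before
+  installing anything: </para>
+
+  <example><title>Checking the repository</title>
+  <programlisting><![CDATA[[myvserver] # yum repolist
+[myvserver] # yum list available myplc-native]]>
+  </programlisting>
+  </example>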
+
+ </section>
+
+ <section>
+    <title>Installing MyPLC</title>
+
+ <para>
+ To actually install myplc at that stage, just run:
+ </para>
+
+ <example><title>Installing MyPLC </title>
+ <programlisting><![CDATA[[myvserver] # yum -y install myplc-native]]>
+ </programlisting>
+ </example>
+
+    <para> <xref linkend="FilesInvolvedRuntime" /> below explains the
+    installation strategy in detail, together with the various files
+    and directories involved.</para>
+
+ </section>
+
+ <section id="QuickStart"> <title> QuickStart </title>
+
+ <para> On a Red Hat or Fedora host system, it is customary to use
+ the <command>service</command> command to invoke System V init
+ scripts. As the examples suggest, the service must be started as root:</para>
+
+ <example><title>Starting MyPLC:</title>
+ <programlisting><![CDATA[[myvserver] # service plc start]]></programlisting>
+ </example>
+ <example><title>Stopping MyPLC:</title>
+ <programlisting><![CDATA[[myvserver] # service plc stop]]></programlisting>
+ </example>
+
+    <para> In <xref linkend="StartupSequence" />, we provide more
+    details that might be helpful if the service does not start
+    correctly.</para>
+
+ <para>Like all other registered System V init services, MyPLC is
+ started and shut down automatically when your host system boots
+ and powers off. You may disable automatic startup by invoking the
+ <command>chkconfig</command> command on a Red Hat or Fedora host
+ system:</para>
+
+ <example> <title>Disabling automatic startup of MyPLC.</title>
+ <programlisting><![CDATA[# chkconfig plc off]]></programlisting></example>
+ <example> <title>Re-enabling automatic startup of MyPLC.</title>
+ <programlisting><![CDATA[# chkconfig plc on]]></programlisting></example>
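+
+    <para> You can verify in which runlevels the service is
+    registered with the same tool: </para>
+
+    <example> <title>Checking the registration of the plc service.</title>
+    <programlisting><![CDATA[# chkconfig --list plc]]></programlisting></example>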
+
+ </section>
+
+ <section id="Configuration">
+ <title>Changing the configuration</title>
+
+ <para>After verifying that MyPLC is working correctly, shut it
+ down and begin changing some of the default variable
+ values. Shut down MyPLC with <command>service plc stop</command>
+ (see <xref linkend="QuickStart" />). </para>
+
+ <para> The preferred option for changing the configuration is to
+ use the <command>plc-config-tty</command> tool. The
+ full set of applicable variables is described in <xref
+   linkend="VariablesRuntime"/>, but the <command>u</command> command
+   guides you through the most useful ones.
+   Here is a sample session:
+ </para>
+
+ <example><title>Using plc-config-tty for configuration:</title>
+    <programlisting><![CDATA[[myvserver] # plc-config-tty
+Enter command (u for usual changes, w to save, ? for help) u
+== PLC_NAME : [PlanetLab Test] OneLab
+== PLC_SLICE_PREFIX : [pl] thone
+== PLC_ROOT_USER : [root@localhost.localdomain] root@onelab-plc.inria.fr
+== PLC_ROOT_PASSWORD : [root] plain-passwd
+== PLC_MAIL_ENABLED : [false] true
+== PLC_MAIL_SUPPORT_ADDRESS : [root+support@localhost.localdomain] support@one-lab.org
+== PLC_BOOT_HOST : [localhost.localdomain] onelab-plc.inria.fr
+== PLC_NET_DNS1 : [127.0.0.1] 138.96.250.248
+== PLC_NET_DNS2 : [None] 138.96.250.249
+Enter command (u for usual changes, w to save, ? for help) w
+Wrote /etc/planetlab/configs/site.xml
+Merged
+ /etc/planetlab/default_config.xml
+and /etc/planetlab/configs/site.xml
+into /etc/planetlab/plc_config.xml
+You might want to type 'r' (restart plc) or 'q' (quit)
+Enter command (u for usual changes, w to save, ? for help) r
+==================== Stopping plc
+...
+==================== Starting plc
+...
+Enter command (u for usual changes, w to save, ? for help) q
+[myvserver] # ]]></programlisting>
+ </example>
+
+ <para>The variables that you should change immediately are:</para>
+
+ <itemizedlist>
+ <listitem><para>
+ <envar>PLC_NAME</envar>: Change this to the name of your PLC installation.
+ </para></listitem>
+ <listitem><para>
+ <envar>PLC_SLICE_PREFIX</envar>: Pick some
+ reasonable, short value; <emphasis> this is especially crucial if you
+ plan on federating with other PLCs</emphasis>.
+ </para></listitem>
+ <listitem><para>
+ <envar>PLC_ROOT_PASSWORD</envar>: Change this to a more
+ secure password.
+ </para></listitem>
+ <listitem><para>
+ <envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
+ Change this to the e-mail address at which you would like to
+ receive support requests.
+ </para></listitem>
+ <listitem><para>
+ <envar>PLC_DB_HOST</envar>, <envar>PLC_API_HOST</envar>,
+      <envar>PLC_WWW_HOST</envar>, <envar>PLC_BOOT_HOST</envar>:
+      Change all of these to the preferred FQDN of your
+      host system. The corresponding <envar>*_IP</envar> values can
+ be safely ignored if the FQDN can be resolved through DNS.
+ </para></listitem>
+ </itemizedlist>
+
+    <para> After changing these variables, make sure that you save
+    (w) and restart your plc (r), as shown in the above example.
+    You should notice that the password of the default administrator
+    account is no longer <literal>root</literal>, and that the
+    default site name includes the name of your PLC installation
+    instead of PlanetLab. As a side effect of these changes, the ISO
+    images for the boot CDs now have new names, so you can
+    freely remove the ones named after 'PlanetLab Test', which is
+    the default value of <envar>PLC_NAME</envar>. </para>
+
+    <para>If you used the above method for configuring, you
+    can skip to <xref linkend="LoginRealUser" />. As an alternative
+    to using <command>plc-config-tty</command>, you may also use a
+    text editor, but this requires some understanding of how the
+    configuration files are used within myplc. The
+    <emphasis>default</emphasis> configuration is stored in a file
+    named <filename>/etc/planetlab/default_config.xml</filename>,
+    which is designed to remain intact. You may store your local
+    changes in a file named
+    <filename>/etc/planetlab/configs/site.xml</filename>, which gets
+    loaded on top of the defaults. The resulting complete
+    configuration is stored in the file
+    <filename>/etc/planetlab/plc_config.xml</filename>, which is used
+    as the reference. If you use this strategy, be sure to issue the
+    following command to refresh this file:</para>
+
+    <example><title> Refreshing <filename>plc_config.xml</filename>
+    after a manual change in <filename>site.xml</filename> </title>
+ <programlisting><![CDATA[[myvserver] # service plc reload]]></programlisting>
+ </example>
+
+    <para>The default configuration file is a self-documenting
+ configuration file written in XML. Variables are divided into
+ categories. Variable identifiers must be alphanumeric, plus
+ underscore. A variable is referred to canonically as the
+ uppercase concatenation of its category identifier, an
+ underscore, and its variable identifier. Thus, a variable with
+ an <literal>id</literal> of <literal>slice_prefix</literal> in
+ the <literal>plc</literal> category is referred to canonically
+ as <envar>PLC_SLICE_PREFIX</envar>.</para>
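+
+    <para> As an illustration, here is what a minimal
+    <filename>site.xml</filename> overriding
+    <envar>PLC_SLICE_PREFIX</envar> could look like. This fragment is
+    only a sketch of the category/variable layout described above;
+    refer to <filename>default_config.xml</filename> for the exact
+    schema used by your release: </para>
+
+    <example><title>A minimal site.xml fragment (sketch)</title>
+    <programlisting><![CDATA[<?xml version="1.0" encoding="utf-8"?>
+<configuration>
+  <variables>
+    <category id="plc">
+      <variablelist>
+        <variable id="slice_prefix">
+          <value>thone</value>
+        </variable>
+      </variablelist>
+    </category>
+  </variables>
+</configuration>]]>
+    </programlisting>
+    </example>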
+
+ <para>The reason for this convention is that during MyPLC
+ startup, <filename>plc_config.xml</filename> is translated into
+ several different languages—shell, PHP, and
+ Python—so that scripts written in each of these languages
+ can refer to the same underlying configuration. Most MyPLC
+ scripts are written in shell, so the convention for shell
+ variables predominates.</para>
+
+ </section>
+
+ <section id="LoginRealUser"> <title> Login as a real user </title>
+
+    <para>Now that myplc is up and running, you can connect to the
+    web site, which by default runs on port 80. You can either
+ directly use the default administrator user that you configured
+ in <envar>PLC_ROOT_USER</envar> and
+ <envar>PLC_ROOT_PASSWORD</envar>, or create a real user through
+ the 'Joining' tab. Do not forget to select both PI and tech
+ roles, and to select the only site created at this stage.
+ Login as the administrator to enable this user, then login as
+ the real user.</para>
+ </section>
+
+ <section>
+ <title>Installing nodes</title>
+
+ <para>Install your first node by clicking <literal>Add
+ Node</literal> under the <literal>Nodes</literal> tab. Fill in
+ all the appropriate details, then click
+    <literal>Add</literal>. Then download the node's boot material;
+    please refer to <xref linkend="TechsGuide" /> for more details
+    about this stage.</para>
+
+    <para>Please keep in mind that this boot medium is customized
+    for your particular installation, and contains details such as
+    the host that you configured as <envar>PLC_BOOT_HOST</envar>, or
+    the SSL certificate of your boot server, which might expire. So
+    changes in your configuration may require you to replace all
+    your boot CDs.</para>
+
+ </section>
+
+ <section>
+ <title>Administering nodes</title>
+
+ <para>You may administer nodes as <literal>root</literal> by
+ using the SSH key stored in
+ <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
+
+ <example>
+ <title>Accessing nodes via SSH. Replace
+ <literal>node</literal> with the hostname of the node.</title>
+
+ <programlisting>[myvserver] # ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
+ </example>
+
+ <para>From the node's root context, besides the standard Linux
+ log files located in <filename>/var/log</filename>, several
+ other files can give you clues about any problems with active
+ processes:</para>
+
+ <itemizedlist>
+ <listitem><para><filename>/var/log/nm</filename>: The log
+ file for the Node Manager.</para></listitem>
+
+      <listitem><para><filename>/vservers/slicename/var/log/nm</filename>:
+      The log file for the Node Manager operations performed
+      within the slice's vserver.</para></listitem>
+
+ </itemizedlist>
+ </section>
+
+ <section>
+ <title>Creating a slice</title>
+
+ <para>Create a slice by clicking <literal>Create Slice</literal>
+ under the <literal>Slices</literal> tab. Fill in all the
+ appropriate details, then click <literal>Create</literal>. Add
+ nodes to the slice by clicking <literal>Manage Nodes</literal>
+ on the <command>Slice Details</command> page for
+ the slice.</para>
+
+    <para>
+    Slice creation is performed by the Node Manager. In some
+    cases you may wish to restart it manually; here is how to do
+    so:
+    </para>
+
+ <example>
+ <title>Forcing slice creation on a node.</title>
+
+ <programlisting><![CDATA[[myvserver] # ssh -i /etc/planetlab/root_ssh_key.rsa root@node service nm restart]]>
+ </programlisting>
+ </example>
+ </section>
+
+ <section id="StartupSequence">
+ <title>Understanding the startup sequence</title>
+
+    <para>During the service startup described in <xref
+    linkend="QuickStart" />, observe the output of this command for
+    any failures. If no failures occur, you should see output similar
+    to the following. <emphasis>Please note that as of this writing, with 4.2
+    the system logger step might fail; this is harmless.</emphasis></para>
+
+ <example>
+ <title>A successful MyPLC startup.</title>
+
+ <programlisting><![CDATA[
+PLC: Generating network files: [ OK ]
+PLC: Starting system logger: [ OK ]
+PLC: Starting database server: [ OK ]
+PLC: Generating SSL certificates: [ OK ]
+PLC: Configuring the API: [ OK ]
+PLC: Updating GPG keys: [ OK ]
+PLC: Generating SSH keys: [ OK ]
+PLC: Starting web server: [ OK ]
+PLC: Bootstrapping the database: [ OK ]
+PLC: Starting DNS server: [ OK ]
+PLC: Starting crond: [ OK ]
+PLC: Rebuilding Boot CD: [ OK ]
+PLC: Rebuilding Boot Manager: [ OK ]
+PLC: Signing node packages: [ OK ]
+]]></programlisting>
+ </example>
+
+ <para>A complete log file of the startup process may be found at
+ <filename>/var/log/boot.log</filename>. Possible reasons
+ for failure of each step include:</para>
+
+ <itemizedlist>
+ <listitem><para><literal>Starting database server</literal>: If
+ this step fails, check
+ <filename>/var/log/pgsql</filename> and
+ <filename>/var/log/boot.log</filename>. The most common
+ reason for failure is that the default PostgreSQL port, TCP port
+ 5432, is already in use. Check that you are not running a
+ PostgreSQL server on the host system.</para></listitem>
+
+ <listitem><para><literal>Starting web server</literal>: If this
+ step fails, check
+ <filename>/var/log/httpd/error_log</filename> and
+ <filename>/var/log/boot.log</filename> for obvious
+ errors. The most common reason for failure is that the default
+ web ports, TCP ports 80 and 443, are already in use. Check that
+ you are not running a web server on the host
+ system.</para></listitem>
+
+ <listitem><para><literal>Bootstrapping the database</literal>:
+ If this step fails, it is likely that the previous step
+ (<literal>Starting web server</literal>) also failed. Another
+ reason that it could fail is if <envar>PLC_API_HOST</envar> (see
+ <xref linkend="Configuration" />) does not resolve to
+ the host on which the API server has been enabled. By default,
+ all services, including the API server, are enabled and run on
+ the same host, so check that <envar>PLC_API_HOST</envar> is
+ either <filename>localhost</filename> or resolves to a local IP
+ address. Also check that <envar>PLC_ROOT_USER</envar> looks like
+ an e-mail address.</para></listitem>
+
+ <listitem><para><literal>Starting crond</literal>: If this step
+ fails, it is likely that the previous steps (<literal>Starting
+ web server</literal> and <literal>Bootstrapping the
+ database</literal>) also failed. If not, check
+ <filename>/var/log/boot.log</filename> for obvious
+ errors. This step starts the <command>cron</command> service and
+ generates the initial set of XML files that the Slice Creation
+ Service uses to determine slice state.</para></listitem>
+ </itemizedlist>
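+
+    <para> Since the most common failures boil down to port
+    conflicts, a quick way to rule those out is to check which
+    processes already listen on the relevant ports. This is only a
+    diagnostic sketch; any tool that lists listening sockets will
+    do: </para>
+
+    <example><title>Checking for port conflicts (sketch)</title>
+    <programlisting><![CDATA[# netstat -tlnp | egrep ':(80|443|5432) ']]></programlisting>
+    </example>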
+
+ <para>If no failures occur, then MyPLC should be active with a
+ default configuration. Open a web browser on the host system and
+ visit <literal>http://localhost/</literal>, which should bring you
+ to the front page of your PLC installation. The default password
+ for the administrator account
+ <literal>root@localhost.localdomain</literal> (set by
+ <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
+ <envar>PLC_ROOT_PASSWORD</envar>).</para>
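+
+    <para> If you prefer checking from the shell rather than a
+    browser, a simple <command>curl</command> probe of the front page
+    can confirm that the web server answers: </para>
+
+    <example><title>Probing the web server.</title>
+    <programlisting><![CDATA[[myvserver] # curl -s -o /dev/null -w "%{http_code}\n" http://localhost/]]></programlisting>
+    </example>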
+ </section>
+
+ <section id="FilesInvolvedRuntime">
+
+ <title>
+ Files and directories involved in <emphasis>myplc</emphasis>
+ </title>
+    <para>
+    The persistent information pertaining to your own deployment is
+    stored in the following places:
+    </para>
+ <itemizedlist>
+ <listitem><para><filename>/etc/planetlab</filename>: This
+ directory contains the configuration files, keys, and
+ certificates that define your MyPLC
+ installation.</para></listitem>
+
+ <listitem><para><filename>/var/lib/pgsql</filename>: This
+ directory contains PostgreSQL database
+ files.</para></listitem>
+
+ <listitem><para><filename>/var/www/html/boot</filename>: This
+ directory contains the Boot Manager, customized for your MyPLC
+ installation, and its data files.</para></listitem>
+
+ <listitem><para><filename>/var/www/html/download</filename>:
+ This directory contains Boot CD images, customized for your
+ MyPLC installation.</para></listitem>
+
+ <listitem><para><filename>/var/www/html/install-rpms</filename>:
+ This directory is where you should install node package
+ updates, if any. By default, nodes are installed from the
+ tarball located at
+ <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
+ which is pre-built from the latest PlanetLab Central sources,
+ and installed as part of your MyPLC installation. However,
+ nodes will attempt to install any newer RPMs located in
+ <filename>/var/www/html/install-rpms/planetlab</filename>,
+      after initial installation and periodically thereafter. After
+      installing a new RPM, you must run the following command to
+      update the <command>yum</command> caches in this directory:
+      <programlisting><![CDATA[[myvserver] # service plc start packages]]></programlisting>
+      </para>
+
+      <para>
+      If you wish to upgrade all your nodes' RPMs from a more
+      recent build, you should take advantage of the
+      <filename>noderepo</filename> RPM, as described at <ulink
+      url="http://svn.planet-lab.org/wiki/NodeFamily" />.
+      </para>
+ </listitem>
+ </itemizedlist>
+
+ </section>
+ </section>
+
+ <section id="DevelopmentEnvironment">
+
+ <title>Rebuilding and customizing MyPLC</title>
+
+ <para>
+ Please refer to the following resources for setting up a build environment:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <para>
+ <ulink url="http://svn.planet-lab.org/wiki/VserverCentos" />
+ will get you started for setting up vserver, launching a
+ nightly build or running the build manually.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <ulink url="http://svn.planet-lab.org/svn/build/trunk/" />
+ and in particular the various README files, provide some
+ help on how to use advanced features of the build.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </section>
+
+ <appendix id="VariablesRuntime">
+ <title>Configuration variables</title>
+
+ <para>
+ Listed below is the set of standard configuration variables
+ together with their default values, as defined in the template
+ <filename>/etc/planetlab/default_config.xml</filename>.
+ </para>
+ <para>
+ This information is available online within
+ <command>plc-config-tty</command>, e.g.:
+ </para>
+
+<example>
+ <title>Advanced usage of plc-config-tty</title>
+ <programlisting><![CDATA[[myvserver] # plc-config-tty
+Enter command (u for usual changes, w to save, ? for help) V plc_dns
+========== Category = PLC_DNS
+### Enable DNS
+# Enable the internal DNS server. The server does not provide reverse
+# resolution and is not a production quality or scalable DNS solution.
+# Use the internal DNS server only for small deployments or for testing.
+PLC_DNS_ENABLED
+]]></programlisting></example>
+
+ <para> List of the <command>myplc</command> configuration variables:</para>
+ &Variables;
+ </appendix>
+
+ <bibliography>
+ <title>Bibliography</title>
+
+ <biblioentry id="UsersGuide">
+ <title><ulink
+ url="http://www.planet-lab.org/doc/guides/user">PlanetLab
+ User's Guide</ulink></title>
+ </biblioentry>
+
+ <biblioentry id="PIsGuide">
+ <title><ulink
+ url="http://www.planet-lab.org/doc/guides/pi">PlanetLab
+ Principal Investigator's Guide</ulink></title>
+ </biblioentry>
+
+ <biblioentry id="TechsGuide">
+ <author><firstname>Mark</firstname><surname>Huang</surname></author>
+ <title><ulink
+ url="http://www.planet-lab.org/doc/guides/tech">PlanetLab
+ Technical Contact's Guide</ulink></title>
+ </biblioentry>
+
+ </bibliography>
+</article>