1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
8 <title>MyPLC User's Guide</title>
<author> <firstname>Mark</firstname> <surname>Huang</surname> </author>
<author> <firstname>Thierry</firstname> <surname>Parmentelat</surname> </author>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation. This document assumes advanced
23 knowledge of the PlanetLab architecture and Linux system
24 administration.</para>
29 <revnumber>1.0</revnumber>
30 <date>April 7, 2006</date>
31 <authorinitials>MLH</authorinitials>
32 <revdescription><para>Initial draft.</para></revdescription>
35 <revnumber>1.1</revnumber>
36 <date>July 19, 2006</date>
37 <authorinitials>MLH</authorinitials>
38 <revdescription><para>Add development environment.</para></revdescription>
41 <revnumber>1.2</revnumber>
42 <date>August 18, 2006</date>
43 <authorinitials>TPT</authorinitials>
45 <para>Review section on configuration and introduce <command>plc-config-tty</command>.</para>
46 <para>Present implementation details last.</para>
50 <revnumber>1.3</revnumber>
51 <date>May 9, 2008</date>
52 <authorinitials>TPT</authorinitials>
Review for 4.2: focus on the new packaging <emphasis>myplc-native</emphasis>.
Removed the deprecated <emphasis>myplc-devel</emphasis>.
66 <title>Overview</title>
68 <para>MyPLC is a complete PlanetLab Central (PLC) portable
69 installation. The default installation consists of a web server, an
70 XML-RPC API server, a boot server, and a database server: the core
71 components of PLC. The installation is customized through an
72 easy-to-use graphical interface. All PLC services are started up
73 and shut down through a single script installed on the host
76 <figure id="Architecture">
77 <title>MyPLC architecture</title>
80 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
83 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
86 <phrase>MyPLC architecture</phrase>
93 <section> <title> Historical Notes</title>
95 <para> This document focuses on the new packaging named
96 <emphasis>myplc-native</emphasis> as introduced in the 4.2 release
<para> The former chroot-based packaging, known simply as
<emphasis>myplc</emphasis>, might still be present in this release,
but its usage is no longer recommended. </para>
<para> With 4.2, the general architecture of the build system has
changed drastically as well. Rather than providing a static chroot
image for building the software, formerly known as
<emphasis>myplc-devel</emphasis>, the current paradigm is to
create a fresh vserver and rely on yum to install all needed
development tools. More details on how to set up such an
environment can be found at <ulink
url="http://svn.planet-lab.org/wiki/VserverCentos" />, which
describes how to turn a CentOS5 box into a vserver-capable host
117 <section id="Requirements"> <title> Requirements </title>
119 <para> The recommended way to deploy MyPLC relies on
120 <emphasis>vserver</emphasis>. Here again, please refer to <ulink
121 url="http://svn.planet-lab.org/wiki/VserverCentos" /> for how to set
up such an environment. As of PlanetLab 4.2, the recommended Linux
123 distribution here is CentOS5, because there are publicly available
124 resources that allow a smooth setup. </para>
<para> As of PlanetLab 4.2, the current focus is on Fedora 8. This
means that you should create a fresh Fedora 8 vserver on your
vserver-capable CentOS box, and perform all subsequent installations
from there, as described below. Although you might find builds
for other Linux distributions, new users are advised to
use this particular variant.
<para> It is also possible to perform these installations from a
fresh Fedora 8 installation. However, a vserver-capable box
provides much more flexibility, particularly for future system
upgrades, and is therefore recommended. </para>
<para> In addition, there have been numerous reports with former
releases that SELinux must be turned off to run MyPLC. This is
already part of the instructions provided to set up vserver; please
keep it in mind if you plan on running MyPLC on a dedicated Fedora
<para> Last, you need to check your firewall configuration(s):
the <emphasis>http</emphasis> and <emphasis>https</emphasis>
ports must of course be open, so as to accept connections
from the managed nodes and from users' desktops, and possibly
<emphasis>ssh</emphasis> as well. </para>
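As an illustration only, the openings above might look like the following fragment of an iptables rules file. This is a sketch, not part of MyPLC; the exact file location and syntax depend on your distribution and firewall manager.

```
# Hypothetical fragment for /etc/sysconfig/iptables -- adapt to your setup.
# Accept web traffic from managed nodes and user desktops:
-A INPUT -p tcp --dport 80 -j ACCEPT     # http
-A INPUT -p tcp --dport 443 -j ACCEPT    # https
# Optionally accept remote administration:
-A INPUT -p tcp --dport 22 -j ACCEPT     # ssh
```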
153 <section id="Installation">
154 <title>Installing and using MyPLC</title>
<title>Locating a build</title>
158 <para>The following locations are entry points for locating the
159 build you plan on using.</para>
161 <listitem> <para> <ulink url="http://build.planet-lab.org/" />
162 is maintained by the PlanetLab team at Princeton
165 <listitem> <para> <ulink url="http://build.one-lab.org/" /> is
166 maintained by the OneLab team at INRIA. </para>
169 <para> There are currently two so-called PlanetLab
170 Distributions known as <emphasis>planetlab</emphasis> and
171 <emphasis>onelab</emphasis>. planet-lab.org generally builds
172 only the <emphasis>planetlab</emphasis> flavour, while both
173 flavours are generally available at one-lab.org.</para>
177 <title> Note on IP addressing</title>
178 <para> Once you've located the build you want to use, it is
strongly advised to assign this vserver a unique IP address, rather
than sharing the hosting box's address. To that end, the typical
command for creating such a vserver would be:</para>
183 <example><title>Creating the vserver</title>
184 <programlisting><![CDATA[# vtest-init-vserver.sh -p linux32 myvserver \
185 http://build.one-lab.org/4.2/planetlab-4.2-rc2.1lab-f8-32/RPMS \
186 -- --netdev eth0 --interface 138.96.250.134 --hostname myvserver.inria.fr]]>
187 </programlisting> </example>
190 In this example, we have chosen to use a planetlab flavour,
191 based on rc2.1lab, for i386 (this is what the final 32 stands
198 <title>Setting up yum </title>
200 <para> If you do not use the convenience script mentioned above, you need to
201 create an entry in your yum configuration:</para>
203 <example><title>Setting up yum repository </title>
205 <![CDATA[[myvserver] # cd /etc/yum.repos.d
206 [myvserver] # cat > myplc.repo
209 baseurl=http://build.one-lab.org/4.2/planetlab-4.2-rc2.1lab-f8-32/RPMS
<title>Installing MyPLC</title>
223 To actually install myplc at that stage, just run:
226 <example><title>Installing MyPLC </title>
227 <programlisting><![CDATA[[myvserver] # yum -y install myplc-native]]>
<para> <xref linkend="FilesInvolvedRuntime" /> below explains in
detail the installation strategy and the various files and
directories involved.</para>
237 <section id="QuickStart"> <title> QuickStart </title>
239 <para> On a Red Hat or Fedora host system, it is customary to use
240 the <command>service</command> command to invoke System V init
241 scripts. As the examples suggest, the service must be started as root:</para>
243 <example><title>Starting MyPLC:</title>
244 <programlisting><![CDATA[[myvserver] # service plc start]]></programlisting>
246 <example><title>Stopping MyPLC:</title>
247 <programlisting><![CDATA[[myvserver] # service plc stop]]></programlisting>
<para> In <xref linkend="StartupSequence" />, we provide more
details that might be helpful in case the service does
not start correctly.</para>
254 <para>Like all other registered System V init services, MyPLC is
255 started and shut down automatically when your host system boots
256 and powers off. You may disable automatic startup by invoking the
257 <command>chkconfig</command> command on a Red Hat or Fedora host
260 <example> <title>Disabling automatic startup of MyPLC.</title>
261 <programlisting><![CDATA[# chkconfig plc off]]></programlisting></example>
262 <example> <title>Re-enabling automatic startup of MyPLC.</title>
263 <programlisting><![CDATA[# chkconfig plc on]]></programlisting></example>
267 <section id="Configuration">
268 <title>Changing the configuration</title>
270 <para>After verifying that MyPLC is working correctly, shut it
271 down and begin changing some of the default variable
272 values. Shut down MyPLC with <command>service plc stop</command>
273 (see <xref linkend="QuickStart" />). </para>
275 <para> The preferred option for changing the configuration is to
276 use the <command>plc-config-tty</command> tool. The
277 full set of applicable variables is described in <xref
linkend="VariablesRuntime"/>, but entering the <command>u</command>
command guides you through the most useful ones.
Here is a sample session:
283 <example><title>Using plc-config-tty for configuration:</title>
284 <programlisting><![CDATA[<myvserver> # plc-config-tty
285 Enter command (u for usual changes, w to save, ? for help) u
286 == PLC_NAME : [PlanetLab Test] OneLab
287 == PLC_SLICE_PREFIX : [pl] thone
288 == PLC_ROOT_USER : [root@localhost.localdomain] root@onelab-plc.inria.fr
289 == PLC_ROOT_PASSWORD : [root] plain-passwd
290 == PLC_MAIL_ENABLED : [false] true
291 == PLC_MAIL_SUPPORT_ADDRESS : [root+support@localhost.localdomain] support@one-lab.org
292 == PLC_BOOT_HOST : [localhost.localdomain] onelab-plc.inria.fr
293 == PLC_NET_DNS1 : [127.0.0.1] 138.96.250.248
294 == PLC_NET_DNS2 : [None] 138.96.250.249
295 Enter command (u for usual changes, w to save, ? for help) w
296 Wrote /etc/planetlab/configs/site.xml
298 /etc/planetlab/default_config.xml
299 and /etc/planetlab/configs/site.xml
300 into /etc/planetlab/plc_config.xml
301 You might want to type 'r' (restart plc) or 'q' (quit)
302 Enter command (u for usual changes, w to save, ? for help) r
303 ==================== Stopping plc
305 ==================== Starting plc
307 Enter command (u for usual changes, w to save, ? for help) q
308 [myvserver] # ]]></programlisting>
311 <para>The variables that you should change immediately are:</para>
315 <envar>PLC_NAME</envar>: Change this to the name of your PLC installation.
318 <envar>PLC_SLICE_PREFIX</envar>: Pick some
319 reasonable, short value; <emphasis> this is especially crucial if you
320 plan on federating with other PLCs</emphasis>.
323 <envar>PLC_ROOT_PASSWORD</envar>: Change this to a more
327 <envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
328 Change this to the e-mail address at which you would like to
329 receive support requests.
332 <envar>PLC_DB_HOST</envar>, <envar>PLC_API_HOST</envar>,
333 <envar>PLC_WWW_HOST</envar>, <envar>PLC_BOOT_HOST</envar>,
334 Change all of these to the preferred FQDN address of your
335 host system. The corresponding <envar>*_IP</envar> values can
336 be safely ignored if the FQDN can be resolved through DNS.
340 <para> After changing these variables, make sure that you save
341 (w) and restart your plc (r), as shown in the above example.
342 You should notice that the password of the default administrator
343 account is no longer <literal>root</literal>, and that the
344 default site name includes the name of your PLC installation
345 instead of PlanetLab. As a side effect of these changes, the ISO
images for the boot CDs now have new names, so you can
freely remove the ones named after 'PlanetLab Test', which is
the default value of <envar>PLC_NAME</envar>.</para>
<para>If you used the above method for configuring, you
351 can skip to <xref linkend="LoginRealUser" />. As an alternative
352 to using <command>plc-config-tty</command>, you may also use a
text editor, but this requires some understanding of how
354 configuration files are used within myplc. The
<emphasis>default</emphasis> configuration is stored in a file
named <filename>/etc/planetlab/default_config.xml</filename>,
which is designed to remain intact. You may store your local
changes in a file named
<filename>/etc/planetlab/configs/site.xml</filename>, which is
loaded on top of the defaults. The resulting complete
configuration is stored in the file
<filename>/etc/planetlab/plc_config.xml</filename>, which is
used as the reference. If you use this strategy, be sure to issue the
following command to refresh this file:</para>
366 <example><title> Refreshing <filename> plc_config.xml
</filename> after a manual change in <filename>
368 site.xml</filename> </title>
369 <programlisting><![CDATA[[myvserver] # service plc reload]]></programlisting>
<para>The default configuration file is a self-documenting
XML file. Variables are divided into
374 categories. Variable identifiers must be alphanumeric, plus
375 underscore. A variable is referred to canonically as the
376 uppercase concatenation of its category identifier, an
377 underscore, and its variable identifier. Thus, a variable with
378 an <literal>id</literal> of <literal>slice_prefix</literal> in
379 the <literal>plc</literal> category is referred to canonically
380 as <envar>PLC_SLICE_PREFIX</envar>.</para>
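As a quick illustration of this naming rule, the canonical form can be computed with a one-line shell helper. The function below is ours, for explanation only; it is not part of myplc.

```shell
#!/bin/sh
# Build the canonical variable name from a category id and a
# variable id: CATEGORY, underscore, ID, all uppercased.
canonical_name () {
    echo "${1}_${2}" | tr '[:lower:]' '[:upper:]'
}

canonical_name plc slice_prefix    # prints PLC_SLICE_PREFIX
canonical_name plc_mail enabled    # prints PLC_MAIL_ENABLED
```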
382 <para>The reason for this convention is that during MyPLC
383 startup, <filename>plc_config.xml</filename> is translated into
384 several different languages—shell, PHP, and
385 Python—so that scripts written in each of these languages
386 can refer to the same underlying configuration. Most MyPLC
387 scripts are written in shell, so the convention for shell
388 variables predominates.</para>
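The shell translation is, in essence, a set of plain variable assignments that any script can source. The file below is fabricated for this demo; the name and location of the real generated file are not shown here.

```shell
#!/bin/sh
# Demo only: fake a shell translation of the configuration, then
# source it the way MyPLC shell scripts consume the real one.
cat > /tmp/plc_config.demo <<'EOF'
PLC_NAME='PlanetLab Test'
PLC_SLICE_PREFIX='pl'
EOF

# Sourcing the file makes each variable available to this script.
. /tmp/plc_config.demo
echo "slices will be prefixed with ${PLC_SLICE_PREFIX}_"
```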
392 <section id="LoginRealUser"> <title> Login as a real user </title>
<para>Now that myplc is up and running, you can connect to the
web site, which by default runs on port 80. You can either
396 directly use the default administrator user that you configured
397 in <envar>PLC_ROOT_USER</envar> and
398 <envar>PLC_ROOT_PASSWORD</envar>, or create a real user through
the 'Joining' tab. Do not forget to select both the PI and tech
roles, and to select the only site created at this stage.
Log in as the administrator to enable this user, then log in as
the real user.</para>
406 <title>Installing nodes</title>
408 <para>Install your first node by clicking <literal>Add
409 Node</literal> under the <literal>Nodes</literal> tab. Fill in
410 all the appropriate details, then click
<literal>Add</literal>. Then download the node's boot material;
please refer to <xref linkend="TechsGuide" /> for more details
about this stage.</para>
<para>Please keep in mind that this boot medium is customized
for your particular instance: it contains details such as the
host that you configured as <envar>PLC_BOOT_HOST</envar>, as
well as the SSL certificate of your boot server, which might
expire. Changes in your configuration may thus require you to
replace all your boot CDs.</para>
425 <title>Administering nodes</title>
427 <para>You may administer nodes as <literal>root</literal> by
428 using the SSH key stored in
429 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
432 <title>Accessing nodes via SSH. Replace
433 <literal>node</literal> with the hostname of the node.</title>
435 <programlisting>[myvserver] # ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
438 <para>From the node's root context, besides the standard Linux
439 log files located in <filename>/var/log</filename>, several
440 other files can give you clues about any problems with active
444 <listitem><para><filename>/var/log/nm</filename>: The log
445 file for the Node Manager.</para></listitem>
447 <listitem><para><filename>/vservers/slicename/var/log/nm</filename>:
The log file for Node Manager operations performed
within the slice's vserver.</para></listitem>
455 <title>Creating a slice</title>
457 <para>Create a slice by clicking <literal>Create Slice</literal>
458 under the <literal>Slices</literal> tab. Fill in all the
459 appropriate details, then click <literal>Create</literal>. Add
460 nodes to the slice by clicking <literal>Manage Nodes</literal>
461 on the <command>Slice Details</command> page for
Slice creation is performed by the Node Manager. In some
particular cases you may wish to restart it manually; here is
471 <title>Forcing slice creation on a node.</title>
473 <programlisting><![CDATA[[myvserver] # ssh -i /etc/planetlab/root_ssh_key.rsa root@node service nm restart]]>
478 <section id="StartupSequence">
479 <title>Understanding the startup sequence</title>
481 <para>During service startup described in <xref
482 linkend="QuickStart" />, observe the output of this command for
483 any failures. If no failures occur, you should see output similar
to the following. <emphasis>Please note that as of this writing, with 4.2
the system logger step might fail; this is harmless.</emphasis></para>
488 <title>A successful MyPLC startup.</title>
490 <programlisting><![CDATA[
491 PLC: Generating network files: [ OK ]
492 PLC: Starting system logger: [ OK ]
493 PLC: Starting database server: [ OK ]
494 PLC: Generating SSL certificates: [ OK ]
495 PLC: Configuring the API: [ OK ]
496 PLC: Updating GPG keys: [ OK ]
497 PLC: Generating SSH keys: [ OK ]
498 PLC: Starting web server: [ OK ]
499 PLC: Bootstrapping the database: [ OK ]
500 PLC: Starting DNS server: [ OK ]
501 PLC: Starting crond: [ OK ]
502 PLC: Rebuilding Boot CD: [ OK ]
503 PLC: Rebuilding Boot Manager: [ OK ]
504 PLC: Signing node packages: [ OK ]
508 <para>A complete log file of the startup process may be found at
509 <filename>/var/log/boot.log</filename>. Possible reasons
510 for failure of each step include:</para>
513 <listitem><para><literal>Starting database server</literal>: If
514 this step fails, check
515 <filename>/var/log/pgsql</filename> and
516 <filename>/var/log/boot.log</filename>. The most common
517 reason for failure is that the default PostgreSQL port, TCP port
518 5432, is already in use. Check that you are not running a
519 PostgreSQL server on the host system.</para></listitem>
521 <listitem><para><literal>Starting web server</literal>: If this
523 <filename>/var/log/httpd/error_log</filename> and
524 <filename>/var/log/boot.log</filename> for obvious
525 errors. The most common reason for failure is that the default
526 web ports, TCP ports 80 and 443, are already in use. Check that
527 you are not running a web server on the host
528 system.</para></listitem>
530 <listitem><para><literal>Bootstrapping the database</literal>:
531 If this step fails, it is likely that the previous step
532 (<literal>Starting web server</literal>) also failed. Another
533 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
534 <xref linkend="Configuration" />) does not resolve to
535 the host on which the API server has been enabled. By default,
536 all services, including the API server, are enabled and run on
537 the same host, so check that <envar>PLC_API_HOST</envar> is
538 either <filename>localhost</filename> or resolves to a local IP
539 address. Also check that <envar>PLC_ROOT_USER</envar> looks like
540 an e-mail address.</para></listitem>
542 <listitem><para><literal>Starting crond</literal>: If this step
543 fails, it is likely that the previous steps (<literal>Starting
544 web server</literal> and <literal>Bootstrapping the
545 database</literal>) also failed. If not, check
546 <filename>/var/log/boot.log</filename> for obvious
547 errors. This step starts the <command>cron</command> service and
548 generates the initial set of XML files that the Slice Creation
549 Service uses to determine slice state.</para></listitem>
552 <para>If no failures occur, then MyPLC should be active with a
553 default configuration. Open a web browser on the host system and
554 visit <literal>http://localhost/</literal>, which should bring you
555 to the front page of your PLC installation. The default password
556 for the administrator account
557 <literal>root@localhost.localdomain</literal> (set by
558 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
559 <envar>PLC_ROOT_PASSWORD</envar>).</para>
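Two of the troubleshooting checks above can be scripted. The sketch below is our own, not shipped with myplc: it verifies that the API host name resolves, and lists any TCP listener already holding the database or web ports. Substitute your own <envar>PLC_API_HOST</envar> value for <literal>localhost</literal>.

```shell
#!/bin/sh
# Check that the API host resolves (ideally to a local address).
host=localhost                      # substitute your PLC_API_HOST
addr=$(getent hosts "$host" | awk '{print $1; exit}')
echo "$host resolves to ${addr:-nothing}"

# Check whether the PostgreSQL and web ports are already bound;
# any matching line points at a conflicting service on this host.
netstat -ltn 2>/dev/null | egrep ':(5432|80|443) ' || echo "ports look free"
```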
562 <section id="FilesInvolvedRuntime">
565 Files and directories involved in <emphasis>myplc</emphasis>
The various places where the persistent
information pertaining to your own deployment is stored are
572 <listitem><para><filename>/etc/planetlab</filename>: This
573 directory contains the configuration files, keys, and
574 certificates that define your MyPLC
575 installation.</para></listitem>
577 <listitem><para><filename>/var/lib/pgsql</filename>: This
578 directory contains PostgreSQL database
579 files.</para></listitem>
581 <listitem><para><filename>/var/www/html/boot</filename>: This
582 directory contains the Boot Manager, customized for your MyPLC
583 installation, and its data files.</para></listitem>
585 <listitem><para><filename>/var/www/html/download</filename>:
586 This directory contains Boot CD images, customized for your
587 MyPLC installation.</para></listitem>
589 <listitem><para><filename>/var/www/html/install-rpms</filename>:
590 This directory is where you should install node package
591 updates, if any. By default, nodes are installed from the
593 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
594 which is pre-built from the latest PlanetLab Central sources,
595 and installed as part of your MyPLC installation. However,
596 nodes will attempt to install any newer RPMs located in
597 <filename>/var/www/html/install-rpms/planetlab</filename>,
598 after initial installation and periodically thereafter. You
600 <programlisting><![CDATA[[myvserver] # service plc start packages]]></programlisting>
601 command to update the
602 <command>yum</command> caches in this directory after
603 installing a new RPM. </para>
If you wish to upgrade all your nodes' RPMs from a more
607 recent build, you should take advantage of the
608 <filename>noderepo</filename> RPM, as described in <ulink
609 url="http://svn.planet-lab.org/wiki/NodeFamily" />
617 <section id="DevelopmentEnvironment">
619 <title>Rebuilding and customizing MyPLC</title>
622 Please refer to the following resources for setting up a build environment:
628 <ulink url="http://svn.planet-lab.org/wiki/VserverCentos" />
will get you started with setting up vserver, launching a
nightly build, or running the build manually.
635 <ulink url="http://svn.planet-lab.org/svn/build/trunk/" />
636 and in particular the various README files, provide some
637 help on how to use advanced features of the build.
643 <appendix id="VariablesRuntime">
644 <title>Configuration variables</title>
647 Listed below is the set of standard configuration variables
648 together with their default values, as defined in the template
649 <filename>/etc/planetlab/default_config.xml</filename>.
652 This information is available online within
653 <command>plc-config-tty</command>, e.g.:
657 <title>Advanced usage of plc-config-tty</title>
658 <programlisting><![CDATA[[myvserver] # plc-config-tty
659 Enter command (u for usual changes, w to save, ? for help) V plc_dns
660 ========== Category = PLC_DNS
662 # Enable the internal DNS server. The server does not provide reverse
663 # resolution and is not a production quality or scalable DNS solution.
664 # Use the internal DNS server only for small deployments or for testing.
666 ]]></programlisting></example>
668 <para> List of the <command>myplc</command> configuration variables:</para>
673 <title>Bibliography</title>
675 <biblioentry id="UsersGuide">
677 url="http://www.planet-lab.org/doc/guides/user">PlanetLab
678 User's Guide</ulink></title>
681 <biblioentry id="PIsGuide">
683 url="http://www.planet-lab.org/doc/guides/pi">PlanetLab
684 Principal Investigator's Guide</ulink></title>
687 <biblioentry id="TechsGuide">
688 <author><firstname>Mark</firstname><surname>Huang</surname></author>
690 url="http://www.planet-lab.org/doc/guides/tech">PlanetLab
691 Technical Contact's Guide</ulink></title>