1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "variables.xml">
8 <title>MyPLC User's Guide</title>
<firstname>Mark</firstname><surname>Huang</surname>
15 <orgname>Princeton University</orgname>
19 <para>This document describes the design, installation, and
20 administration of MyPLC, a complete PlanetLab Central (PLC)
21 portable installation contained within a
22 <command>chroot</command> jail. This document assumes advanced
23 knowledge of the PlanetLab architecture and Linux system
24 administration.</para>
29 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
33 <authorinitials>MLH</authorinitials>
36 <para>Initial draft.</para>
43 <title>Overview</title>
45 <para>MyPLC is a complete PlanetLab Central (PLC) portable
46 installation contained within a <command>chroot</command>
47 jail. The default installation consists of a web server, an
48 XML-RPC API server, a boot server, and a database server: the core
49 components of PLC. The installation is customized through an
50 easy-to-use graphical interface. All PLC services are started up
51 and shut down through a single script installed on the host
system. Containing PLC services within a virtual filesystem
greatly simplifies the usually complex process of installing and
administering the PlanetLab backend. By packaging it in such a
55 manner, MyPLC may also be run on any modern Linux distribution,
56 and could conceivably even run in a PlanetLab slice.</para>
58 <figure id="Architecture">
59 <title>MyPLC architecture</title>
62 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
65 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
68 <phrase>MyPLC architecture</phrase>
71 <para>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host
system.</para>
80 <title>Installation</title>
82 <para>Though internally composed of commodity software
83 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
85 no external dependencies, allowing it to be installed on
86 practically any Linux 2.6 based distribution:</para>
89 <title>Installing MyPLC.</title>
91 <programlisting><![CDATA[# If your distribution supports RPM
92 rpm -U myplc-0.3-1.planetlab.i386.rpm
94 # If your distribution does not support RPM
96 rpm2cpio myplc-0.3-1.planetlab.i386.rpm | cpio -diu]]></programlisting>
99 <para>MyPLC installs the following files and directories:</para>
103 <listitem><para><filename>/plc/root.img</filename>: The main
104 root filesystem of the MyPLC application. This file is an
105 uncompressed ext3 filesystem that is loopback mounted on
106 <filename>/plc/root</filename> when MyPLC starts. The
filesystem, even when mounted, should be treated as an opaque
108 binary that can and will be replaced in its entirety by any
109 upgrade of MyPLC.</para></listitem>
111 <listitem><para><filename>/plc/root</filename>: The mount point
112 for <filename>/plc/root.img</filename>. Once the root filesystem
113 is mounted, all MyPLC services run in a
114 <command>chroot</command> jail based in this
115 directory.</para></listitem>
118 <para><filename>/plc/data</filename>: The directory where user
119 data and generated files are stored. This directory is bind
120 mounted into the <command>chroot</command> jail on
121 <filename>/data</filename>. Files in this directory are marked
122 with <command>%config(noreplace)</command> in the RPM. That
is, during an upgrade of MyPLC, if a file has not changed
since the last installation or upgrade, it is subject
to upgrade and replacement. If the file has changed, it is left
in place and the new version of the file is created with a
<filename>.rpmnew</filename> extension. Symlinks within the
128 MyPLC root filesystem ensure that the following directories
129 (relative to <filename>/plc/root</filename>) are stored
130 outside the MyPLC filesystem image:</para>
133 <listitem><para><filename>/etc/planetlab</filename>: This
134 directory contains the configuration files, keys, and
135 certificates that define your MyPLC
136 installation.</para></listitem>
138 <listitem><para><filename>/var/lib/pgsql</filename>: This
139 directory contains PostgreSQL database
140 files.</para></listitem>
142 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
143 directory contains node installation logs.</para></listitem>
145 <listitem><para><filename>/var/www/html/boot</filename>: This
146 directory contains the Boot Manager, customized for your MyPLC
147 installation, and its data files.</para></listitem>
149 <listitem><para><filename>/var/www/html/download</filename>: This
150 directory contains Boot CD images, customized for your MyPLC
151 installation.</para></listitem>
153 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
154 directory is where you should install node package updates,
155 if any. By default, nodes are installed from the tarball
157 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
158 which is pre-built from the latest PlanetLab Central
159 sources, and installed as part of your MyPLC
160 installation. However, nodes will attempt to install any
161 newer RPMs located in
162 <filename>/var/www/html/install-rpms/planetlab</filename>,
163 after initial installation and periodically thereafter. You
164 must run <command>yum-arch</command> and
165 <command>createrepo</command> to update the
166 <command>yum</command> caches in this directory after
installing a new RPM. PlanetLab Central cannot support any
changes to this directory.</para></listitem>
170 <listitem><para><filename>/var/www/html/xml</filename>: This
171 directory contains various XML files that the Slice Creation
172 Service uses to determine the state of slices. These XML
173 files are refreshed periodically by <command>cron</command>
174 jobs running in the MyPLC root.</para></listitem>
<para><filename>/etc/init.d/plc</filename>: This file
is a System V init script, installed on your host filesystem,
that allows you to start up and shut down MyPLC with a single
command. On a Red Hat or Fedora host system, it is customary to
use the <command>service</command> command to invoke System V
init scripts:</para>
186 <example id="StartingAndStoppingMyPLC">
187 <title>Starting and stopping MyPLC.</title>
<programlisting><![CDATA[# Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop]]></programlisting>
196 <para>Like all other registered System V init services, MyPLC is
197 started and shut down automatically when your host system boots
198 and powers off. You may disable automatic startup by invoking
the <command>chkconfig</command> command on a Red Hat or Fedora
host system:</para>
203 <title>Disabling automatic startup of MyPLC.</title>
<programlisting><![CDATA[# Disable automatic startup
chkconfig plc off

# Enable automatic startup
chkconfig plc on]]></programlisting>
213 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
214 file is a shell script fragment that defines the variables
215 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
216 the values of these variables are <filename>/plc/root</filename>
217 and <filename>/plc/data</filename>, respectively. If you wish,
218 you may move your MyPLC installation to another location on your
219 host filesystem and edit the values of these variables
220 appropriately, but you will break the RPM upgrade
221 process. PlanetLab Central cannot support any changes to this
222 file.</para></listitem>
224 <listitem><para><filename>/etc/planetlab</filename>: This
225 symlink to <filename>/plc/data/etc/planetlab</filename> is
226 installed on the host system for convenience.</para></listitem>
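<para>The <filename>/etc/sysconfig/plc</filename> file described
above amounts to just two variable assignments. As an illustrative
sketch, its default contents look like the following (the actual
file may also contain comments):</para>

<example>
<title>Default contents of /etc/sysconfig/plc (illustrative).</title>

<programlisting><![CDATA[# Locations of the MyPLC root filesystem mount point and data
# directory. These may be edited, but doing so breaks the RPM
# upgrade process.
PLC_ROOT=/plc/root
PLC_DATA=/plc/data]]></programlisting>
</example>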
231 <title>Quickstart</title>
233 <para>Once installed, start MyPLC (see <xref
234 linkend="StartingAndStoppingMyPLC" />). MyPLC must be started as
235 root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the
following:
240 <title>A successful MyPLC startup.</title>
242 <programlisting><![CDATA[Mounting PLC: [ OK ]
243 PLC: Generating network files: [ OK ]
244 PLC: Starting system logger: [ OK ]
245 PLC: Starting database server: [ OK ]
246 PLC: Generating SSL certificates: [ OK ]
247 PLC: Generating SSH keys: [ OK ]
248 PLC: Starting web server: [ OK ]
249 PLC: Bootstrapping the database: [ OK ]
250 PLC: Starting crond: [ OK ]
251 PLC: Rebuilding Boot CD: [ OK ]
252 PLC: Rebuilding Boot Manager: [ OK ]
256 <para>If <filename>/plc/root</filename> is mounted successfully, a
257 complete log file of the startup process may be found at
258 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
259 for failure of each step include:</para>
262 <listitem><para><literal>Mounting PLC</literal>: If this step
263 fails, first ensure that you started MyPLC as root. Check
264 <filename>/etc/sysconfig/plc</filename> to ensure that
265 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
right locations. You may also have too many existing loopback
mounts, or your kernel may not support loopback mounting, bind
mounting, or the ext3 filesystem. Try freeing at least one
loopback device, or recompiling your kernel with support for
these features.</para></listitem>
273 <listitem><para><literal>Starting database server</literal>: If
274 this step fails, check
275 <filename>/plc/root/var/log/pgsql</filename> and
276 <filename>/plc/root/var/log/boot.log</filename>. The most common
277 reason for failure is that the default PostgreSQL port, TCP port
278 5432, is already in use. Check that you are not running a
279 PostgreSQL server on the host system.</para></listitem>
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
283 <filename>/plc/root/var/log/httpd/error_log</filename> and
284 <filename>/plc/root/var/log/boot.log</filename> for obvious
285 errors. The most common reason for failure is that the default
286 web ports, TCP ports 80 and 443, are already in use. Check that
287 you are not running a web server on the host
288 system.</para></listitem>
290 <listitem><para><literal>Bootstrapping the database</literal>:
291 If this step fails, it is likely that the previous step
292 (<literal>Starting web server</literal>) also failed. Another
293 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
294 <xref linkend="ChangingTheConfiguration" />) does not resolve to
295 the host on which the API server has been enabled. By default,
296 all services, including the API server, are enabled and run on
297 the same host, so check that <envar>PLC_API_HOST</envar> is
298 either <filename>localhost</filename> or resolves to a local IP
299 address.</para></listitem>
301 <listitem><para><literal>Starting crond</literal>: If this step
302 fails, it is likely that the previous steps (<literal>Starting
303 web server</literal> and <literal>Bootstrapping the
304 database</literal>) also failed. If not, check
305 <filename>/plc/root/var/log/boot.log</filename> for obvious
306 errors. This step starts the <command>cron</command> service and
307 generates the initial set of XML files that the Slice Creation
308 Service uses to determine slice state.</para></listitem>
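<para>The troubleshooting checks above can be summarized as a small
shell sketch that maps a failed startup step to the first log file
worth inspecting. The paths come from the list above; the function
itself is illustrative and not part of MyPLC:</para>

<example>
<title>Mapping a failed startup step to its log file (illustrative).</title>

<programlisting><![CDATA[# Print the first log file to inspect for a given failed startup step.
log_for_step () {
    case "$1" in
        "Starting database server") echo /plc/root/var/log/pgsql ;;
        "Starting web server")      echo /plc/root/var/log/httpd/error_log ;;
        *)                          echo /plc/root/var/log/boot.log ;;
    esac
}

log_for_step "Starting web server"]]></programlisting>
</example>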
311 <para>If no failures occur, then MyPLC should be active with a
312 default configuration. Open a web browser on the host system and
313 visit <literal>http://localhost/</literal>, which should bring you
314 to the front page of your PLC installation. The password of the
315 default administrator account
316 <literal>root@localhost.localdomain</literal> (set by
317 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
318 <envar>PLC_ROOT_PASSWORD</envar>).</para>
320 <section id="ChangingTheConfiguration">
321 <title>Changing the configuration</title>
323 <para>After verifying that MyPLC is working correctly, shut it
324 down and begin changing some of the default variable
325 values. Shut down MyPLC with <command>service plc stop</command>
326 (see <xref linkend="StartingAndStoppingMyPLC" />). With a text
327 editor, open the file
328 <filename>/etc/planetlab/plc_config.xml</filename>. This file is
329 a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers must be
alphanumeric and may contain underscores. A variable is referred to
332 canonically as the uppercase concatenation of its category
333 identifier, an underscore, and its variable identifier. Thus, a
334 variable with an <literal>id</literal> of
335 <literal>slice_prefix</literal> in the <literal>plc</literal>
336 category is referred to canonically as
337 <envar>PLC_SLICE_PREFIX</envar>.</para>
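<para>The naming convention can be expressed as a one-line shell
sketch. The <command>tr</command> invocation here is purely
illustrative; it is not how MyPLC itself performs the
translation:</para>

<example>
<title>Deriving the canonical variable name (illustrative).</title>

<programlisting><![CDATA[category=plc
id=slice_prefix

# Concatenate category and id with an underscore, then uppercase.
canonical=$(echo "${category}_${id}" | tr '[:lower:]' '[:upper:]')
echo "$canonical"   # PLC_SLICE_PREFIX]]></programlisting>
</example>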
339 <para>The reason for this convention is that during MyPLC
340 startup, <filename>plc_config.xml</filename> is translated into
341 several different languages—shell, PHP, and
342 Python—so that scripts written in each of these languages
343 can refer to the same underlying configuration. Most MyPLC
344 scripts are written in shell, so the convention for shell
345 variables predominates.</para>
347 <para>The variables that you should change immediately are:</para>
350 <listitem><para><envar>PLC_NAME</envar>: Change this to the
351 name of your PLC installation.</para></listitem>
352 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
353 to a more secure password.</para></listitem>
355 <listitem><para><envar>PLC_NET_DNS1</envar>,
356 <envar>PLC_NET_DNS2</envar>: Change these to the IP addresses
357 of your primary and secondary DNS servers. Check
358 <filename>/etc/resolv.conf</filename> on your host
359 filesystem.</para></listitem>
361 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
362 Change this to the e-mail address at which you would like to
363 receive support requests.</para></listitem>
365 <listitem><para><envar>PLC_DB_HOST</envar>,
366 <envar>PLC_API_HOST</envar>, <envar>PLC_WWW_HOST</envar>,
367 <envar>PLC_BOOT_HOST</envar>: Change all of these to the
368 preferred FQDN of your host system.</para></listitem>
371 <para>After changing these variables, save the file, then
372 restart MyPLC with <command>service plc start</command>. You
373 should notice that the password of the default administrator
374 account is no longer <literal>root</literal>, and that the
375 default site name includes the name of your PLC installation
376 instead of PlanetLab.</para>
380 <title>Installing nodes</title>
382 <para>Install your first node by clicking <literal>Add
383 Node</literal> under the <literal>Nodes</literal> tab. Fill in
384 all the appropriate details, then click
385 <literal>Add</literal>. Download the node's configuration file
386 by clicking <literal>Download configuration file</literal> on
387 the <emphasis role="bold">Node Details</emphasis> page for the
388 node. Save it to a floppy disk or USB key as detailed in <xref
389 linkend="TechsGuide" />.</para>
391 <para>Follow the rest of the instructions in <xref
392 linkend="TechsGuide" /> for creating a Boot CD and installing
393 the node, except download the Boot CD image from the
394 <filename>/download</filename> directory of your PLC
395 installation, not from PlanetLab Central. The images located
396 here are customized for your installation. If you change the
397 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
398 if the SSL certificate of your boot server expires, MyPLC will
399 regenerate it and rebuild the Boot CD with the new
400 certificate. If this occurs, you must replace all Boot CDs
401 created before the certificate was regenerated.</para>
403 <para>The installation process for a node has significantly
404 improved since PlanetLab 3.3. It should now take only a few
405 seconds for a new node to become ready to create slices.</para>
409 <title>Administering nodes</title>
411 <para>You may administer nodes as <literal>root</literal> by
412 using the SSH key stored in
413 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
416 <title>Accessing nodes via SSH. Replace
417 <literal>node</literal> with the hostname of the node.</title>
419 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
422 <para>Besides the standard Linux log files located in
423 <filename>/var/log</filename>, several other files can give you
424 clues about any problems with active processes:</para>
427 <listitem><para><filename>/var/log/pl_nm</filename>: The log
428 file for the Node Manager.</para></listitem>
430 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
431 The log file for the Slice Creation Service.</para></listitem>
433 <listitem><para><filename>/var/log/propd</filename>: The log
434 file for Proper, the service which allows certain slices to
435 perform certain privileged operations in the root
436 context.</para></listitem>
438 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
439 The log file for PlanetFlow, the network traffic auditing
440 service.</para></listitem>
445 <title>Creating a slice</title>
447 <para>Create a slice by clicking <literal>Create Slice</literal>
448 under the <literal>Slices</literal> tab. Fill in all the
449 appropriate details, then click <literal>Create</literal>. Add
450 nodes to the slice by clicking <literal>Manage Nodes</literal>
451 on the <emphasis role="bold">Slice Details</emphasis> page for
<para>A <command>cron</command> job runs every five minutes and
updates
456 <filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
457 with information about current slice state. The Slice Creation
458 Service running on every node polls this file every ten minutes
459 to determine if it needs to create or delete any slices. You may
460 accelerate this process manually if desired.</para>
463 <title>Forcing slice creation on a node.</title>
465 <programlisting><![CDATA[# Update slices.xml immediately
466 service plc start crond
468 # Kick the Slice Creation Service on a particular node.
469 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
470 vserver pl_conf exec service pl_conf restart]]></programlisting>
476 <title>Configuration variables</title>
478 <para>Listed below is the set of standard configuration variables
479 and their default values, defined in the template
480 <filename>/etc/planetlab/default_config.xml</filename>. Additional
481 variables and their defaults may be defined in site-specific XML
482 templates that should be placed in
483 <filename>/etc/planetlab/configs/</filename>.</para>
489 <title>Bibliography</title>
491 <biblioentry id="TechsGuide">
492 <author><firstname>Mark</firstname><surname>Huang</surname></author>
<title><ulink
url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
Technical Contact's Guide</ulink></title>