<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
"http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd">
<title>MyPLC User's Guide</title>
<firstname>Mark Huang</firstname>
<orgname>Princeton University</orgname>
<para>This document describes the design, installation, and
administration of MyPLC, a complete PlanetLab Central (PLC)
portable installation contained within a
<command>chroot</command> jail. This document assumes advanced
knowledge of the PlanetLab architecture and Linux system
administration.</para>
<revnumber>1.0</revnumber>
<date>April 7, 2006</date>
<authorinitials>MLH</authorinitials>
<para>Initial draft.</para>
<title>Overview</title>
<para>MyPLC is a complete PlanetLab Central (PLC) portable
installation contained within a <command>chroot</command>
jail. The default installation consists of a web server, an
XML-RPC API server, a boot server, and a database server: the core
components of PLC. The installation is customized through an
easy-to-use graphical interface. All PLC services are started up
and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is reduced by containing PLC
services within a virtual filesystem. By packaging it in such a
manner, MyPLC may also be run on any modern Linux distribution,
and could conceivably even run in a PlanetLab slice.</para>
<figure id="Architecture">
<title>MyPLC architecture</title>
<imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
<imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
<phrase>MyPLC architecture</phrase>
<para>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host
<title>Installation</title>
<para>Though internally composed of commodity software
subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
no external dependencies, allowing it to be installed on
practically any Linux 2.6 based distribution:</para>
<title>Installing MyPLC.</title>
<programlisting><![CDATA[# If your distribution supports RPM
rpm -U myplc-0.3-1.planetlab.i386.rpm

# If your distribution does not support RPM
rpm2cpio myplc-0.3-1.planetlab.i386.rpm | cpio -diu]]></programlisting>
<para>MyPLC installs the following files and directories:</para>
<listitem><para><filename>/plc/root.img</filename>: The main
root filesystem of the MyPLC application. This file is an
uncompressed ext3 filesystem that is loopback mounted on
<filename>/plc/root</filename> when MyPLC starts. The
filesystem, even when mounted, should be treated as an opaque
binary that can and will be replaced in its entirety by any
upgrade of MyPLC.</para></listitem>
<listitem><para><filename>/plc/root</filename>: The mount point
for <filename>/plc/root.img</filename>. Once the root filesystem
is mounted, all MyPLC services run in a
<command>chroot</command> jail based in this
directory.</para></listitem>
<para><filename>/plc/data</filename>: The directory where user
data and generated files are stored. This directory is bind
mounted into the <command>chroot</command> jail on
<filename>/data</filename>. Files in this directory are marked
with <command>%config(noreplace)</command> in the RPM. That
is, during an upgrade of MyPLC, if a file has not changed
since the last installation or upgrade of MyPLC, it is subject
to upgrade and replacement. If the file has changed, the new
version of the file will be created with a
<filename>.rpmnew</filename> extension. Symlinks within the
MyPLC root filesystem ensure that the following directories
(relative to <filename>/plc/root</filename>) are stored
outside the MyPLC filesystem image:</para>
<listitem><para><filename>/etc/planetlab</filename>: This
directory contains the configuration files, keys, and
certificates that define your MyPLC
installation.</para></listitem>
<listitem><para><filename>/var/lib/pgsql</filename>: This
directory contains PostgreSQL database
files.</para></listitem>
<listitem><para><filename>/var/www/html/alpina-logs</filename>: This
directory contains node installation logs.</para></listitem>
<listitem><para><filename>/var/www/html/boot</filename>: This
directory contains the Boot Manager, customized for your MyPLC
installation, and its data files.</para></listitem>
<listitem><para><filename>/var/www/html/download</filename>: This
directory contains Boot CD images, customized for your MyPLC
installation.</para></listitem>
<listitem><para><filename>/var/www/html/install-rpms</filename>: This
directory is where you should install node package updates,
if any. By default, nodes are installed from the tarball
<filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
which is pre-built from the latest PlanetLab Central
sources, and installed as part of your MyPLC
installation. However, nodes will attempt to install any
newer RPMs located in
<filename>/var/www/html/install-rpms/planetlab</filename>,
after initial installation and periodically thereafter. You
must run <command>yum-arch</command> and
<command>createrepo</command> to update the
<command>yum</command> caches in this directory after
installing a new RPM. PlanetLab Central cannot support any
changes to this file.</para></listitem>
<listitem><para><filename>/var/www/html/xml</filename>: This
directory contains various XML files that the Slice Creation
Service uses to determine the state of slices. These XML
files are refreshed periodically by <command>cron</command>
jobs running in the MyPLC root.</para></listitem>
<para><filename>/etc/init.d/plc</filename>: This file
is a System V init script installed on your host filesystem
that allows you to start up and shut down MyPLC with a single
command. On a Red Hat or Fedora host system, it is customary to
use the <command>service</command> command to invoke System V
<example id="StartingAndStoppingMyPLC">
<title>Starting and stopping MyPLC.</title>
<programlisting><![CDATA[# Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop]]></programlisting>
<para>Like all other registered System V init services, MyPLC is
started and shut down automatically when your host system boots
and powers off. You may disable automatic startup by invoking
the <command>chkconfig</command> command on a Red Hat or Fedora
<title>Disabling automatic startup of MyPLC.</title>
<programlisting><![CDATA[# Disable automatic startup
chkconfig plc off

# Enable automatic startup
chkconfig plc on]]></programlisting>
<listitem><para><filename>/etc/sysconfig/plc</filename>: This
file is a shell script fragment that defines the variables
<envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
the values of these variables are <filename>/plc/root</filename>
and <filename>/plc/data</filename>, respectively. If you wish,
you may move your MyPLC installation to another location on your
host filesystem and edit the values of these variables
appropriately, but you will break the RPM upgrade
process. PlanetLab Central cannot support any changes to this
file.</para></listitem>
<listitem><para><filename>/etc/planetlab</filename>: This
symlink to <filename>/plc/data/etc/planetlab</filename> is
installed on the host system for convenience.</para></listitem>
<title>Quickstart</title>
<para>Once installed, start MyPLC (see <xref
linkend="StartingAndStoppingMyPLC" />). MyPLC must be started as
root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the
following:</para>
<title>A successful MyPLC startup.</title>
<programlisting><![CDATA[Mounting PLC: [ OK ]
PLC: Generating network files: [ OK ]
PLC: Starting system logger: [ OK ]
PLC: Starting database server: [ OK ]
PLC: Generating SSL certificates: [ OK ]
PLC: Generating SSH keys: [ OK ]
PLC: Starting web server: [ OK ]
PLC: Bootstrapping the database: [ OK ]
PLC: Starting crond: [ OK ]
PLC: Rebuilding Boot CD: [ OK ]
PLC: Rebuilding Boot Manager: [ OK ]
<para>If <filename>/plc/root</filename> is mounted successfully, a
complete log file of the startup process may be found at
<filename>/plc/root/var/log/boot.log</filename>. Possible reasons
for failure of each step include:</para>
<listitem><para><literal>Mounting PLC</literal>: If this step
fails, first ensure that you started MyPLC as root. Check
<filename>/etc/sysconfig/plc</filename> to ensure that
<envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
right locations. You may also have too many existing loopback
mounts, or your kernel may not support loopback mounting, bind
mounting, or the ext3 filesystem. Try freeing at least one
loopback device, or re-compiling your kernel to support loopback
mounting, bind mounting, and the ext3
filesystem.</para></listitem>
<listitem><para><literal>Starting database server</literal>: If
this step fails, check
<filename>/plc/root/var/log/pgsql</filename> and
<filename>/plc/root/var/log/boot.log</filename>. The most common
reason for failure is that the default PostgreSQL port, TCP port
5432, is already in use. Check that you are not running a
PostgreSQL server on the host system.</para></listitem>
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
<filename>/plc/root/var/log/httpd/error_log</filename> and
<filename>/plc/root/var/log/boot.log</filename> for obvious
errors. The most common reason for failure is that the default
web ports, TCP ports 80 and 443, are already in use. Check that
you are not running a web server on the host
system.</para></listitem>
<listitem><para><literal>Bootstrapping the database</literal>:
If this step fails, it is likely that the previous step
(<literal>Starting web server</literal>) also failed. Another
reason that it could fail is if <envar>PLC_API_HOST</envar> (see
<xref linkend="ChangingTheConfiguration" />) does not resolve to
the host on which the API server has been enabled. By default,
all services, including the API server, are enabled and run on
the same host, so check that <envar>PLC_API_HOST</envar> is
either <filename>localhost</filename> or resolves to a local IP
address.</para></listitem>
<listitem><para><literal>Starting crond</literal>: If this step
fails, it is likely that the previous steps (<literal>Starting
web server</literal> and <literal>Bootstrapping the
database</literal>) also failed. If not, check
<filename>/plc/root/var/log/boot.log</filename> for obvious
errors. This step starts the <command>cron</command> service and
generates the initial set of XML files that the Slice Creation
Service uses to determine slice state.</para></listitem>
<para>If no failures occur, then MyPLC should be active with a
default configuration. Open a web browser on the host system and
visit <literal>http://localhost/</literal>, which should bring you
to the front page of your PLC installation. The password of the
default administrator account
<literal>root@localhost.localdomain</literal> (set by
<envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
<envar>PLC_ROOT_PASSWORD</envar>).</para>
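<para>You can also confirm from the command line that the web server
is answering. This is a sketch, assuming <command>curl</command> is
installed on the host; the certificate generated at startup is
self-signed, so certificate verification must be disabled:</para>
<informalexample>
<programlisting><![CDATA[# Fetch the front page over HTTP and HTTPS (-k skips verification
# of the self-signed certificate)
curl http://localhost/ | head
curl -k https://localhost/ | head]]></programlisting>
</informalexample>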
<section id="ChangingTheConfiguration">
<title>Changing the configuration</title>
<para>After verifying that MyPLC is working correctly, shut it
down and begin changing some of the default variable
values. Shut down MyPLC with <command>service plc stop</command>
(see <xref linkend="StartingAndStoppingMyPLC" />). With a text
editor, open the file
<filename>/etc/planetlab/plc_config.xml</filename>. This file is
a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers must be
alphanumeric, plus underscore. A variable is referred to
canonically as the uppercase concatenation of its category
identifier, an underscore, and its variable identifier. Thus, a
variable with an <literal>id</literal> of
<literal>slice_prefix</literal> in the <literal>plc</literal>
category is referred to canonically as
<envar>PLC_SLICE_PREFIX</envar>.</para>
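<para>The canonical name can be derived mechanically: concatenate
the category identifier, an underscore, and the variable
identifier, then uppercase the result. A small shell illustration,
using the identifiers from the example above:</para>
<informalexample>
<programlisting><![CDATA[# Build the canonical variable name from a category id and a
# variable id, as described above.
category=plc
id=slice_prefix
echo "${category}_${id}" | tr '[:lower:]' '[:upper:]'   # PLC_SLICE_PREFIX]]></programlisting>
</informalexample>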
<para>The reason for this convention is that during MyPLC
startup, <filename>plc_config.xml</filename> is translated into
several different languages—shell, PHP, and
Python—so that scripts written in each of these languages
can refer to the same underlying configuration. Most MyPLC
scripts are written in shell, so the convention for shell
variables predominates.</para>
<para>The variables that you should change immediately are:</para>
<listitem><para><envar>PLC_NAME</envar>: Change this to the
name of your PLC installation.</para></listitem>
<listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
to a more secure password.</para></listitem>
<listitem><para><envar>PLC_NET_DNS1</envar>,
<envar>PLC_NET_DNS2</envar>: Change these to the IP addresses
of your primary and secondary DNS servers. Check
<filename>/etc/resolv.conf</filename> on your host
filesystem.</para></listitem>
<listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
Change this to the e-mail address at which you would like to
receive support requests.</para></listitem>
<listitem><para><envar>PLC_DB_HOST</envar>,
<envar>PLC_API_HOST</envar>, <envar>PLC_WWW_HOST</envar>,
<envar>PLC_BOOT_HOST</envar>: Change all of these to the
preferred FQDN of your host system.</para></listitem>
<para>After changing these variables, save the file, then
restart MyPLC with <command>service plc start</command>. You
should notice that the password of the default administrator
account is no longer <literal>root</literal>, and that the
default site name includes the name of your PLC installation
instead of PlanetLab.</para>
<title>Installing nodes</title>
<para>Install your first node by clicking <literal>Add
Node</literal> under the <literal>Nodes</literal> tab. Fill in
all the appropriate details, then click
<literal>Add</literal>. Download the node's configuration file
by clicking <literal>Download configuration file</literal> on
the <emphasis role="bold">Node Details</emphasis> page for the
node. Save it to a floppy disk or USB key as detailed in <xref
linkend="TechsGuide" />.</para>
<para>Follow the rest of the instructions in <xref
linkend="TechsGuide" /> for creating a Boot CD and installing
the node, except download the Boot CD image from the
<filename>/download</filename> directory of your PLC
installation, not from PlanetLab Central. The images located
here are customized for your installation. If you change the
hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
if the SSL certificate of your boot server expires, MyPLC will
regenerate the certificate and rebuild the Boot CD to include
it. If this occurs, you must replace all Boot CDs created
before the certificate was regenerated.</para>
<para>The installation process for a node has significantly
improved since PlanetLab 3.3. It should now take only a few
seconds for a new node to become ready to create slices.</para>
<title>Administering nodes</title>
<para>You may administer nodes as <literal>root</literal> by
using the SSH key stored in
<filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
<title>Accessing nodes via SSH. Replace
<literal>node</literal> with the hostname of the node.</title>
<programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
<para>Besides the standard Linux log files located in
<filename>/var/log</filename>, several other files can give you
clues about any problems with active processes:</para>
<listitem><para><filename>/var/log/pl_nm</filename>: The log
file for the Node Manager.</para></listitem>
<listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
The log file for the Slice Creation Service.</para></listitem>
<listitem><para><filename>/var/log/propd</filename>: The log
file for Proper, the service which allows certain slices to
perform certain privileged operations in the root
context.</para></listitem>
<listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
The log file for PlanetFlow, the network traffic auditing
service.</para></listitem>
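<para>When debugging an active node, it is often quickest to watch
these logs as events occur. A sketch, run as
<literal>root</literal> on the node, using the paths listed
above:</para>
<informalexample>
<programlisting><![CDATA[# Follow the Node Manager and Slice Creation Service logs together
tail -f /var/log/pl_nm /vservers/pl_conf/var/log/pl_conf]]></programlisting>
</informalexample>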
<title>Creating a slice</title>
<para>Create a slice by clicking <literal>Create Slice</literal>
under the <literal>Slices</literal> tab. Fill in all the
appropriate details, then click <literal>Create</literal>. Add
nodes to the slice by clicking <literal>Manage Nodes</literal>
on the <emphasis role="bold">Slice Details</emphasis> page for
the slice.</para>
<para>A <command>cron</command> job runs every five minutes and
updates
<filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
with information about current slice state. The Slice Creation
Service running on every node polls this file every ten minutes
to determine if it needs to create or delete any slices. You may
accelerate this process manually if desired.</para>
<title>Forcing slice creation on a node.</title>
<programlisting><![CDATA[# Update slices.xml immediately
service plc start crond

# Kick the Slice Creation Service on a particular node.
ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
vserver pl_conf exec service pl_conf restart]]></programlisting>
<title>Bibliography</title>
<biblioentry id="TechsGuide">
<author><firstname>Mark</firstname><surname>Huang</surname></author>
url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
Technical Contact's Guide</ulink></title>