1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
12 <firstname>Mark Huang</firstname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
42 <revnumber>1.2</revnumber>
43 <date>August 18, 2006</date>
44 <authorinitials>TPT</authorinitials>
46 <para>Review section on configuration and introduce <command>plc-config-tty</command>.</para>
47 <para>Present implementation details last.</para>
54 <title>Overview</title>
56 <para>MyPLC is a complete PlanetLab Central (PLC) portable
57 installation contained within a <command>chroot</command>
58 jail. The default installation consists of a web server, an
59 XML-RPC API server, a boot server, and a database server: the core
60 components of PLC. The installation is customized through an
61 easy-to-use graphical interface. All PLC services are started up
62 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is greatly simplified by
containing PLC services within a virtual filesystem. By packaging it in such a
66 manner, MyPLC may also be run on any modern Linux distribution,
67 and could conceivably even run in a PlanetLab slice.</para>
69 <figure id="Architecture">
70 <title>MyPLC architecture</title>
73 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
76 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
79 <phrase>MyPLC architecture</phrase>
82 <para>MyPLC should be viewed as a single application that
83 provides multiple functions and can run on any host
89 <section> <title> Purpose of the <emphasis> myplc-devel
90 </emphasis> package </title>
91 <para> The <emphasis>myplc</emphasis> package comes with all
92 required node software, rebuilt from the public PlanetLab CVS
repository. If for any reason you need to implement your own
customized version of this software, you can use the
<emphasis>myplc-devel</emphasis> package instead to set up your
own development environment, including a local CVS repository;
you can then freely manage your changes and rebuild your
customized version of <emphasis>myplc</emphasis>. We also
document good practices that allow you to resynchronize your
local CVS repository with further evolution of the mainstream
public PlanetLab software. </para> </section>
106 <section id="Requirements"> <title> Requirements </title>
108 <para> <emphasis>myplc</emphasis> and
109 <emphasis>myplc-devel</emphasis> were designed as
<command>chroot</command> jails in order to reduce the requirements on
your host operating system. In theory, they should therefore
work on virtually any Linux 2.6-based distribution, whether or
not it supports RPM. </para>
<para> In practice, however, there are some known limitations,
so please read the following notes before proceeding with the
installation.</para>
<para> As of 17 August 2006 (i.e. <emphasis>myplc-0.5-2</emphasis>):</para>
<listitem><para> The software is largely based on <emphasis>Fedora
Core 4</emphasis>. Please note that the build server at Princeton
runs <emphasis>Fedora Core 2</emphasis>, together with an upgraded
128 <listitem><para> myplc and myplc-devel are known to work on both
129 <emphasis>Fedora Core 2</emphasis> and <emphasis>Fedora Core
4</emphasis>. Please note, however, that on FC4 at least it is
highly recommended to use the <application>Security Level
Configuration</application> utility and to <emphasis>switch off
SELinux</emphasis> on your box, because:</para>
myplc requires you to run SELinux as 'Permissive' at most
myplc-devel requires you to turn SELinux off.
<listitem> <para> In addition, as far as myplc is concerned, you
need to check your firewall configuration, since the
<emphasis>http</emphasis> and <emphasis>https</emphasis> ports
must of course be open in order to accept connections from the
managed nodes and from users' desktops. </para> </listitem>
154 <section id="Installation">
<title>Installing and using MyPLC</title>
157 <para>Though internally composed of commodity software
158 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
160 no external dependencies, allowing it to be installed on
161 practically any Linux 2.6 based distribution.</para>
164 <title>Installing MyPLC.</title>
167 <listitem> <para>If your distribution supports RPM:</para>
168 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm]]></programlisting></listitem>
170 <listitem> <para>If your distribution does not support RPM:</para>
171 <programlisting><![CDATA[# cd /tmp
172 # wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
174 # rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
<para> <xref linkend="FilesInvolvedRuntime" /> below explains in
detail the installation strategy and the various files and
directories involved.</para>
183 <section id="QuickStart"> <title> QuickStart </title>
185 <para> On a Red Hat or Fedora host system, it is customary to use
186 the <command>service</command> command to invoke System V init
187 scripts. As the examples suggest, the service must be started as root:</para>
189 <example><title>Starting MyPLC:</title>
190 <programlisting><![CDATA[# service plc start]]></programlisting>
192 <example><title>Stopping MyPLC:</title>
193 <programlisting><![CDATA[# service plc stop]]></programlisting>
<para> In <xref linkend="StartupSequence" />, we provide more
details that may be helpful if the service does not seem to
start correctly.</para>
200 <para>Like all other registered System V init services, MyPLC is
201 started and shut down automatically when your host system boots
202 and powers off. You may disable automatic startup by invoking the
203 <command>chkconfig</command> command on a Red Hat or Fedora host
206 <example> <title>Disabling automatic startup of MyPLC.</title>
207 <programlisting><![CDATA[# chkconfig plc off]]></programlisting></example>
208 <example> <title>Re-enabling automatic startup of MyPLC.</title>
209 <programlisting><![CDATA[# chkconfig plc on]]></programlisting></example>
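<para>To double-check in which runlevels automatic startup is
currently enabled, you can query <command>chkconfig</command>
itself:</para>

<example> <title>Listing the runlevel configuration of MyPLC.</title>
<programlisting><![CDATA[# chkconfig --list plc]]></programlisting></example>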
213 <section id="Configuration">
214 <title>Changing the configuration</title>
216 <para>After verifying that MyPLC is working correctly, shut it
217 down and begin changing some of the default variable
218 values. Shut down MyPLC with <command>service plc stop</command>
219 (see <xref linkend="QuickStart" />). </para>
221 <para> The preferred option for changing the configuration is to
222 use the <command>plc-config-tty</command> tool. This tool comes
223 with the root image, so you need to have it mounted first. The
224 full set of applicable variables is described in <xref
linkend="VariablesDevel" />, but the <command>u</command> command
guides you through the most useful ones. Note that if you
227 plan on federating with other PLCs, <emphasis> it is strongly
228 recommended that you change the <command> PLC_NAME
229 </command> and <command> PLC_SLICE_PREFIX </command>
230 settings. </emphasis>
Here is a sample session:
234 <example><title>Using plc-config-tty for configuration:</title>
235 <programlisting><![CDATA[# service plc mount
237 # chroot /plc/root su -
238 <plc> # plc-config-tty
239 Config file /etc/planetlab/configs/site.xml located under a non-existing directory
240 Want to create /etc/planetlab/configs [y]/n ? y
241 Created directory /etc/planetlab/configs
242 Enter command (u for usual changes, w to save, ? for help) u
243 == PLC_NAME : [PlanetLab Test] OneLab
244 == PLC_SLICE_PREFIX : [pl] thone
245 == PLC_ROOT_USER : [root@localhost.localdomain] root@onelab-plc.inria.fr
246 == PLC_ROOT_PASSWORD : [root] plain-passwd
247 == PLC_MAIL_ENABLED : [false] true
248 == PLC_MAIL_SUPPORT_ADDRESS : [root+support@localhost.localdomain] support@one-lab.org
249 == PLC_BOOT_HOST : [localhost.localdomain] onelab-plc.inria.fr
250 == PLC_NET_DNS1 : [127.0.0.1] 138.96.250.248
251 == PLC_NET_DNS2 : [None] 138.96.250.249
252 Enter command (u for usual changes, w to save, ? for help) w
253 Wrote /etc/planetlab/configs/site.xml
255 /etc/planetlab/default_config.xml
256 and /etc/planetlab/configs/site.xml
257 into /etc/planetlab/plc_config.xml
258 You might want to type 'r' (restart plc) or 'q' (quit)
259 Enter command (u for usual changes, w to save, ? for help) r
260 ==================== Stopping plc
262 ==================== Starting plc
264 Enter command (u for usual changes, w to save, ? for help) q
<para>If you used this method for configuring, you can skip to
<xref linkend="LoginRealUser" />. As an alternative to using
<command>plc-config-tty</command>, you may also use a text
editor, but this requires some understanding of how the
configuration files are used within myplc. The
<emphasis>default</emphasis> configuration is stored in a file
named <filename>/etc/planetlab/default_config.xml</filename>,
which is designed to remain intact. You may store your local
changes in any file located in the <filename>configs/</filename>
sub-directory; these are loaded on top of the defaults. The
resulting configuration is then stored in
<filename>/etc/planetlab/plc_config.xml</filename>, which is
used as the reference configuration.</para>
<para> Using a separate file for storing local changes only, as
<command>plc-config-tty</command> does, is not a workable option
with a text editor, because it would involve tedious XML
re-assembly. So in that case your local changes should go directly
into <filename>/etc/planetlab/plc_config.xml</filename>. Be
warned, however, that any changes you make this way may be lost
if you use <command>plc-config-tty</command> later on. </para>
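<para>Whichever method you use, you can inspect the resulting
configuration directly from the host system; the path below
assumes the default <envar>PLC_DATA</envar> location of
<filename>/plc/data</filename>:</para>

<example><title>Inspecting the merged configuration from the host.</title>
<programlisting><![CDATA[# grep -i slice_prefix /plc/data/etc/planetlab/plc_config.xml]]></programlisting></example>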
292 <para>This file is a self-documenting configuration file written
293 in XML. Variables are divided into categories. Variable
294 identifiers must be alphanumeric, plus underscore. A variable is
295 referred to canonically as the uppercase concatenation of its
296 category identifier, an underscore, and its variable
297 identifier. Thus, a variable with an <literal>id</literal> of
298 <literal>slice_prefix</literal> in the <literal>plc</literal>
299 category is referred to canonically as
300 <envar>PLC_SLICE_PREFIX</envar>.</para>
302 <para>The reason for this convention is that during MyPLC
303 startup, <filename>plc_config.xml</filename> is translated into
304 several different languages—shell, PHP, and
305 Python—so that scripts written in each of these languages
306 can refer to the same underlying configuration. Most MyPLC
307 scripts are written in shell, so the convention for shell
308 variables predominates.</para>
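<para>As an illustration of this convention, here is a
hypothetical variable definition (the exact markup used in
<filename>plc_config.xml</filename> may differ) together with the
shell variable it would be translated into:</para>

<example><title>From XML variable to shell variable (sketch).</title>
<programlisting><![CDATA[<category id="plc">
  <variable id="slice_prefix">
    <value>pl</value>
  </variable>
</category>

# after translation, shell scripts simply see:
PLC_SLICE_PREFIX='pl']]></programlisting></example>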
310 <para>The variables that you should change immediately are:</para>
313 <listitem><para><envar>PLC_NAME</envar>: Change this to the
314 name of your PLC installation.</para></listitem>
315 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
316 to a more secure password.</para></listitem>
318 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
319 Change this to the e-mail address at which you would like to
320 receive support requests.</para></listitem>
322 <listitem><para><envar>PLC_DB_HOST</envar>,
323 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
324 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
325 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
326 <envar>PLC_BOOT_IP</envar>: Change all of these to the
327 preferred FQDN and external IP address of your host
328 system.</para></listitem>
331 <para> After changing these variables,
332 save the file, then restart MyPLC with <command>service plc
333 start</command>. You should notice that the password of the
334 default administrator account is no longer
335 <literal>root</literal>, and that the default site name includes
336 the name of your PLC installation instead of PlanetLab. As a
side effect of these changes, the ISO images for the boot CDs
now have new names, so you can freely remove the ones named
after 'PlanetLab Test', which is the default value of
<envar>PLC_NAME</envar>.</para>
343 <section id="LoginRealUser"> <title> Login as a real user </title>
<para>Now that myplc is up and running, you can connect to the
web site, which by default runs on port 80. You can either
directly use the default administrator user that you configured
in <envar>PLC_ROOT_USER</envar> and
<envar>PLC_ROOT_PASSWORD</envar>, or create a real user through
the 'Joining' tab. Do not forget to select both the PI and tech
roles, and to select the only site created at this stage.
Log in as the administrator to enable this user, then log in as
the real user.</para>
357 <title>Installing nodes</title>
359 <para>Install your first node by clicking <literal>Add
360 Node</literal> under the <literal>Nodes</literal> tab. Fill in
361 all the appropriate details, then click
362 <literal>Add</literal>. Download the node's configuration file
363 by clicking <literal>Download configuration file</literal> on
364 the <emphasis role="bold">Node Details</emphasis> page for the
365 node. Save it to a floppy disk or USB key as detailed in <xref
366 linkend="TechsGuide" />.</para>
368 <para>Follow the rest of the instructions in <xref
369 linkend="TechsGuide" /> for creating a Boot CD and installing
370 the node, except download the Boot CD image from the
371 <filename>/download</filename> directory of your PLC
372 installation, not from PlanetLab Central. The images located
373 here are customized for your installation. If you change the
374 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
375 if the SSL certificate of your boot server expires, MyPLC will
376 regenerate it and rebuild the Boot CD with the new
377 certificate. If this occurs, you must replace all Boot CDs
378 created before the certificate was regenerated.</para>
380 <para>The installation process for a node has significantly
381 improved since PlanetLab 3.3. It should now take only a few
382 seconds for a new node to become ready to create slices.</para>
386 <title>Administering nodes</title>
388 <para>You may administer nodes as <literal>root</literal> by
389 using the SSH key stored in
390 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
393 <title>Accessing nodes via SSH. Replace
394 <literal>node</literal> with the hostname of the node.</title>
396 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
399 <para>Besides the standard Linux log files located in
400 <filename>/var/log</filename>, several other files can give you
401 clues about any problems with active processes:</para>
404 <listitem><para><filename>/var/log/pl_nm</filename>: The log
405 file for the Node Manager.</para></listitem>
407 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
408 The log file for the Slice Creation Service.</para></listitem>
<listitem><para><filename>/var/log/propd</filename>: The log
file for Proper, the service that allows certain slices to
perform privileged operations in the root
context.</para></listitem>
415 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
416 The log file for PlanetFlow, the network traffic auditing
417 service.</para></listitem>
422 <title>Creating a slice</title>
424 <para>Create a slice by clicking <literal>Create Slice</literal>
425 under the <literal>Slices</literal> tab. Fill in all the
426 appropriate details, then click <literal>Create</literal>. Add
427 nodes to the slice by clicking <literal>Manage Nodes</literal>
428 on the <emphasis role="bold">Slice Details</emphasis> page for
431 <para>A <command>cron</command> job runs every five minutes and
433 <filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
434 with information about current slice state. The Slice Creation
435 Service running on every node polls this file every ten minutes
436 to determine if it needs to create or delete any slices. You may
437 accelerate this process manually if desired.</para>
440 <title>Forcing slice creation on a node.</title>
442 <programlisting><![CDATA[# Update slices.xml immediately
443 service plc start crond
445 # Kick the Slice Creation Service on a particular node.
446 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
447 vserver pl_conf exec service pl_conf restart]]></programlisting>
451 <section id="StartupSequence">
452 <title>Understanding the startup sequence</title>
<para>During the service startup described in <xref
linkend="QuickStart" />, observe the output of the
<command>service plc start</command> command for any
failures. If no failures occur, you should see output similar
to the following:</para>
460 <title>A successful MyPLC startup.</title>
462 <programlisting><![CDATA[Mounting PLC: [ OK ]
463 PLC: Generating network files: [ OK ]
464 PLC: Starting system logger: [ OK ]
465 PLC: Starting database server: [ OK ]
466 PLC: Generating SSL certificates: [ OK ]
467 PLC: Configuring the API: [ OK ]
468 PLC: Updating GPG keys: [ OK ]
469 PLC: Generating SSH keys: [ OK ]
470 PLC: Starting web server: [ OK ]
471 PLC: Bootstrapping the database: [ OK ]
472 PLC: Starting DNS server: [ OK ]
473 PLC: Starting crond: [ OK ]
474 PLC: Rebuilding Boot CD: [ OK ]
475 PLC: Rebuilding Boot Manager: [ OK ]
476 PLC: Signing node packages: [ OK ]
480 <para>If <filename>/plc/root</filename> is mounted successfully, a
481 complete log file of the startup process may be found at
482 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
483 for failure of each step include:</para>
486 <listitem><para><literal>Mounting PLC</literal>: If this step
487 fails, first ensure that you started MyPLC as root. Check
488 <filename>/etc/sysconfig/plc</filename> to ensure that
489 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
490 right locations. You may also have too many existing loopback
491 mounts, or your kernel may not support loopback mounting, bind
492 mounting, or the ext3 filesystem. Try freeing at least one
493 loopback device, or re-compiling your kernel to support loopback
494 mounting, bind mounting, and the ext3 filesystem. If you see an
495 error similar to <literal>Permission denied while trying to open
496 /plc/root.img</literal>, then SELinux may be enabled. See <xref
497 linkend="Requirements" /> above for details.</para></listitem>
499 <listitem><para><literal>Starting database server</literal>: If
500 this step fails, check
501 <filename>/plc/root/var/log/pgsql</filename> and
502 <filename>/plc/root/var/log/boot.log</filename>. The most common
503 reason for failure is that the default PostgreSQL port, TCP port
504 5432, is already in use. Check that you are not running a
505 PostgreSQL server on the host system.</para></listitem>
507 <listitem><para><literal>Starting web server</literal>: If this
509 <filename>/plc/root/var/log/httpd/error_log</filename> and
510 <filename>/plc/root/var/log/boot.log</filename> for obvious
511 errors. The most common reason for failure is that the default
512 web ports, TCP ports 80 and 443, are already in use. Check that
513 you are not running a web server on the host
514 system.</para></listitem>
516 <listitem><para><literal>Bootstrapping the database</literal>:
517 If this step fails, it is likely that the previous step
518 (<literal>Starting web server</literal>) also failed. Another
519 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
520 <xref linkend="Configuration" />) does not resolve to
521 the host on which the API server has been enabled. By default,
522 all services, including the API server, are enabled and run on
523 the same host, so check that <envar>PLC_API_HOST</envar> is
524 either <filename>localhost</filename> or resolves to a local IP
525 address. Also check that <envar>PLC_ROOT_USER</envar> looks like
526 an e-mail address.</para></listitem>
528 <listitem><para><literal>Starting crond</literal>: If this step
529 fails, it is likely that the previous steps (<literal>Starting
530 web server</literal> and <literal>Bootstrapping the
531 database</literal>) also failed. If not, check
532 <filename>/plc/root/var/log/boot.log</filename> for obvious
533 errors. This step starts the <command>cron</command> service and
534 generates the initial set of XML files that the Slice Creation
535 Service uses to determine slice state.</para></listitem>
538 <para>If no failures occur, then MyPLC should be active with a
539 default configuration. Open a web browser on the host system and
540 visit <literal>http://localhost/</literal>, which should bring you
541 to the front page of your PLC installation. The password of the
542 default administrator account
543 <literal>root@localhost.localdomain</literal> (set by
544 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
545 <envar>PLC_ROOT_PASSWORD</envar>).</para>
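<para>A quick way to check from the host that the web server is
answering, and to inspect the full startup log when it is
not:</para>

<example><title>Checking the front page and the startup log.</title>
<programlisting><![CDATA[# curl -I http://localhost/
# tail -n 50 /plc/root/var/log/boot.log]]></programlisting></example>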
548 <section id="FilesInvolvedRuntime"> <title> Files and directories
549 involved in <emphasis>myplc</emphasis></title>
550 <para>MyPLC installs the following files and directories:</para>
554 <listitem><para><filename>/plc/root.img</filename>: The main
555 root filesystem of the MyPLC application. This file is an
556 uncompressed ext3 filesystem that is loopback mounted on
557 <filename>/plc/root</filename> when MyPLC starts. This
558 filesystem, even when mounted, should be treated as an opaque
559 binary that can and will be replaced in its entirety by any
560 upgrade of MyPLC.</para></listitem>
562 <listitem><para><filename>/plc/root</filename>: The mount point
563 for <filename>/plc/root.img</filename>. Once the root filesystem
564 is mounted, all MyPLC services run in a
565 <command>chroot</command> jail based in this
566 directory.</para></listitem>
569 <para><filename>/plc/data</filename>: The directory where user
570 data and generated files are stored. This directory is bind
571 mounted onto <filename>/plc/root/data</filename> so that it is
572 accessible as <filename>/data</filename> from within the
573 <command>chroot</command> jail. Files in this directory are
574 marked with <command>%config(noreplace)</command> in the
RPM. That is, during an upgrade of MyPLC, if a file has not
changed since the last installation or upgrade, it is replaced
by the new version. If the file has changed, the new version
is instead created with a
<filename>.rpmnew</filename> extension. Symlinks within the
580 MyPLC root filesystem ensure that the following directories
581 (relative to <filename>/plc/root</filename>) are stored
582 outside the MyPLC filesystem image:</para>
585 <listitem><para><filename>/etc/planetlab</filename>: This
586 directory contains the configuration files, keys, and
587 certificates that define your MyPLC
588 installation.</para></listitem>
590 <listitem><para><filename>/var/lib/pgsql</filename>: This
591 directory contains PostgreSQL database
592 files.</para></listitem>
594 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
595 directory contains node installation logs.</para></listitem>
597 <listitem><para><filename>/var/www/html/boot</filename>: This
598 directory contains the Boot Manager, customized for your MyPLC
599 installation, and its data files.</para></listitem>
601 <listitem><para><filename>/var/www/html/download</filename>: This
602 directory contains Boot CD images, customized for your MyPLC
603 installation.</para></listitem>
605 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
606 directory is where you should install node package updates,
607 if any. By default, nodes are installed from the tarball
609 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
610 which is pre-built from the latest PlanetLab Central
611 sources, and installed as part of your MyPLC
612 installation. However, nodes will attempt to install any
613 newer RPMs located in
614 <filename>/var/www/html/install-rpms/planetlab</filename>,
615 after initial installation and periodically thereafter. You
616 must run <command>yum-arch</command> and
617 <command>createrepo</command> to update the
618 <command>yum</command> caches in this directory after
619 installing a new RPM. PlanetLab Central cannot support any
620 changes to this directory.</para></listitem>
622 <listitem><para><filename>/var/www/html/xml</filename>: This
623 directory contains various XML files that the Slice Creation
624 Service uses to determine the state of slices. These XML
625 files are refreshed periodically by <command>cron</command>
626 jobs running in the MyPLC root.</para></listitem>
<listitem><para><filename>/root</filename>: This is the
location of the root user's home directory; for your
convenience it is stored under <filename>/data</filename> so
that your local customizations survive across updates. This
feature is inherited from the
<command>myplc-devel</command> package, where it is probably
more useful. </para></listitem>
639 <listitem id="MyplcInitScripts">
640 <para><filename>/etc/init.d/plc</filename>: This file
641 is a System V init script installed on your host filesystem,
642 that allows you to start up and shut down MyPLC with a single
643 command, as described in <xref linkend="QuickStart" />.</para>
646 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
647 file is a shell script fragment that defines the variables
648 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
649 the values of these variables are <filename>/plc/root</filename>
650 and <filename>/plc/data</filename>, respectively. If you wish,
651 you may move your MyPLC installation to another location on your
652 host filesystem and edit the values of these variables
653 appropriately, but you will break the RPM upgrade
654 process. PlanetLab Central cannot support any changes to this
655 file.</para></listitem>
657 <listitem><para><filename>/etc/planetlab</filename>: This
658 symlink to <filename>/plc/data/etc/planetlab</filename> is
659 installed on the host system for convenience.</para></listitem>
664 <section id="DevelopmentEnvironment">
665 <title>Rebuilding and customizing MyPLC</title>
667 <para>The MyPLC package, though distributed as an RPM, is not a
668 traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and the
PlanetLab source code base assumes throughout that the build
environment is based on Fedora Core 4 and that access to a
complete Fedora Core 4 mirror is available.</para>
674 <para>For this reason, it is recommended that you only rebuild
675 MyPLC (or any of its components) from within the MyPLC development
676 environment. The MyPLC development environment is similar to MyPLC
677 itself in that it is a portable filesystem contained within a
678 <command>chroot</command> jail. The filesystem contains all the
679 necessary tools required to rebuild MyPLC, as well as a snapshot
680 of the PlanetLab source code base in the form of a local CVS
684 <title>Installation</title>
686 <para>Install the MyPLC development environment similarly to how
687 you would install MyPLC. You may install both packages on the same
688 host system if you wish. As with MyPLC, the MyPLC development
689 environment should be treated as a monolithic software
690 application, and any files present in the
691 <command>chroot</command> jail should not be modified directly, as
692 they are subject to upgrade.</para>
695 <listitem> <para>If your distribution supports RPM:</para>
696 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm]]></programlisting></listitem>
698 <listitem> <para>If your distribution does not support RPM:</para>
699 <programlisting><![CDATA[# cd /tmp
700 # wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
702 # rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
707 <title>Configuration</title>
709 <para> The default configuration should work as-is on most
sites. Configuring the development package is done in much the
same way as for <emphasis>myplc</emphasis>, as described in
<xref linkend="Configuration"
/>. <command>plc-config-tty</command> supports a
<emphasis>-d</emphasis> option for selecting the
<emphasis>myplc-devel</emphasis> case, which can be useful when
the tool cannot guess it by itself. Refer to <xref
linkend="VariablesDevel" /> for a list of variables.</para>
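<para>For instance, a development-side configuration session
might look as follows. This is only a sketch: it assumes that
the <command>plc-devel</command> script supports the same
<command>mount</command> action as <command>plc</command>, and
the exact prompts may differ.</para>

<example><title>Using plc-config-tty -d (sketch).</title>
<programlisting><![CDATA[# service plc-devel mount
# chroot /plc/devel/root plc-config-tty -d
Enter command (u for usual changes, w to save, ? for help) u
== PLC_DEVEL_FEDORA_URL : [file:///data/fedora] http://coblitz.planet-lab.org/pub/fedora
Enter command (u for usual changes, w to save, ? for help) w
Enter command (u for usual changes, w to save, ? for help) q]]></programlisting></example>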
720 <section id="FilesInvolvedDevel"> <title> Files and directories
involved in <emphasis>myplc-devel</emphasis></title>
723 <para>The MyPLC development environment installs the following
724 files and directories:</para>
727 <listitem><para><filename>/plc/devel/root.img</filename>: The
728 main root filesystem of the MyPLC development environment. This
729 file is an uncompressed ext3 filesystem that is loopback mounted
730 on <filename>/plc/devel/root</filename> when the MyPLC
731 development environment is initialized. This filesystem, even
732 when mounted, should be treated as an opaque binary that can and
733 will be replaced in its entirety by any upgrade of the MyPLC
734 development environment.</para></listitem>
736 <listitem><para><filename>/plc/devel/root</filename>: The mount
738 <filename>/plc/devel/root.img</filename>.</para></listitem>
741 <para><filename>/plc/devel/data</filename>: The directory
742 where user data and generated files are stored. This directory
743 is bind mounted onto <filename>/plc/devel/root/data</filename>
744 so that it is accessible as <filename>/data</filename> from
745 within the <command>chroot</command> jail. Files in this
746 directory are marked with
747 <command>%config(noreplace)</command> in the RPM. Symlinks
748 ensure that the following directories (relative to
749 <filename>/plc/devel/root</filename>) are stored outside the
750 root filesystem image:</para>
753 <listitem><para><filename>/etc/planetlab</filename>: This
754 directory contains the configuration files that define your
755 MyPLC development environment.</para></listitem>
757 <listitem><para><filename>/cvs</filename>: A
758 snapshot of the PlanetLab source code is stored as a CVS
759 repository in this directory. Files in this directory will
760 <emphasis role="bold">not</emphasis> be updated by an upgrade of
761 <filename>myplc-devel</filename>. See <xref
762 linkend="UpdatingCVS" /> for more information about updating
763 PlanetLab source code.</para></listitem>
<listitem><para><filename>/build</filename>:
Builds are stored in this directory. This directory is bind
mounted onto <filename>/plc/devel/root/build</filename> so that
it is accessible as <filename>/build</filename> from within the
<command>chroot</command> jail. The build scripts in this
directory are themselves source controlled; see <xref
linkend="BuildingMyPLC" /> for more information about executing
builds.</para></listitem>
<listitem><para><filename>/root</filename>: The home
directory of the root user. For your convenience, it is
stored under <filename>/data</filename> so that your local
customizations survive across
updates.</para></listitem> </itemizedlist> </listitem>
<para><filename>/etc/init.d/plc-devel</filename>: A
System V init script installed on the host filesystem that
allows you to start up and shut down the MyPLC development
environment with a single command.</para>
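<para>The loopback and bind mounts described above can be checked
from the host once the init script has run. The following is a
minimal sketch, not part of MyPLC itself; it assumes the default
install paths and degrades gracefully when the development
environment is not installed:</para>

```shell
# Sketch: start the environment (when the init script is present) and list
# the /plc/devel mounts. Paths are the defaults described in the text.
if [ -x /etc/init.d/plc-devel ]; then
    /etc/init.d/plc-devel start
    mount | grep /plc/devel
else
    echo "plc-devel init script not installed"
fi
```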
<title>Fedora Core 4 mirror requirement</title>

<para>The MyPLC development environment requires access to a
complete Fedora Core 4 i386 RPM repository, because several
different filesystems based upon Fedora Core 4 are constructed
during the process of building MyPLC. You may configure the
location of this repository via the
<envar>PLC_DEVEL_FEDORA_URL</envar> variable in
<filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
value of the variable should be a URL that points to the top
level of a Fedora mirror that provides the
<filename>base</filename>, <filename>updates</filename>, and
<filename>extras</filename> repositories, e.g.,</para>
<listitem><para><filename>file:///data/fedora</filename></para></listitem>
<listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
<listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
<listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
<listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
<para>As implied by the list, the repository may be located on
the local filesystem, or it may be located on a remote FTP or
HTTP server. A URL beginning with <filename>file://</filename>
must refer to a location that exists relative to the root of
the <command>chroot</command> jail. For optimum performance and
reproducibility, specify
<envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
download all Fedora Core 4 RPMS into
<filename>/plc/devel/data/fedora</filename> on the host system
after installing <filename>myplc-devel</filename>. Use a tool
such as <command>wget</command> or <command>rsync</command> to
download the RPMS from a public mirror:</para>
<title>Setting up a local Fedora Core 4 repository.</title>

<programlisting><![CDATA[# mkdir -p /plc/devel/data/fedora
# cd /plc/devel/data/fedora

# for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
> wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
> done]]></programlisting>
<para>Change the repository URI and <command>--cut-dirs</command>
level as needed to produce a hierarchy that resembles:</para>

<programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
/plc/devel/data/fedora/core/updates/4/i386
/plc/devel/data/fedora/extras/4/i386]]></programlisting>
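<para>A quick way to confirm that a local mirror matches this
layout is to probe each repository path before starting a
build. This is a sketch; the <envar>MIRROR</envar> variable is an
assumption and should match the path portion of your
<envar>PLC_DEVEL_FEDORA_URL</envar>:</para>

```shell
# Sketch: verify the expected Fedora Core 4 repository layout under a
# local mirror root. MIRROR is hypothetical; the default matches the
# hierarchy shown above.
MIRROR=${MIRROR:-/plc/devel/data/fedora}
missing=0
for repo in core/4/i386/os core/updates/4/i386 extras/4/i386; do
    if [ ! -d "$MIRROR/$repo" ]; then
        echo "missing: $MIRROR/$repo"
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then
    echo "mirror layout OK"
else
    echo "mirror layout incomplete"
fi
```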
<para>A list of additional Fedora Core 4 mirrors is available at
<ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
<section id="BuildingMyPLC">
<title>Building MyPLC</title>

<para>All PlanetLab source code modules are built and installed
as RPMS. A set of build scripts, checked into the
<filename>build/</filename> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</para>
<para>Before building MyPLC, you may wish to review the
configuration, which is stored in a file named
<filename>plc_config.xml</filename> and follows the same model
as the MyPLC configuration. It is located in
<filename>/etc/planetlab</filename> within the chroot jail, or
in <filename>/plc/devel/data/etc/planetlab</filename> from the
root context. The set of applicable variables is described in
<xref linkend="VariablesDevel" />.</para>
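<para>From the root context, the development configuration can be
inspected directly. The following sketch assumes the default path
given above; the <command>grep</command> pattern is merely
illustrative:</para>

```shell
# Sketch: show the Fedora mirror setting in the development configuration.
# CONFIG is the default path described in the text; adjust as needed.
CONFIG=/plc/devel/data/etc/planetlab/plc_config.xml
if [ -f "$CONFIG" ]; then
    grep PLC_DEVEL_FEDORA_URL "$CONFIG"
else
    echo "configuration not found: $CONFIG"
fi
```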
<para>To build MyPLC, or any PlanetLab source code module, from
within the MyPLC development environment, execute the following
commands as root:</para>

<title>Building MyPLC.</title>

<programlisting><![CDATA[# Initialize MyPLC development environment
service plc-devel start

# Enter development environment
chroot /plc/devel/root su -

# Check out build scripts into a directory named after the current
# date. This is simply a convention; it need not be followed
# exactly. See build/build.sh for an example of a build script that
# names build directories after CVS tags.
DATE=$(date +%Y.%m.%d)
cvs -d /cvs checkout -d $DATE build

make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be
installed into
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may then copy to the
<filename>/var/www/html/install-rpms/planetlab</filename>
directory of your MyPLC installation (see <xref
linkend="Installation" />).</para>
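<para>The date-based naming convention can also be used when
staging the resulting RPMS. The following is a sketch; the
<envar>SRC</envar> and <envar>DEST</envar> paths are assumptions
based on the default MyPLC layout (where the runtime
<filename>/var/www/html</filename> lives under
<filename>/plc/data</filename> on the host) and may differ on
your system:</para>

```shell
# Sketch: copy today's build output into the MyPLC boot repository.
# SRC and DEST are assumptions based on the default MyPLC layout.
DATE=$(date +%Y.%m.%d)
SRC=/plc/devel/data/build/$DATE/RPMS
DEST=/plc/data/var/www/html/install-rpms/planetlab
if [ -d "$SRC" ]; then
    mkdir -p "$DEST"
    cp -a "$SRC"/*/*.rpm "$DEST"/
else
    echo "no build output found for $DATE"
fi
```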
<section id="UpdatingCVS">
<title>Updating CVS</title>

<para>A complete snapshot of the PlanetLab source code is included
with the MyPLC development environment as a CVS repository in
<filename>/plc/devel/data/cvs</filename>. This CVS repository may
be accessed like any other CVS repository. It may be accessed
using an interface such as <ulink
url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
and file permissions may be altered to allow for fine-grained
access control. Although the files are included with the
<filename>myplc-devel</filename> RPM, they are <emphasis
role="bold">not</emphasis> subject to upgrade once installed. New
versions of the <filename>myplc-devel</filename> RPM will install
updated snapshot repositories in
<filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
where <literal>%{version}-%{release}</literal> is replaced with
the version and release numbers of the RPM.</para>
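<para>Snapshot repositories left behind by successive upgrades can
be located with a simple glob. A sketch, assuming the default data
directory:</para>

```shell
# Sketch: list snapshot repositories installed by myplc-devel upgrades.
# The glob matches the /plc/devel/data/cvs-%{version}-%{release} layout.
ls -d /plc/devel/data/cvs-* 2>/dev/null || echo "no snapshot repositories found"
```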
<para>Because the CVS repository is not automatically upgraded,
if you wish to keep your local repository synchronized with the
public PlanetLab repository, it is highly recommended that you
use CVS's support for vendor branches to track changes, as
described <ulink
url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">here</ulink>
and <ulink
url="http://cvsbook.red-bean.com/cvsbook.html#Tracking%20Third-Party%20Sources%20(Vendor%20Branches)">here</ulink>.
Vendor branches ease the task of merging upstream changes with
your local modifications. To import a new snapshot into your
local repository (for example, if you have just upgraded from
<filename>myplc-devel-0.4-2</filename> to
<filename>myplc-devel-0.4-3</filename> and you notice the new
repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
execute the following commands as root from within the MyPLC
development environment:</para>
<title>Updating /data/cvs from /data/cvs-0.4-3.</title>

<para><emphasis role="bold">Warning</emphasis>: This may cause
severe, irreversible changes to be made to your local
repository. Always tag your local repository before
merging.</para>
<programlisting><![CDATA[# Initialize MyPLC development environment
service plc-devel start

# Enter development environment
chroot /plc/devel/root su -

# Tag the current state of the local repository
cvs -d /cvs rtag before-myplc-0_4-3-merge

# Export the new snapshot into a temporary directory
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .

# Import it onto the vendor branch of the local repository
cvs -d /cvs import -m "Merging myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3

# Clean up
cd /
rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the command
suggested by CVS to assist with the merge. Explaining how to fix
merge conflicts is beyond the scope of this document; consult
the CVS documentation for more information on how to use
CVS.</para>
</section> </section>
<section><title>More information: the FAQ wiki page</title>

<para>Please refer to, and feel free to contribute to, <ulink
url="https://wiki.planet-lab.org/twiki/bin/view/Planetlab/MyplcFAQ">
the FAQ page on Princeton's wiki</ulink>.</para></section>
<appendix id="VariablesRuntime">
<title>Configuration variables (for <emphasis>myplc</emphasis>)</title>

<para>Listed below is the set of standard configuration variables
and their default values, defined in the template
<filename>/etc/planetlab/default_config.xml</filename>. Additional
variables and their defaults may be defined in site-specific XML
templates that should be placed in
<filename>/etc/planetlab/configs/</filename>.</para>
<para>This information is also available from within
<command>plc-config-tty</command>, e.g.:</para>

<example><title>Advanced usage of plc-config-tty</title>
<programlisting><![CDATA[<plc> # plc-config-tty
Enter command (u for usual changes, w to save, ? for help) V plc_dns
========== Category = PLC_DNS

# Enable the internal DNS server. The server does not provide reverse
# resolution and is not a production quality or scalable DNS solution.
# Use the internal DNS server only for small deployments or for testing.
]]></programlisting></example>
<para>The <command>myplc</command> configuration variables are listed below:</para>
<appendix id="VariablesDevel">
<title>Development configuration variables (for <emphasis>myplc-devel</emphasis>)</title>
<title>Bibliography</title>

<biblioentry id="TechsGuide">
<author><firstname>Mark</firstname><surname>Huang</surname></author>
<title><ulink
url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
Technical Contact's Guide</ulink></title>