1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
12 <firstname>Mark Huang</firstname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
45 <title>Overview</title>
47 <para>MyPLC is a complete PlanetLab Central (PLC) portable
48 installation contained within a <command>chroot</command>
49 jail. The default installation consists of a web server, an
50 XML-RPC API server, a boot server, and a database server: the core
51 components of PLC. The installation is customized through an
52 easy-to-use graphical interface. All PLC services are started up
53 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing PLC
56 services within a virtual filesystem. By packaging it in such a
57 manner, MyPLC may also be run on any modern Linux distribution,
58 and could conceivably even run in a PlanetLab slice.</para>
60 <figure id="Architecture">
61 <title>MyPLC architecture</title>
64 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
67 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
70 <phrase>MyPLC architecture</phrase>
73 <para>MyPLC should be viewed as a single application that
74 provides multiple functions and can run on any host
80 <section> <title> Purpose of the <emphasis> myplc-devel
81 </emphasis> package </title>
82 <para> The <emphasis>myplc</emphasis> package comes with all
83 required node software, rebuilt from the public PlanetLab CVS
84 repository. If for any reason you need to implement your own
85 customized version of this software, you can use the
86 <emphasis>myplc-devel</emphasis> package instead, for setting up
87 your own development environment, including a local CVS
88 repository; you can then freely manage your changes and rebuild
your customized version of <emphasis>myplc</emphasis>. We also
document recommended practices that will allow you to resynchronize
your local CVS repository with further evolution of the mainstream
public PlanetLab software. </para> </section>
97 <section id="Requirements"> <title> Requirements </title>
99 <para> <emphasis>myplc</emphasis> and
100 <emphasis>myplc-devel</emphasis> were designed as
101 <command>chroot</command> jails so as to reduce the requirements on
your host operating system. In theory, these distributions should
therefore work on virtually any Linux 2.6 based distribution,
whether or not it supports RPM. </para>
<para> In practice, however, there are some known limitations, so
please read the following notes before you proceed with the
installation.</para>

<para> As of 17 August 2006 (i.e. <emphasis>myplc-0.5-2</emphasis>):</para>
<listitem><para> The software is largely based on <emphasis>Fedora
Core 4</emphasis>. Please note that the build server at Princeton
runs <emphasis>Fedora Core 2</emphasis>, together with an upgraded
119 <listitem><para> myplc and myplc-devel are known to work on both
120 <emphasis>Fedora Core 2</emphasis> and <emphasis>Fedora Core
4</emphasis>. Note, however, that on fc4 at least it is
highly recommended to use the <application>Security Level
Configuration</application> utility to <emphasis>switch off
SELinux</emphasis> on your box, because:</para>
myplc requires SELinux to be set to 'Permissive' at most
myplc-devel requires SELinux to be turned off entirely.
<listitem> <para> In addition, as far as myplc is concerned, you
should check your firewall configuration: the
<emphasis>http</emphasis> and <emphasis>https</emphasis> ports
must be open so that connections from the managed nodes and from
the users' desktops can be accepted. </para> </listitem>
145 <section id="Installation">
<title>Installing and using MyPLC</title>
148 <para>Though internally composed of commodity software
149 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
151 no external dependencies, allowing it to be installed on
152 practically any Linux 2.6 based distribution.</para>
155 <title>Installing MyPLC.</title>
158 <listitem> <para>If your distribution supports RPM:</para>
159 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm]]></programlisting></listitem>
161 <listitem> <para>If your distribution does not support RPM:</para>
162 <programlisting><![CDATA[# cd /tmp
163 # wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
165 # rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
<para> <xref linkend="FilesInvolvedRuntime" /> below explains in
detail the installation strategy and the various files and
directories involved.</para>
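<para>Before going further, you can quickly sanity-check the
installation. The paths below are described in detail later in
this guide; the package name shown here assumes the RPM-based
install above.</para>

```shell
# Confirm the package is registered (RPM-based installs only)
rpm -q myplc
# Confirm the main files are in place on the host
ls -l /plc/root.img /plc/data /etc/init.d/plc
```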
174 <section id="QuickStart"> <title> QuickStart </title>
176 <para> On a Red Hat or Fedora host system, it is customary to use
177 the <command>service</command> command to invoke System V init
178 scripts. As the examples suggest, the service must be started as root:</para>
180 <example><title>Starting MyPLC:</title>
181 <programlisting><![CDATA[# service plc start]]></programlisting>
183 <example><title>Stopping MyPLC:</title>
184 <programlisting><![CDATA[# service plc stop]]></programlisting>
<para> <xref linkend="StartupSequence" /> provides more
detail, which may be helpful if the service does not seem to
start correctly.</para>
191 <para>Like all other registered System V init services, MyPLC is
192 started and shut down automatically when your host system boots
193 and powers off. You may disable automatic startup by invoking the
194 <command>chkconfig</command> command on a Red Hat or Fedora host
197 <example> <title>Disabling automatic startup of MyPLC.</title>
198 <programlisting><![CDATA[# chkconfig plc off]]></programlisting></example>
199 <example> <title>Re-enabling automatic startup of MyPLC.</title>
200 <programlisting><![CDATA[# chkconfig plc on]]></programlisting></example>
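<para>You can verify the result with <command>chkconfig</command>
as well; this check is standard Red Hat tooling rather than part of
MyPLC itself.</para>

```shell
# List the runlevels in which the plc init script is enabled
chkconfig --list plc
```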
204 <section id="Configuration">
205 <title>Changing the configuration</title>
207 <para>After verifying that MyPLC is working correctly, shut it
208 down and begin changing some of the default variable
209 values. Shut down MyPLC with <command>service plc stop</command>
210 (see <xref linkend="QuickStart" />). </para>
<para> The preferred way to change the configuration is to
use the <command>plc-config-tty</command> tool. This tool comes
with the root image, so you need to have that image mounted
first. The full set of applicable variables is described in <xref
linkend="VariablesDevel" />, but the <command>u</command> command
guides you through the most useful ones. Here is a sample session:
220 <example><title>Using plc-config-tty for configuration:</title>
221 <programlisting><![CDATA[# service plc mount
223 # chroot /plc/root su -
224 <plc> # plc-config-tty
225 Config file /etc/planetlab/configs/site.xml located under a non-existing directory
226 Want to create /etc/planetlab/configs [y]/n ? y
227 Created directory /etc/planetlab/configs
228 Enter command (u for usual changes, w to save, ? for help) u
229 == PLC_NAME : [PlanetLab Test] OneLab
230 == PLC_ROOT_USER : [root@localhost.localdomain] odie.inria.fr
231 == PLC_ROOT_PASSWORD : [root] plain-passwd
232 == PLC_MAIL_SUPPORT_ADDRESS : [root+support@localhost.localdomain] support@one-lab.org
233 == PLC_DB_HOST : [localhost.localdomain] odie.inria.fr
234 == PLC_API_HOST : [localhost.localdomain] odie.inria.fr
235 == PLC_WWW_HOST : [localhost.localdomain] odie.inria.fr
236 == PLC_BOOT_HOST : [localhost.localdomain] odie.inria.fr
237 == PLC_NET_DNS1 : [127.0.0.1] 138.96.250.248
238 == PLC_NET_DNS2 : [None] 138.96.250.249
239 Enter command (u for usual changes, w to save, ? for help) w
240 Wrote /etc/planetlab/configs/site.xml
242 /etc/planetlab/default_config.xml
243 and /etc/planetlab/configs/site.xml
244 into /etc/planetlab/plc_config.xml
245 You might want to type 'r' (restart plc) or 'q' (quit)
246 Enter command (u for usual changes, w to save, ? for help) r
247 ==================== Stopping plc
249 ==================== Starting plc
251 Enter command (u for usual changes, w to save, ? for help) q
257 <para>If you used this method for configuring, you can skip to
258 the next section. As an alternative to using
259 <command>plc-config-tty</command>, you may also use a text
editor, but this requires some understanding of how the
configuration files are used within myplc. The
<emphasis>default</emphasis> configuration is stored in a file
named <filename>/etc/planetlab/default_config.xml</filename>,
which is designed to remain intact. You may store your local
changes in any file located in the <filename>configs/</filename>
sub-directory; these are loaded on top of the defaults. The
resulting merged configuration is then stored in
<filename>/etc/planetlab/plc_config.xml</filename>, which is used
as the reference configuration.</para>
<para> Keeping local changes in a separate file, as
<command>plc-config-tty</command> does, is not practical with a
text editor because it would involve tedious XML re-assembly. If
you edit by hand, your local changes should therefore go directly
into <filename>/etc/planetlab/plc_config.xml</filename>. Be
warned, however, that any changes you make this way may be lost if
you use <command>plc-config-tty</command> later on. </para>
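<para>If you do edit
<filename>/etc/planetlab/plc_config.xml</filename> by hand, a
simple precaution is to keep a dated backup first; this sketch
uses only standard shell tools and is not part of MyPLC.</para>

```shell
# Keep a timestamped copy before hand-editing the reference configuration
cp /etc/planetlab/plc_config.xml \
   /etc/planetlab/plc_config.xml.backup-$(date +%Y%m%d)
```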
279 <para>This file is a self-documenting configuration file written
280 in XML. Variables are divided into categories. Variable
281 identifiers must be alphanumeric, plus underscore. A variable is
282 referred to canonically as the uppercase concatenation of its
283 category identifier, an underscore, and its variable
284 identifier. Thus, a variable with an <literal>id</literal> of
285 <literal>slice_prefix</literal> in the <literal>plc</literal>
286 category is referred to canonically as
287 <envar>PLC_SLICE_PREFIX</envar>.</para>
289 <para>The reason for this convention is that during MyPLC
290 startup, <filename>plc_config.xml</filename> is translated into
291 several different languages—shell, PHP, and
292 Python—so that scripts written in each of these languages
293 can refer to the same underlying configuration. Most MyPLC
294 scripts are written in shell, so the convention for shell
295 variables predominates.</para>
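<para>The naming convention can be illustrated with a small shell
sketch; the <command>tr</command> pipeline here is only
illustrative and is not part of MyPLC.</para>

```shell
# Derive the canonical variable name from a category id and a
# variable id, as described above: uppercase the concatenation
# of category, underscore, and id
category=plc
id=slice_prefix
echo "${category}_${id}" | tr '[:lower:]' '[:upper:]'
# prints PLC_SLICE_PREFIX
```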
297 <para>The variables that you should change immediately are:</para>
300 <listitem><para><envar>PLC_NAME</envar>: Change this to the
301 name of your PLC installation.</para></listitem>
302 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
303 to a more secure password.</para></listitem>
305 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
306 Change this to the e-mail address at which you would like to
307 receive support requests.</para></listitem>
309 <listitem><para><envar>PLC_DB_HOST</envar>,
310 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
311 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
312 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
313 <envar>PLC_BOOT_IP</envar>: Change all of these to the
314 preferred FQDN and external IP address of your host
315 system.</para></listitem>
318 <para> After changing these variables,
319 save the file, then restart MyPLC with <command>service plc
320 start</command>. You should notice that the password of the
321 default administrator account is no longer
322 <literal>root</literal>, and that the default site name includes
323 the name of your PLC installation instead of PlanetLab. As a
side effect of these changes, the ISO images for the boot CDs
now have new names, so you can safely remove the ones named
after 'PlanetLab Test', the default value of
<envar>PLC_NAME</envar>. </para>
330 <section> <title> Login as a real user </title>
<para>Now that myplc is up and running, you can connect to the
web site, which by default runs on port 80. You can either
directly use the default administrator account that you configured
via <envar>PLC_ROOT_USER</envar> and
<envar>PLC_ROOT_PASSWORD</envar>, or create a real user through
the 'Joining' tab. Do not forget to select both the PI and tech
roles, and to select the only site created at this stage.
Log in as the administrator to enable this user, then log in as
the real user.</para>
344 <title>Installing nodes</title>
346 <para>Install your first node by clicking <literal>Add
347 Node</literal> under the <literal>Nodes</literal> tab. Fill in
348 all the appropriate details, then click
349 <literal>Add</literal>. Download the node's configuration file
350 by clicking <literal>Download configuration file</literal> on
351 the <emphasis role="bold">Node Details</emphasis> page for the
352 node. Save it to a floppy disk or USB key as detailed in <xref
353 linkend="TechsGuide" />.</para>
355 <para>Follow the rest of the instructions in <xref
356 linkend="TechsGuide" /> for creating a Boot CD and installing
357 the node, except download the Boot CD image from the
358 <filename>/download</filename> directory of your PLC
359 installation, not from PlanetLab Central. The images located
360 here are customized for your installation. If you change the
361 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
362 if the SSL certificate of your boot server expires, MyPLC will
363 regenerate it and rebuild the Boot CD with the new
364 certificate. If this occurs, you must replace all Boot CDs
365 created before the certificate was regenerated.</para>
367 <para>The installation process for a node has significantly
368 improved since PlanetLab 3.3. It should now take only a few
369 seconds for a new node to become ready to create slices.</para>
373 <title>Administering nodes</title>
375 <para>You may administer nodes as <literal>root</literal> by
376 using the SSH key stored in
377 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
380 <title>Accessing nodes via SSH. Replace
381 <literal>node</literal> with the hostname of the node.</title>
383 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
386 <para>Besides the standard Linux log files located in
387 <filename>/var/log</filename>, several other files can give you
388 clues about any problems with active processes:</para>
391 <listitem><para><filename>/var/log/pl_nm</filename>: The log
392 file for the Node Manager.</para></listitem>
394 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
395 The log file for the Slice Creation Service.</para></listitem>
397 <listitem><para><filename>/var/log/propd</filename>: The log
398 file for Proper, the service which allows certain slices to
399 perform certain privileged operations in the root
400 context.</para></listitem>
402 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
403 The log file for PlanetFlow, the network traffic auditing
404 service.</para></listitem>
409 <title>Creating a slice</title>
411 <para>Create a slice by clicking <literal>Create Slice</literal>
412 under the <literal>Slices</literal> tab. Fill in all the
413 appropriate details, then click <literal>Create</literal>. Add
414 nodes to the slice by clicking <literal>Manage Nodes</literal>
415 on the <emphasis role="bold">Slice Details</emphasis> page for
<para>A <command>cron</command> job runs every five minutes and updates
<filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
421 with information about current slice state. The Slice Creation
422 Service running on every node polls this file every ten minutes
423 to determine if it needs to create or delete any slices. You may
424 accelerate this process manually if desired.</para>
427 <title>Forcing slice creation on a node.</title>
429 <programlisting><![CDATA[# Update slices.xml immediately
430 service plc start crond
432 # Kick the Slice Creation Service on a particular node.
433 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
434 vserver pl_conf exec service pl_conf restart]]></programlisting>
438 <section id="StartupSequence">
439 <title>Understanding the startup sequence</title>
441 <para>During service startup described in <xref
442 linkend="QuickStart" />, observe the output of this command for
443 any failures. If no failures occur, you should see output similar
444 to the following:</para>
447 <title>A successful MyPLC startup.</title>
449 <programlisting><![CDATA[Mounting PLC: [ OK ]
450 PLC: Generating network files: [ OK ]
451 PLC: Starting system logger: [ OK ]
452 PLC: Starting database server: [ OK ]
453 PLC: Generating SSL certificates: [ OK ]
454 PLC: Configuring the API: [ OK ]
455 PLC: Updating GPG keys: [ OK ]
456 PLC: Generating SSH keys: [ OK ]
457 PLC: Starting web server: [ OK ]
458 PLC: Bootstrapping the database: [ OK ]
459 PLC: Starting DNS server: [ OK ]
460 PLC: Starting crond: [ OK ]
461 PLC: Rebuilding Boot CD: [ OK ]
462 PLC: Rebuilding Boot Manager: [ OK ]
463 PLC: Signing node packages: [ OK ]
467 <para>If <filename>/plc/root</filename> is mounted successfully, a
468 complete log file of the startup process may be found at
469 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
470 for failure of each step include:</para>
473 <listitem><para><literal>Mounting PLC</literal>: If this step
474 fails, first ensure that you started MyPLC as root. Check
475 <filename>/etc/sysconfig/plc</filename> to ensure that
476 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
477 right locations. You may also have too many existing loopback
478 mounts, or your kernel may not support loopback mounting, bind
479 mounting, or the ext3 filesystem. Try freeing at least one
480 loopback device, or re-compiling your kernel to support loopback
481 mounting, bind mounting, and the ext3 filesystem. If you see an
482 error similar to <literal>Permission denied while trying to open
483 /plc/root.img</literal>, then SELinux may be enabled. See <xref
484 linkend="Requirements" /> above for details.</para></listitem>
486 <listitem><para><literal>Starting database server</literal>: If
487 this step fails, check
488 <filename>/plc/root/var/log/pgsql</filename> and
489 <filename>/plc/root/var/log/boot.log</filename>. The most common
490 reason for failure is that the default PostgreSQL port, TCP port
491 5432, is already in use. Check that you are not running a
492 PostgreSQL server on the host system.</para></listitem>
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
496 <filename>/plc/root/var/log/httpd/error_log</filename> and
497 <filename>/plc/root/var/log/boot.log</filename> for obvious
498 errors. The most common reason for failure is that the default
499 web ports, TCP ports 80 and 443, are already in use. Check that
500 you are not running a web server on the host
501 system.</para></listitem>
503 <listitem><para><literal>Bootstrapping the database</literal>:
504 If this step fails, it is likely that the previous step
505 (<literal>Starting web server</literal>) also failed. Another
506 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
507 <xref linkend="Configuration" />) does not resolve to
508 the host on which the API server has been enabled. By default,
509 all services, including the API server, are enabled and run on
510 the same host, so check that <envar>PLC_API_HOST</envar> is
511 either <filename>localhost</filename> or resolves to a local IP
512 address.</para></listitem>
514 <listitem><para><literal>Starting crond</literal>: If this step
515 fails, it is likely that the previous steps (<literal>Starting
516 web server</literal> and <literal>Bootstrapping the
517 database</literal>) also failed. If not, check
518 <filename>/plc/root/var/log/boot.log</filename> for obvious
519 errors. This step starts the <command>cron</command> service and
520 generates the initial set of XML files that the Slice Creation
521 Service uses to determine slice state.</para></listitem>
524 <para>If no failures occur, then MyPLC should be active with a
525 default configuration. Open a web browser on the host system and
526 visit <literal>http://localhost/</literal>, which should bring you
527 to the front page of your PLC installation. The password of the
528 default administrator account
529 <literal>root@localhost.localdomain</literal> (set by
530 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
531 <envar>PLC_ROOT_PASSWORD</envar>).</para>
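<para>From the host system, you can also check the web server
without a browser; this assumes <command>curl</command> is
installed on the host.</para>

```shell
# Ask the web server for its response headers only
curl -I http://localhost/
```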
534 <section id="FilesInvolvedRuntime"> <title> Files and directories
535 involved in <emphasis>myplc</emphasis></title>
536 <para>MyPLC installs the following files and directories:</para>
540 <listitem><para><filename>/plc/root.img</filename>: The main
541 root filesystem of the MyPLC application. This file is an
542 uncompressed ext3 filesystem that is loopback mounted on
543 <filename>/plc/root</filename> when MyPLC starts. This
544 filesystem, even when mounted, should be treated as an opaque
545 binary that can and will be replaced in its entirety by any
546 upgrade of MyPLC.</para></listitem>
548 <listitem><para><filename>/plc/root</filename>: The mount point
549 for <filename>/plc/root.img</filename>. Once the root filesystem
550 is mounted, all MyPLC services run in a
551 <command>chroot</command> jail based in this
552 directory.</para></listitem>
555 <para><filename>/plc/data</filename>: The directory where user
556 data and generated files are stored. This directory is bind
557 mounted onto <filename>/plc/root/data</filename> so that it is
558 accessible as <filename>/data</filename> from within the
559 <command>chroot</command> jail. Files in this directory are
560 marked with <command>%config(noreplace)</command> in the
RPM. That is, during an upgrade of MyPLC, if a file has not
changed since the last installation or upgrade of MyPLC, it is
replaced by the new version. If the file has changed, the new
version is instead installed alongside it with a
<filename>.rpmnew</filename> extension. Symlinks within the
566 MyPLC root filesystem ensure that the following directories
567 (relative to <filename>/plc/root</filename>) are stored
568 outside the MyPLC filesystem image:</para>
571 <listitem><para><filename>/etc/planetlab</filename>: This
572 directory contains the configuration files, keys, and
573 certificates that define your MyPLC
574 installation.</para></listitem>
576 <listitem><para><filename>/var/lib/pgsql</filename>: This
577 directory contains PostgreSQL database
578 files.</para></listitem>
580 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
581 directory contains node installation logs.</para></listitem>
583 <listitem><para><filename>/var/www/html/boot</filename>: This
584 directory contains the Boot Manager, customized for your MyPLC
585 installation, and its data files.</para></listitem>
587 <listitem><para><filename>/var/www/html/download</filename>: This
588 directory contains Boot CD images, customized for your MyPLC
589 installation.</para></listitem>
591 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
592 directory is where you should install node package updates,
593 if any. By default, nodes are installed from the tarball
595 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
596 which is pre-built from the latest PlanetLab Central
597 sources, and installed as part of your MyPLC
598 installation. However, nodes will attempt to install any
599 newer RPMs located in
600 <filename>/var/www/html/install-rpms/planetlab</filename>,
601 after initial installation and periodically thereafter. You
602 must run <command>yum-arch</command> and
603 <command>createrepo</command> to update the
604 <command>yum</command> caches in this directory after
605 installing a new RPM. PlanetLab Central cannot support any
606 changes to this directory.</para></listitem>
608 <listitem><para><filename>/var/www/html/xml</filename>: This
609 directory contains various XML files that the Slice Creation
610 Service uses to determine the state of slices. These XML
611 files are refreshed periodically by <command>cron</command>
612 jobs running in the MyPLC root.</para></listitem>
616 <listitem id="MyplcInitScripts">
617 <para><filename>/etc/init.d/plc</filename>: This file
618 is a System V init script installed on your host filesystem,
619 that allows you to start up and shut down MyPLC with a single
620 command, as described in <xref linkend="QuickStart" />.</para>
623 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
624 file is a shell script fragment that defines the variables
625 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
626 the values of these variables are <filename>/plc/root</filename>
627 and <filename>/plc/data</filename>, respectively. If you wish,
628 you may move your MyPLC installation to another location on your
629 host filesystem and edit the values of these variables
630 appropriately, but you will break the RPM upgrade
631 process. PlanetLab Central cannot support any changes to this
632 file.</para></listitem>
634 <listitem><para><filename>/etc/planetlab</filename>: This
635 symlink to <filename>/plc/data/etc/planetlab</filename> is
636 installed on the host system for convenience.</para></listitem>
641 <section id="DevelopmentEnvironment">
642 <title>Rebuilding and customizing MyPLC</title>
<para>The MyPLC package, though distributed as an RPM, is not a
traditional package that can easily be rebuilt from an SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that access to a complete Fedora Core 4
mirror is available.</para>
651 <para>For this reason, it is recommended that you only rebuild
652 MyPLC (or any of its components) from within the MyPLC development
653 environment. The MyPLC development environment is similar to MyPLC
654 itself in that it is a portable filesystem contained within a
655 <command>chroot</command> jail. The filesystem contains all the
656 necessary tools required to rebuild MyPLC, as well as a snapshot
657 of the PlanetLab source code base in the form of a local CVS
661 <title>Installation</title>
663 <para>Install the MyPLC development environment similarly to how
664 you would install MyPLC. You may install both packages on the same
665 host system if you wish. As with MyPLC, the MyPLC development
666 environment should be treated as a monolithic software
667 application, and any files present in the
668 <command>chroot</command> jail should not be modified directly, as
669 they are subject to upgrade.</para>
672 <listitem> <para>If your distribution supports RPM:</para>
673 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm]]></programlisting></listitem>
675 <listitem> <para>If your distribution does not support RPM:</para>
676 <programlisting><![CDATA[# cd /tmp
677 # wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
679 # rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
684 <title>Configuration</title>
<para> The default configuration should work as-is on most
sites. The development package can be configured in much the same
way as <emphasis>myplc</emphasis>, as described in <xref
linkend="Configuration" />. <command>plc-config-tty</command>
supports a <emphasis>-d</emphasis> option to select the
<emphasis>myplc-devel</emphasis> case, which can be useful when
the tool cannot detect it by itself. Refer to <xref
linkend="VariablesDevel" /> for a list of variables.</para>
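<para>A plausible invocation, assuming the development image is
mounted under <filename>/plc/devel/root</filename> as described
below, might look like this sketch:</para>

```shell
# Run the configuration tool inside the devel chroot, in devel mode
chroot /plc/devel/root plc-config-tty -d
```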
697 <section id="FilesInvolvedDevel"> <title> Files and directories
involved in <emphasis>myplc-devel</emphasis></title>
700 <para>The MyPLC development environment installs the following
701 files and directories:</para>
704 <listitem><para><filename>/plc/devel/root.img</filename>: The
705 main root filesystem of the MyPLC development environment. This
706 file is an uncompressed ext3 filesystem that is loopback mounted
707 on <filename>/plc/devel/root</filename> when the MyPLC
708 development environment is initialized. This filesystem, even
709 when mounted, should be treated as an opaque binary that can and
710 will be replaced in its entirety by any upgrade of the MyPLC
711 development environment.</para></listitem>
<listitem><para><filename>/plc/devel/root</filename>: The mount
point for
715 <filename>/plc/devel/root.img</filename>.</para></listitem>
718 <para><filename>/plc/devel/data</filename>: The directory
719 where user data and generated files are stored. This directory
720 is bind mounted onto <filename>/plc/devel/root/data</filename>
721 so that it is accessible as <filename>/data</filename> from
722 within the <command>chroot</command> jail. Files in this
723 directory are marked with
724 <command>%config(noreplace)</command> in the RPM. Symlinks
725 ensure that the following directories (relative to
726 <filename>/plc/devel/root</filename>) are stored outside the
727 root filesystem image:</para>
730 <listitem><para><filename>/etc/planetlab</filename>: This
731 directory contains the configuration files that define your
732 MyPLC development environment.</para></listitem>
734 <listitem><para><filename>/cvs</filename>: A
735 snapshot of the PlanetLab source code is stored as a CVS
736 repository in this directory. Files in this directory will
737 <emphasis role="bold">not</emphasis> be updated by an upgrade of
738 <filename>myplc-devel</filename>. See <xref
739 linkend="UpdatingCVS" /> for more information about updating
740 PlanetLab source code.</para></listitem>
742 <listitem><para><filename>/build</filename>:
743 Builds are stored in this directory. This directory is bind
744 mounted onto <filename>/plc/devel/root/build</filename> so that
745 it is accessible as <filename>/build</filename> from within the
746 <command>chroot</command> jail. The build scripts in this
747 directory are themselves source controlled; see <xref
748 linkend="BuildingMyPLC" /> for more information about executing
749 builds.</para></listitem>
754 <para><filename>/etc/init.d/plc-devel</filename>: This file is
755 a System V init script installed on your host filesystem, that
756 allows you to start up and shut down the MyPLC development
757 environment with a single command.</para>
763 <title>Fedora Core 4 mirror requirement</title>
765 <para>The MyPLC development environment requires access to a
766 complete Fedora Core 4 i386 RPM repository, because several
767 different filesystems based upon Fedora Core 4 are constructed
768 during the process of building MyPLC. You may configure the
769 location of this repository via the
770 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
771 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
772 value of the variable should be a URL that points to the top
773 level of a Fedora mirror that provides the
774 <filename>base</filename>, <filename>updates</filename>, and
775 <filename>extras</filename> repositories, e.g.,</para>
778 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
779 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
780 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
781 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
782 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
785 <para>As implied by the list, the repository may be located on
786 the local filesystem, or it may be located on a remote FTP or
HTTP server. A repository referenced by a
<filename>file://</filename> URL must exist at the specified
location relative to the root of the <command>chroot</command>
jail. For optimum performance and
790 reproducibility, specify
791 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
792 download all Fedora Core 4 RPMS into
793 <filename>/plc/devel/data/fedora</filename> on the host system
794 after installing <filename>myplc-devel</filename>. Use a tool
795 such as <command>wget</command> or <command>rsync</command> to
796 download the RPMS from a public mirror:</para>
799 <title>Setting up a local Fedora Core 4 repository.</title>
801 <programlisting><![CDATA[# mkdir -p /plc/devel/data/fedora
802 # cd /plc/devel/data/fedora
804 # for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
805 > wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
806 > done]]></programlisting>
809 <para>Change the repository URI and <command>--cut-dirs</command>
810 level as needed to produce a hierarchy that resembles:</para>
812 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
813 /plc/devel/data/fedora/core/updates/4/i386
814 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
816 <para>A list of additional Fedora Core 4 mirrors is available at
817 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
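<para>Before pointing <envar>PLC_DEVEL_FEDORA_URL</envar> at a local
mirror, it can be worth verifying that the expected hierarchy is in
place. The sketch below checks for the three repositories;
<literal>ROOT</literal> is a hypothetical stand-in for
<filename>/plc/devel/data/fedora</filename> (a temporary directory is
used here, with the layout created first, so the sketch runs
anywhere).</para>

```shell
# ROOT stands in for /plc/devel/data/fedora on a real host.
ROOT=$(mktemp -d)

# Create the layout expected by the build scripts (on a real host,
# the wget loop above produces these directories).
mkdir -p "$ROOT"/core/4/i386/os "$ROOT"/core/updates/4/i386 "$ROOT"/extras/4/i386

# Verify that each repository is present.
status=0
for repo in core/4/i386/os core/updates/4/i386 extras/4/i386; do
    if [ -d "$ROOT/$repo" ]; then
        echo "found: $repo"
    else
        echo "missing: $repo"
        status=1
    fi
done
```

<para>A nonzero <literal>status</literal> indicates an incomplete
mirror; re-run the <command>wget</command> loop with an adjusted
<command>--cut-dirs</command> level until all three paths exist.</para>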
820 <section id="BuildingMyPLC">
821 <title>Building MyPLC</title>
<para>All PlanetLab source code modules are built and installed
as RPMS. A set of build scripts, checked into the
<filename>build/</filename> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</para>
<para>Before building MyPLC, check the configuration of the
development environment. The configuration file,
<filename>plc_config.xml</filename>, follows the same model as the
MyPLC runtime configuration; it is located in
<filename>/etc/planetlab</filename> within the
<command>chroot</command> jail, or in
<filename>/plc/devel/data/etc/planetlab</filename> from the root
context. The set of applicable variables is described in
<xref linkend="VariablesDevel" />.</para>
838 <para>To build MyPLC, or any PlanetLab source code module, from
839 within the MyPLC development environment, execute the following
840 commands as root:</para>
843 <title>Building MyPLC.</title>
845 <programlisting><![CDATA[# Initialize MyPLC development environment
846 service plc-devel start
848 # Enter development environment
849 chroot /plc/devel/root su -
851 # Check out build scripts into a directory named after the current
852 # date. This is simply a convention, it need not be followed
853 # exactly. See build/build.sh for an example of a build script that
854 # names build directories after CVS tags.
855 DATE=$(date +%Y.%m.%d)
857 cvs -d /cvs checkout -d $DATE build
860 make -C $DATE]]></programlisting>
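<para>The interactive steps above can also be scripted from the host.
The sketch below is hedged with a hypothetical
<literal>DRY_RUN</literal> guard (not part of MyPLC itself) so the
commands can be previewed on a machine without a
<filename>/plc</filename> installation; set
<literal>DRY_RUN=0</literal> on a real host.</para>

```shell
# DRY_RUN=1 (the default here) only prints the commands instead of
# executing them; it is a hypothetical guard for previewing.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Name the build directory after the current date, per the
# convention described above.
DATE=$(date +%Y.%m.%d)

run service plc-devel start
run chroot /plc/devel/root su - -c "cvs -d /cvs checkout -d $DATE build && make -C $DATE"
```

<para>Wrapping the checkout and build in a single
<command>chroot</command> invocation keeps the whole build
non-interactive, which is convenient for nightly builds driven by
<command>cron</command>.</para>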
<para>If the build succeeds, a set of binary RPMS will be
generated in
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may copy to the
<filename>/var/www/html/install-rpms/planetlab</filename>
directory of your MyPLC installation (see <xref
linkend="Installation" />).</para>
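<para>The copy step can be scripted as well. Since the real paths
exist only on a MyPLC host, the sketch below stages hypothetical
stand-in directories; on a real installation,
<literal>BUILD</literal> would be
<filename>/plc/devel/data/build/$DATE/RPMS</filename> and
<literal>REPO</literal> the
<filename>install-rpms/planetlab</filename> directory.</para>

```shell
# BUILD and REPO are temporary stand-ins for the real MyPLC paths
# named in the lead-in above.
BUILD=$(mktemp -d)
REPO=$(mktemp -d)
mkdir -p "$BUILD/i386" "$REPO"
: > "$BUILD/i386/example-1.0-1.i386.rpm"   # placeholder package

# Copy every built RPM, whatever its architecture subdirectory,
# into the boot server's repository directory.
find "$BUILD" -name '*.rpm' -exec cp {} "$REPO" \;
ls "$REPO"
```

<para>Using <command>find</command> rather than a fixed glob picks up
RPMS from all architecture subdirectories
(<filename>i386</filename>, <filename>noarch</filename>, and so
on) in one pass.</para>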
872 <section id="UpdatingCVS">
873 <title>Updating CVS</title>
875 <para>A complete snapshot of the PlanetLab source code is included
876 with the MyPLC development environment as a CVS repository in
<filename>/plc/devel/data/cvs</filename>. This CVS repository may
be accessed like any other: it may be browsed through an
interface such as <ulink
url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
and file permissions may be altered to allow fine-grained
access control. Although the files are included with the
883 <filename>myplc-devel</filename> RPM, they are <emphasis
884 role="bold">not</emphasis> subject to upgrade once installed. New
885 versions of the <filename>myplc-devel</filename> RPM will install
886 updated snapshot repositories in
887 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
888 where <literal>%{version}-%{release}</literal> is replaced with
889 the version number of the RPM.</para>
891 <para>Because the CVS repository is not automatically upgraded,
892 if you wish to keep your local repository synchronized with the
893 public PlanetLab repository, it is highly recommended that you
894 use CVS's support for <ulink
895 url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">vendor
896 branches</ulink> to track changes. Vendor branches ease the task
897 of merging upstream changes with your local modifications. To
898 import a new snapshot into your local repository (for example,
899 if you have just upgraded from
900 <filename>myplc-devel-0.4-2</filename> to
901 <filename>myplc-devel-0.4-3</filename> and you notice the new
902 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
903 execute the following commands as root from within the MyPLC
904 development environment:</para>
907 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
<para><emphasis role="bold">Warning</emphasis>: This may cause
severe, irreversible changes to be made to your local
repository. Always tag your local repository before
importing.</para>
<programlisting><![CDATA[# Initialize MyPLC development environment
service plc-devel start

# Enter development environment
chroot /plc/devel/root su -

# Tag current state
cvs -d /cvs rtag before-myplc-0_4-3-merge planetlab

# Export the new snapshot into a temporary directory
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .

# Import the snapshot onto the vendor branch
cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3

# Clean up
cd -
rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the command suggested
by CVS to help complete the merge. Explaining how to resolve
merge conflicts is beyond the scope of this document; consult the
CVS documentation for more information.</para>
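<para>For reference, the merge command that CVS suggests after an
import with conflicts typically has the shape sketched below. The
tag <literal>myplc-0_4-2</literal> is a hypothetical previous import
tag, and <literal>MODULE</literal> is a placeholder for whichever
module you need to merge; the command is echoed rather than executed,
since it requires the MyPLC CVS repository.</para>

```shell
# Hypothetical tags: myplc-0_4-2 is assumed to be the previous vendor
# import, myplc-0_4-3 the one just created.
PREV_TAG=myplc-0_4-2
NEW_TAG=myplc-0_4-3
MODULE=build   # placeholder: substitute the module you need to merge

# Build the merge command CVS would suggest; run it from within the
# development environment once you have reviewed it.
MERGE_CMD="cvs -d /cvs checkout -j $PREV_TAG -j $NEW_TAG $MODULE"
echo "$MERGE_CMD"
```

<para>Running the checkout with two <command>-j</command> options
merges the changes between the two import tags into your working
copy, where conflicts can then be resolved and committed.</para>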
939 <appendix id="VariablesRuntime">
940 <title>Configuration variables (for <emphasis>myplc</emphasis>)</title>
942 <para>Listed below is the set of standard configuration variables
943 and their default values, defined in the template
944 <filename>/etc/planetlab/default_config.xml</filename>. Additional
945 variables and their defaults may be defined in site-specific XML
946 templates that should be placed in
947 <filename>/etc/planetlab/configs/</filename>.</para>
<para>This information is also available interactively through
<command>plc-config-tty</command>, e.g.:</para>
952 <example><title>Advanced usage of plc-config-tty</title>
953 <programlisting><![CDATA[<plc> # plc-config-tty
954 Enter command (u for usual changes, w to save, ? for help) V plc_dns
955 ========== Category = PLC_DNS
957 # Enable the internal DNS server. The server does not provide reverse
958 # resolution and is not a production quality or scalable DNS solution.
959 # Use the internal DNS server only for small deployments or for testing.
961 ]]></programlisting></example>
<para>The <command>myplc</command> configuration variables are
listed below:</para>
968 <appendix id="VariablesDevel">
969 <title>Development configuration variables (for <emphasis>myplc-devel</emphasis>)</title>
975 <title>Bibliography</title>
<biblioentry id="TechsGuide">
<author><firstname>Mark</firstname><surname>Huang</surname></author>
<title><ulink
url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
Technical Contact's Guide</ulink></title>