1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
<firstname>Mark</firstname><surname>Huang</surname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
45 <title>Overview</title>
47 <para>MyPLC is a complete PlanetLab Central (PLC) portable
48 installation contained within a <command>chroot</command>
49 jail. The default installation consists of a web server, an
50 XML-RPC API server, a boot server, and a database server: the core
51 components of PLC. The installation is customized through an
52 easy-to-use graphical interface. All PLC services are started up
53 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing PLC
services within a virtual filesystem. By packaging it in such a
57 manner, MyPLC may also be run on any modern Linux distribution,
58 and could conceivably even run in a PlanetLab slice.</para>
60 <figure id="Architecture">
61 <title>MyPLC architecture</title>
64 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
67 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
70 <phrase>MyPLC architecture</phrase>
73 <para>MyPLC should be viewed as a single application that
74 provides multiple functions and can run on any host
80 <section> <title> Purpose of the <emphasis> myplc-devel
81 </emphasis> package </title>
82 <para> The <emphasis>myplc</emphasis> package comes with all
83 required node software, rebuilt from the public PlanetLab CVS
84 repository. If for any reason you need to implement your own
85 customized version of this software, you can use the
86 <emphasis>myplc-devel</emphasis> package instead, for setting up
87 your own development environment, including a local CVS
88 repository; you can then freely manage your changes and rebuild
89 your customized version of <emphasis>myplc</emphasis>. We also
provide recommended practices that will allow you to resynchronize
your local CVS repository with any further evolution of the
mainstream public PlanetLab software. </para> </section>
97 <section id="Requirements"> <title> Requirements </title>
99 <para> <emphasis>myplc</emphasis> and
100 <emphasis>myplc-devel</emphasis> were designed as
101 <command>chroot</command> jails so as to reduce the requirements on
your host operating system. In theory, then, these packages should
work on virtually any Linux 2.6 based distribution, whether it
supports RPM or not. </para>
<para> In practice, however, there are some known limitations, so
please read the following notes before you proceed with the
installation.</para>
<para> As of 9 August 2006 (i.e., <emphasis>myplc-0.5</emphasis>):</para>
<listitem><para> The software is largely based on <emphasis>Fedora
Core 4</emphasis>. Please note that the build server at Princeton
runs <emphasis>Fedora Core 2</emphasis>, together with an upgraded
119 <listitem><para> myplc and myplc-devel are known to work on both
120 <emphasis>Fedora Core 2</emphasis> and <emphasis>Fedora Core
4</emphasis>. Note, however, that on FC4 at least, it is
highly recommended to use the <application>Security Level
Configuration</application> utility and to <emphasis>switch off
SELinux</emphasis> on your box, because:</para>
myplc requires you to run SELinux as 'Permissive' at most
myplc-devel requires you to turn SELinux off.
<listitem> <para> In addition, as far as myplc is concerned, you
need to check your firewall configuration, since the
<emphasis>http</emphasis> and <emphasis>https</emphasis> ports must
be open in order to accept connections from the managed nodes and
from users' desktops. </para> </listitem>
145 <section id="Installation">
<title>Installing and using MyPLC</title>
148 <para>Though internally composed of commodity software
149 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
151 no external dependencies, allowing it to be installed on
152 practically any Linux 2.6 based distribution.</para>
155 <title>Installing MyPLC.</title>
158 <listitem> <para>If your distribution supports RPM:</para>
159 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm]]></programlisting></listitem>
161 <listitem> <para>If your distribution does not support RPM:</para>
162 <programlisting><![CDATA[# cd /tmp
163 # wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
165 # rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
<para> <xref linkend="FilesInvolvedRuntime" /> below explains the
installation strategy in detail, along with the various files and
directories involved.</para>
174 <section id="QuickStart"> <title> QuickStart </title>
176 <para> On a Red Hat or Fedora host system, it is customary to use
177 the <command>service</command> command to invoke System V init
178 scripts. As the examples suggest, the service must be started as root:</para>
180 <example><title>Starting MyPLC:</title>
181 <programlisting><![CDATA[# service plc start]]></programlisting>
183 <example><title>Stopping MyPLC:</title>
184 <programlisting><![CDATA[# service plc stop]]></programlisting>
<para> In <xref linkend="StartupSequence" />, we provide further
details that may be helpful if the service does not start
correctly.</para>
191 <para>Like all other registered System V init services, MyPLC is
192 started and shut down automatically when your host system boots
193 and powers off. You may disable automatic startup by invoking the
194 <command>chkconfig</command> command on a Red Hat or Fedora host
197 <example> <title>Disabling automatic startup of MyPLC.</title>
198 <programlisting><![CDATA[# chkconfig plc off]]></programlisting></example>
199 <example> <title>Re-enabling automatic startup of MyPLC.</title>
200 <programlisting><![CDATA[# chkconfig plc on]]></programlisting></example>
204 <section id="ChangingTheConfiguration">
205 <title>Changing the configuration</title>
207 <para>After verifying that MyPLC is working correctly, shut it
208 down and begin changing some of the default variable
209 values. Shut down MyPLC with <command>service plc stop</command>
210 (see <xref linkend="QuickStart" />). With a text
211 editor, open the file
212 <filename>/etc/planetlab/plc_config.xml</filename>. This file is
213 a self-documenting configuration file written in XML. Variables
214 are divided into categories. Variable identifiers must be
215 alphanumeric, plus underscore. A variable is referred to
216 canonically as the uppercase concatenation of its category
217 identifier, an underscore, and its variable identifier. Thus, a
218 variable with an <literal>id</literal> of
219 <literal>slice_prefix</literal> in the <literal>plc</literal>
220 category is referred to canonically as
221 <envar>PLC_SLICE_PREFIX</envar>.</para>
223 <para>The reason for this convention is that during MyPLC
224 startup, <filename>plc_config.xml</filename> is translated into
225 several different languages—shell, PHP, and
226 Python—so that scripts written in each of these languages
227 can refer to the same underlying configuration. Most MyPLC
228 scripts are written in shell, so the convention for shell
229 variables predominates.</para>
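<para>As an illustrative aside (plain shell, not part of MyPLC
itself), the canonical name is simply the category identifier and
the variable identifier joined by an underscore and converted to
uppercase:</para>

```shell
# Illustrative sketch (not MyPLC code): derive the canonical shell
# variable name from a category id and a variable id by joining
# them with an underscore and uppercasing the result.
category="plc"
variable="slice_prefix"
canonical=$(printf '%s_%s' "$category" "$variable" | tr '[:lower:]' '[:upper:]')
echo "$canonical"   # PLC_SLICE_PREFIX
```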
231 <para>The variables that you should change immediately are:</para>
234 <listitem><para><envar>PLC_NAME</envar>: Change this to the
235 name of your PLC installation.</para></listitem>
236 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
237 to a more secure password.</para></listitem>
239 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
240 Change this to the e-mail address at which you would like to
241 receive support requests.</para></listitem>
243 <listitem><para><envar>PLC_DB_HOST</envar>,
244 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
245 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
246 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
247 <envar>PLC_BOOT_IP</envar>: Change all of these to the
248 preferred FQDN and external IP address of your host
249 system.</para></listitem>
252 <para>The full set of applicable variables is described in <xref
253 linkend="VariablesDevel" />. After changing these variables,
254 save the file, then restart MyPLC with <command>service plc
255 start</command>. You should notice that the password of the
256 default administrator account is no longer
257 <literal>root</literal>, and that the default site name includes
258 the name of your PLC installation instead of PlanetLab.</para>
262 <title>Installing nodes</title>
264 <para>Install your first node by clicking <literal>Add
265 Node</literal> under the <literal>Nodes</literal> tab. Fill in
266 all the appropriate details, then click
267 <literal>Add</literal>. Download the node's configuration file
268 by clicking <literal>Download configuration file</literal> on
269 the <emphasis role="bold">Node Details</emphasis> page for the
270 node. Save it to a floppy disk or USB key as detailed in <xref
271 linkend="TechsGuide" />.</para>
273 <para>Follow the rest of the instructions in <xref
274 linkend="TechsGuide" /> for creating a Boot CD and installing
275 the node, except download the Boot CD image from the
276 <filename>/download</filename> directory of your PLC
277 installation, not from PlanetLab Central. The images located
278 here are customized for your installation. If you change the
279 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
280 if the SSL certificate of your boot server expires, MyPLC will
281 regenerate it and rebuild the Boot CD with the new
282 certificate. If this occurs, you must replace all Boot CDs
283 created before the certificate was regenerated.</para>
285 <para>The installation process for a node has significantly
286 improved since PlanetLab 3.3. It should now take only a few
287 seconds for a new node to become ready to create slices.</para>
291 <title>Administering nodes</title>
293 <para>You may administer nodes as <literal>root</literal> by
294 using the SSH key stored in
295 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
298 <title>Accessing nodes via SSH. Replace
299 <literal>node</literal> with the hostname of the node.</title>
301 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
304 <para>Besides the standard Linux log files located in
305 <filename>/var/log</filename>, several other files can give you
306 clues about any problems with active processes:</para>
309 <listitem><para><filename>/var/log/pl_nm</filename>: The log
310 file for the Node Manager.</para></listitem>
312 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
313 The log file for the Slice Creation Service.</para></listitem>
315 <listitem><para><filename>/var/log/propd</filename>: The log
316 file for Proper, the service which allows certain slices to
317 perform certain privileged operations in the root
318 context.</para></listitem>
320 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
321 The log file for PlanetFlow, the network traffic auditing
322 service.</para></listitem>
327 <title>Creating a slice</title>
329 <para>Create a slice by clicking <literal>Create Slice</literal>
330 under the <literal>Slices</literal> tab. Fill in all the
331 appropriate details, then click <literal>Create</literal>. Add
332 nodes to the slice by clicking <literal>Manage Nodes</literal>
333 on the <emphasis role="bold">Slice Details</emphasis> page for
<para>A <command>cron</command> job runs every five minutes and updates
338 <filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
339 with information about current slice state. The Slice Creation
340 Service running on every node polls this file every ten minutes
341 to determine if it needs to create or delete any slices. You may
342 accelerate this process manually if desired.</para>
345 <title>Forcing slice creation on a node.</title>
347 <programlisting><![CDATA[# Update slices.xml immediately
348 service plc start crond
350 # Kick the Slice Creation Service on a particular node.
351 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
352 vserver pl_conf exec service pl_conf restart]]></programlisting>
356 <section id="StartupSequence">
357 <title>Understanding the startup sequence</title>
359 <para>During service startup described in <xref
360 linkend="QuickStart" />, observe the output of this command for
361 any failures. If no failures occur, you should see output similar
362 to the following:</para>
365 <title>A successful MyPLC startup.</title>
367 <programlisting><![CDATA[Mounting PLC: [ OK ]
368 PLC: Generating network files: [ OK ]
369 PLC: Starting system logger: [ OK ]
370 PLC: Starting database server: [ OK ]
371 PLC: Generating SSL certificates: [ OK ]
372 PLC: Configuring the API: [ OK ]
373 PLC: Updating GPG keys: [ OK ]
374 PLC: Generating SSH keys: [ OK ]
375 PLC: Starting web server: [ OK ]
376 PLC: Bootstrapping the database: [ OK ]
377 PLC: Starting DNS server: [ OK ]
378 PLC: Starting crond: [ OK ]
379 PLC: Rebuilding Boot CD: [ OK ]
380 PLC: Rebuilding Boot Manager: [ OK ]
381 PLC: Signing node packages: [ OK ]
385 <para>If <filename>/plc/root</filename> is mounted successfully, a
386 complete log file of the startup process may be found at
387 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
388 for failure of each step include:</para>
391 <listitem><para><literal>Mounting PLC</literal>: If this step
392 fails, first ensure that you started MyPLC as root. Check
393 <filename>/etc/sysconfig/plc</filename> to ensure that
394 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
395 right locations. You may also have too many existing loopback
396 mounts, or your kernel may not support loopback mounting, bind
397 mounting, or the ext3 filesystem. Try freeing at least one
398 loopback device, or re-compiling your kernel to support loopback
399 mounting, bind mounting, and the ext3 filesystem. If you see an
400 error similar to <literal>Permission denied while trying to open
401 /plc/root.img</literal>, then SELinux may be enabled. See <xref
402 linkend="Requirements" /> above for details.</para></listitem>
404 <listitem><para><literal>Starting database server</literal>: If
405 this step fails, check
406 <filename>/plc/root/var/log/pgsql</filename> and
407 <filename>/plc/root/var/log/boot.log</filename>. The most common
408 reason for failure is that the default PostgreSQL port, TCP port
409 5432, is already in use. Check that you are not running a
410 PostgreSQL server on the host system.</para></listitem>
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
414 <filename>/plc/root/var/log/httpd/error_log</filename> and
415 <filename>/plc/root/var/log/boot.log</filename> for obvious
416 errors. The most common reason for failure is that the default
417 web ports, TCP ports 80 and 443, are already in use. Check that
418 you are not running a web server on the host
419 system.</para></listitem>
421 <listitem><para><literal>Bootstrapping the database</literal>:
422 If this step fails, it is likely that the previous step
423 (<literal>Starting web server</literal>) also failed. Another
424 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
425 <xref linkend="ChangingTheConfiguration" />) does not resolve to
426 the host on which the API server has been enabled. By default,
427 all services, including the API server, are enabled and run on
428 the same host, so check that <envar>PLC_API_HOST</envar> is
429 either <filename>localhost</filename> or resolves to a local IP
430 address.</para></listitem>
432 <listitem><para><literal>Starting crond</literal>: If this step
433 fails, it is likely that the previous steps (<literal>Starting
434 web server</literal> and <literal>Bootstrapping the
435 database</literal>) also failed. If not, check
436 <filename>/plc/root/var/log/boot.log</filename> for obvious
437 errors. This step starts the <command>cron</command> service and
438 generates the initial set of XML files that the Slice Creation
439 Service uses to determine slice state.</para></listitem>
442 <para>If no failures occur, then MyPLC should be active with a
443 default configuration. Open a web browser on the host system and
444 visit <literal>http://localhost/</literal>, which should bring you
445 to the front page of your PLC installation. The password of the
446 default administrator account
447 <literal>root@localhost.localdomain</literal> (set by
448 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
449 <envar>PLC_ROOT_PASSWORD</envar>).</para>
452 <section id="FilesInvolvedRuntime"> <title> Files and directories
453 involved in <emphasis>myplc</emphasis></title>
454 <para>MyPLC installs the following files and directories:</para>
458 <listitem><para><filename>/plc/root.img</filename>: The main
459 root filesystem of the MyPLC application. This file is an
460 uncompressed ext3 filesystem that is loopback mounted on
461 <filename>/plc/root</filename> when MyPLC starts. This
462 filesystem, even when mounted, should be treated as an opaque
463 binary that can and will be replaced in its entirety by any
464 upgrade of MyPLC.</para></listitem>
466 <listitem><para><filename>/plc/root</filename>: The mount point
467 for <filename>/plc/root.img</filename>. Once the root filesystem
468 is mounted, all MyPLC services run in a
469 <command>chroot</command> jail based in this
470 directory.</para></listitem>
473 <para><filename>/plc/data</filename>: The directory where user
474 data and generated files are stored. This directory is bind
475 mounted onto <filename>/plc/root/data</filename> so that it is
476 accessible as <filename>/data</filename> from within the
477 <command>chroot</command> jail. Files in this directory are
478 marked with <command>%config(noreplace)</command> in the
479 RPM. That is, during an upgrade of MyPLC, if a file has not
480 changed since the last installation or upgrade of MyPLC, it is
481 subject to upgrade and replacement. If the file has changed,
482 the new version of the file will be created with a
483 <filename>.rpmnew</filename> extension. Symlinks within the
484 MyPLC root filesystem ensure that the following directories
485 (relative to <filename>/plc/root</filename>) are stored
486 outside the MyPLC filesystem image:</para>
489 <listitem><para><filename>/etc/planetlab</filename>: This
490 directory contains the configuration files, keys, and
491 certificates that define your MyPLC
492 installation.</para></listitem>
494 <listitem><para><filename>/var/lib/pgsql</filename>: This
495 directory contains PostgreSQL database
496 files.</para></listitem>
498 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
499 directory contains node installation logs.</para></listitem>
501 <listitem><para><filename>/var/www/html/boot</filename>: This
502 directory contains the Boot Manager, customized for your MyPLC
503 installation, and its data files.</para></listitem>
505 <listitem><para><filename>/var/www/html/download</filename>: This
506 directory contains Boot CD images, customized for your MyPLC
507 installation.</para></listitem>
509 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
510 directory is where you should install node package updates,
511 if any. By default, nodes are installed from the tarball
513 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
514 which is pre-built from the latest PlanetLab Central
515 sources, and installed as part of your MyPLC
516 installation. However, nodes will attempt to install any
517 newer RPMs located in
518 <filename>/var/www/html/install-rpms/planetlab</filename>,
519 after initial installation and periodically thereafter. You
520 must run <command>yum-arch</command> and
521 <command>createrepo</command> to update the
522 <command>yum</command> caches in this directory after
523 installing a new RPM. PlanetLab Central cannot support any
524 changes to this directory.</para></listitem>
526 <listitem><para><filename>/var/www/html/xml</filename>: This
527 directory contains various XML files that the Slice Creation
528 Service uses to determine the state of slices. These XML
529 files are refreshed periodically by <command>cron</command>
530 jobs running in the MyPLC root.</para></listitem>
534 <listitem id="MyplcInitScripts">
535 <para><filename>/etc/init.d/plc</filename>: This file
536 is a System V init script installed on your host filesystem,
537 that allows you to start up and shut down MyPLC with a single
538 command, as described in <xref linkend="QuickStart" />.</para>
541 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
542 file is a shell script fragment that defines the variables
543 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
544 the values of these variables are <filename>/plc/root</filename>
545 and <filename>/plc/data</filename>, respectively. If you wish,
546 you may move your MyPLC installation to another location on your
547 host filesystem and edit the values of these variables
548 appropriately, but you will break the RPM upgrade
549 process. PlanetLab Central cannot support any changes to this
550 file.</para></listitem>
552 <listitem><para><filename>/etc/planetlab</filename>: This
553 symlink to <filename>/plc/data/etc/planetlab</filename> is
554 installed on the host system for convenience.</para></listitem>
559 <section id="DevelopmentEnvironment">
560 <title>Rebuilding and customizing MyPLC</title>
562 <para>The MyPLC package, though distributed as an RPM, is not a
563 traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that access to a complete Fedora Core 4
mirror is available.</para>
569 <para>For this reason, it is recommended that you only rebuild
570 MyPLC (or any of its components) from within the MyPLC development
571 environment. The MyPLC development environment is similar to MyPLC
572 itself in that it is a portable filesystem contained within a
573 <command>chroot</command> jail. The filesystem contains all the
574 necessary tools required to rebuild MyPLC, as well as a snapshot
575 of the PlanetLab source code base in the form of a local CVS
579 <title>Installation</title>
581 <para>Install the MyPLC development environment similarly to how
582 you would install MyPLC. You may install both packages on the same
583 host system if you wish. As with MyPLC, the MyPLC development
584 environment should be treated as a monolithic software
585 application, and any files present in the
586 <command>chroot</command> jail should not be modified directly, as
587 they are subject to upgrade.</para>
590 <listitem> <para>If your distribution supports RPM:</para>
591 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm]]></programlisting></listitem>
593 <listitem> <para>If your distribution does not support RPM:</para>
594 <programlisting><![CDATA[# cd /tmp
595 # wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
597 # rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
601 <section id="FilesInvolvedDevel"> <title> Files and directories
involved in <emphasis>myplc-devel</emphasis></title>
604 <para>The MyPLC development environment installs the following
605 files and directories:</para>
608 <listitem><para><filename>/plc/devel/root.img</filename>: The
609 main root filesystem of the MyPLC development environment. This
610 file is an uncompressed ext3 filesystem that is loopback mounted
611 on <filename>/plc/devel/root</filename> when the MyPLC
612 development environment is initialized. This filesystem, even
613 when mounted, should be treated as an opaque binary that can and
614 will be replaced in its entirety by any upgrade of the MyPLC
615 development environment.</para></listitem>
<listitem><para><filename>/plc/devel/root</filename>: The mount point for
619 <filename>/plc/devel/root.img</filename>.</para></listitem>
622 <para><filename>/plc/devel/data</filename>: The directory
623 where user data and generated files are stored. This directory
624 is bind mounted onto <filename>/plc/devel/root/data</filename>
625 so that it is accessible as <filename>/data</filename> from
626 within the <command>chroot</command> jail. Files in this
627 directory are marked with
628 <command>%config(noreplace)</command> in the RPM. Symlinks
629 ensure that the following directories (relative to
630 <filename>/plc/devel/root</filename>) are stored outside the
631 root filesystem image:</para>
634 <listitem><para><filename>/etc/planetlab</filename>: This
635 directory contains the configuration files that define your
636 MyPLC development environment.</para></listitem>
638 <listitem><para><filename>/cvs</filename>: A
639 snapshot of the PlanetLab source code is stored as a CVS
640 repository in this directory. Files in this directory will
641 <emphasis role="bold">not</emphasis> be updated by an upgrade of
642 <filename>myplc-devel</filename>. See <xref
643 linkend="UpdatingCVS" /> for more information about updating
644 PlanetLab source code.</para></listitem>
646 <listitem><para><filename>/build</filename>:
647 Builds are stored in this directory. This directory is bind
648 mounted onto <filename>/plc/devel/root/build</filename> so that
649 it is accessible as <filename>/build</filename> from within the
650 <command>chroot</command> jail. The build scripts in this
651 directory are themselves source controlled; see <xref
652 linkend="BuildingMyPLC" /> for more information about executing
653 builds.</para></listitem>
658 <para><filename>/etc/init.d/plc-devel</filename>: This file is
659 a System V init script installed on your host filesystem, that
660 allows you to start up and shut down the MyPLC development
661 environment with a single command.</para>
667 <title>Fedora Core 4 mirror requirement</title>
669 <para>The MyPLC development environment requires access to a
670 complete Fedora Core 4 i386 RPM repository, because several
671 different filesystems based upon Fedora Core 4 are constructed
672 during the process of building MyPLC. You may configure the
673 location of this repository via the
674 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
675 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
676 value of the variable should be a URL that points to the top
677 level of a Fedora mirror that provides the
678 <filename>base</filename>, <filename>updates</filename>, and
679 <filename>extras</filename> repositories, e.g.,</para>
682 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
683 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
684 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
685 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
686 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
689 <para>As implied by the list, the repository may be located on
690 the local filesystem, or it may be located on a remote FTP or
HTTP server. A URL beginning with <filename>file://</filename>
must refer to a path that exists at the specified location relative
to the root of the <command>chroot</command> jail. For optimum
performance and
694 reproducibility, specify
695 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
696 download all Fedora Core 4 RPMS into
697 <filename>/plc/devel/data/fedora</filename> on the host system
698 after installing <filename>myplc-devel</filename>. Use a tool
699 such as <command>wget</command> or <command>rsync</command> to
700 download the RPMS from a public mirror:</para>
703 <title>Setting up a local Fedora Core 4 repository.</title>
705 <programlisting><![CDATA[# mkdir -p /plc/devel/data/fedora
706 # cd /plc/devel/data/fedora
708 # for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
709 > wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
710 > done]]></programlisting>
713 <para>Change the repository URI and <command>--cut-dirs</command>
714 level as needed to produce a hierarchy that resembles:</para>
716 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
717 /plc/devel/data/fedora/core/updates/4/i386
718 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
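<para>To illustrate the <command>--cut-dirs</command> arithmetic
used above (a plain-shell sketch, not an official
<command>wget</command> recipe): <command>-nH</command> drops the
hostname, and <command>--cut-dirs=3</command> strips the first
three directory components (here
<filename>pub/fedora/linux</filename>), leaving the rest of the URL
path as the local path:</para>

```shell
# Sketch of the wget -nH --cut-dirs=3 path mapping (illustrative only).
url="http://coblitz.planet-lab.org/pub/fedora/linux/core/4/i386/os"
path=${url#*://*/}                        # -nH: drop scheme and hostname
localpath=$(echo "$path" | cut -d/ -f4-)  # --cut-dirs=3: strip 3 levels
echo "$localpath"   # core/4/i386/os
```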
720 <para>A list of additional Fedora Core 4 mirrors is available at
721 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
724 <section id="BuildingMyPLC">
725 <title>Building MyPLC</title>
727 <para>All PlanetLab source code modules are built and installed
728 as RPMS. A set of build scripts, checked into the
729 <filename>build/</filename> directory of the PlanetLab CVS
730 repository, eases the task of rebuilding PlanetLab source
<para> Before you try building MyPLC, you may want to check the
configuration of the development environment. It is kept in a file
named <emphasis>plc_config.xml</emphasis>, which follows a model
very similar to that of MyPLC, and is located in
<emphasis>/etc/planetlab</emphasis> within the chroot jail, or in
<emphasis>/plc/devel/data/etc/planetlab</emphasis> from the root
context. The set of applicable variables is described in
<xref linkend="VariablesDevel" />. </para>
742 <para>To build MyPLC, or any PlanetLab source code module, from
743 within the MyPLC development environment, execute the following
744 commands as root:</para>
747 <title>Building MyPLC.</title>
749 <programlisting><![CDATA[# Initialize MyPLC development environment
750 service plc-devel start
752 # Enter development environment
753 chroot /plc/devel/root su -
755 # Check out build scripts into a directory named after the current
# date. This is simply a convention; it need not be followed
757 # exactly. See build/build.sh for an example of a build script that
758 # names build directories after CVS tags.
759 DATE=$(date +%Y.%m.%d)
761 cvs -d /cvs checkout -d $DATE build
764 make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be
available in
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may then copy into the
<filename>/var/www/html/install-rpms/planetlab</filename>
772 directory of your MyPLC installation (see <xref
773 linkend="Installation" />).</para>
776 <section id="UpdatingCVS">
777 <title>Updating CVS</title>
779 <para>A complete snapshot of the PlanetLab source code is included
780 with the MyPLC development environment as a CVS repository in
781 <filename>/plc/devel/data/cvs</filename>. This CVS repository may
be accessed like any other CVS repository, for example through an
interface such as <ulink
url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
785 and file permissions may be altered to allow for fine-grained
786 access control. Although the files are included with the
787 <filename>myplc-devel</filename> RPM, they are <emphasis
788 role="bold">not</emphasis> subject to upgrade once installed. New
789 versions of the <filename>myplc-devel</filename> RPM will install
790 updated snapshot repositories in
791 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
792 where <literal>%{version}-%{release}</literal> is replaced with
793 the version number of the RPM.</para>
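<para>For instance (an illustrative sketch of the naming scheme
only), for a <filename>myplc-devel-0.4-3</filename> RPM the
directory name expands as follows:</para>

```shell
# Illustrative sketch: expansion of %{version}-%{release} for a
# myplc-devel-0.4-3 RPM, giving the snapshot repository path.
version="0.4"
release="3"
echo "/plc/devel/data/cvs-${version}-${release}"   # /plc/devel/data/cvs-0.4-3
```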
795 <para>Because the CVS repository is not automatically upgraded,
796 if you wish to keep your local repository synchronized with the
797 public PlanetLab repository, it is highly recommended that you
798 use CVS's support for <ulink
799 url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">vendor
800 branches</ulink> to track changes. Vendor branches ease the task
801 of merging upstream changes with your local modifications. To
802 import a new snapshot into your local repository (for example,
803 if you have just upgraded from
804 <filename>myplc-devel-0.4-2</filename> to
805 <filename>myplc-devel-0.4-3</filename> and you notice the new
806 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
807 execute the following commands as root from within the MyPLC
808 development environment:</para>
811 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
813 <para><emphasis role="bold">Warning</emphasis>: This may cause
814 severe, irreversible changes to be made to your local
815 repository. Always tag your local repository before
818 <programlisting><![CDATA[# Initialize MyPLC development environment
819 service plc-devel start
821 # Enter development environment
822 chroot /plc/devel/root su -
825 cvs -d /cvs rtag before-myplc-0_4-3-merge
828 TMP=$(mktemp -d /data/export.XXXXXX)
830 cvs -d /data/cvs-0.4-3 export -r HEAD .
831 cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
833 rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the command suggested by
837 CVS to help the merge. Explaining how to fix merge conflicts is
838 beyond the scope of this document; consult the CVS documentation
839 for more information on how to use CVS.</para>
843 <appendix id="VariablesRuntime">
844 <title>Configuration variables (for <emphasis>myplc</emphasis>)</title>
846 <para>Listed below is the set of standard configuration variables
847 and their default values, defined in the template
848 <filename>/etc/planetlab/default_config.xml</filename>. Additional
849 variables and their defaults may be defined in site-specific XML
850 templates that should be placed in
851 <filename>/etc/planetlab/configs/</filename>.</para>
856 <appendix id="VariablesDevel">
857 <title>Development configuration variables (for <emphasis>myplc-devel</emphasis>)</title>
863 <title>Bibliography</title>
865 <biblioentry id="TechsGuide">
866 <author><firstname>Mark</firstname><surname>Huang</surname></author>
868 url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
869 Technical Contact's Guide</ulink></title>