1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
<firstname>Mark</firstname><surname>Huang</surname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
45 <title>Overview</title>
47 <para>MyPLC is a complete PlanetLab Central (PLC) portable
48 installation contained within a <command>chroot</command>
49 jail. The default installation consists of a web server, an
50 XML-RPC API server, a boot server, and a database server: the core
51 components of PLC. The installation is customized through an
52 easy-to-use graphical interface. All PLC services are started up
53 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing
PLC services within a virtual filesystem. Packaged in this
manner, MyPLC can be run on any modern Linux distribution, and
could conceivably even run in a PlanetLab slice.</para>
60 <figure id="Architecture">
61 <title>MyPLC architecture</title>
64 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
67 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
70 <phrase>MyPLC architecture</phrase>
73 <para>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host
system.</para>
<section> <title>Purpose of the <emphasis>myplc-devel</emphasis> package</title>
82 <para> The <emphasis>myplc</emphasis> package comes with all
83 required node software, rebuilt from the public PlanetLab CVS
repository. If for any reason you need to implement your own
customized version of this software, you can use the
<emphasis>myplc-devel</emphasis> package instead to set up your
own development environment, including a local CVS repository;
you can then freely manage your changes and rebuild your
customized version of <emphasis>myplc</emphasis>. We also
describe recommended practices that allow you to resynchronize
your local CVS repository with further evolution of the
mainstream public PlanetLab software. </para> </section>
97 <section id="Requirements"> <title> Requirements </title>
99 <para> <emphasis>myplc</emphasis> and
100 <emphasis>myplc-devel</emphasis> were designed as
101 <command>chroot</command> jails so as to reduce the requirements on
your host operating system. In theory, these packages should
therefore run on virtually any Linux 2.6 based distribution,
whether or not it supports RPM. </para>
<para> However, there are some known limitations in practice, so
please review the following notes before you proceed with the
installation.</para>
<para> As of August 2006, the following known limitations apply:
<listitem><para> The software is largely based on <emphasis>Fedora
Core 4</emphasis>. Please note that the build server at Princeton
runs <emphasis>Fedora Core 2</emphasis>, together with an upgraded
121 <listitem><para> myplc and myplc-devel are known to work on both
122 <emphasis>Fedora Core 2</emphasis> and <emphasis>Fedora Core
123 4</emphasis>. Please note however that, on fc4 at least, it is
124 highly recommended to use the <application>Security Level
Configuration</application> utility and to <emphasis>switch off
SELinux</emphasis> on your box, because: </para>
myplc requires you to run SELinux as 'Permissive' at most
myplc-devel requires you to turn SELinux off.
140 <section id="Installation">
141 <title>Installation</title>
143 <para>Though internally composed of commodity software
144 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
146 no external dependencies, allowing it to be installed on
147 practically any Linux 2.6 based distribution:</para>
150 <title>Installing MyPLC.</title>
152 <programlisting><![CDATA[# If your distribution supports RPM
153 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting>
162 <para>MyPLC installs the following files and directories:</para>
166 <listitem><para><filename>/plc/root.img</filename>: The main
167 root filesystem of the MyPLC application. This file is an
168 uncompressed ext3 filesystem that is loopback mounted on
169 <filename>/plc/root</filename> when MyPLC starts. This
170 filesystem, even when mounted, should be treated as an opaque
171 binary that can and will be replaced in its entirety by any
172 upgrade of MyPLC.</para></listitem>
174 <listitem><para><filename>/plc/root</filename>: The mount point
175 for <filename>/plc/root.img</filename>. Once the root filesystem
176 is mounted, all MyPLC services run in a
177 <command>chroot</command> jail based in this
178 directory.</para></listitem>
181 <para><filename>/plc/data</filename>: The directory where user
182 data and generated files are stored. This directory is bind
183 mounted onto <filename>/plc/root/data</filename> so that it is
184 accessible as <filename>/data</filename> from within the
185 <command>chroot</command> jail. Files in this directory are
186 marked with <command>%config(noreplace)</command> in the
187 RPM. That is, during an upgrade of MyPLC, if a file has not
188 changed since the last installation or upgrade of MyPLC, it is
189 subject to upgrade and replacement. If the file has changed,
190 the new version of the file will be created with a
191 <filename>.rpmnew</filename> extension. Symlinks within the
192 MyPLC root filesystem ensure that the following directories
193 (relative to <filename>/plc/root</filename>) are stored
194 outside the MyPLC filesystem image:</para>
197 <listitem><para><filename>/etc/planetlab</filename>: This
198 directory contains the configuration files, keys, and
199 certificates that define your MyPLC
200 installation.</para></listitem>
202 <listitem><para><filename>/var/lib/pgsql</filename>: This
203 directory contains PostgreSQL database
204 files.</para></listitem>
206 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
207 directory contains node installation logs.</para></listitem>
209 <listitem><para><filename>/var/www/html/boot</filename>: This
210 directory contains the Boot Manager, customized for your MyPLC
211 installation, and its data files.</para></listitem>
213 <listitem><para><filename>/var/www/html/download</filename>: This
214 directory contains Boot CD images, customized for your MyPLC
215 installation.</para></listitem>
217 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
218 directory is where you should install node package updates,
219 if any. By default, nodes are installed from the tarball
221 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
222 which is pre-built from the latest PlanetLab Central
223 sources, and installed as part of your MyPLC
224 installation. However, nodes will attempt to install any
225 newer RPMs located in
226 <filename>/var/www/html/install-rpms/planetlab</filename>,
227 after initial installation and periodically thereafter. You
228 must run <command>yum-arch</command> and
229 <command>createrepo</command> to update the
230 <command>yum</command> caches in this directory after
231 installing a new RPM. PlanetLab Central cannot support any
232 changes to this directory.</para></listitem>
234 <listitem><para><filename>/var/www/html/xml</filename>: This
235 directory contains various XML files that the Slice Creation
236 Service uses to determine the state of slices. These XML
237 files are refreshed periodically by <command>cron</command>
238 jobs running in the MyPLC root.</para></listitem>
243 <para><filename>/etc/init.d/plc</filename>: This file
is a System V init script installed on your host filesystem
that allows you to start up and shut down MyPLC with a single
command. On a Red Hat or Fedora host system, it is customary to
use the <command>service</command> command to invoke System V
init scripts:
250 <example id="StartingAndStoppingMyPLC">
251 <title>Starting and stopping MyPLC.</title>
<programlisting><![CDATA[# Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop]]></programlisting>
260 <para>Like all other registered System V init services, MyPLC is
261 started and shut down automatically when your host system boots
and powers off. You may disable automatic startup by invoking
the <command>chkconfig</command> command on a Red Hat or Fedora
host system:
267 <title>Disabling automatic startup of MyPLC.</title>
<programlisting><![CDATA[# Disable automatic startup
chkconfig plc off

# Enable automatic startup
chkconfig plc on]]></programlisting>
277 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
278 file is a shell script fragment that defines the variables
279 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
280 the values of these variables are <filename>/plc/root</filename>
281 and <filename>/plc/data</filename>, respectively. If you wish,
282 you may move your MyPLC installation to another location on your
283 host filesystem and edit the values of these variables
284 appropriately, but you will break the RPM upgrade
285 process. PlanetLab Central cannot support any changes to this
286 file.</para></listitem>
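As a sketch, the default contents of this fragment amount to the following; the values simply restate the defaults described above and are not copied from a real installation:

```shell
# /etc/sysconfig/plc -- sketch of the default contents,
# reconstructed from the defaults described above (not copied
# from a real installation).
PLC_ROOT=/plc/root
PLC_DATA=/plc/data
```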
288 <listitem><para><filename>/etc/planetlab</filename>: This
289 symlink to <filename>/plc/data/etc/planetlab</filename> is
290 installed on the host system for convenience.</para></listitem>
295 <title>Quickstart</title>
297 <para>Once installed, start MyPLC (see <xref
298 linkend="StartingAndStoppingMyPLC" />). MyPLC must be started as
299 root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the following:
304 <title>A successful MyPLC startup.</title>
306 <programlisting><![CDATA[Mounting PLC: [ OK ]
307 PLC: Generating network files: [ OK ]
308 PLC: Starting system logger: [ OK ]
309 PLC: Starting database server: [ OK ]
310 PLC: Generating SSL certificates: [ OK ]
311 PLC: Configuring the API: [ OK ]
312 PLC: Updating GPG keys: [ OK ]
313 PLC: Generating SSH keys: [ OK ]
314 PLC: Starting web server: [ OK ]
315 PLC: Bootstrapping the database: [ OK ]
316 PLC: Starting DNS server: [ OK ]
317 PLC: Starting crond: [ OK ]
318 PLC: Rebuilding Boot CD: [ OK ]
319 PLC: Rebuilding Boot Manager: [ OK ]
320 PLC: Signing node packages: [ OK ]
324 <para>If <filename>/plc/root</filename> is mounted successfully, a
325 complete log file of the startup process may be found at
326 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
327 for failure of each step include:</para>
330 <listitem><para><literal>Mounting PLC</literal>: If this step
331 fails, first ensure that you started MyPLC as root. Check
332 <filename>/etc/sysconfig/plc</filename> to ensure that
333 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
334 right locations. You may also have too many existing loopback
335 mounts, or your kernel may not support loopback mounting, bind
336 mounting, or the ext3 filesystem. Try freeing at least one
337 loopback device, or re-compiling your kernel to support loopback
338 mounting, bind mounting, and the ext3 filesystem. If you see an
339 error similar to <literal>Permission denied while trying to open
340 /plc/root.img</literal>, then SELinux may be enabled. See <xref
341 linkend="Requirements" /> above for details.</para></listitem>
343 <listitem><para><literal>Starting database server</literal>: If
344 this step fails, check
345 <filename>/plc/root/var/log/pgsql</filename> and
346 <filename>/plc/root/var/log/boot.log</filename>. The most common
347 reason for failure is that the default PostgreSQL port, TCP port
348 5432, is already in use. Check that you are not running a
349 PostgreSQL server on the host system.</para></listitem>
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
<filename>/plc/root/var/log/httpd/error_log</filename> and
354 <filename>/plc/root/var/log/boot.log</filename> for obvious
355 errors. The most common reason for failure is that the default
356 web ports, TCP ports 80 and 443, are already in use. Check that
357 you are not running a web server on the host
358 system.</para></listitem>
360 <listitem><para><literal>Bootstrapping the database</literal>:
361 If this step fails, it is likely that the previous step
362 (<literal>Starting web server</literal>) also failed. Another
363 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
364 <xref linkend="ChangingTheConfiguration" />) does not resolve to
365 the host on which the API server has been enabled. By default,
366 all services, including the API server, are enabled and run on
367 the same host, so check that <envar>PLC_API_HOST</envar> is
368 either <filename>localhost</filename> or resolves to a local IP
369 address.</para></listitem>
371 <listitem><para><literal>Starting crond</literal>: If this step
372 fails, it is likely that the previous steps (<literal>Starting
373 web server</literal> and <literal>Bootstrapping the
374 database</literal>) also failed. If not, check
375 <filename>/plc/root/var/log/boot.log</filename> for obvious
376 errors. This step starts the <command>cron</command> service and
377 generates the initial set of XML files that the Slice Creation
378 Service uses to determine slice state.</para></listitem>
381 <para>If no failures occur, then MyPLC should be active with a
382 default configuration. Open a web browser on the host system and
383 visit <literal>http://localhost/</literal>, which should bring you
384 to the front page of your PLC installation. The password of the
385 default administrator account
386 <literal>root@localhost.localdomain</literal> (set by
387 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
388 <envar>PLC_ROOT_PASSWORD</envar>).</para>
390 <section id="ChangingTheConfiguration">
391 <title>Changing the configuration</title>
393 <para>After verifying that MyPLC is working correctly, shut it
394 down and begin changing some of the default variable
395 values. Shut down MyPLC with <command>service plc stop</command>
396 (see <xref linkend="StartingAndStoppingMyPLC" />). With a text
397 editor, open the file
398 <filename>/etc/planetlab/plc_config.xml</filename>. This file is
399 a self-documenting configuration file written in XML. Variables
400 are divided into categories. Variable identifiers must be
401 alphanumeric, plus underscore. A variable is referred to
402 canonically as the uppercase concatenation of its category
403 identifier, an underscore, and its variable identifier. Thus, a
404 variable with an <literal>id</literal> of
405 <literal>slice_prefix</literal> in the <literal>plc</literal>
406 category is referred to canonically as
407 <envar>PLC_SLICE_PREFIX</envar>.</para>
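The convention can be sketched as a small shell helper; the function name <literal>canonical_name</literal> is ours, for illustration only, and is not part of MyPLC:

```shell
# Derive the canonical shell variable name from a category id and
# a variable id: concatenate with an underscore and uppercase.
# The helper name (canonical_name) is illustrative, not part of MyPLC.
canonical_name () {
    echo "${1}_${2}" | tr '[:lower:]' '[:upper:]'
}

canonical_name plc slice_prefix   # prints PLC_SLICE_PREFIX
```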
409 <para>The reason for this convention is that during MyPLC
410 startup, <filename>plc_config.xml</filename> is translated into
several different languages (shell, PHP, and
Python) so that scripts written in each of these languages
413 can refer to the same underlying configuration. Most MyPLC
414 scripts are written in shell, so the convention for shell
415 variables predominates.</para>
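As an illustration of how a shell script consumes the shell translation, the sketch below writes a stand-in configuration file and sources it; the file name and its contents are invented for the example, since the location of the generated file is not specified here:

```shell
# Sketch: reading the shell translation of plc_config.xml.
# The file below is a stand-in created by this example; in MyPLC
# the real file is generated from plc_config.xml at startup.
cat > /tmp/plc_config.demo <<'EOF'
PLC_NAME="PlanetLab Test"
PLC_SLICE_PREFIX="pl"
EOF

. /tmp/plc_config.demo
echo "Slices will be named ${PLC_SLICE_PREFIX}_*"   # prints: Slices will be named pl_*
```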
417 <para>The variables that you should change immediately are:</para>
420 <listitem><para><envar>PLC_NAME</envar>: Change this to the
421 name of your PLC installation.</para></listitem>
422 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
423 to a more secure password.</para></listitem>
425 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
426 Change this to the e-mail address at which you would like to
427 receive support requests.</para></listitem>
429 <listitem><para><envar>PLC_DB_HOST</envar>,
430 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
431 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
432 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
433 <envar>PLC_BOOT_IP</envar>: Change all of these to the
434 preferred FQDN and external IP address of your host
435 system.</para></listitem>
438 <para>After changing these variables, save the file, then
439 restart MyPLC with <command>service plc start</command>. You
440 should notice that the password of the default administrator
441 account is no longer <literal>root</literal>, and that the
442 default site name includes the name of your PLC installation
443 instead of PlanetLab.</para>
447 <title>Installing nodes</title>
449 <para>Install your first node by clicking <literal>Add
450 Node</literal> under the <literal>Nodes</literal> tab. Fill in
451 all the appropriate details, then click
452 <literal>Add</literal>. Download the node's configuration file
453 by clicking <literal>Download configuration file</literal> on
454 the <emphasis role="bold">Node Details</emphasis> page for the
455 node. Save it to a floppy disk or USB key as detailed in <xref
456 linkend="TechsGuide" />.</para>
458 <para>Follow the rest of the instructions in <xref
459 linkend="TechsGuide" /> for creating a Boot CD and installing
460 the node, except download the Boot CD image from the
461 <filename>/download</filename> directory of your PLC
462 installation, not from PlanetLab Central. The images located
463 here are customized for your installation. If you change the
464 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
if the SSL certificate of your boot server expires, MyPLC will
regenerate the certificate and rebuild the Boot CD with the new
certificate. If this occurs, you must replace all Boot CDs
468 created before the certificate was regenerated.</para>
470 <para>The installation process for a node has significantly
471 improved since PlanetLab 3.3. It should now take only a few
472 seconds for a new node to become ready to create slices.</para>
476 <title>Administering nodes</title>
478 <para>You may administer nodes as <literal>root</literal> by
479 using the SSH key stored in
480 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
483 <title>Accessing nodes via SSH. Replace
484 <literal>node</literal> with the hostname of the node.</title>
486 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
489 <para>Besides the standard Linux log files located in
490 <filename>/var/log</filename>, several other files can give you
491 clues about any problems with active processes:</para>
494 <listitem><para><filename>/var/log/pl_nm</filename>: The log
495 file for the Node Manager.</para></listitem>
497 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
498 The log file for the Slice Creation Service.</para></listitem>
500 <listitem><para><filename>/var/log/propd</filename>: The log
file for Proper, the service that allows certain slices to
perform privileged operations in the root
context.</para></listitem>
505 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
506 The log file for PlanetFlow, the network traffic auditing
507 service.</para></listitem>
512 <title>Creating a slice</title>
514 <para>Create a slice by clicking <literal>Create Slice</literal>
515 under the <literal>Slices</literal> tab. Fill in all the
516 appropriate details, then click <literal>Create</literal>. Add
517 nodes to the slice by clicking <literal>Manage Nodes</literal>
518 on the <emphasis role="bold">Slice Details</emphasis> page for
<para>A <command>cron</command> job runs every five minutes and
updates
<filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
524 with information about current slice state. The Slice Creation
525 Service running on every node polls this file every ten minutes
526 to determine if it needs to create or delete any slices. You may
527 accelerate this process manually if desired.</para>
530 <title>Forcing slice creation on a node.</title>
532 <programlisting><![CDATA[# Update slices.xml immediately
533 service plc start crond
535 # Kick the Slice Creation Service on a particular node.
536 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
537 vserver pl_conf exec service pl_conf restart]]></programlisting>
542 <section id="DevelopmentEnvironment">
543 <title>Rebuilding and customizing MyPLC</title>
<para>The MyPLC package, though distributed as an RPM, is not a
traditional package that can be easily rebuilt from an SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that access to a complete Fedora Core
4 mirror is available.</para>
552 <para>For this reason, it is recommended that you only rebuild
553 MyPLC (or any of its components) from within the MyPLC development
554 environment. The MyPLC development environment is similar to MyPLC
555 itself in that it is a portable filesystem contained within a
556 <command>chroot</command> jail. The filesystem contains all the
557 necessary tools required to rebuild MyPLC, as well as a snapshot
558 of the PlanetLab source code base in the form of a local CVS
562 <title>Installation</title>
564 <para>Install the MyPLC development environment similarly to how
565 you would install MyPLC. You may install both packages on the same
566 host system if you wish. As with MyPLC, the MyPLC development
567 environment should be treated as a monolithic software
568 application, and any files present in the
569 <command>chroot</command> jail should not be modified directly, as
570 they are subject to upgrade.</para>
573 <title>Installing the MyPLC development environment.</title>
575 <programlisting><![CDATA[# If your distribution supports RPM
576 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting>
585 <para>The MyPLC development environment installs the following
586 files and directories:</para>
589 <listitem><para><filename>/plc/devel/root.img</filename>: The
590 main root filesystem of the MyPLC development environment. This
591 file is an uncompressed ext3 filesystem that is loopback mounted
592 on <filename>/plc/devel/root</filename> when the MyPLC
593 development environment is initialized. This filesystem, even
594 when mounted, should be treated as an opaque binary that can and
595 will be replaced in its entirety by any upgrade of the MyPLC
596 development environment.</para></listitem>
<listitem><para><filename>/plc/devel/root</filename>: The mount
point for
<filename>/plc/devel/root.img</filename>.</para></listitem>
603 <para><filename>/plc/devel/data</filename>: The directory
604 where user data and generated files are stored. This directory
605 is bind mounted onto <filename>/plc/devel/root/data</filename>
606 so that it is accessible as <filename>/data</filename> from
607 within the <command>chroot</command> jail. Files in this
608 directory are marked with
609 <command>%config(noreplace)</command> in the RPM. Symlinks
610 ensure that the following directories (relative to
611 <filename>/plc/devel/root</filename>) are stored outside the
612 root filesystem image:</para>
615 <listitem><para><filename>/etc/planetlab</filename>: This
616 directory contains the configuration files that define your
617 MyPLC development environment.</para></listitem>
619 <listitem><para><filename>/cvs</filename>: A
620 snapshot of the PlanetLab source code is stored as a CVS
621 repository in this directory. Files in this directory will
622 <emphasis role="bold">not</emphasis> be updated by an upgrade of
623 <filename>myplc-devel</filename>. See <xref
624 linkend="UpdatingCVS" /> for more information about updating
625 PlanetLab source code.</para></listitem>
627 <listitem><para><filename>/build</filename>:
628 Builds are stored in this directory. This directory is bind
629 mounted onto <filename>/plc/devel/root/build</filename> so that
630 it is accessible as <filename>/build</filename> from within the
631 <command>chroot</command> jail. The build scripts in this
632 directory are themselves source controlled; see <xref
633 linkend="BuildingMyPLC" /> for more information about executing
634 builds.</para></listitem>
<para><filename>/etc/init.d/plc-devel</filename>: This file is
a System V init script installed on your host filesystem that
641 allows you to start up and shut down the MyPLC development
642 environment with a single command.</para>
648 <title>Fedora Core 4 mirror requirement</title>
650 <para>The MyPLC development environment requires access to a
651 complete Fedora Core 4 i386 RPM repository, because several
652 different filesystems based upon Fedora Core 4 are constructed
653 during the process of building MyPLC. You may configure the
654 location of this repository via the
655 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
656 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
657 value of the variable should be a URL that points to the top
658 level of a Fedora mirror that provides the
659 <filename>base</filename>, <filename>updates</filename>, and
660 <filename>extras</filename> repositories, e.g.,</para>
663 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
664 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
665 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
666 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
667 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
670 <para>As implied by the list, the repository may be located on
671 the local filesystem, or it may be located on a remote FTP or
HTTP server. A URL beginning with <filename>file://</filename>
must refer to a location that exists relative to the root of
the <command>chroot</command> jail. For optimum performance and
675 reproducibility, specify
676 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
677 download all Fedora Core 4 RPMS into
678 <filename>/plc/devel/data/fedora</filename> on the host system
679 after installing <filename>myplc-devel</filename>. Use a tool
680 such as <command>wget</command> or <command>rsync</command> to
681 download the RPMS from a public mirror:</para>
684 <title>Setting up a local Fedora Core 4 repository.</title>
686 <programlisting><![CDATA[mkdir -p /plc/devel/data/fedora
687 cd /plc/devel/data/fedora
689 for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
690 wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
691 done]]></programlisting>
694 <para>Change the repository URI and <command>--cut-dirs</command>
695 level as needed to produce a hierarchy that resembles:</para>
697 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
698 /plc/devel/data/fedora/core/updates/4/i386
699 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
701 <para>A list of additional Fedora Core 4 mirrors is available at
702 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
705 <section id="BuildingMyPLC">
706 <title>Building MyPLC</title>
708 <para>All PlanetLab source code modules are built and installed
709 as RPMS. A set of build scripts, checked into the
710 <filename>build/</filename> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</para>
714 <para>To build MyPLC, or any PlanetLab source code module, from
715 within the MyPLC development environment, execute the following
716 commands as root:</para>
719 <title>Building MyPLC.</title>
721 <programlisting><![CDATA[# Initialize MyPLC development environment
722 service plc-devel start
724 # Enter development environment
725 chroot /plc/devel/root su -
727 # Check out build scripts into a directory named after the current
728 # date. This is simply a convention, it need not be followed
729 # exactly. See build/build.sh for an example of a build script that
730 # names build directories after CVS tags.
731 DATE=$(date +%Y.%m.%d)
cvs -d /cvs checkout -d $DATE build

# Build
make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be
located in
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may copy to the
<filename>/var/www/html/install-rpms/planetlab</filename>
directory of your MyPLC installation (see <xref
linkend="Installation" />).</para>
748 <section id="UpdatingCVS">
749 <title>Updating CVS</title>
751 <para>A complete snapshot of the PlanetLab source code is included
752 with the MyPLC development environment as a CVS repository in
753 <filename>/plc/devel/data/cvs</filename>. This CVS repository may
be accessed like any other CVS repository, for example through a
web interface such as <ulink
756 url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
757 and file permissions may be altered to allow for fine-grained
758 access control. Although the files are included with the
759 <filename>myplc-devel</filename> RPM, they are <emphasis
760 role="bold">not</emphasis> subject to upgrade once installed. New
761 versions of the <filename>myplc-devel</filename> RPM will install
762 updated snapshot repositories in
763 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
764 where <literal>%{version}-%{release}</literal> is replaced with
765 the version number of the RPM.</para>
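To make the naming concrete, the sketch below shows how the snapshot directory name expands; the version and release values are hypothetical, matching the 0.4-3 example used later in this section:

```shell
# How the snapshot directory name is formed from the RPM version
# and release. The values below are hypothetical (0.4-3).
version=0.4
release=3
echo "/plc/devel/data/cvs-${version}-${release}"   # prints /plc/devel/data/cvs-0.4-3
```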
767 <para>Because the CVS repository is not automatically upgraded,
768 if you wish to keep your local repository synchronized with the
769 public PlanetLab repository, it is highly recommended that you
770 use CVS's support for <ulink
771 url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">vendor
772 branches</ulink> to track changes. Vendor branches ease the task
773 of merging upstream changes with your local modifications. To
774 import a new snapshot into your local repository (for example,
775 if you have just upgraded from
776 <filename>myplc-devel-0.4-2</filename> to
777 <filename>myplc-devel-0.4-3</filename> and you notice the new
778 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
779 execute the following commands as root from within the MyPLC
780 development environment:</para>
783 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
785 <para><emphasis role="bold">Warning</emphasis>: This may cause
786 severe, irreversible changes to be made to your local
repository. Always tag your local repository before
merging.</para>
790 <programlisting><![CDATA[# Initialize MyPLC development environment
791 service plc-devel start
793 # Enter development environment
794 chroot /plc/devel/root su -
# Tag the current state of the local repository
cvs -d /cvs rtag before-myplc-0_4-3-merge

# Export the new snapshot and import it onto a vendor branch
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .
cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
cd /
rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the command suggested by
809 CVS to help the merge. Explaining how to fix merge conflicts is
810 beyond the scope of this document; consult the CVS documentation
811 for more information on how to use CVS.</para>
816 <title>Configuration variables (for <emphasis>myplc</emphasis>)</title>
818 <para>Listed below is the set of standard configuration variables
819 and their default values, defined in the template
820 <filename>/etc/planetlab/default_config.xml</filename>. Additional
821 variables and their defaults may be defined in site-specific XML
822 templates that should be placed in
823 <filename>/etc/planetlab/configs/</filename>.</para>
<title>Development configuration variables (for <emphasis>myplc-devel</emphasis>)</title>
835 <title>Bibliography</title>
837 <biblioentry id="TechsGuide">
838 <author><firstname>Mark</firstname><surname>Huang</surname></author>
840 url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
841 Technical Contact's Guide</ulink></title>