1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
<firstname>Mark</firstname><surname>Huang</surname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
45 <title>Overview</title>
47 <para>MyPLC is a complete PlanetLab Central (PLC) portable
48 installation contained within a <command>chroot</command>
49 jail. The default installation consists of a web server, an
50 XML-RPC API server, a boot server, and a database server: the core
51 components of PLC. The installation is customized through an
52 easy-to-use graphical interface. All PLC services are started up
53 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing PLC
services within a virtual filesystem. By packaging it in such a
57 manner, MyPLC may also be run on any modern Linux distribution,
58 and could conceivably even run in a PlanetLab slice.</para>
60 <figure id="Architecture">
61 <title>MyPLC architecture</title>
64 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
67 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
70 <phrase>MyPLC architecture</phrase>
<para>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host
system.</para>
<section> <title>Purpose of the <emphasis>myplc-devel</emphasis>
package</title>
82 <para> The <emphasis>myplc</emphasis> package comes with all
83 required node software, rebuilt from the public PlanetLab CVS
repository. If for any reason you need to implement your own
customized version of this software, you can use the
<emphasis>myplc-devel</emphasis> package to set up
your own development environment, including a local CVS
repository; you can then freely manage your changes and rebuild
your customized version of <emphasis>myplc</emphasis>. We also
document good practices that allow you to resynchronize your local
CVS repository with further evolution of the mainstream public
PlanetLab software. </para> </section>
97 <section id="Requirements"> <title> Requirements </title>
99 <para> <emphasis>myplc</emphasis> and
100 <emphasis>myplc-devel</emphasis> were designed as
101 <command>chroot</command> jails so as to reduce the requirements on
102 your host operating system. So in theory, these distributions should
103 work on virtually any Linux 2.6 based distribution, whether it
104 supports rpm or not. </para>
<para> In practice, however, there are some known limitations, so
please read the following notes before you proceed with the
installation.</para>

<para> As of August 2006 (i.e., <emphasis>myplc-0.5</emphasis>):</para>
<listitem><para> The software is largely based on <emphasis>Fedora
Core 4</emphasis>. Please note that the build server at Princeton
runs <emphasis>Fedora Core 2</emphasis>, together with an upgraded
119 <listitem><para> myplc and myplc-devel are known to work on both
120 <emphasis>Fedora Core 2</emphasis> and <emphasis>Fedora Core
4</emphasis>. Note, however, that on Fedora Core 4 at least, it is
highly recommended to use the <application>Security Level
Configuration</application> utility to <emphasis>switch off
SELinux</emphasis> on your box, because: </para>
myplc requires SELinux to be 'Permissive' at most
myplc-devel requires SELinux to be turned off.
<listitem> <para> In addition, as far as myplc is concerned, you
need to check your firewall configuration: the
<emphasis>http</emphasis> and
<emphasis>https</emphasis> ports must be open so as to accept
connections from the managed nodes and from users' desktops. </para> </listitem>
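<listitem> <para> As an illustration, on a Fedora host you can
check the SELinux mode and open the web ports from the command
line. The commands below are a sketch only; adapt them to your
actual firewall setup. </para>

<programlisting><![CDATA[# Check the current SELinux mode; it should report Permissive or Disabled
getenforce

# Switch to Permissive until the next reboot; to disable SELinux
# permanently, set SELINUX=disabled in /etc/selinux/config and reboot
setenforce 0

# Open the http and https ports (sketch; adjust to your firewall rules)
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT]]></programlisting> </listitem>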
145 <section id="Installation">
146 <title>Installation</title>
148 <para>Though internally composed of commodity software
149 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
151 no external dependencies, allowing it to be installed on
152 practically any Linux 2.6 based distribution:</para>
155 <title>Installing MyPLC.</title>
157 <programlisting><![CDATA[# If your distribution supports RPM
158 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
160 # If your distribution does not support RPM
wget -P /tmp http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
164 rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting>
167 <para>MyPLC installs the following files and directories:</para>
171 <listitem><para><filename>/plc/root.img</filename>: The main
172 root filesystem of the MyPLC application. This file is an
173 uncompressed ext3 filesystem that is loopback mounted on
174 <filename>/plc/root</filename> when MyPLC starts. This
175 filesystem, even when mounted, should be treated as an opaque
176 binary that can and will be replaced in its entirety by any
177 upgrade of MyPLC.</para></listitem>
179 <listitem><para><filename>/plc/root</filename>: The mount point
180 for <filename>/plc/root.img</filename>. Once the root filesystem
181 is mounted, all MyPLC services run in a
182 <command>chroot</command> jail based in this
183 directory.</para></listitem>
186 <para><filename>/plc/data</filename>: The directory where user
187 data and generated files are stored. This directory is bind
188 mounted onto <filename>/plc/root/data</filename> so that it is
189 accessible as <filename>/data</filename> from within the
190 <command>chroot</command> jail. Files in this directory are
191 marked with <command>%config(noreplace)</command> in the
192 RPM. That is, during an upgrade of MyPLC, if a file has not
193 changed since the last installation or upgrade of MyPLC, it is
194 subject to upgrade and replacement. If the file has changed,
195 the new version of the file will be created with a
196 <filename>.rpmnew</filename> extension. Symlinks within the
197 MyPLC root filesystem ensure that the following directories
198 (relative to <filename>/plc/root</filename>) are stored
199 outside the MyPLC filesystem image:</para>
202 <listitem><para><filename>/etc/planetlab</filename>: This
203 directory contains the configuration files, keys, and
204 certificates that define your MyPLC
205 installation.</para></listitem>
207 <listitem><para><filename>/var/lib/pgsql</filename>: This
208 directory contains PostgreSQL database
209 files.</para></listitem>
211 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
212 directory contains node installation logs.</para></listitem>
214 <listitem><para><filename>/var/www/html/boot</filename>: This
215 directory contains the Boot Manager, customized for your MyPLC
216 installation, and its data files.</para></listitem>
218 <listitem><para><filename>/var/www/html/download</filename>: This
219 directory contains Boot CD images, customized for your MyPLC
220 installation.</para></listitem>
222 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
223 directory is where you should install node package updates,
224 if any. By default, nodes are installed from the tarball
226 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
227 which is pre-built from the latest PlanetLab Central
228 sources, and installed as part of your MyPLC
229 installation. However, nodes will attempt to install any
230 newer RPMs located in
231 <filename>/var/www/html/install-rpms/planetlab</filename>,
232 after initial installation and periodically thereafter. You
233 must run <command>yum-arch</command> and
234 <command>createrepo</command> to update the
235 <command>yum</command> caches in this directory after
236 installing a new RPM. PlanetLab Central cannot support any
237 changes to this directory.</para></listitem>
239 <listitem><para><filename>/var/www/html/xml</filename>: This
240 directory contains various XML files that the Slice Creation
241 Service uses to determine the state of slices. These XML
242 files are refreshed periodically by <command>cron</command>
243 jobs running in the MyPLC root.</para></listitem>
248 <para><filename>/etc/init.d/plc</filename>: This file
is a System V init script installed on your host filesystem
that allows you to start up and shut down MyPLC with a single
251 command. On a Red Hat or Fedora host system, it is customary to
252 use the <command>service</command> command to invoke System V
255 <example id="StartingAndStoppingMyPLC">
256 <title>Starting and stopping MyPLC.</title>
258 <programlisting><![CDATA[# Starting MyPLC
262 service plc stop]]></programlisting>
265 <para>Like all other registered System V init services, MyPLC is
266 started and shut down automatically when your host system boots
267 and powers off. You may disable automatic startup by invoking
268 the <command>chkconfig</command> command on a Red Hat or Fedora
272 <title>Disabling automatic startup of MyPLC.</title>
274 <programlisting><![CDATA[# Disable automatic startup
277 # Enable automatic startup
278 chkconfig plc on]]></programlisting>
282 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
283 file is a shell script fragment that defines the variables
284 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
285 the values of these variables are <filename>/plc/root</filename>
286 and <filename>/plc/data</filename>, respectively. If you wish,
287 you may move your MyPLC installation to another location on your
288 host filesystem and edit the values of these variables
289 appropriately, but you will break the RPM upgrade
290 process. PlanetLab Central cannot support any changes to this
291 file.</para></listitem>
293 <listitem><para><filename>/etc/planetlab</filename>: This
294 symlink to <filename>/plc/data/etc/planetlab</filename> is
295 installed on the host system for convenience.</para></listitem>
300 <title>Quickstart</title>
302 <para>Once installed, start MyPLC (see <xref
303 linkend="StartingAndStoppingMyPLC" />). MyPLC must be started as
304 root. Observe the output of this command for any failures. If no
305 failures occur, you should see output similar to the
309 <title>A successful MyPLC startup.</title>
311 <programlisting><![CDATA[Mounting PLC: [ OK ]
312 PLC: Generating network files: [ OK ]
313 PLC: Starting system logger: [ OK ]
314 PLC: Starting database server: [ OK ]
315 PLC: Generating SSL certificates: [ OK ]
316 PLC: Configuring the API: [ OK ]
317 PLC: Updating GPG keys: [ OK ]
318 PLC: Generating SSH keys: [ OK ]
319 PLC: Starting web server: [ OK ]
320 PLC: Bootstrapping the database: [ OK ]
321 PLC: Starting DNS server: [ OK ]
322 PLC: Starting crond: [ OK ]
323 PLC: Rebuilding Boot CD: [ OK ]
324 PLC: Rebuilding Boot Manager: [ OK ]
325 PLC: Signing node packages: [ OK ]
329 <para>If <filename>/plc/root</filename> is mounted successfully, a
330 complete log file of the startup process may be found at
331 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
332 for failure of each step include:</para>
335 <listitem><para><literal>Mounting PLC</literal>: If this step
336 fails, first ensure that you started MyPLC as root. Check
337 <filename>/etc/sysconfig/plc</filename> to ensure that
338 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
339 right locations. You may also have too many existing loopback
340 mounts, or your kernel may not support loopback mounting, bind
341 mounting, or the ext3 filesystem. Try freeing at least one
342 loopback device, or re-compiling your kernel to support loopback
343 mounting, bind mounting, and the ext3 filesystem. If you see an
344 error similar to <literal>Permission denied while trying to open
345 /plc/root.img</literal>, then SELinux may be enabled. See <xref
346 linkend="Requirements" /> above for details.</para></listitem>
348 <listitem><para><literal>Starting database server</literal>: If
349 this step fails, check
350 <filename>/plc/root/var/log/pgsql</filename> and
351 <filename>/plc/root/var/log/boot.log</filename>. The most common
352 reason for failure is that the default PostgreSQL port, TCP port
353 5432, is already in use. Check that you are not running a
354 PostgreSQL server on the host system.</para></listitem>
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
<filename>/plc/root/var/log/httpd/error_log</filename> and
359 <filename>/plc/root/var/log/boot.log</filename> for obvious
360 errors. The most common reason for failure is that the default
361 web ports, TCP ports 80 and 443, are already in use. Check that
362 you are not running a web server on the host
363 system.</para></listitem>
365 <listitem><para><literal>Bootstrapping the database</literal>:
366 If this step fails, it is likely that the previous step
367 (<literal>Starting web server</literal>) also failed. Another
368 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
369 <xref linkend="ChangingTheConfiguration" />) does not resolve to
370 the host on which the API server has been enabled. By default,
371 all services, including the API server, are enabled and run on
372 the same host, so check that <envar>PLC_API_HOST</envar> is
373 either <filename>localhost</filename> or resolves to a local IP
374 address.</para></listitem>
376 <listitem><para><literal>Starting crond</literal>: If this step
377 fails, it is likely that the previous steps (<literal>Starting
378 web server</literal> and <literal>Bootstrapping the
379 database</literal>) also failed. If not, check
380 <filename>/plc/root/var/log/boot.log</filename> for obvious
381 errors. This step starts the <command>cron</command> service and
382 generates the initial set of XML files that the Slice Creation
383 Service uses to determine slice state.</para></listitem>
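<listitem><para>As a general diagnostic aid, the following
commands (a sketch, assuming standard Linux utilities) can help
confirm the conditions mentioned above, namely free loopback
devices, unoccupied ports, and a relaxed SELinux mode:</para>

<programlisting><![CDATA[# List loopback devices currently in use
losetup -a

# Check whether the web and database ports are already taken
netstat -tlnp | egrep ':(80|443|5432) '

# Check the SELinux mode, which should be Permissive or Disabled
getenforce]]></programlisting></listitem>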
386 <para>If no failures occur, then MyPLC should be active with a
387 default configuration. Open a web browser on the host system and
388 visit <literal>http://localhost/</literal>, which should bring you
389 to the front page of your PLC installation. The password of the
390 default administrator account
391 <literal>root@localhost.localdomain</literal> (set by
392 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
393 <envar>PLC_ROOT_PASSWORD</envar>).</para>
395 <section id="ChangingTheConfiguration">
396 <title>Changing the configuration</title>
398 <para>After verifying that MyPLC is working correctly, shut it
399 down and begin changing some of the default variable
400 values. Shut down MyPLC with <command>service plc stop</command>
401 (see <xref linkend="StartingAndStoppingMyPLC" />). With a text
402 editor, open the file
403 <filename>/etc/planetlab/plc_config.xml</filename>. This file is
404 a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers consist of
alphanumeric characters and underscores. A variable is referred to
407 canonically as the uppercase concatenation of its category
408 identifier, an underscore, and its variable identifier. Thus, a
409 variable with an <literal>id</literal> of
410 <literal>slice_prefix</literal> in the <literal>plc</literal>
411 category is referred to canonically as
412 <envar>PLC_SLICE_PREFIX</envar>.</para>
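<para>Schematically (this illustrates the naming convention only,
not the literal markup of
<filename>plc_config.xml</filename>):</para>

<programlisting><![CDATA[category id:    plc
variable id:    slice_prefix
canonical name: PLC_SLICE_PREFIX]]></programlisting>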
414 <para>The reason for this convention is that during MyPLC
415 startup, <filename>plc_config.xml</filename> is translated into
416 several different languages—shell, PHP, and
417 Python—so that scripts written in each of these languages
418 can refer to the same underlying configuration. Most MyPLC
419 scripts are written in shell, so the convention for shell
420 variables predominates.</para>
422 <para>The variables that you should change immediately are:</para>
425 <listitem><para><envar>PLC_NAME</envar>: Change this to the
426 name of your PLC installation.</para></listitem>
427 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
428 to a more secure password.</para></listitem>
430 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
431 Change this to the e-mail address at which you would like to
432 receive support requests.</para></listitem>
434 <listitem><para><envar>PLC_DB_HOST</envar>,
435 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
436 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
437 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
438 <envar>PLC_BOOT_IP</envar>: Change all of these to the
439 preferred FQDN and external IP address of your host
440 system.</para></listitem>
443 <para>After changing these variables, save the file, then
444 restart MyPLC with <command>service plc start</command>. You
445 should notice that the password of the default administrator
446 account is no longer <literal>root</literal>, and that the
447 default site name includes the name of your PLC installation
448 instead of PlanetLab.</para>
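<para>The reconfiguration cycle described above thus boils down to
the following commands, run as root on the host system:</para>

<programlisting><![CDATA[# Stop MyPLC before editing the configuration
service plc stop

# Edit the self-documenting configuration file
vi /etc/planetlab/plc_config.xml

# Restart MyPLC with the new values
service plc start]]></programlisting>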
452 <title>Installing nodes</title>
454 <para>Install your first node by clicking <literal>Add
455 Node</literal> under the <literal>Nodes</literal> tab. Fill in
456 all the appropriate details, then click
457 <literal>Add</literal>. Download the node's configuration file
458 by clicking <literal>Download configuration file</literal> on
459 the <emphasis role="bold">Node Details</emphasis> page for the
460 node. Save it to a floppy disk or USB key as detailed in <xref
461 linkend="TechsGuide" />.</para>
463 <para>Follow the rest of the instructions in <xref
464 linkend="TechsGuide" /> for creating a Boot CD and installing
465 the node, except download the Boot CD image from the
466 <filename>/download</filename> directory of your PLC
467 installation, not from PlanetLab Central. The images located
468 here are customized for your installation. If you change the
469 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
470 if the SSL certificate of your boot server expires, MyPLC will
471 regenerate it and rebuild the Boot CD with the new
472 certificate. If this occurs, you must replace all Boot CDs
473 created before the certificate was regenerated.</para>
475 <para>The installation process for a node has significantly
476 improved since PlanetLab 3.3. It should now take only a few
477 seconds for a new node to become ready to create slices.</para>
481 <title>Administering nodes</title>
483 <para>You may administer nodes as <literal>root</literal> by
484 using the SSH key stored in
485 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
488 <title>Accessing nodes via SSH. Replace
489 <literal>node</literal> with the hostname of the node.</title>
491 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
494 <para>Besides the standard Linux log files located in
495 <filename>/var/log</filename>, several other files can give you
496 clues about any problems with active processes:</para>
499 <listitem><para><filename>/var/log/pl_nm</filename>: The log
500 file for the Node Manager.</para></listitem>
502 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
503 The log file for the Slice Creation Service.</para></listitem>
505 <listitem><para><filename>/var/log/propd</filename>: The log
506 file for Proper, the service which allows certain slices to
507 perform certain privileged operations in the root
508 context.</para></listitem>
510 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
511 The log file for PlanetFlow, the network traffic auditing
512 service.</para></listitem>
517 <title>Creating a slice</title>
519 <para>Create a slice by clicking <literal>Create Slice</literal>
520 under the <literal>Slices</literal> tab. Fill in all the
521 appropriate details, then click <literal>Create</literal>. Add
522 nodes to the slice by clicking <literal>Manage Nodes</literal>
523 on the <emphasis role="bold">Slice Details</emphasis> page for
<para>A <command>cron</command> job runs every five minutes and
updates
<filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
529 with information about current slice state. The Slice Creation
530 Service running on every node polls this file every ten minutes
531 to determine if it needs to create or delete any slices. You may
532 accelerate this process manually if desired.</para>
535 <title>Forcing slice creation on a node.</title>
537 <programlisting><![CDATA[# Update slices.xml immediately
538 service plc start crond
540 # Kick the Slice Creation Service on a particular node.
541 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
542 vserver pl_conf exec service pl_conf restart]]></programlisting>
547 <section id="DevelopmentEnvironment">
548 <title>Rebuilding and customizing MyPLC</title>
<para>The MyPLC package, though distributed as an RPM, is not a
traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that access to a complete Fedora Core 4
mirror is available.</para>
557 <para>For this reason, it is recommended that you only rebuild
558 MyPLC (or any of its components) from within the MyPLC development
559 environment. The MyPLC development environment is similar to MyPLC
560 itself in that it is a portable filesystem contained within a
561 <command>chroot</command> jail. The filesystem contains all the
562 necessary tools required to rebuild MyPLC, as well as a snapshot
563 of the PlanetLab source code base in the form of a local CVS
567 <title>Installation</title>
569 <para>Install the MyPLC development environment similarly to how
570 you would install MyPLC. You may install both packages on the same
571 host system if you wish. As with MyPLC, the MyPLC development
572 environment should be treated as a monolithic software
573 application, and any files present in the
574 <command>chroot</command> jail should not be modified directly, as
575 they are subject to upgrade.</para>
578 <title>Installing the MyPLC development environment.</title>
580 <programlisting><![CDATA[# If your distribution supports RPM
581 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
583 # If your distribution does not support RPM
wget -P /tmp http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
587 rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting>
590 <para>The MyPLC development environment installs the following
591 files and directories:</para>
594 <listitem><para><filename>/plc/devel/root.img</filename>: The
595 main root filesystem of the MyPLC development environment. This
596 file is an uncompressed ext3 filesystem that is loopback mounted
597 on <filename>/plc/devel/root</filename> when the MyPLC
598 development environment is initialized. This filesystem, even
599 when mounted, should be treated as an opaque binary that can and
600 will be replaced in its entirety by any upgrade of the MyPLC
601 development environment.</para></listitem>
<listitem><para><filename>/plc/devel/root</filename>: The mount
point for
<filename>/plc/devel/root.img</filename>.</para></listitem>
608 <para><filename>/plc/devel/data</filename>: The directory
609 where user data and generated files are stored. This directory
610 is bind mounted onto <filename>/plc/devel/root/data</filename>
611 so that it is accessible as <filename>/data</filename> from
612 within the <command>chroot</command> jail. Files in this
613 directory are marked with
614 <command>%config(noreplace)</command> in the RPM. Symlinks
615 ensure that the following directories (relative to
616 <filename>/plc/devel/root</filename>) are stored outside the
617 root filesystem image:</para>
620 <listitem><para><filename>/etc/planetlab</filename>: This
621 directory contains the configuration files that define your
622 MyPLC development environment.</para></listitem>
624 <listitem><para><filename>/cvs</filename>: A
625 snapshot of the PlanetLab source code is stored as a CVS
626 repository in this directory. Files in this directory will
627 <emphasis role="bold">not</emphasis> be updated by an upgrade of
628 <filename>myplc-devel</filename>. See <xref
629 linkend="UpdatingCVS" /> for more information about updating
630 PlanetLab source code.</para></listitem>
632 <listitem><para><filename>/build</filename>:
633 Builds are stored in this directory. This directory is bind
634 mounted onto <filename>/plc/devel/root/build</filename> so that
635 it is accessible as <filename>/build</filename> from within the
636 <command>chroot</command> jail. The build scripts in this
637 directory are themselves source controlled; see <xref
638 linkend="BuildingMyPLC" /> for more information about executing
639 builds.</para></listitem>
644 <para><filename>/etc/init.d/plc-devel</filename>: This file is
a System V init script installed on your host filesystem that
646 allows you to start up and shut down the MyPLC development
647 environment with a single command.</para>
653 <title>Fedora Core 4 mirror requirement</title>
655 <para>The MyPLC development environment requires access to a
656 complete Fedora Core 4 i386 RPM repository, because several
657 different filesystems based upon Fedora Core 4 are constructed
658 during the process of building MyPLC. You may configure the
659 location of this repository via the
660 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
661 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
662 value of the variable should be a URL that points to the top
663 level of a Fedora mirror that provides the
664 <filename>base</filename>, <filename>updates</filename>, and
665 <filename>extras</filename> repositories, e.g.,</para>
668 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
669 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
670 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
671 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
672 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
675 <para>As implied by the list, the repository may be located on
676 the local filesystem, or it may be located on a remote FTP or
HTTP server. For URLs beginning with <filename>file://</filename>,
the specified location must exist relative to the root of
the <command>chroot</command> jail. For optimum performance and
680 reproducibility, specify
681 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
682 download all Fedora Core 4 RPMS into
683 <filename>/plc/devel/data/fedora</filename> on the host system
684 after installing <filename>myplc-devel</filename>. Use a tool
685 such as <command>wget</command> or <command>rsync</command> to
686 download the RPMS from a public mirror:</para>
689 <title>Setting up a local Fedora Core 4 repository.</title>
691 <programlisting><![CDATA[mkdir -p /plc/devel/data/fedora
692 cd /plc/devel/data/fedora
694 for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
695 wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
696 done]]></programlisting>
699 <para>Change the repository URI and <command>--cut-dirs</command>
700 level as needed to produce a hierarchy that resembles:</para>
702 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
703 /plc/devel/data/fedora/core/updates/4/i386
704 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
706 <para>A list of additional Fedora Core 4 mirrors is available at
707 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
710 <section id="BuildingMyPLC">
711 <title>Building MyPLC</title>
713 <para>All PlanetLab source code modules are built and installed
714 as RPMS. A set of build scripts, checked into the
715 <filename>build/</filename> directory of the PlanetLab CVS
716 repository, eases the task of rebuilding PlanetLab source
719 <para>To build MyPLC, or any PlanetLab source code module, from
720 within the MyPLC development environment, execute the following
721 commands as root:</para>
724 <title>Building MyPLC.</title>
726 <programlisting><![CDATA[# Initialize MyPLC development environment
727 service plc-devel start
729 # Enter development environment
730 chroot /plc/devel/root su -
732 # Check out build scripts into a directory named after the current
733 # date. This is simply a convention, it need not be followed
734 # exactly. See build/build.sh for an example of a build script that
735 # names build directories after CVS tags.
736 DATE=$(date +%Y.%m.%d)
738 cvs -d /cvs checkout -d $DATE build
741 make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be
located in
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may copy to the
<filename>/var/www/html/install-rpms/planetlab</filename>
749 directory of your MyPLC installation (see <xref
750 linkend="Installation" />).</para>
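<para>For instance, assuming a build directory named after the
current date as in the example above, the freshly built packages
could be published to a MyPLC installation on the same host as
follows (a sketch; the <filename>RPMS/i386</filename> layout is an
assumption, so check the actual build output first):</para>

<programlisting><![CDATA[# Copy the new RPMS into the node package repository of MyPLC
cp /plc/devel/data/build/$DATE/RPMS/i386/*.rpm \
   /plc/data/var/www/html/install-rpms/planetlab/

# Rebuild the yum caches so that nodes pick up the new packages
chroot /plc/root sh -c \
  'cd /var/www/html/install-rpms/planetlab && yum-arch . && createrepo .']]></programlisting>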
753 <section id="UpdatingCVS">
754 <title>Updating CVS</title>
756 <para>A complete snapshot of the PlanetLab source code is included
757 with the MyPLC development environment as a CVS repository in
758 <filename>/plc/devel/data/cvs</filename>. This CVS repository may
be accessed like any other CVS repository. It may be browsed
through an interface such as <ulink
761 url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
762 and file permissions may be altered to allow for fine-grained
763 access control. Although the files are included with the
764 <filename>myplc-devel</filename> RPM, they are <emphasis
765 role="bold">not</emphasis> subject to upgrade once installed. New
766 versions of the <filename>myplc-devel</filename> RPM will install
767 updated snapshot repositories in
768 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
769 where <literal>%{version}-%{release}</literal> is replaced with
770 the version number of the RPM.</para>
772 <para>Because the CVS repository is not automatically upgraded,
773 if you wish to keep your local repository synchronized with the
774 public PlanetLab repository, it is highly recommended that you
775 use CVS's support for <ulink
776 url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">vendor
777 branches</ulink> to track changes. Vendor branches ease the task
778 of merging upstream changes with your local modifications. To
779 import a new snapshot into your local repository (for example,
780 if you have just upgraded from
781 <filename>myplc-devel-0.4-2</filename> to
782 <filename>myplc-devel-0.4-3</filename> and you notice the new
783 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
784 execute the following commands as root from within the MyPLC
785 development environment:</para>
788 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
790 <para><emphasis role="bold">Warning</emphasis>: This may cause
791 severe, irreversible changes to be made to your local
792 repository. Always tag your local repository before
795 <programlisting><![CDATA[# Initialize MyPLC development environment
796 service plc-devel start
798 # Enter development environment
799 chroot /plc/devel/root su -
802 cvs -d /cvs rtag before-myplc-0_4-3-merge
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .
808 cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
810 rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the command suggested by
814 CVS to help the merge. Explaining how to fix merge conflicts is
815 beyond the scope of this document; consult the CVS documentation
816 for more information on how to use CVS.</para>
821 <title>Configuration variables (for <emphasis>myplc</emphasis>)</title>
823 <para>Listed below is the set of standard configuration variables
824 and their default values, defined in the template
825 <filename>/etc/planetlab/default_config.xml</filename>. Additional
826 variables and their defaults may be defined in site-specific XML
827 templates that should be placed in
828 <filename>/etc/planetlab/configs/</filename>.</para>
834 <title>Development configuration variables (for <emphasis>myplc-devel</emphasis>)</title>
840 <title>Bibliography</title>
842 <biblioentry id="TechsGuide">
843 <author><firstname>Mark</firstname><surname>Huang</surname></author>
845 url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
846 Technical Contact's Guide</ulink></title>