1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
<firstname>Mark</firstname><surname>Huang</surname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
32 <date>April 7, 2006</date>
34 <authorinitials>MLH</authorinitials>
37 <para>Initial draft.</para>
44 <title>Overview</title>
46 <para>MyPLC is a complete PlanetLab Central (PLC) portable
47 installation contained within a <command>chroot</command>
48 jail. The default installation consists of a web server, an
49 XML-RPC API server, a boot server, and a database server: the core
50 components of PLC. The installation is customized through an
51 easy-to-use graphical interface. All PLC services are started up
52 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing PLC
services within a virtual filesystem. Because it is packaged this
way, MyPLC can also be run on any modern Linux distribution, and
could conceivably even run in a PlanetLab slice.</para>
59 <figure id="Architecture">
60 <title>MyPLC architecture</title>
63 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
66 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
69 <phrase>MyPLC architecture</phrase>
72 <para>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host system.</para>
80 <section id="Installation">
81 <title>Installation</title>
83 <para>Though internally composed of commodity software
84 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
86 no external dependencies, allowing it to be installed on
87 practically any Linux 2.6 based distribution:</para>
90 <title>Installing MyPLC.</title>
92 <programlisting><![CDATA[# If your distribution supports RPM
93 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
95 # If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting>
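<para>As a quick sanity check after installation (a sketch, assuming
an RPM-based host and that the package is named
<filename>myplc</filename>), you can list the files the package
installed and confirm that the init script was registered:</para>

<programlisting><![CDATA[# List a few of the installed files
rpm -ql myplc | head

# Confirm that the plc init script is registered
chkconfig --list plc]]></programlisting>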
102 <para>MyPLC installs the following files and directories:</para>
106 <listitem><para><filename>/plc/root.img</filename>: The main
107 root filesystem of the MyPLC application. This file is an
108 uncompressed ext3 filesystem that is loopback mounted on
109 <filename>/plc/root</filename> when MyPLC starts. This
110 filesystem, even when mounted, should be treated as an opaque
111 binary that can and will be replaced in its entirety by any
112 upgrade of MyPLC.</para></listitem>
114 <listitem><para><filename>/plc/root</filename>: The mount point
115 for <filename>/plc/root.img</filename>. Once the root filesystem
116 is mounted, all MyPLC services run in a
117 <command>chroot</command> jail based in this
118 directory.</para></listitem>
121 <para><filename>/plc/data</filename>: The directory where user
122 data and generated files are stored. This directory is bind
123 mounted onto <filename>/plc/root/data</filename> so that it is
124 accessible as <filename>/data</filename> from within the
125 <command>chroot</command> jail. Files in this directory are
126 marked with <command>%config(noreplace)</command> in the
127 RPM. That is, during an upgrade of MyPLC, if a file has not
128 changed since the last installation or upgrade of MyPLC, it is
129 subject to upgrade and replacement. If the file has changed,
130 the new version of the file will be created with a
131 <filename>.rpmnew</filename> extension. Symlinks within the
132 MyPLC root filesystem ensure that the following directories
133 (relative to <filename>/plc/root</filename>) are stored
134 outside the MyPLC filesystem image:</para>
137 <listitem><para><filename>/etc/planetlab</filename>: This
138 directory contains the configuration files, keys, and
139 certificates that define your MyPLC
140 installation.</para></listitem>
142 <listitem><para><filename>/var/lib/pgsql</filename>: This
143 directory contains PostgreSQL database
144 files.</para></listitem>
146 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
147 directory contains node installation logs.</para></listitem>
149 <listitem><para><filename>/var/www/html/boot</filename>: This
150 directory contains the Boot Manager, customized for your MyPLC
151 installation, and its data files.</para></listitem>
153 <listitem><para><filename>/var/www/html/download</filename>: This
154 directory contains Boot CD images, customized for your MyPLC
155 installation.</para></listitem>
157 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
158 directory is where you should install node package updates,
159 if any. By default, nodes are installed from the tarball
161 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
162 which is pre-built from the latest PlanetLab Central
163 sources, and installed as part of your MyPLC
164 installation. However, nodes will attempt to install any
165 newer RPMs located in
166 <filename>/var/www/html/install-rpms/planetlab</filename>,
after initial installation and periodically thereafter. You must run
<command>yum-arch</command> and <command>createrepo</command> to
update the <command>yum</command> caches in this directory after
installing a new RPM (see the example following this list).
PlanetLab Central cannot support any changes to this
directory.</para></listitem>
174 <listitem><para><filename>/var/www/html/xml</filename>: This
175 directory contains various XML files that the Slice Creation
176 Service uses to determine the state of slices. These XML
177 files are refreshed periodically by <command>cron</command>
178 jobs running in the MyPLC root.</para></listitem>
<para><filename>/etc/init.d/plc</filename>: This file is a System V
init script installed on your host filesystem that allows you to
start up and shut down MyPLC with a single command. On a Red Hat or
Fedora host system, it is customary to use the
<command>service</command> command to invoke System V init
services:</para>
190 <example id="StartingAndStoppingMyPLC">
191 <title>Starting and stopping MyPLC.</title>
<programlisting><![CDATA[# Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop]]></programlisting>
200 <para>Like all other registered System V init services, MyPLC is
201 started and shut down automatically when your host system boots
202 and powers off. You may disable automatic startup by invoking
the <command>chkconfig</command> command on a Red Hat or Fedora host
system:</para>
207 <title>Disabling automatic startup of MyPLC.</title>
<programlisting><![CDATA[# Disable automatic startup
chkconfig plc off

# Enable automatic startup
213 chkconfig plc on]]></programlisting>
217 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
218 file is a shell script fragment that defines the variables
219 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
220 the values of these variables are <filename>/plc/root</filename>
221 and <filename>/plc/data</filename>, respectively. If you wish,
222 you may move your MyPLC installation to another location on your
host filesystem and edit the values of these variables accordingly,
but doing so will break the RPM upgrade process. PlanetLab Central
cannot support any changes to this file.</para></listitem>
228 <listitem><para><filename>/etc/planetlab</filename>: This
229 symlink to <filename>/plc/data/etc/planetlab</filename> is
230 installed on the host system for convenience.</para></listitem>
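<para>For example, after copying a new node package into the
<filename>install-rpms</filename> directory described above, you
would regenerate the <command>yum</command> metadata from within the
<command>chroot</command> jail. The following is a sketch: it assumes
MyPLC is running (so that <filename>/plc/root</filename> is mounted)
and that <command>yum-arch</command> and
<command>createrepo</command> are available inside the jail; the RPM
name is hypothetical.</para>

<example>
<title>Updating the yum caches after adding a node package.</title>

<programlisting><![CDATA[# Copy the new package into place (my-update-1.0-1.i386.rpm is a
# hypothetical example)
cp my-update-1.0-1.i386.rpm /plc/data/var/www/html/install-rpms/planetlab/

# Regenerate the yum metadata inside the chroot jail
chroot /plc/root yum-arch /var/www/html/install-rpms/planetlab
chroot /plc/root createrepo /var/www/html/install-rpms/planetlab]]></programlisting>
</example>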
235 <title>Quickstart</title>
237 <para>Once installed, start MyPLC (see <xref
238 linkend="StartingAndStoppingMyPLC" />). MyPLC must be started as
239 root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the
following:</para>
244 <title>A successful MyPLC startup.</title>
246 <programlisting><![CDATA[Mounting PLC: [ OK ]
247 PLC: Generating network files: [ OK ]
248 PLC: Starting system logger: [ OK ]
249 PLC: Starting database server: [ OK ]
250 PLC: Generating SSL certificates: [ OK ]
251 PLC: Configuring the API: [ OK ]
252 PLC: Updating GPG keys: [ OK ]
253 PLC: Generating SSH keys: [ OK ]
254 PLC: Starting web server: [ OK ]
255 PLC: Bootstrapping the database: [ OK ]
256 PLC: Starting DNS server: [ OK ]
257 PLC: Starting crond: [ OK ]
258 PLC: Rebuilding Boot CD: [ OK ]
259 PLC: Rebuilding Boot Manager: [ OK ]
260 PLC: Signing node packages: [ OK ]
264 <para>If <filename>/plc/root</filename> is mounted successfully, a
265 complete log file of the startup process may be found at
266 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
267 for failure of each step include:</para>
270 <listitem><para><literal>Mounting PLC</literal>: If this step
271 fails, first ensure that you started MyPLC as root. Check
272 <filename>/etc/sysconfig/plc</filename> to ensure that
273 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
274 right locations. You may also have too many existing loopback
275 mounts, or your kernel may not support loopback mounting, bind
276 mounting, or the ext3 filesystem. Try freeing at least one
277 loopback device, or re-compiling your kernel to support loopback
278 mounting, bind mounting, and the ext3 filesystem. If you see an
279 error similar to <literal>Permission denied while trying to open
280 /plc/root.img</literal>, then SELinux may be enabled. If you
281 installed MyPLC on Fedora Core 4 or 5, use the
282 <application>Security Level Configuration</application> utility
283 to configure SELinux to be
284 <literal>Permissive</literal>.</para></listitem>
286 <listitem><para><literal>Starting database server</literal>: If
287 this step fails, check
288 <filename>/plc/root/var/log/pgsql</filename> and
289 <filename>/plc/root/var/log/boot.log</filename>. The most common
290 reason for failure is that the default PostgreSQL port, TCP port
291 5432, is already in use. Check that you are not running a
PostgreSQL server on the host system (see the example following this
list).</para></listitem>
294 <listitem><para><literal>Starting web server</literal>: If this
296 <filename>/plc/root/var/log/httpd/error_log</filename> and
297 <filename>/plc/root/var/log/boot.log</filename> for obvious
298 errors. The most common reason for failure is that the default
299 web ports, TCP ports 80 and 443, are already in use. Check that
300 you are not running a web server on the host
301 system.</para></listitem>
303 <listitem><para><literal>Bootstrapping the database</literal>:
304 If this step fails, it is likely that the previous step
305 (<literal>Starting web server</literal>) also failed. Another
306 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
307 <xref linkend="ChangingTheConfiguration" />) does not resolve to
308 the host on which the API server has been enabled. By default,
309 all services, including the API server, are enabled and run on
310 the same host, so check that <envar>PLC_API_HOST</envar> is
311 either <filename>localhost</filename> or resolves to a local IP
312 address.</para></listitem>
314 <listitem><para><literal>Starting crond</literal>: If this step
315 fails, it is likely that the previous steps (<literal>Starting
316 web server</literal> and <literal>Bootstrapping the
317 database</literal>) also failed. If not, check
318 <filename>/plc/root/var/log/boot.log</filename> for obvious
319 errors. This step starts the <command>cron</command> service and
320 generates the initial set of XML files that the Slice Creation
321 Service uses to determine slice state.</para></listitem>
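<para>Several of the checks suggested above can be run directly from
a shell on the host system. The following is a sketch, assuming the
standard <command>net-tools</command>, <command>util-linux</command>,
and SELinux utilities are installed on the host:</para>

<example>
<title>Checking for common startup problems.</title>

<programlisting><![CDATA[# Is another service already listening on the web or database ports?
netstat -tln | egrep ':(80|443|5432) '

# Which loopback devices are already in use? If all of them are
# taken, /plc/root.img cannot be mounted.
losetup -a

# Is SELinux enforcing? (a likely cause of "Permission denied" on
# /plc/root.img)
getenforce]]></programlisting>
</example>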
324 <para>If no failures occur, then MyPLC should be active with a
325 default configuration. Open a web browser on the host system and
326 visit <literal>http://localhost/</literal>, which should bring you
327 to the front page of your PLC installation. The password of the
328 default administrator account
329 <literal>root@localhost.localdomain</literal> (set by
330 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
331 <envar>PLC_ROOT_PASSWORD</envar>).</para>
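<para>You can also check the front page from the command line (a
sketch, assuming <command>curl</command> is installed on the host
system):</para>

<programlisting><![CDATA[# The web server should answer on the default port with an HTTP
# status line
curl -sI http://localhost/ | head -1]]></programlisting>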
333 <section id="ChangingTheConfiguration">
334 <title>Changing the configuration</title>
336 <para>After verifying that MyPLC is working correctly, shut it
337 down and begin changing some of the default variable
338 values. Shut down MyPLC with <command>service plc stop</command>
339 (see <xref linkend="StartingAndStoppingMyPLC" />). With a text
340 editor, open the file
341 <filename>/etc/planetlab/plc_config.xml</filename>. This file is
342 a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers may contain only
alphanumeric characters and underscores. A variable is referred to
345 canonically as the uppercase concatenation of its category
346 identifier, an underscore, and its variable identifier. Thus, a
347 variable with an <literal>id</literal> of
348 <literal>slice_prefix</literal> in the <literal>plc</literal>
349 category is referred to canonically as
350 <envar>PLC_SLICE_PREFIX</envar>.</para>
352 <para>The reason for this convention is that during MyPLC
353 startup, <filename>plc_config.xml</filename> is translated into
354 several different languages—shell, PHP, and
355 Python—so that scripts written in each of these languages
356 can refer to the same underlying configuration. Most MyPLC
357 scripts are written in shell, so the convention for shell
358 variables predominates.</para>
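<para>For example, a variable with an <literal>id</literal> of
<literal>slice_prefix</literal> in the <literal>plc</literal>
category can be read back from the generated shell translation of the
configuration. The following is only a sketch: the path
<filename>/etc/planetlab/plc_config</filename> for the generated
shell fragment is an assumption, not something guaranteed by this
document.</para>

<programlisting><![CDATA[# Read PLC_SLICE_PREFIX from the shell translation of plc_config.xml
# (the path of the generated fragment is an assumption; adjust it to
# match your installation)
chroot /plc/root sh -c '. /etc/planetlab/plc_config && echo $PLC_SLICE_PREFIX']]></programlisting>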
360 <para>The variables that you should change immediately are:</para>
363 <listitem><para><envar>PLC_NAME</envar>: Change this to the
364 name of your PLC installation.</para></listitem>
365 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
366 to a more secure password.</para></listitem>
368 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
369 Change this to the e-mail address at which you would like to
370 receive support requests.</para></listitem>
372 <listitem><para><envar>PLC_DB_HOST</envar>,
373 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
374 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
375 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
376 <envar>PLC_BOOT_IP</envar>: Change all of these to the
377 preferred FQDN and external IP address of your host
378 system.</para></listitem>
381 <para>After changing these variables, save the file, then
382 restart MyPLC with <command>service plc start</command>. You
383 should notice that the password of the default administrator
384 account is no longer <literal>root</literal>, and that the
385 default site name includes the name of your PLC installation
386 instead of PlanetLab.</para>
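<para>Putting these steps together (see <xref
linkend="StartingAndStoppingMyPLC" /> for starting and stopping
MyPLC):</para>

<example>
<title>Applying configuration changes.</title>

<programlisting><![CDATA[# Shut down MyPLC before editing the configuration
service plc stop

# Edit the configuration with any text editor
vi /etc/planetlab/plc_config.xml

# Restart MyPLC so the new values take effect
service plc start]]></programlisting>
</example>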
390 <title>Installing nodes</title>
392 <para>Install your first node by clicking <literal>Add
393 Node</literal> under the <literal>Nodes</literal> tab. Fill in
394 all the appropriate details, then click
395 <literal>Add</literal>. Download the node's configuration file
396 by clicking <literal>Download configuration file</literal> on
397 the <emphasis role="bold">Node Details</emphasis> page for the
398 node. Save it to a floppy disk or USB key as detailed in <xref
399 linkend="TechsGuide" />.</para>
401 <para>Follow the rest of the instructions in <xref
402 linkend="TechsGuide" /> for creating a Boot CD and installing
403 the node, except download the Boot CD image from the
404 <filename>/download</filename> directory of your PLC
405 installation, not from PlanetLab Central. The images located
406 here are customized for your installation. If you change the
407 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
if the SSL certificate of your boot server expires, MyPLC will
regenerate the certificate and rebuild the Boot CD with it. If this
occurs, you must replace all Boot CDs
411 created before the certificate was regenerated.</para>
413 <para>The installation process for a node has significantly
414 improved since PlanetLab 3.3. It should now take only a few
415 seconds for a new node to become ready to create slices.</para>
419 <title>Administering nodes</title>
421 <para>You may administer nodes as <literal>root</literal> by
422 using the SSH key stored in
423 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
426 <title>Accessing nodes via SSH. Replace
427 <literal>node</literal> with the hostname of the node.</title>
429 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
432 <para>Besides the standard Linux log files located in
433 <filename>/var/log</filename>, several other files can give you
434 clues about any problems with active processes:</para>
437 <listitem><para><filename>/var/log/pl_nm</filename>: The log
438 file for the Node Manager.</para></listitem>
440 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
441 The log file for the Slice Creation Service.</para></listitem>
443 <listitem><para><filename>/var/log/propd</filename>: The log
444 file for Proper, the service which allows certain slices to
445 perform certain privileged operations in the root
446 context.</para></listitem>
448 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
449 The log file for PlanetFlow, the network traffic auditing
450 service.</para></listitem>
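<para>Combining the SSH key above with these log locations, you can
inspect a log remotely. A sketch (replace <literal>node</literal>
with the hostname of the node):</para>

<programlisting><![CDATA[# Follow the Node Manager log on a node
ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
tail -f /var/log/pl_nm]]></programlisting>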
455 <title>Creating a slice</title>
457 <para>Create a slice by clicking <literal>Create Slice</literal>
458 under the <literal>Slices</literal> tab. Fill in all the
459 appropriate details, then click <literal>Create</literal>. Add
460 nodes to the slice by clicking <literal>Manage Nodes</literal>
on the <emphasis role="bold">Slice Details</emphasis> page for the
slice.</para>
<para>A <command>cron</command> job runs every five minutes and updates
466 <filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
467 with information about current slice state. The Slice Creation
468 Service running on every node polls this file every ten minutes
469 to determine if it needs to create or delete any slices. You may
470 accelerate this process manually if desired.</para>
473 <title>Forcing slice creation on a node.</title>
475 <programlisting><![CDATA[# Update slices.xml immediately
476 service plc start crond
478 # Kick the Slice Creation Service on a particular node.
479 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
480 vserver pl_conf exec service pl_conf restart]]></programlisting>
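<para>To confirm that the slice was actually instantiated on a node,
you can list the slice contexts on that node. This is a sketch; it
assumes slices appear as directories under
<filename>/vservers</filename>, as suggested by the log file paths in
the previous section:</para>

<programlisting><![CDATA[# List the slice contexts present on the node
ssh -i /etc/planetlab/root_ssh_key.rsa root@node ls /vservers]]></programlisting>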
486 <title>Rebuilding and customizing MyPLC</title>
488 <para>The MyPLC package, though distributed as an RPM, is not a
489 traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that a complete Fedora Core 4 mirror is
accessible.</para>
495 <para>For this reason, it is recommended that you only rebuild
496 MyPLC (or any of its components) from within the MyPLC development
497 environment. The MyPLC development environment is similar to MyPLC
498 itself in that it is a portable filesystem contained within a
499 <command>chroot</command> jail. The filesystem contains all the
tools required to rebuild MyPLC, as well as a snapshot of the
PlanetLab source code base in the form of a local CVS
repository.</para>
505 <title>Installation</title>
507 <para>Install the MyPLC development environment similarly to how
508 you would install MyPLC. You may install both packages on the same
509 host system if you wish. As with MyPLC, the MyPLC development
510 environment should be treated as a monolithic software
511 application, and any files present in the
512 <command>chroot</command> jail should not be modified directly, as
513 they are subject to upgrade.</para>
516 <title>Installing the MyPLC development environment.</title>
518 <programlisting><![CDATA[# If your distribution supports RPM
519 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
521 # If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting>
528 <para>The MyPLC development environment installs the following
529 files and directories:</para>
532 <listitem><para><filename>/plc/devel/root.img</filename>: The
533 main root filesystem of the MyPLC development environment. This
534 file is an uncompressed ext3 filesystem that is loopback mounted
535 on <filename>/plc/devel/root</filename> when the MyPLC
536 development environment is initialized. This filesystem, even
537 when mounted, should be treated as an opaque binary that can and
538 will be replaced in its entirety by any upgrade of the MyPLC
539 development environment.</para></listitem>
541 <listitem><para><filename>/plc/devel/root</filename>: The mount
point for <filename>/plc/devel/root.img</filename>.</para></listitem>
546 <para><filename>/plc/devel/data</filename>: The directory
547 where user data and generated files are stored. This directory
548 is bind mounted onto <filename>/plc/devel/root/data</filename>
549 so that it is accessible as <filename>/data</filename> from
550 within the <command>chroot</command> jail. Files in this
551 directory are marked with
552 <command>%config(noreplace)</command> in the RPM. Symlinks
553 ensure that the following directories (relative to
554 <filename>/plc/devel/root</filename>) are stored outside the
555 root filesystem image:</para>
558 <listitem><para><filename>/etc/planetlab</filename>: This
559 directory contains the configuration files that define your
560 MyPLC development environment.</para></listitem>
562 <listitem><para><filename>/cvs</filename>: A
563 snapshot of the PlanetLab source code is stored as a CVS
564 repository in this directory. Files in this directory will
565 <emphasis role="bold">not</emphasis> be updated by an upgrade of
566 <filename>myplc-devel</filename>. See <xref
567 linkend="UpdatingCVS" /> for more information about updating
568 PlanetLab source code.</para></listitem>
570 <listitem><para><filename>/build</filename>:
571 Builds are stored in this directory. This directory is bind
572 mounted onto <filename>/plc/devel/root/build</filename> so that
573 it is accessible as <filename>/build</filename> from within the
574 <command>chroot</command> jail. The build scripts in this
575 directory are themselves source controlled; see <xref
576 linkend="BuildingMyPLC" /> for more information about executing
577 builds.</para></listitem>
582 <para><filename>/etc/init.d/plc-devel</filename>: This file is
a System V init script installed on your host filesystem that
584 allows you to start up and shut down the MyPLC development
585 environment with a single command.</para>
591 <title>Fedora Core 4 mirror requirement</title>
593 <para>The MyPLC development environment requires access to a
594 complete Fedora Core 4 i386 RPM repository, because several
595 different filesystems based upon Fedora Core 4 are constructed
596 during the process of building MyPLC. You may configure the
597 location of this repository via the
598 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
599 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
600 value of the variable should be a URL that points to the top
601 level of a Fedora mirror that provides the
602 <filename>base</filename>, <filename>updates</filename>, and
603 <filename>extras</filename> repositories, e.g.,</para>
606 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
607 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
608 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
609 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
610 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
613 <para>As implied by the list, the repository may be located on
614 the local filesystem, or it may be located on a remote FTP or
HTTP server. A URL beginning with <filename>file://</filename> refers
to a location relative to the root of the <command>chroot</command>
jail, so the repository must already exist at that path inside the
development environment. For optimum performance and
618 reproducibility, specify
619 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
620 download all Fedora Core 4 RPMS into
621 <filename>/plc/devel/data/fedora</filename> on the host system
622 after installing <filename>myplc-devel</filename>. Use a tool
623 such as <command>wget</command> or <command>rsync</command> to
624 download the RPMS from a public mirror:</para>
627 <title>Setting up a local Fedora Core 4 repository.</title>
629 <programlisting><![CDATA[mkdir -p /plc/devel/data/fedora
630 cd /plc/devel/data/fedora
632 for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
633 wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
634 done]]></programlisting>
637 <para>Change the repository URI and <command>--cut-dirs</command>
638 level as needed to produce a hierarchy that resembles:</para>
640 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
641 /plc/devel/data/fedora/core/updates/4/i386
642 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
644 <para>A list of additional Fedora Core 4 mirrors is available at
645 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
648 <section id="BuildingMyPLC">
649 <title>Building MyPLC</title>
651 <para>All PlanetLab source code modules are built and installed
652 as RPMS. A set of build scripts, checked into the
653 <filename>build/</filename> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</para>
657 <para>To build MyPLC, or any PlanetLab source code module, from
658 within the MyPLC development environment, execute the following
659 commands as root:</para>
662 <title>Building MyPLC.</title>
664 <programlisting><![CDATA[# Initialize MyPLC development environment
665 service plc-devel start
667 # Enter development environment
668 chroot /plc/devel/root su -
670 # Check out build scripts into a directory named after the current
# date. This is simply a convention; it need not be followed
672 # exactly. See build/build.sh for an example of a build script that
673 # names build directories after CVS tags.
674 DATE=$(date +%Y.%m.%d)
676 cvs -d /cvs checkout -d $DATE build
# Run the build
make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be located in
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
should copy to the
<filename>/var/www/html/install-rpms/planetlab</filename>
687 directory of your MyPLC installation (see <xref
688 linkend="Installation" />).</para>
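<para>For example, to publish a day's build to your nodes, run
something like the following from the host system. This is a sketch:
<literal>$DATE</literal> is the build directory name used above, and
the destination is the host-side path of the
<filename>install-rpms</filename> directory described in <xref
linkend="Installation" />.</para>

<programlisting><![CDATA[# Copy the freshly built packages into the node update directory
find /plc/devel/data/build/$DATE/RPMS/ -name '*.rpm' \
-exec cp {} /plc/data/var/www/html/install-rpms/planetlab/ \;

# Remember to regenerate the yum metadata afterwards with yum-arch
# and createrepo, as described in the Installation section]]></programlisting>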
691 <section id="UpdatingCVS">
692 <title>Updating CVS</title>
694 <para>A complete snapshot of the PlanetLab source code is included
695 with the MyPLC development environment as a CVS repository in
<filename>/plc/devel/data/cvs</filename>. This CVS repository may be
accessed like any other CVS repository, for example through an
interface such as <ulink
699 url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
700 and file permissions may be altered to allow for fine-grained
701 access control. Although the files are included with the
702 <filename>myplc-devel</filename> RPM, they are <emphasis
703 role="bold">not</emphasis> subject to upgrade once installed. New
704 versions of the <filename>myplc-devel</filename> RPM will install
705 updated snapshot repositories in
706 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
where <literal>%{version}-%{release}</literal> is replaced with the
version and release number of the RPM.</para>
710 <para>Because the CVS repository is not automatically upgraded,
711 if you wish to keep your local repository synchronized with the
712 public PlanetLab repository, it is highly recommended that you
713 use CVS's support for <ulink
714 url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">vendor
715 branches</ulink> to track changes. Vendor branches ease the task
716 of merging upstream changes with your local modifications. To
717 import a new snapshot into your local repository (for example,
718 if you have just upgraded from
719 <filename>myplc-devel-0.4-2</filename> to
720 <filename>myplc-devel-0.4-3</filename> and you notice the new
721 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
722 execute the following commands as root from within the MyPLC
723 development environment:</para>
726 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
728 <para><emphasis role="bold">Warning</emphasis>: This may cause
729 severe, irreversible changes to be made to your local
repository. Always tag your local repository before attempting an
import.</para>
733 <programlisting><![CDATA[# Initialize MyPLC development environment
734 service plc-devel start
736 # Enter development environment
737 chroot /plc/devel/root su -
# Tag the current state of the repository before merging
cvs -d /cvs rtag before-myplc-0_4-3-merge .

# Export a clean copy of the new snapshot into a scratch directory,
# then import it onto the vendor branch of the local repository
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .
cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
cd / && rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the command suggested by
752 CVS to help the merge. Explaining how to fix merge conflicts is
753 beyond the scope of this document; consult the CVS documentation
754 for more information on how to use CVS.</para>
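<para>For reference, the merge command that CVS suggests after a
conflicting import typically looks like the following when run from
within the development environment. This is a sketch: the release
tags shown are examples, and the exact command is printed by
<command>cvs import</command> itself.</para>

<programlisting><![CDATA[# Merge the changes between the previous and the new vendor release
# tags into a working copy of the planetlab module (tags are examples)
cvs -d /cvs checkout -j myplc-0_4-2 -j myplc-0_4-3 planetlab]]></programlisting>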
759 <title>Configuration variables</title>
761 <para>Listed below is the set of standard configuration variables
762 and their default values, defined in the template
763 <filename>/etc/planetlab/default_config.xml</filename>. Additional
764 variables and their defaults may be defined in site-specific XML
765 templates that should be placed in
766 <filename>/etc/planetlab/configs/</filename>.</para>
772 <title>Development environment configuration variables</title>
778 <title>Bibliography</title>
780 <biblioentry id="TechsGuide">
781 <author><firstname>Mark</firstname><surname>Huang</surname></author>
<title><ulink url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
784 Technical Contact's Guide</ulink></title>