1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
<firstname>Mark</firstname><surname>Huang</surname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
45 <title>Overview</title>
47 <para>MyPLC is a complete PlanetLab Central (PLC) portable
48 installation contained within a <command>chroot</command>
49 jail. The default installation consists of a web server, an
50 XML-RPC API server, a boot server, and a database server: the core
51 components of PLC. The installation is customized through an
52 easy-to-use graphical interface. All PLC services are started up
53 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing PLC
services within a virtual filesystem. Packaged in this manner, MyPLC
may be run on any modern Linux distribution, and could conceivably
even run in a PlanetLab slice.</para>
60 <figure id="Architecture">
61 <title>MyPLC architecture</title>
64 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
67 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
70 <phrase>MyPLC architecture</phrase>
73 <para>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host system.</para>
81 <section id="Installation">
82 <title>Installation</title>
84 <para>Though internally composed of commodity software
85 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
no external dependencies, allowing it to be installed on
practically any Linux 2.6-based distribution:</para>
91 <title>Installing MyPLC.</title>
93 <programlisting><![CDATA[# If your distribution supports RPM
94 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
96 # If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting>
103 <para>MyPLC installs the following files and directories:</para>
107 <listitem><para><filename>/plc/root.img</filename>: The main
108 root filesystem of the MyPLC application. This file is an
109 uncompressed ext3 filesystem that is loopback mounted on
110 <filename>/plc/root</filename> when MyPLC starts. This
111 filesystem, even when mounted, should be treated as an opaque
112 binary that can and will be replaced in its entirety by any
113 upgrade of MyPLC.</para></listitem>
115 <listitem><para><filename>/plc/root</filename>: The mount point
116 for <filename>/plc/root.img</filename>. Once the root filesystem
117 is mounted, all MyPLC services run in a
118 <command>chroot</command> jail based in this
directory.</para>
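<para>A quick way to verify the mount from the host system (a
sketch, using the default paths described above) is:</para>

<programlisting><![CDATA[# Should show /plc/root.img loopback mounted on /plc/root
mount | grep /plc/root]]></programlisting></listitem>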
122 <para><filename>/plc/data</filename>: The directory where user
123 data and generated files are stored. This directory is bind
124 mounted onto <filename>/plc/root/data</filename> so that it is
125 accessible as <filename>/data</filename> from within the
126 <command>chroot</command> jail. Files in this directory are
127 marked with <command>%config(noreplace)</command> in the
128 RPM. That is, during an upgrade of MyPLC, if a file has not
129 changed since the last installation or upgrade of MyPLC, it is
130 subject to upgrade and replacement. If the file has changed,
131 the new version of the file will be created with a
132 <filename>.rpmnew</filename> extension. Symlinks within the
133 MyPLC root filesystem ensure that the following directories
134 (relative to <filename>/plc/root</filename>) are stored
135 outside the MyPLC filesystem image:</para>
138 <listitem><para><filename>/etc/planetlab</filename>: This
139 directory contains the configuration files, keys, and
140 certificates that define your MyPLC
141 installation.</para></listitem>
143 <listitem><para><filename>/var/lib/pgsql</filename>: This
144 directory contains PostgreSQL database
145 files.</para></listitem>
147 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
148 directory contains node installation logs.</para></listitem>
150 <listitem><para><filename>/var/www/html/boot</filename>: This
151 directory contains the Boot Manager, customized for your MyPLC
152 installation, and its data files.</para></listitem>
154 <listitem><para><filename>/var/www/html/download</filename>: This
155 directory contains Boot CD images, customized for your MyPLC
156 installation.</para></listitem>
158 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
159 directory is where you should install node package updates,
160 if any. By default, nodes are installed from the tarball
162 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
163 which is pre-built from the latest PlanetLab Central
164 sources, and installed as part of your MyPLC
165 installation. However, nodes will attempt to install any
166 newer RPMs located in
167 <filename>/var/www/html/install-rpms/planetlab</filename>,
168 after initial installation and periodically thereafter. You
169 must run <command>yum-arch</command> and
170 <command>createrepo</command> to update the
171 <command>yum</command> caches in this directory after
172 installing a new RPM. PlanetLab Central cannot support any
changes to this directory.</para>
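<para>As a sketch, assuming the default <filename>/plc</filename>
layout described above (and using
<filename>mypackage-1.0-1.i386.rpm</filename> as a placeholder
name), a new package could be added and the caches refreshed from
the host system as follows:</para>

<programlisting><![CDATA[# Copy the new package into the node repository (host-side path)
cp mypackage-1.0-1.i386.rpm /plc/data/var/www/html/install-rpms/planetlab/

# Regenerate the yum metadata from within the MyPLC root
chroot /plc/root yum-arch /var/www/html/install-rpms/planetlab
chroot /plc/root createrepo /var/www/html/install-rpms/planetlab]]></programlisting></listitem>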
175 <listitem><para><filename>/var/www/html/xml</filename>: This
176 directory contains various XML files that the Slice Creation
177 Service uses to determine the state of slices. These XML
178 files are refreshed periodically by <command>cron</command>
179 jobs running in the MyPLC root.</para></listitem>
184 <para><filename>/etc/init.d/plc</filename>: This file
is a System V init script installed on your host filesystem that
allows you to start up and shut down MyPLC with a single
command. On a Red Hat or Fedora host system, it is customary to
use the <command>service</command> command to invoke System V init
scripts:</para>
191 <example id="StartingAndStoppingMyPLC">
192 <title>Starting and stopping MyPLC.</title>
194 <programlisting><![CDATA[# Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop]]></programlisting>
201 <para>Like all other registered System V init services, MyPLC is
202 started and shut down automatically when your host system boots
203 and powers off. You may disable automatic startup by invoking
the <command>chkconfig</command> command on a Red Hat or Fedora
host system:</para>
208 <title>Disabling automatic startup of MyPLC.</title>
210 <programlisting><![CDATA[# Disable automatic startup
chkconfig plc off

# Enable automatic startup
214 chkconfig plc on]]></programlisting>
218 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
219 file is a shell script fragment that defines the variables
220 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
221 the values of these variables are <filename>/plc/root</filename>
222 and <filename>/plc/data</filename>, respectively. If you wish,
223 you may move your MyPLC installation to another location on your
224 host filesystem and edit the values of these variables
225 appropriately, but you will break the RPM upgrade
226 process. PlanetLab Central cannot support any changes to this
file.</para>
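<para>With the default locations, the fragment would look roughly
like the following (shown only as an illustration):</para>

<programlisting><![CDATA[PLC_ROOT=/plc/root
PLC_DATA=/plc/data]]></programlisting></listitem>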
229 <listitem><para><filename>/etc/planetlab</filename>: This
230 symlink to <filename>/plc/data/etc/planetlab</filename> is
231 installed on the host system for convenience.</para></listitem>
236 <title>Quickstart</title>
238 <para>Once installed, start MyPLC (see <xref
239 linkend="StartingAndStoppingMyPLC" />). MyPLC must be started as
240 root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the following:</para>
245 <title>A successful MyPLC startup.</title>
247 <programlisting><![CDATA[Mounting PLC: [ OK ]
248 PLC: Generating network files: [ OK ]
249 PLC: Starting system logger: [ OK ]
250 PLC: Starting database server: [ OK ]
251 PLC: Generating SSL certificates: [ OK ]
252 PLC: Configuring the API: [ OK ]
253 PLC: Updating GPG keys: [ OK ]
254 PLC: Generating SSH keys: [ OK ]
255 PLC: Starting web server: [ OK ]
256 PLC: Bootstrapping the database: [ OK ]
257 PLC: Starting DNS server: [ OK ]
258 PLC: Starting crond: [ OK ]
259 PLC: Rebuilding Boot CD: [ OK ]
260 PLC: Rebuilding Boot Manager: [ OK ]
261 PLC: Signing node packages: [ OK ]
265 <para>If <filename>/plc/root</filename> is mounted successfully, a
266 complete log file of the startup process may be found at
267 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
268 for failure of each step include:</para>
271 <listitem><para><literal>Mounting PLC</literal>: If this step
272 fails, first ensure that you started MyPLC as root. Check
273 <filename>/etc/sysconfig/plc</filename> to ensure that
274 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
275 right locations. You may also have too many existing loopback
276 mounts, or your kernel may not support loopback mounting, bind
277 mounting, or the ext3 filesystem. Try freeing at least one
278 loopback device, or re-compiling your kernel to support loopback
279 mounting, bind mounting, and the ext3 filesystem. If you see an
280 error similar to <literal>Permission denied while trying to open
281 /plc/root.img</literal>, then SELinux may be enabled. If you
282 installed MyPLC on Fedora Core 4 or 5, use the
283 <application>Security Level Configuration</application> utility
284 to configure SELinux to be
<literal>Permissive</literal>.</para>
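<para>The following commands, run as root on the host system, are
one way to check for these conditions (a sketch;
<command>setenforce 0</command> only relaxes SELinux until the next
reboot):</para>

<programlisting><![CDATA[# See which loopback devices are already in use
losetup -a

# Check whether SELinux is enforcing, and relax it for now if so
getenforce
setenforce 0]]></programlisting></listitem>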
287 <listitem><para><literal>Starting database server</literal>: If
288 this step fails, check
289 <filename>/plc/root/var/log/pgsql</filename> and
290 <filename>/plc/root/var/log/boot.log</filename>. The most common
291 reason for failure is that the default PostgreSQL port, TCP port
292 5432, is already in use. Check that you are not running a
PostgreSQL server on the host system.</para>
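<para>One way to check for a conflicting server on the host (a
sketch) is:</para>

<programlisting><![CDATA[# With MyPLC stopped, any listener on port 5432 belongs to the host
netstat -lnt | grep ':5432']]></programlisting></listitem>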
<listitem><para><literal>Starting web server</literal>: If this
step fails, check
297 <filename>/plc/root/var/log/httpd/error_log</filename> and
298 <filename>/plc/root/var/log/boot.log</filename> for obvious
299 errors. The most common reason for failure is that the default
300 web ports, TCP ports 80 and 443, are already in use. Check that
301 you are not running a web server on the host
302 system.</para></listitem>
304 <listitem><para><literal>Bootstrapping the database</literal>:
305 If this step fails, it is likely that the previous step
306 (<literal>Starting web server</literal>) also failed. Another
307 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
308 <xref linkend="ChangingTheConfiguration" />) does not resolve to
309 the host on which the API server has been enabled. By default,
310 all services, including the API server, are enabled and run on
311 the same host, so check that <envar>PLC_API_HOST</envar> is
312 either <filename>localhost</filename> or resolves to a local IP
313 address.</para></listitem>
315 <listitem><para><literal>Starting crond</literal>: If this step
316 fails, it is likely that the previous steps (<literal>Starting
317 web server</literal> and <literal>Bootstrapping the
318 database</literal>) also failed. If not, check
319 <filename>/plc/root/var/log/boot.log</filename> for obvious
320 errors. This step starts the <command>cron</command> service and
321 generates the initial set of XML files that the Slice Creation
322 Service uses to determine slice state.</para></listitem>
325 <para>If no failures occur, then MyPLC should be active with a
326 default configuration. Open a web browser on the host system and
327 visit <literal>http://localhost/</literal>, which should bring you
328 to the front page of your PLC installation. The password of the
329 default administrator account
330 <literal>root@localhost.localdomain</literal> (set by
331 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
332 <envar>PLC_ROOT_PASSWORD</envar>).</para>
334 <section id="ChangingTheConfiguration">
335 <title>Changing the configuration</title>
337 <para>After verifying that MyPLC is working correctly, shut it
338 down and begin changing some of the default variable
339 values. Shut down MyPLC with <command>service plc stop</command>
340 (see <xref linkend="StartingAndStoppingMyPLC" />). With a text
341 editor, open the file
342 <filename>/etc/planetlab/plc_config.xml</filename>. This file is
343 a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers may contain only
alphanumeric characters and underscores. A variable is referred to
346 canonically as the uppercase concatenation of its category
347 identifier, an underscore, and its variable identifier. Thus, a
348 variable with an <literal>id</literal> of
349 <literal>slice_prefix</literal> in the <literal>plc</literal>
350 category is referred to canonically as
351 <envar>PLC_SLICE_PREFIX</envar>.</para>
353 <para>The reason for this convention is that during MyPLC
354 startup, <filename>plc_config.xml</filename> is translated into
355 several different languages—shell, PHP, and
356 Python—so that scripts written in each of these languages
357 can refer to the same underlying configuration. Most MyPLC
358 scripts are written in shell, so the convention for shell
359 variables predominates.</para>
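<para>For example, a shell script running inside the
<command>chroot</command> jail can source the generated shell
translation of the configuration and refer to variables by their
canonical names (a sketch; the generated file name
<filename>/etc/planetlab/plc_config</filename> is an assumption):</para>

<programlisting><![CDATA[# Source the shell translation of plc_config.xml, then use a variable
. /etc/planetlab/plc_config
echo $PLC_SLICE_PREFIX]]></programlisting>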
361 <para>The variables that you should change immediately are:</para>
364 <listitem><para><envar>PLC_NAME</envar>: Change this to the
365 name of your PLC installation.</para></listitem>
366 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
367 to a more secure password.</para></listitem>
369 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
370 Change this to the e-mail address at which you would like to
371 receive support requests.</para></listitem>
373 <listitem><para><envar>PLC_DB_HOST</envar>,
374 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
375 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
376 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
377 <envar>PLC_BOOT_IP</envar>: Change all of these to the
378 preferred FQDN and external IP address of your host
379 system.</para></listitem>
382 <para>After changing these variables, save the file, then
383 restart MyPLC with <command>service plc start</command>. You
384 should notice that the password of the default administrator
385 account is no longer <literal>root</literal>, and that the
386 default site name includes the name of your PLC installation
387 instead of PlanetLab.</para>
391 <title>Installing nodes</title>
393 <para>Install your first node by clicking <literal>Add
394 Node</literal> under the <literal>Nodes</literal> tab. Fill in
395 all the appropriate details, then click
396 <literal>Add</literal>. Download the node's configuration file
397 by clicking <literal>Download configuration file</literal> on
398 the <emphasis role="bold">Node Details</emphasis> page for the
399 node. Save it to a floppy disk or USB key as detailed in <xref
400 linkend="TechsGuide" />.</para>
402 <para>Follow the rest of the instructions in <xref
403 linkend="TechsGuide" /> for creating a Boot CD and installing
404 the node, except download the Boot CD image from the
405 <filename>/download</filename> directory of your PLC
406 installation, not from PlanetLab Central. The images located
407 here are customized for your installation. If you change the
408 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
if the SSL certificate of your boot server expires, MyPLC will
regenerate the certificate and rebuild the Boot CD to include the
new certificate. If this occurs, you must replace all Boot CDs
412 created before the certificate was regenerated.</para>
414 <para>The installation process for a node has significantly
415 improved since PlanetLab 3.3. It should now take only a few
416 seconds for a new node to become ready to create slices.</para>
420 <title>Administering nodes</title>
422 <para>You may administer nodes as <literal>root</literal> by
423 using the SSH key stored in
424 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
427 <title>Accessing nodes via SSH. Replace
428 <literal>node</literal> with the hostname of the node.</title>
430 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
433 <para>Besides the standard Linux log files located in
434 <filename>/var/log</filename>, several other files can give you
435 clues about any problems with active processes:</para>
438 <listitem><para><filename>/var/log/pl_nm</filename>: The log
439 file for the Node Manager.</para></listitem>
441 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
442 The log file for the Slice Creation Service.</para></listitem>
444 <listitem><para><filename>/var/log/propd</filename>: The log
445 file for Proper, the service which allows certain slices to
446 perform certain privileged operations in the root
447 context.</para></listitem>
449 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
450 The log file for PlanetFlow, the network traffic auditing
service.</para>
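<para>Any of these log files can be inspected over the same SSH
connection described above; for example (a sketch, with
<literal>node</literal> again standing in for the node's
hostname):</para>

<programlisting><![CDATA[# Follow the Node Manager log on a node
ssh -i /etc/planetlab/root_ssh_key.rsa root@node tail -f /var/log/pl_nm]]></programlisting></listitem>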
456 <title>Creating a slice</title>
458 <para>Create a slice by clicking <literal>Create Slice</literal>
459 under the <literal>Slices</literal> tab. Fill in all the
460 appropriate details, then click <literal>Create</literal>. Add
461 nodes to the slice by clicking <literal>Manage Nodes</literal>
on the <emphasis role="bold">Slice Details</emphasis> page for the
slice.</para>
465 <para>A <command>cron</command> job runs every five minutes and
467 <filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
468 with information about current slice state. The Slice Creation
469 Service running on every node polls this file every ten minutes
470 to determine if it needs to create or delete any slices. You may
471 accelerate this process manually if desired.</para>
474 <title>Forcing slice creation on a node.</title>
476 <programlisting><![CDATA[# Update slices.xml immediately
477 service plc start crond
479 # Kick the Slice Creation Service on a particular node.
480 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
481 vserver pl_conf exec service pl_conf restart]]></programlisting>
487 <title>Rebuilding and customizing MyPLC</title>
489 <para>The MyPLC package, though distributed as an RPM, is not a
490 traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and numerous
assumptions are made throughout the PlanetLab source code base that
the build environment is based on Fedora Core 4 and that access to
a complete Fedora Core 4 mirror is available.</para>
496 <para>For this reason, it is recommended that you only rebuild
497 MyPLC (or any of its components) from within the MyPLC development
498 environment. The MyPLC development environment is similar to MyPLC
499 itself in that it is a portable filesystem contained within a
500 <command>chroot</command> jail. The filesystem contains all the
501 necessary tools required to rebuild MyPLC, as well as a snapshot
of the PlanetLab source code base in the form of a local CVS
repository.</para>
506 <title>Installation</title>
508 <para>Install the MyPLC development environment similarly to how
509 you would install MyPLC. You may install both packages on the same
510 host system if you wish. As with MyPLC, the MyPLC development
511 environment should be treated as a monolithic software
512 application, and any files present in the
513 <command>chroot</command> jail should not be modified directly, as
514 they are subject to upgrade.</para>
517 <title>Installing the MyPLC development environment.</title>
519 <programlisting><![CDATA[# If your distribution supports RPM
520 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
522 # If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting>
529 <para>The MyPLC development environment installs the following
530 files and directories:</para>
533 <listitem><para><filename>/plc/devel/root.img</filename>: The
534 main root filesystem of the MyPLC development environment. This
535 file is an uncompressed ext3 filesystem that is loopback mounted
536 on <filename>/plc/devel/root</filename> when the MyPLC
537 development environment is initialized. This filesystem, even
538 when mounted, should be treated as an opaque binary that can and
539 will be replaced in its entirety by any upgrade of the MyPLC
540 development environment.</para></listitem>
542 <listitem><para><filename>/plc/devel/root</filename>: The mount
point for <filename>/plc/devel/root.img</filename>.</para></listitem>
547 <para><filename>/plc/devel/data</filename>: The directory
548 where user data and generated files are stored. This directory
549 is bind mounted onto <filename>/plc/devel/root/data</filename>
550 so that it is accessible as <filename>/data</filename> from
551 within the <command>chroot</command> jail. Files in this
552 directory are marked with
553 <command>%config(noreplace)</command> in the RPM. Symlinks
554 ensure that the following directories (relative to
555 <filename>/plc/devel/root</filename>) are stored outside the
556 root filesystem image:</para>
559 <listitem><para><filename>/etc/planetlab</filename>: This
560 directory contains the configuration files that define your
561 MyPLC development environment.</para></listitem>
563 <listitem><para><filename>/cvs</filename>: A
564 snapshot of the PlanetLab source code is stored as a CVS
565 repository in this directory. Files in this directory will
566 <emphasis role="bold">not</emphasis> be updated by an upgrade of
567 <filename>myplc-devel</filename>. See <xref
568 linkend="UpdatingCVS" /> for more information about updating
569 PlanetLab source code.</para></listitem>
571 <listitem><para><filename>/build</filename>:
572 Builds are stored in this directory. This directory is bind
573 mounted onto <filename>/plc/devel/root/build</filename> so that
574 it is accessible as <filename>/build</filename> from within the
575 <command>chroot</command> jail. The build scripts in this
576 directory are themselves source controlled; see <xref
577 linkend="BuildingMyPLC" /> for more information about executing
578 builds.</para></listitem>
583 <para><filename>/etc/init.d/plc-devel</filename>: This file is
a System V init script installed on your host filesystem that
585 allows you to start up and shut down the MyPLC development
586 environment with a single command.</para>
592 <title>Fedora Core 4 mirror requirement</title>
594 <para>The MyPLC development environment requires access to a
595 complete Fedora Core 4 i386 RPM repository, because several
596 different filesystems based upon Fedora Core 4 are constructed
597 during the process of building MyPLC. You may configure the
598 location of this repository via the
599 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
600 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
601 value of the variable should be a URL that points to the top
602 level of a Fedora mirror that provides the
603 <filename>base</filename>, <filename>updates</filename>, and
604 <filename>extras</filename> repositories, e.g.,</para>
607 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
608 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
609 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
610 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
611 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
614 <para>As implied by the list, the repository may be located on
615 the local filesystem, or it may be located on a remote FTP or
HTTP server. Paths in <filename>file://</filename> URLs are
interpreted relative to the root of the <command>chroot</command>
jail and must exist at that location. For optimum performance and
619 reproducibility, specify
620 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
621 download all Fedora Core 4 RPMS into
622 <filename>/plc/devel/data/fedora</filename> on the host system
623 after installing <filename>myplc-devel</filename>. Use a tool
624 such as <command>wget</command> or <command>rsync</command> to
625 download the RPMS from a public mirror:</para>
628 <title>Setting up a local Fedora Core 4 repository.</title>
630 <programlisting><![CDATA[mkdir -p /plc/devel/data/fedora
631 cd /plc/devel/data/fedora
633 for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
634 wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
635 done]]></programlisting>
638 <para>Change the repository URI and <command>--cut-dirs</command>
639 level as needed to produce a hierarchy that resembles:</para>
641 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
642 /plc/devel/data/fedora/core/updates/4/i386
643 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
645 <para>A list of additional Fedora Core 4 mirrors is available at
646 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
649 <section id="BuildingMyPLC">
650 <title>Building MyPLC</title>
652 <para>All PlanetLab source code modules are built and installed
653 as RPMS. A set of build scripts, checked into the
654 <filename>build/</filename> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</para>
658 <para>To build MyPLC, or any PlanetLab source code module, from
659 within the MyPLC development environment, execute the following
660 commands as root:</para>
663 <title>Building MyPLC.</title>
665 <programlisting><![CDATA[# Initialize MyPLC development environment
666 service plc-devel start
668 # Enter development environment
669 chroot /plc/devel/root su -
671 # Check out build scripts into a directory named after the current
672 # date. This is simply a convention, it need not be followed
673 # exactly. See build/build.sh for an example of a build script that
674 # names build directories after CVS tags.
675 DATE=$(date +%Y.%m.%d)
677 cvs -d /cvs checkout -d $DATE build
# Run the build
make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be located
in <filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may copy to the <filename>/var/www/html/install-rpms/planetlab</filename>
688 directory of your MyPLC installation (see <xref
689 linkend="Installation" />).</para>
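<para>As a sketch, assuming MyPLC and the development environment
are installed on the same host with the default
<filename>/plc</filename> layout, the freshly built packages could
be published to your nodes as follows (run as root on the host,
outside the development <command>chroot</command>):</para>

<example>
  <title>Publishing freshly built RPMS to a MyPLC installation (illustrative).</title>

  <programlisting><![CDATA[# $DATE is the build directory name chosen during the build
find /plc/devel/data/build/$DATE/RPMS/ -name '*.rpm' \
  -exec cp {} /plc/data/var/www/html/install-rpms/planetlab/ \;

# Refresh the yum metadata so that nodes pick up the new packages
chroot /plc/root yum-arch /var/www/html/install-rpms/planetlab
chroot /plc/root createrepo /var/www/html/install-rpms/planetlab]]></programlisting>
</example>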
692 <section id="UpdatingCVS">
693 <title>Updating CVS</title>
695 <para>A complete snapshot of the PlanetLab source code is included
696 with the MyPLC development environment as a CVS repository in
697 <filename>/plc/devel/data/cvs</filename>. This CVS repository may
be accessed like any other CVS repository. It may be browsed
through an interface such as <ulink
url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
701 and file permissions may be altered to allow for fine-grained
702 access control. Although the files are included with the
703 <filename>myplc-devel</filename> RPM, they are <emphasis
704 role="bold">not</emphasis> subject to upgrade once installed. New
705 versions of the <filename>myplc-devel</filename> RPM will install
706 updated snapshot repositories in
707 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
708 where <literal>%{version}-%{release}</literal> is replaced with
the version and release of the RPM.</para>
711 <para>Because the CVS repository is not automatically upgraded,
712 if you wish to keep your local repository synchronized with the
713 public PlanetLab repository, it is highly recommended that you
714 use CVS's support for <ulink
715 url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">vendor
716 branches</ulink> to track changes. Vendor branches ease the task
717 of merging upstream changes with your local modifications. To
718 import a new snapshot into your local repository (for example,
719 if you have just upgraded from
720 <filename>myplc-devel-0.4-2</filename> to
721 <filename>myplc-devel-0.4-3</filename> and you notice the new
722 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
723 execute the following commands as root from within the MyPLC
724 development environment:</para>
727 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
729 <para><emphasis role="bold">Warning</emphasis>: This may cause
730 severe, irreversible changes to be made to your local
repository. Always tag your local repository before
importing.</para>
734 <programlisting><![CDATA[# Initialize MyPLC development environment
735 service plc-devel start
737 # Enter development environment
738 chroot /plc/devel/root su -
# Tag the current state of the local repository before merging
cvs -d /cvs rtag before-myplc-0_4-3-merge

# Export the new snapshot and import it onto the vendor branch
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .
cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
cd -
rm -rf $TMP]]></programlisting>
<para>If there are any merge conflicts, use the commands suggested
by CVS to resolve them. Explaining how to fix merge conflicts is
754 beyond the scope of this document; consult the CVS documentation
755 for more information on how to use CVS.</para>
760 <title>Configuration variables</title>
762 <para>Listed below is the set of standard configuration variables
763 and their default values, defined in the template
764 <filename>/etc/planetlab/default_config.xml</filename>. Additional
765 variables and their defaults may be defined in site-specific XML
766 templates that should be placed in
767 <filename>/etc/planetlab/configs/</filename>.</para>
773 <title>Development environment configuration variables</title>
779 <title>Bibliography</title>
781 <biblioentry id="TechsGuide">
782 <author><firstname>Mark</firstname><surname>Huang</surname></author>
<title><ulink url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
785 Technical Contact's Guide</ulink></title>