1 <?xml version="1.0" encoding="UTF-8"?>
2 <!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
3 "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd" [
4 <!ENTITY Variables SYSTEM "plc_variables.xml">
5 <!ENTITY DevelVariables SYSTEM "plc_devel_variables.xml">
9 <title>MyPLC User's Guide</title>
12 <firstname>Mark Huang</firstname>
16 <orgname>Princeton University</orgname>
20 <para>This document describes the design, installation, and
21 administration of MyPLC, a complete PlanetLab Central (PLC)
22 portable installation contained within a
23 <command>chroot</command> jail. This document assumes advanced
24 knowledge of the PlanetLab architecture and Linux system
25 administration.</para>
30 <revnumber>1.0</revnumber>
31 <date>April 7, 2006</date>
32 <authorinitials>MLH</authorinitials>
33 <revdescription><para>Initial draft.</para></revdescription>
36 <revnumber>1.1</revnumber>
37 <date>July 19, 2006</date>
38 <authorinitials>MLH</authorinitials>
39 <revdescription><para>Add development environment.</para></revdescription>
42 <revnumber>1.2</revnumber>
43 <date>August 18, 2006</date>
44 <authorinitials>TPT</authorinitials>
46 <para>Review section on configuration and introduce <command>plc-config-tty</command>.</para>
47 <para>Present implementation details last.</para>
54 <title>Overview</title>
56 <para>MyPLC is a complete PlanetLab Central (PLC) portable
57 installation contained within a <command>chroot</command>
58 jail. The default installation consists of a web server, an
59 XML-RPC API server, a boot server, and a database server: the core
60 components of PLC. The installation is customized through an
61 easy-to-use graphical interface. All PLC services are started up
62 and shut down through a single script installed on the host
63 system. The usually complex process of installing and
64 administering the PlanetLab backend is reduced by containing PLC
65 services within a virtual filesystem. By packaging it in such a
66 manner, MyPLC may also be run on any modern Linux distribution,
67 and could conceivably even run in a PlanetLab slice.</para>
69 <figure id="Architecture">
70 <title>MyPLC architecture</title>
73 <imagedata fileref="architecture.eps" format="EPS" align="center" scale="50" />
76 <imagedata fileref="architecture.png" format="PNG" align="center" scale="50" />
79 <phrase>MyPLC architecture</phrase>
82 <para>MyPLC should be viewed as a single application that
83 provides multiple functions and can run on any host
89 <section> <title> Purpose of the <emphasis> myplc-devel
90 </emphasis> package </title>
91 <para> The <emphasis>myplc</emphasis> package comes with all
92 required node software, rebuilt from the public PlanetLab CVS
93 repository. If for any reason you need to implement your own
94 customized version of this software, you can use the
95 <emphasis>myplc-devel</emphasis> package instead, for setting up
96 your own development environment, including a local CVS
97 repository; you can then freely manage your changes and rebuild
your customized version of <emphasis>myplc</emphasis>. We also
describe good practices that will allow you to resynchronize your local
CVS repository with any further evolution of the mainstream public
PlanetLab software. </para> </section>
106 <section id="Requirements"> <title> Requirements </title>
108 <para> <emphasis>myplc</emphasis> and
109 <emphasis>myplc-devel</emphasis> were designed as
110 <command>chroot</command> jails so as to reduce the requirements on
111 your host operating system. So in theory, these distributions should
112 work on virtually any Linux 2.6 based distribution, whether it
113 supports rpm or not. </para>
<para> However, things are never that simple and there are indeed
some known limitations, so here are a few notes that you should
read before you proceed with the installation.</para>
<para> As of 17 August 2006 (i.e., <emphasis>myplc-0.5-2</emphasis>):</para>
<listitem><para> The software is largely based on <emphasis>Fedora
Core 4</emphasis>. Please note that the build server at Princeton
runs <emphasis>Fedora Core 2</emphasis>, together with an upgraded
128 <listitem><para> myplc and myplc-devel are known to work on both
129 <emphasis>Fedora Core 2</emphasis> and <emphasis>Fedora Core
4</emphasis>. Please note, however, that on fc4 at least, it is
highly recommended to use the <application>Security Level
Configuration</application> utility and to <emphasis>switch off
SELinux</emphasis> on your box, because: </para>
myplc requires you to run SELinux in 'Permissive' mode at most
myplc-devel requires you to turn SELinux off.
<listitem> <para> In addition, as far as myplc is concerned, you
need to check your firewall configuration: the
<emphasis>http</emphasis> and <emphasis>https</emphasis> ports must
be open, so as to accept connections from the managed nodes and from
the users' desktops. </para> </listitem>
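<listitem> <para> For illustration, here is a minimal sketch of how
you might check and relax these settings on a Fedora host before
installing; the hostname is a placeholder and the exact commands may
vary with your distribution: </para>
<programlisting><![CDATA[# Check the current SELinux mode (should report Permissive or Disabled)
getenforce

# Temporarily put SELinux in permissive mode
setenforce 0
# To make this permanent, set SELINUX=permissive (or disabled)
# in /etc/selinux/config and reboot.

# From another machine, check that http and https are reachable,
# e.g. with telnet (replace myplc.example.org with your host):
telnet myplc.example.org 80
telnet myplc.example.org 443]]></programlisting> </listitem>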
154 <section id="Installation">
<title>Installing and using MyPLC</title>
157 <para>Though internally composed of commodity software
158 subpackages, MyPLC should be treated as a monolithic software
159 application. MyPLC is distributed as single RPM package that has
160 no external dependencies, allowing it to be installed on
161 practically any Linux 2.6 based distribution.</para>
164 <title>Installing MyPLC.</title>
167 <listitem> <para>If your distribution supports RPM:</para>
168 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm]]></programlisting></listitem>
170 <listitem> <para>If your distribution does not support RPM:</para>
171 <programlisting><![CDATA[# cd /tmp
172 # wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
174 # rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
<para> <xref linkend="FilesInvolvedRuntime" /> below explains in
detail the installation strategy and the various files and
directories involved.</para>
183 <section id="QuickStart"> <title> QuickStart </title>
185 <para> On a Red Hat or Fedora host system, it is customary to use
186 the <command>service</command> command to invoke System V init
187 scripts. As the examples suggest, the service must be started as root:</para>
189 <example><title>Starting MyPLC:</title>
190 <programlisting><![CDATA[# service plc start]]></programlisting>
192 <example><title>Stopping MyPLC:</title>
193 <programlisting><![CDATA[# service plc stop]]></programlisting>
<para> In <xref linkend="StartupSequence" />, we provide greater
detail that may be helpful if the service does
not seem to start correctly.</para>
200 <para>Like all other registered System V init services, MyPLC is
201 started and shut down automatically when your host system boots
202 and powers off. You may disable automatic startup by invoking the
203 <command>chkconfig</command> command on a Red Hat or Fedora host
206 <example> <title>Disabling automatic startup of MyPLC.</title>
207 <programlisting><![CDATA[# chkconfig plc off]]></programlisting></example>
208 <example> <title>Re-enabling automatic startup of MyPLC.</title>
209 <programlisting><![CDATA[# chkconfig plc on]]></programlisting></example>
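<para>To check whether automatic startup is currently enabled, and in
which runlevels, you may use the standard <command>chkconfig
--list</command> invocation; the output shown below is only
illustrative:</para>

<example> <title>Checking the automatic startup status of MyPLC.</title>
<programlisting><![CDATA[# chkconfig --list plc
plc             0:off   1:off   2:on    3:on    4:on    5:on    6:off]]></programlisting></example>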
213 <section id="Configuration">
214 <title>Changing the configuration</title>
216 <para>After verifying that MyPLC is working correctly, shut it
217 down and begin changing some of the default variable
218 values. Shut down MyPLC with <command>service plc stop</command>
219 (see <xref linkend="QuickStart" />). </para>
<para> The preferred option for changing the configuration is to
use the <command>plc-config-tty</command> tool. This tool comes
with the root image, so you need to have it mounted first. The
full set of applicable variables is described in <xref
linkend="VariablesRuntime" />, but the <command>u</command> command
guides you through the most useful ones. Here is a sample session:
229 <example><title>Using plc-config-tty for configuration:</title>
230 <programlisting><![CDATA[# service plc mount
232 # chroot /plc/root su -
233 <plc> # plc-config-tty
234 Config file /etc/planetlab/configs/site.xml located under a non-existing directory
235 Want to create /etc/planetlab/configs [y]/n ? y
236 Created directory /etc/planetlab/configs
237 Enter command (u for usual changes, w to save, ? for help) u
238 == PLC_NAME : [PlanetLab Test] OneLab
239 == PLC_ROOT_USER : [root@localhost.localdomain] root@odie.inria.fr
240 == PLC_ROOT_PASSWORD : [root] plain-passwd
241 == PLC_MAIL_SUPPORT_ADDRESS : [root+support@localhost.localdomain] support@one-lab.org
242 == PLC_DB_HOST : [localhost.localdomain] odie.inria.fr
243 == PLC_API_HOST : [localhost.localdomain] odie.inria.fr
244 == PLC_WWW_HOST : [localhost.localdomain] odie.inria.fr
245 == PLC_BOOT_HOST : [localhost.localdomain] odie.inria.fr
246 == PLC_NET_DNS1 : [127.0.0.1] 138.96.250.248
247 == PLC_NET_DNS2 : [None] 138.96.250.249
248 Enter command (u for usual changes, w to save, ? for help) w
249 Wrote /etc/planetlab/configs/site.xml
251 /etc/planetlab/default_config.xml
252 and /etc/planetlab/configs/site.xml
253 into /etc/planetlab/plc_config.xml
254 You might want to type 'r' (restart plc) or 'q' (quit)
255 Enter command (u for usual changes, w to save, ? for help) r
256 ==================== Stopping plc
258 ==================== Starting plc
260 Enter command (u for usual changes, w to save, ? for help) q
<para>If you used this method for configuring, you can skip to
<xref linkend="LoginRealUser" />. As an alternative to using
<command>plc-config-tty</command>, you may also use a text
editor, but this requires some understanding of how the
configuration files are used within myplc. The
<emphasis>default</emphasis> configuration is stored in a file
named <filename>/etc/planetlab/default_config.xml</filename>,
which is designed to remain intact. You may store your local
changes in any file located in the <filename>configs/</filename>
sub-directory; these are loaded on top of the defaults. Finally,
the file <filename>/etc/planetlab/plc_config.xml</filename> is
loaded, and the resulting configuration is written back into that
same file, which is then used as the reference.</para>
<para> Using a separate file for storing only your local changes, as
<command>plc-config-tty</command> does, is not a workable option
with a text editor because it would involve tedious XML
re-assembly. So your local changes should go directly in
<filename>/etc/planetlab/plc_config.xml</filename>. Be warned,
however, that any changes you make this way may be lost if
you use <command>plc-config-tty</command> later on. </para>
288 <para>This file is a self-documenting configuration file written
289 in XML. Variables are divided into categories. Variable
290 identifiers must be alphanumeric, plus underscore. A variable is
291 referred to canonically as the uppercase concatenation of its
292 category identifier, an underscore, and its variable
293 identifier. Thus, a variable with an <literal>id</literal> of
294 <literal>slice_prefix</literal> in the <literal>plc</literal>
295 category is referred to canonically as
296 <envar>PLC_SLICE_PREFIX</envar>.</para>
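<para>To make this convention concrete, here is a sketch of what such
a definition might look like in the XML file; the exact element names
may differ slightly, so consult
<filename>/etc/planetlab/default_config.xml</filename> for the
authoritative syntax. The variable below would be referred to as
<envar>PLC_SLICE_PREFIX</envar>:</para>

<programlisting><![CDATA[<category id="plc">
  <variablelist>
    <variable id="slice_prefix" type="string">
      <name>Slice Prefix</name>
      <value>pl</value>
      <description>The abbreviated name of your site, used as a
      prefix for slice names.</description>
    </variable>
  </variablelist>
</category>]]></programlisting>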
298 <para>The reason for this convention is that during MyPLC
299 startup, <filename>plc_config.xml</filename> is translated into
300 several different languages—shell, PHP, and
301 Python—so that scripts written in each of these languages
302 can refer to the same underlying configuration. Most MyPLC
303 scripts are written in shell, so the convention for shell
304 variables predominates.</para>
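<para>As a hypothetical illustration (the exact name and location of
the generated shell fragment may differ in your version of MyPLC), a
shell script running inside the <command>chroot</command> jail could
pick up the same variable roughly as follows:</para>

<programlisting><![CDATA[#!/bin/sh
# Source the shell translation of plc_config.xml (name assumed here)
. /etc/planetlab/plc_config

# All variables follow the CATEGORY_ID naming convention described above
echo "Slices on this PLC are prefixed with ${PLC_SLICE_PREFIX}_"]]></programlisting>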
306 <para>The variables that you should change immediately are:</para>
309 <listitem><para><envar>PLC_NAME</envar>: Change this to the
310 name of your PLC installation.</para></listitem>
311 <listitem><para><envar>PLC_ROOT_PASSWORD</envar>: Change this
312 to a more secure password.</para></listitem>
314 <listitem><para><envar>PLC_MAIL_SUPPORT_ADDRESS</envar>:
315 Change this to the e-mail address at which you would like to
316 receive support requests.</para></listitem>
318 <listitem><para><envar>PLC_DB_HOST</envar>,
319 <envar>PLC_DB_IP</envar>, <envar>PLC_API_HOST</envar>,
320 <envar>PLC_API_IP</envar>, <envar>PLC_WWW_HOST</envar>,
321 <envar>PLC_WWW_IP</envar>, <envar>PLC_BOOT_HOST</envar>,
322 <envar>PLC_BOOT_IP</envar>: Change all of these to the
323 preferred FQDN and external IP address of your host
324 system.</para></listitem>
327 <para> After changing these variables,
328 save the file, then restart MyPLC with <command>service plc
329 start</command>. You should notice that the password of the
330 default administrator account is no longer
331 <literal>root</literal>, and that the default site name includes
332 the name of your PLC installation instead of PlanetLab. As a
333 side effect of these changes, the ISO images for the boot CDs
now have new names, so that you can freely remove the ones named
after 'PlanetLab Test', which is the default value of
<envar>PLC_NAME</envar>.</para>
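<para>Since <filename>/etc/planetlab</filename> on the host is a
symlink into <filename>/plc/data</filename> (see <xref
linkend="FilesInvolvedRuntime" />), you can quickly double-check from
the host that your changes were merged into the reference
configuration:</para>

<example> <title>Inspecting the merged configuration from the host.</title>
<programlisting><![CDATA[# less /etc/planetlab/plc_config.xml]]></programlisting></example>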
339 <section id="LoginRealUser"> <title> Login as a real user </title>
<para>Now that myplc is up and running, you can connect to the
web site, which by default runs on port 80. You can either
directly use the default administrator account that you configured
in <envar>PLC_ROOT_USER</envar> and
<envar>PLC_ROOT_PASSWORD</envar>, or create a real user through
the 'Joining' tab. Do not forget to select both the PI and tech
roles, and to select the only site created at this stage.
Log in as the administrator to enable this user, then log in as
the real user.</para>
353 <title>Installing nodes</title>
355 <para>Install your first node by clicking <literal>Add
356 Node</literal> under the <literal>Nodes</literal> tab. Fill in
357 all the appropriate details, then click
358 <literal>Add</literal>. Download the node's configuration file
359 by clicking <literal>Download configuration file</literal> on
360 the <emphasis role="bold">Node Details</emphasis> page for the
361 node. Save it to a floppy disk or USB key as detailed in <xref
362 linkend="TechsGuide" />.</para>
364 <para>Follow the rest of the instructions in <xref
365 linkend="TechsGuide" /> for creating a Boot CD and installing
366 the node, except download the Boot CD image from the
367 <filename>/download</filename> directory of your PLC
368 installation, not from PlanetLab Central. The images located
369 here are customized for your installation. If you change the
370 hostname of your boot server (<envar>PLC_BOOT_HOST</envar>), or
371 if the SSL certificate of your boot server expires, MyPLC will
372 regenerate it and rebuild the Boot CD with the new
373 certificate. If this occurs, you must replace all Boot CDs
374 created before the certificate was regenerated.</para>
376 <para>The installation process for a node has significantly
377 improved since PlanetLab 3.3. It should now take only a few
378 seconds for a new node to become ready to create slices.</para>
382 <title>Administering nodes</title>
384 <para>You may administer nodes as <literal>root</literal> by
385 using the SSH key stored in
386 <filename>/etc/planetlab/root_ssh_key.rsa</filename>.</para>
389 <title>Accessing nodes via SSH. Replace
390 <literal>node</literal> with the hostname of the node.</title>
392 <programlisting>ssh -i /etc/planetlab/root_ssh_key.rsa root@node</programlisting>
395 <para>Besides the standard Linux log files located in
396 <filename>/var/log</filename>, several other files can give you
397 clues about any problems with active processes:</para>
400 <listitem><para><filename>/var/log/pl_nm</filename>: The log
401 file for the Node Manager.</para></listitem>
403 <listitem><para><filename>/vservers/pl_conf/var/log/pl_conf</filename>:
404 The log file for the Slice Creation Service.</para></listitem>
406 <listitem><para><filename>/var/log/propd</filename>: The log
407 file for Proper, the service which allows certain slices to
408 perform certain privileged operations in the root
409 context.</para></listitem>
411 <listitem><para><filename>/vservers/pl_netflow/var/log/netflow.log</filename>:
412 The log file for PlanetFlow, the network traffic auditing
413 service.</para></listitem>
418 <title>Creating a slice</title>
420 <para>Create a slice by clicking <literal>Create Slice</literal>
421 under the <literal>Slices</literal> tab. Fill in all the
422 appropriate details, then click <literal>Create</literal>. Add
423 nodes to the slice by clicking <literal>Manage Nodes</literal>
424 on the <emphasis role="bold">Slice Details</emphasis> page for
<para>A <command>cron</command> job runs every five minutes and updates
<filename>/plc/data/var/www/html/xml/slices-0.5.xml</filename>
430 with information about current slice state. The Slice Creation
431 Service running on every node polls this file every ten minutes
432 to determine if it needs to create or delete any slices. You may
433 accelerate this process manually if desired.</para>
436 <title>Forcing slice creation on a node.</title>
438 <programlisting><![CDATA[# Update slices.xml immediately
439 service plc start crond
441 # Kick the Slice Creation Service on a particular node.
442 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
443 vserver pl_conf exec service pl_conf restart]]></programlisting>
447 <section id="StartupSequence">
448 <title>Understanding the startup sequence</title>
<para>During the service startup described in <xref
linkend="QuickStart" />, observe the output of <command>service plc start</command> for
452 any failures. If no failures occur, you should see output similar
453 to the following:</para>
456 <title>A successful MyPLC startup.</title>
458 <programlisting><![CDATA[Mounting PLC: [ OK ]
459 PLC: Generating network files: [ OK ]
460 PLC: Starting system logger: [ OK ]
461 PLC: Starting database server: [ OK ]
462 PLC: Generating SSL certificates: [ OK ]
463 PLC: Configuring the API: [ OK ]
464 PLC: Updating GPG keys: [ OK ]
465 PLC: Generating SSH keys: [ OK ]
466 PLC: Starting web server: [ OK ]
467 PLC: Bootstrapping the database: [ OK ]
468 PLC: Starting DNS server: [ OK ]
469 PLC: Starting crond: [ OK ]
470 PLC: Rebuilding Boot CD: [ OK ]
471 PLC: Rebuilding Boot Manager: [ OK ]
472 PLC: Signing node packages: [ OK ]
476 <para>If <filename>/plc/root</filename> is mounted successfully, a
477 complete log file of the startup process may be found at
478 <filename>/plc/root/var/log/boot.log</filename>. Possible reasons
479 for failure of each step include:</para>
482 <listitem><para><literal>Mounting PLC</literal>: If this step
483 fails, first ensure that you started MyPLC as root. Check
484 <filename>/etc/sysconfig/plc</filename> to ensure that
485 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar> refer to the
486 right locations. You may also have too many existing loopback
487 mounts, or your kernel may not support loopback mounting, bind
488 mounting, or the ext3 filesystem. Try freeing at least one
489 loopback device, or re-compiling your kernel to support loopback
490 mounting, bind mounting, and the ext3 filesystem. If you see an
491 error similar to <literal>Permission denied while trying to open
492 /plc/root.img</literal>, then SELinux may be enabled. See <xref
493 linkend="Requirements" /> above for details.</para></listitem>
495 <listitem><para><literal>Starting database server</literal>: If
496 this step fails, check
497 <filename>/plc/root/var/log/pgsql</filename> and
498 <filename>/plc/root/var/log/boot.log</filename>. The most common
499 reason for failure is that the default PostgreSQL port, TCP port
500 5432, is already in use. Check that you are not running a
501 PostgreSQL server on the host system.</para></listitem>
503 <listitem><para><literal>Starting web server</literal>: If this
505 <filename>/plc/root/var/log/httpd/error_log</filename> and
506 <filename>/plc/root/var/log/boot.log</filename> for obvious
507 errors. The most common reason for failure is that the default
508 web ports, TCP ports 80 and 443, are already in use. Check that
509 you are not running a web server on the host
510 system.</para></listitem>
512 <listitem><para><literal>Bootstrapping the database</literal>:
513 If this step fails, it is likely that the previous step
514 (<literal>Starting web server</literal>) also failed. Another
515 reason that it could fail is if <envar>PLC_API_HOST</envar> (see
516 <xref linkend="Configuration" />) does not resolve to
517 the host on which the API server has been enabled. By default,
518 all services, including the API server, are enabled and run on
519 the same host, so check that <envar>PLC_API_HOST</envar> is
520 either <filename>localhost</filename> or resolves to a local IP
521 address. Also check that <envar>PLC_ROOT_USER</envar> looks like
522 an e-mail address.</para></listitem>
524 <listitem><para><literal>Starting crond</literal>: If this step
525 fails, it is likely that the previous steps (<literal>Starting
526 web server</literal> and <literal>Bootstrapping the
527 database</literal>) also failed. If not, check
528 <filename>/plc/root/var/log/boot.log</filename> for obvious
529 errors. This step starts the <command>cron</command> service and
530 generates the initial set of XML files that the Slice Creation
531 Service uses to determine slice state.</para></listitem>
534 <para>If no failures occur, then MyPLC should be active with a
535 default configuration. Open a web browser on the host system and
536 visit <literal>http://localhost/</literal>, which should bring you
537 to the front page of your PLC installation. The password of the
538 default administrator account
539 <literal>root@localhost.localdomain</literal> (set by
540 <envar>PLC_ROOT_USER</envar>) is <literal>root</literal> (set by
541 <envar>PLC_ROOT_PASSWORD</envar>).</para>
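<para>If one of the steps above does fail, a few standard host-side
commands can help narrow down the cause; the following is only a
sketch of checks matching the failure modes listed above:</para>

<example> <title>Host-side checks for common startup failures.</title>
<programlisting><![CDATA[# Is SELinux enforcing? (should report Permissive or Disabled,
# see the Requirements section)
getenforce

# Are loopback devices available for mounting /plc/root.img?
losetup -a

# Is something else already bound to the PostgreSQL or web ports?
netstat -tlnp | egrep ':(5432|80|443) '

# Follow the detailed startup log while restarting the service
tail -f /plc/root/var/log/boot.log]]></programlisting></example>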
544 <section id="FilesInvolvedRuntime"> <title> Files and directories
545 involved in <emphasis>myplc</emphasis></title>
546 <para>MyPLC installs the following files and directories:</para>
550 <listitem><para><filename>/plc/root.img</filename>: The main
551 root filesystem of the MyPLC application. This file is an
552 uncompressed ext3 filesystem that is loopback mounted on
553 <filename>/plc/root</filename> when MyPLC starts. This
554 filesystem, even when mounted, should be treated as an opaque
555 binary that can and will be replaced in its entirety by any
556 upgrade of MyPLC.</para></listitem>
558 <listitem><para><filename>/plc/root</filename>: The mount point
559 for <filename>/plc/root.img</filename>. Once the root filesystem
560 is mounted, all MyPLC services run in a
561 <command>chroot</command> jail based in this
562 directory.</para></listitem>
565 <para><filename>/plc/data</filename>: The directory where user
566 data and generated files are stored. This directory is bind
567 mounted onto <filename>/plc/root/data</filename> so that it is
568 accessible as <filename>/data</filename> from within the
569 <command>chroot</command> jail. Files in this directory are
570 marked with <command>%config(noreplace)</command> in the
571 RPM. That is, during an upgrade of MyPLC, if a file has not
572 changed since the last installation or upgrade of MyPLC, it is
573 subject to upgrade and replacement. If the file has changed,
574 the new version of the file will be created with a
575 <filename>.rpmnew</filename> extension. Symlinks within the
576 MyPLC root filesystem ensure that the following directories
577 (relative to <filename>/plc/root</filename>) are stored
578 outside the MyPLC filesystem image:</para>
581 <listitem><para><filename>/etc/planetlab</filename>: This
582 directory contains the configuration files, keys, and
583 certificates that define your MyPLC
584 installation.</para></listitem>
586 <listitem><para><filename>/var/lib/pgsql</filename>: This
587 directory contains PostgreSQL database
588 files.</para></listitem>
590 <listitem><para><filename>/var/www/html/alpina-logs</filename>: This
591 directory contains node installation logs.</para></listitem>
593 <listitem><para><filename>/var/www/html/boot</filename>: This
594 directory contains the Boot Manager, customized for your MyPLC
595 installation, and its data files.</para></listitem>
597 <listitem><para><filename>/var/www/html/download</filename>: This
598 directory contains Boot CD images, customized for your MyPLC
599 installation.</para></listitem>
601 <listitem><para><filename>/var/www/html/install-rpms</filename>: This
602 directory is where you should install node package updates,
603 if any. By default, nodes are installed from the tarball
605 <filename>/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</filename>,
606 which is pre-built from the latest PlanetLab Central
607 sources, and installed as part of your MyPLC
608 installation. However, nodes will attempt to install any
609 newer RPMs located in
610 <filename>/var/www/html/install-rpms/planetlab</filename>,
611 after initial installation and periodically thereafter. You
612 must run <command>yum-arch</command> and
613 <command>createrepo</command> to update the
614 <command>yum</command> caches in this directory after
615 installing a new RPM. PlanetLab Central cannot support any
616 changes to this directory.</para></listitem>
618 <listitem><para><filename>/var/www/html/xml</filename>: This
619 directory contains various XML files that the Slice Creation
620 Service uses to determine the state of slices. These XML
621 files are refreshed periodically by <command>cron</command>
622 jobs running in the MyPLC root.</para></listitem>
624 <listitem><para><filename>/root</filename>: this is the
625 location of the root-user's homedir, and for your
626 convenience is stored under <filename>/data</filename> so
627 that your local customizations survive across
628 updates - this feature is inherited from the
629 <command>myplc-devel</command> package, where it is probably
630 more useful. </para></listitem>
635 <listitem id="MyplcInitScripts">
636 <para><filename>/etc/init.d/plc</filename>: This file
637 is a System V init script installed on your host filesystem,
638 that allows you to start up and shut down MyPLC with a single
639 command, as described in <xref linkend="QuickStart" />.</para>
642 <listitem><para><filename>/etc/sysconfig/plc</filename>: This
643 file is a shell script fragment that defines the variables
644 <envar>PLC_ROOT</envar> and <envar>PLC_DATA</envar>. By default,
645 the values of these variables are <filename>/plc/root</filename>
646 and <filename>/plc/data</filename>, respectively. If you wish,
647 you may move your MyPLC installation to another location on your
648 host filesystem and edit the values of these variables
649 appropriately, but you will break the RPM upgrade
650 process. PlanetLab Central cannot support any changes to this
651 file.</para></listitem>
653 <listitem><para><filename>/etc/planetlab</filename>: This
654 symlink to <filename>/plc/data/etc/planetlab</filename> is
655 installed on the host system for convenience.</para></listitem>
660 <section id="DevelopmentEnvironment">
661 <title>Rebuilding and customizing MyPLC</title>
663 <para>The MyPLC package, though distributed as an RPM, is not a
664 traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and numerous
assumptions are made throughout the PlanetLab source code base
that the build environment is based on Fedora Core 4 and that
access to a complete Fedora Core 4 mirror is available.</para>
670 <para>For this reason, it is recommended that you only rebuild
671 MyPLC (or any of its components) from within the MyPLC development
672 environment. The MyPLC development environment is similar to MyPLC
673 itself in that it is a portable filesystem contained within a
674 <command>chroot</command> jail. The filesystem contains all the
675 necessary tools required to rebuild MyPLC, as well as a snapshot
676 of the PlanetLab source code base in the form of a local CVS
680 <title>Installation</title>
682 <para>Install the MyPLC development environment similarly to how
683 you would install MyPLC. You may install both packages on the same
684 host system if you wish. As with MyPLC, the MyPLC development
685 environment should be treated as a monolithic software
686 application, and any files present in the
687 <command>chroot</command> jail should not be modified directly, as
688 they are subject to upgrade.</para>
691 <listitem> <para>If your distribution supports RPM:</para>
692 <programlisting><![CDATA[# rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm]]></programlisting></listitem>
694 <listitem> <para>If your distribution does not support RPM:</para>
695 <programlisting><![CDATA[# cd /tmp
696 # wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
698 # rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu]]></programlisting></listitem>
703 <title>Configuration</title>
705 <para> The default configuration should work as-is on most
sites. Configuring the development package is done in a
similar way to <emphasis>myplc</emphasis>, as described in
<xref linkend="Configuration"
/>. <command>plc-config-tty</command> supports a
<emphasis>-d</emphasis> option for selecting the
<emphasis>myplc-devel</emphasis> case, which can be useful in a
context where it cannot guess this by itself. Refer to <xref
713 linkend="VariablesDevel" /> for a list of variables.</para>
716 <section id="FilesInvolvedDevel"> <title> Files and directories
involved in <emphasis>myplc-devel</emphasis></title>
719 <para>The MyPLC development environment installs the following
720 files and directories:</para>
723 <listitem><para><filename>/plc/devel/root.img</filename>: The
724 main root filesystem of the MyPLC development environment. This
725 file is an uncompressed ext3 filesystem that is loopback mounted
726 on <filename>/plc/devel/root</filename> when the MyPLC
727 development environment is initialized. This filesystem, even
728 when mounted, should be treated as an opaque binary that can and
729 will be replaced in its entirety by any upgrade of the MyPLC
730 development environment.</para></listitem>
732 <listitem><para><filename>/plc/devel/root</filename>: The mount
734 <filename>/plc/devel/root.img</filename>.</para></listitem>
737 <para><filename>/plc/devel/data</filename>: The directory
738 where user data and generated files are stored. This directory
739 is bind mounted onto <filename>/plc/devel/root/data</filename>
740 so that it is accessible as <filename>/data</filename> from
741 within the <command>chroot</command> jail. Files in this
742 directory are marked with
743 <command>%config(noreplace)</command> in the RPM. Symlinks
744 ensure that the following directories (relative to
745 <filename>/plc/devel/root</filename>) are stored outside the
746 root filesystem image:</para>
749 <listitem><para><filename>/etc/planetlab</filename>: This
750 directory contains the configuration files that define your
751 MyPLC development environment.</para></listitem>
753 <listitem><para><filename>/cvs</filename>: A
754 snapshot of the PlanetLab source code is stored as a CVS
755 repository in this directory. Files in this directory will
756 <emphasis role="bold">not</emphasis> be updated by an upgrade of
757 <filename>myplc-devel</filename>. See <xref
758 linkend="UpdatingCVS" /> for more information about updating
759 PlanetLab source code.</para></listitem>
761 <listitem><para><filename>/build</filename>:
762 Builds are stored in this directory. This directory is bind
763 mounted onto <filename>/plc/devel/root/build</filename> so that
764 it is accessible as <filename>/build</filename> from within the
765 <command>chroot</command> jail. The build scripts in this
766 directory are themselves source controlled; see <xref
767 linkend="BuildingMyPLC" /> for more information about executing
768 builds.</para></listitem>
770 <listitem><para><filename>/root</filename>: this is the
771 location of the root-user's homedir, and for your
772 convenience is stored under <filename>/data</filename> so
773 that your local customizations survive across
774 updates. </para></listitem> </itemizedlist> </listitem>
777 <para><filename>/etc/init.d/plc-devel</filename>: This file is
778 a System V init script installed on your host filesystem, that
779 allows you to start up and shut down the MyPLC development
780 environment with a single command.</para>
786 <title>Fedora Core 4 mirror requirement</title>
788 <para>The MyPLC development environment requires access to a
789 complete Fedora Core 4 i386 RPM repository, because several
790 different filesystems based upon Fedora Core 4 are constructed
791 during the process of building MyPLC. You may configure the
792 location of this repository via the
793 <envar>PLC_DEVEL_FEDORA_URL</envar> variable in
794 <filename>/plc/devel/data/etc/planetlab/plc_config.xml</filename>. The
795 value of the variable should be a URL that points to the top
796 level of a Fedora mirror that provides the
797 <filename>base</filename>, <filename>updates</filename>, and
798 <filename>extras</filename> repositories, e.g.,</para>
801 <listitem><para><filename>file:///data/fedora</filename></para></listitem>
802 <listitem><para><filename>http://coblitz.planet-lab.org/pub/fedora</filename></para></listitem>
803 <listitem><para><filename>ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</filename></para></listitem>
804 <listitem><para><filename>ftp://mirror.stanford.edu/pub/mirrors/fedora</filename></para></listitem>
805 <listitem><para><filename>http://rpmfind.net/linux/fedora</filename></para></listitem>
808 <para>As implied by the list, the repository may be located on
809 the local filesystem, or it may be located on a remote FTP or
810 HTTP server. URLs beginning with <filename>file://</filename>
811 should exist at the specified location relative to the root of
812 the <command>chroot</command> jail. For optimum performance and
813 reproducibility, specify
814 <envar>PLC_DEVEL_FEDORA_URL=file:///data/fedora</envar> and
815 download all Fedora Core 4 RPMS into
816 <filename>/plc/devel/data/fedora</filename> on the host system
817 after installing <filename>myplc-devel</filename>. Use a tool
818 such as <command>wget</command> or <command>rsync</command> to
819 download the RPMS from a public mirror:</para>
822 <title>Setting up a local Fedora Core 4 repository.</title>
824 <programlisting><![CDATA[# mkdir -p /plc/devel/data/fedora
825 # cd /plc/devel/data/fedora
827 # for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
828 > wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
829 > done]]></programlisting>
832 <para>Change the repository URI and <command>--cut-dirs</command>
833 level as needed to produce a hierarchy that resembles:</para>
835 <programlisting><![CDATA[/plc/devel/data/fedora/core/4/i386/os
836 /plc/devel/data/fedora/core/updates/4/i386
837 /plc/devel/data/fedora/extras/4/i386]]></programlisting>
839 <para>A list of additional Fedora Core 4 mirrors is available at
840 <ulink url="http://fedora.redhat.com/Download/mirrors.html">http://fedora.redhat.com/Download/mirrors.html</ulink>.</para>
843 <section id="BuildingMyPLC">
844 <title>Building MyPLC</title>
846 <para>All PlanetLab source code modules are built and installed
847 as RPMS. A set of build scripts, checked into the
848 <filename>build/</filename> directory of the PlanetLab CVS
849 repository, eases the task of rebuilding PlanetLab source
<para> Before you try building MyPLC, you might want to check the
configuration. It is kept in a file named
<emphasis>plc_config.xml</emphasis> that follows a very
similar model to that of MyPLC, and is located in
<emphasis>/etc/planetlab</emphasis> within the chroot jail, or
in <emphasis>/plc/devel/data/etc/planetlab</emphasis> from the
root context. The set of applicable variables is described in
<xref linkend="VariablesDevel" />. </para>
861 <para>To build MyPLC, or any PlanetLab source code module, from
862 within the MyPLC development environment, execute the following
863 commands as root:</para>
866 <title>Building MyPLC.</title>
868 <programlisting><![CDATA[# Initialize MyPLC development environment
869 service plc-devel start
871 # Enter development environment
872 chroot /plc/devel/root su -
874 # Check out build scripts into a directory named after the current
875 # date. This is simply a convention, it need not be followed
876 # exactly. See build/build.sh for an example of a build script that
877 # names build directories after CVS tags.
878 DATE=$(date +%Y.%m.%d)
880 cvs -d /cvs checkout -d $DATE build
883 make -C $DATE]]></programlisting>
<para>If the build succeeds, a set of binary RPMS will be generated in
<filename>/plc/devel/data/build/$DATE/RPMS/</filename> that you
may copy to the <filename>/var/www/html/install-rpms/planetlab</filename>
891 directory of your MyPLC installation (see <xref
892 linkend="Installation" />).</para>
895 <section id="UpdatingCVS">
896 <title>Updating CVS</title>
898 <para>A complete snapshot of the PlanetLab source code is included
899 with the MyPLC development environment as a CVS repository in
900 <filename>/plc/devel/data/cvs</filename>. This CVS repository may
901 be accessed like any other CVS repository. It may be accessed
902 using an interface such as <ulink
903 url="http://www.freebsd.org/projects/cvsweb.html">CVSweb</ulink>,
904 and file permissions may be altered to allow for fine-grained
905 access control. Although the files are included with the
906 <filename>myplc-devel</filename> RPM, they are <emphasis
907 role="bold">not</emphasis> subject to upgrade once installed. New
908 versions of the <filename>myplc-devel</filename> RPM will install
909 updated snapshot repositories in
910 <filename>/plc/devel/data/cvs-%{version}-%{release}</filename>,
911 where <literal>%{version}-%{release}</literal> is replaced with
912 the version number of the RPM.</para>
914 <para>Because the CVS repository is not automatically upgraded,
915 if you wish to keep your local repository synchronized with the
916 public PlanetLab repository, it is highly recommended that you
use CVS's support for vendor branches to track changes, as
described <ulink
url="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources">here</ulink>
and <ulink
url="http://cvsbook.red-bean.com/cvsbook.html#Tracking%20Third-Party%20Sources%20(Vendor%20Branches)">here</ulink>.
922 Vendor branches ease the task of merging upstream changes with
923 your local modifications. To import a new snapshot into your
924 local repository (for example, if you have just upgraded from
925 <filename>myplc-devel-0.4-2</filename> to
926 <filename>myplc-devel-0.4-3</filename> and you notice the new
927 repository in <filename>/plc/devel/data/cvs-0.4-3</filename>),
928 execute the following commands as root from within the MyPLC
929 development environment:</para>
932 <title>Updating /data/cvs from /data/cvs-0.4-3.</title>
934 <para><emphasis role="bold">Warning</emphasis>: This may cause
935 severe, irreversible changes to be made to your local
936 repository. Always tag your local repository before
939 <programlisting><![CDATA[# Initialize MyPLC development environment
940 service plc-devel start
942 # Enter development environment
943 chroot /plc/devel/root su -
946 cvs -d /cvs rtag before-myplc-0_4-3-merge
# Export the new snapshot into a scratch directory and import it
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .
cvs -d /cvs import -m "Merging myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
cd -
rm -rf $TMP]]></programlisting>
957 <para>If there are any merge conflicts, use the command
958 suggested by CVS to help the merge. Explaining how to fix merge
959 conflicts is beyond the scope of this document; consult the CVS
960 documentation for more information on how to use CVS.</para>
961 </section> </section>
<section><title> More information: the FAQ wiki page</title>
<para> Please refer to, and feel free to contribute to, <ulink
url="https://wiki.planet-lab.org/twiki/bin/view/Planetlab/MyplcFAQ">
the FAQ page on Princeton's wiki </ulink>.</para></section>
969 <appendix id="VariablesRuntime">
970 <title>Configuration variables (for <emphasis>myplc</emphasis>)</title>
972 <para>Listed below is the set of standard configuration variables
973 and their default values, defined in the template
974 <filename>/etc/planetlab/default_config.xml</filename>. Additional
975 variables and their defaults may be defined in site-specific XML
976 templates that should be placed in
977 <filename>/etc/planetlab/configs/</filename>.</para>
979 <para>This information is available online within
980 <command>plc-config-tty</command>, e.g.:</para>
982 <example><title>Advanced usage of plc-config-tty</title>
983 <programlisting><![CDATA[<plc> # plc-config-tty
984 Enter command (u for usual changes, w to save, ? for help) V plc_dns
985 ========== Category = PLC_DNS
987 # Enable the internal DNS server. The server does not provide reverse
988 # resolution and is not a production quality or scalable DNS solution.
989 # Use the internal DNS server only for small deployments or for testing.
991 ]]></programlisting></example>
993 <para> List of the <command>myplc</command> configuration variables:</para>
997 <appendix id="VariablesDevel">
998 <title>Development configuration variables (for <emphasis>myplc-devel</emphasis>)</title>
1004 <title>Bibliography</title>
1006 <biblioentry id="TechsGuide">
1007 <author><firstname>Mark</firstname><surname>Huang</surname></author>
1009 url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab
1010 Technical Contact's Guide</ulink></title>