<?php
// DO NOT EDIT. This file was automatically generated from
// DocBook XML. See plc_www/doc/README.
$_title = "MyPLC User's Guide";
require_once('session.php');
require_once('header.php');
require_once('nav.php');
?><div class="article" lang="en">
<div class="titlepage">
<div><h1 class="title">
<a name="id224920"></a>MyPLC User's Guide</h1></div>
<div><div class="author"><h3 class="author"><span class="firstname">Mark Huang</span></h3></div></div>
<div><div class="revhistory"><table border="1" width="100%" summary="Revision history">
<tr><th align="left" valign="top" colspan="3"><b>Revision History</b></th></tr>
<tr>
<td align="left">Revision 1.0</td>
<td align="left">April 7, 2006</td>
<td align="left">MLH</td>
</tr>
<tr><td align="left" colspan="3">
<div><div class="abstract">
<p class="title"><b>Abstract</b></p>
<p>This document describes the design, installation, and
administration of MyPLC, a complete PlanetLab Central (PLC)
portable installation contained within a
<span><strong class="command">chroot</strong></span> jail. This document assumes advanced
knowledge of the PlanetLab architecture and Linux system
administration.</p></div></div>
<p><b>Table of Contents</b></p>
<dl>
<dt><span class="section"><a href="#id225359">1. Overview</a></span></dt>
<dt><span class="section"><a href="#Installation">2. Installation</a></span></dt>
<dt><span class="section"><a href="#id267678">3. Quickstart</a></span></dt>
<dt><span class="section"><a href="#ChangingTheConfiguration">3.1. Changing the configuration</a></span></dt>
<dt><span class="section"><a href="#id268186">3.2. Installing nodes</a></span></dt>
<dt><span class="section"><a href="#id268260">3.3. Administering nodes</a></span></dt>
<dt><span class="section"><a href="#id268354">3.4. Creating a slice</a></span></dt>
<dt><span class="section"><a href="#id268428">4. Rebuilding and customizing MyPLC</a></span></dt>
<dt><span class="section"><a href="#id268453">4.1. Installation</a></span></dt>
<dt><span class="section"><a href="#id268661">4.2. Fedora Core 4 mirror requirement</a></span></dt>
<dt><span class="section"><a href="#BuildingMyPLC">4.3. Building MyPLC</a></span></dt>
<dt><span class="section"><a href="#UpdatingCVS">4.4. Updating CVS</a></span></dt>
<dt><span class="appendix"><a href="#id269022">A. Configuration variables</a></span></dt>
<dt><span class="appendix"><a href="#id271727">B. Development environment configuration variables</a></span></dt>
<dt><span class="bibliography"><a href="#id271809">Bibliography</a></span></dt>
</dl></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="id225359"></a>1. Overview</h2></div></div></div>
<p>MyPLC is a complete PlanetLab Central (PLC) portable
installation contained within a <span><strong class="command">chroot</strong></span>
jail. The default installation consists of a web server, an
XML-RPC API server, a boot server, and a database server: the core
components of PLC. The installation is customized through an
easy-to-use graphical interface. All PLC services are started up
and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is simplified by containing PLC
services within a virtual filesystem. Packaged in this manner,
MyPLC can be run on any modern Linux distribution,
and could conceivably even run in a PlanetLab slice.</p>
<div class="figure">
<a name="Architecture"></a><p class="title"><b>Figure 1. MyPLC architecture</b></p>
<div class="mediaobject" align="center">
<img src="architecture.png" align="middle" width="270" alt="MyPLC architecture"><div class="caption"><p>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host
system.</p></div>
</div></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="Installation"></a>2. Installation</h2></div></div></div>
<p>Though internally composed of commodity software
subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
no external dependencies, allowing it to be installed on
practically any Linux 2.6 based distribution:</p>
<div class="example">
<a name="id225262"></a><p class="title"><b>Example 1. Installing MyPLC.</b></p>
<pre class="programlisting"># If your distribution supports RPM
rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm

# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu</pre>
</div>
<p>MyPLC installs the following files and directories:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">/plc/root.img</code>: The main
root filesystem of the MyPLC application. This file is an
uncompressed ext3 filesystem that is loopback mounted on
<code class="filename">/plc/root</code> when MyPLC starts. This
filesystem, even when mounted, should be treated as an opaque
binary that can and will be replaced in its entirety by any
upgrade of MyPLC.</p></li>
<li><p><code class="filename">/plc/root</code>: The mount point
for <code class="filename">/plc/root.img</code>. Once the root filesystem
is mounted, all MyPLC services run in a
<span><strong class="command">chroot</strong></span> jail based in this
directory.</p></li>
<li><p><code class="filename">/plc/data</code>: The directory where user
data and generated files are stored. This directory is bind
mounted onto <code class="filename">/plc/root/data</code> so that it is
accessible as <code class="filename">/data</code> from within the
<span><strong class="command">chroot</strong></span> jail. Files in this directory are
marked with <span><strong class="command">%config(noreplace)</strong></span> in the
RPM. That is, during an upgrade of MyPLC, if a file has not
changed since the last installation or upgrade of MyPLC, it is
subject to upgrade and replacement. If the file has changed,
the new version of the file will be created with a
<code class="filename">.rpmnew</code> extension. Symlinks within the
MyPLC root filesystem ensure that the following directories
(relative to <code class="filename">/plc/root</code>) are stored
outside the MyPLC filesystem image:</p>
<div class="itemizedlist"><ul type="circle">
<li><p><code class="filename">/etc/planetlab</code>: This
directory contains the configuration files, keys, and
certificates that define your MyPLC
installation.</p></li>
<li><p><code class="filename">/var/lib/pgsql</code>: This
directory contains PostgreSQL database
files.</p></li>
<li><p><code class="filename">/var/www/html/alpina-logs</code>: This
directory contains node installation logs.</p></li>
<li><p><code class="filename">/var/www/html/boot</code>: This
directory contains the Boot Manager, customized for your MyPLC
installation, and its data files.</p></li>
<li><p><code class="filename">/var/www/html/download</code>: This
directory contains Boot CD images, customized for your MyPLC
installation.</p></li>
<li><p><code class="filename">/var/www/html/install-rpms</code>: This
directory is where you should install node package updates,
if any. By default, nodes are installed from the tarball
located in
<code class="filename">/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</code>,
which is pre-built from the latest PlanetLab Central
sources, and installed as part of your MyPLC
installation. However, nodes will attempt to install any
newer RPMs located in
<code class="filename">/var/www/html/install-rpms/planetlab</code>,
after initial installation and periodically thereafter. You
must run <span><strong class="command">yum-arch</strong></span> and
<span><strong class="command">createrepo</strong></span> to update the
<span><strong class="command">yum</strong></span> caches in this directory after
installing a new RPM. PlanetLab Central cannot support any
changes to this directory.</p></li>
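As a sketch of that update step (the RPM filename is hypothetical, and the commands assume a default MyPLC layout with <span><strong class="command">yum-arch</strong></span> and <span><strong class="command">createrepo</strong></span> available inside the MyPLC root):

```shell
# Copy a newly built package into the node update repository
# (mypackage-1.0-1.i386.rpm is a placeholder name)
cp mypackage-1.0-1.i386.rpm /plc/data/var/www/html/install-rpms/planetlab/

# Regenerate the yum metadata that nodes read, from inside the MyPLC root
chroot /plc/root yum-arch /var/www/html/install-rpms/planetlab
chroot /plc/root createrepo /var/www/html/install-rpms/planetlab
```

Until the metadata is regenerated, nodes will not see the new package.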
<li><p><code class="filename">/var/www/html/xml</code>: This
directory contains various XML files that the Slice Creation
Service uses to determine the state of slices. These XML
files are refreshed periodically by <span><strong class="command">cron</strong></span>
jobs running in the MyPLC root.</p></li>
</ul></div></li>
<li><p><code class="filename">/etc/init.d/plc</code>: This file
is a System V init script installed on your host filesystem
that allows you to start up and shut down MyPLC with a single
command. On a Red Hat or Fedora host system, it is customary to
use the <span><strong class="command">service</strong></span> command to invoke System V
init scripts:</p>
<div class="example">
<a name="StartingAndStoppingMyPLC"></a><p class="title"><b>Example 2. Starting and stopping MyPLC.</b></p>
<pre class="programlisting"># Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop</pre>
</div>
<p>Like all other registered System V init services, MyPLC is
started and shut down automatically when your host system boots
and powers off. You may disable automatic startup by invoking
the <span><strong class="command">chkconfig</strong></span> command on a Red Hat or Fedora
host system:</p>
<div class="example">
<a name="id243553"></a><p class="title"><b>Example 3. Disabling automatic startup of MyPLC.</b></p>
<pre class="programlisting"># Disable automatic startup
chkconfig plc off

# Enable automatic startup
chkconfig plc on</pre>
</div>
<li><p><code class="filename">/etc/sysconfig/plc</code>: This
file is a shell script fragment that defines the variables
<code class="envar">PLC_ROOT</code> and <code class="envar">PLC_DATA</code>. By default,
the values of these variables are <code class="filename">/plc/root</code>
and <code class="filename">/plc/data</code>, respectively. If you wish,
you may move your MyPLC installation to another location on your
host filesystem and edit the values of these variables
appropriately, but you will break the RPM upgrade
process. PlanetLab Central cannot support any changes to this
file.</p></li>
<li><p><code class="filename">/etc/planetlab</code>: This
symlink to <code class="filename">/plc/data/etc/planetlab</code> is
installed on the host system for convenience.</p></li>
</ul></div>
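For reference, a minimal <code class="filename">/etc/sysconfig/plc</code> with the default values would contain just the two variable definitions (a sketch; your installed file may also carry comments):

```shell
# /etc/sysconfig/plc -- sourced by the plc init script
PLC_ROOT=/plc/root
PLC_DATA=/plc/data
```

The init script sources this fragment, so both values must remain valid shell assignments.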
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="id267678"></a>3. Quickstart</h2></div></div></div>
<p>Once installed, start MyPLC (see <a href="#StartingAndStoppingMyPLC" title="Example 2. Starting and stopping MyPLC.">Example 2, “Starting and stopping MyPLC.”</a>). MyPLC must be started as
root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the
following:</p>
<div class="example">
<a name="id267798"></a><p class="title"><b>Example 4. A successful MyPLC startup.</b></p>
<pre class="programlisting">Mounting PLC: [ OK ]
PLC: Generating network files: [ OK ]
PLC: Starting system logger: [ OK ]
PLC: Starting database server: [ OK ]
PLC: Generating SSL certificates: [ OK ]
PLC: Configuring the API: [ OK ]
PLC: Updating GPG keys: [ OK ]
PLC: Generating SSH keys: [ OK ]
PLC: Starting web server: [ OK ]
PLC: Bootstrapping the database: [ OK ]
PLC: Starting DNS server: [ OK ]
PLC: Starting crond: [ OK ]
PLC: Rebuilding Boot CD: [ OK ]
PLC: Rebuilding Boot Manager: [ OK ]
PLC: Signing node packages: [ OK ]</pre>
</div>
<p>If <code class="filename">/plc/root</code> is mounted successfully, a
complete log file of the startup process may be found at
<code class="filename">/plc/root/var/log/boot.log</code>. Possible reasons
for failure of each step include:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="literal">Mounting PLC</code>: If this step
fails, first ensure that you started MyPLC as root. Check
<code class="filename">/etc/sysconfig/plc</code> to ensure that
<code class="envar">PLC_ROOT</code> and <code class="envar">PLC_DATA</code> refer to the
right locations. You may also have too many existing loopback
mounts, or your kernel may not support loopback mounting, bind
mounting, or the ext3 filesystem. Try freeing at least one
loopback device, or re-compiling your kernel to support loopback
mounting, bind mounting, and the ext3 filesystem. If you see an
error similar to <code class="literal">Permission denied while trying to open
/plc/root.img</code>, then SELinux may be enabled. If you
installed MyPLC on Fedora Core 4 or 5, use the
<span class="application">Security Level Configuration</span> utility
to configure SELinux to be
<code class="literal">Permissive</code>.</p></li>
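A few host-side commands can narrow down this failure quickly (a sketch; exact <span><strong class="command">losetup</strong></span> options vary between util-linux versions, and <span><strong class="command">getenforce</strong></span> exists only on SELinux-enabled systems):

```shell
# List loop devices currently in use; free one with "losetup -d /dev/loopN"
losetup -a

# Confirm the running kernel knows about the ext3 filesystem
grep ext3 /proc/filesystems

# If this prints "Enforcing", SELinux may be blocking /plc/root.img
getenforce 2>/dev/null
```

If all loop devices are busy and none can be freed, the loop driver may also accept a larger max_loop setting.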
<li><p><code class="literal">Starting database server</code>: If
this step fails, check
<code class="filename">/plc/root/var/log/pgsql</code> and
<code class="filename">/plc/root/var/log/boot.log</code>. The most common
reason for failure is that the default PostgreSQL port, TCP port
5432, is already in use. Check that you are not running a
PostgreSQL server on the host system.</p></li>
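One quick way to check the port from the host is bash's <code class="filename">/dev/tcp</code> redirection (a sketch; where available, <span><strong class="command">netstat -tlnp</strong></span> or <span><strong class="command">ss -tlnp</strong></span> will also name the process holding the port):

```shell
# Exit status 0 from the connect means something is listening on TCP 5432
if (exec 3<>/dev/tcp/127.0.0.1/5432) 2>/dev/null; then
    echo "port 5432 is in use"
else
    echo "port 5432 is free"
fi
```

The same check applies to the web-server ports 80 and 443 in the next item.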
<li><p><code class="literal">Starting web server</code>: If this
step fails, check
<code class="filename">/plc/root/var/log/httpd/error_log</code> and
<code class="filename">/plc/root/var/log/boot.log</code> for obvious
errors. The most common reason for failure is that the default
web ports, TCP ports 80 and 443, are already in use. Check that
you are not running a web server on the host
system.</p></li>
<li><p><code class="literal">Bootstrapping the database</code>:
If this step fails, it is likely that the previous step
(<code class="literal">Starting web server</code>) also failed. Another
reason that it could fail is if <code class="envar">PLC_API_HOST</code> (see
<a href="#ChangingTheConfiguration" title="3.1. Changing the configuration">Section 3.1, “Changing the configuration”</a>) does not resolve to
the host on which the API server has been enabled. By default,
all services, including the API server, are enabled and run on
the same host, so check that <code class="envar">PLC_API_HOST</code> is
either <code class="filename">localhost</code> or resolves to a local IP
address.</p></li>
<li><p><code class="literal">Starting crond</code>: If this step
fails, it is likely that the previous steps (<code class="literal">Starting
web server</code> and <code class="literal">Bootstrapping the
database</code>) also failed. If not, check
<code class="filename">/plc/root/var/log/boot.log</code> for obvious
errors. This step starts the <span><strong class="command">cron</strong></span> service and
generates the initial set of XML files that the Slice Creation
Service uses to determine slice state.</p></li>
</ul></div>
<p>If no failures occur, then MyPLC should be active with a
default configuration. Open a web browser on the host system and
visit <code class="literal">http://localhost/</code>, which should bring you
to the front page of your PLC installation. The password of the
default administrator account
<code class="literal">root@localhost.localdomain</code> (set by
<code class="envar">PLC_ROOT_USER</code>) is <code class="literal">root</code> (set by
<code class="envar">PLC_ROOT_PASSWORD</code>).</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="ChangingTheConfiguration"></a>3.1. Changing the configuration</h3></div></div></div>
<p>After verifying that MyPLC is working correctly, shut it
down and begin changing some of the default variable
values. Shut down MyPLC with <span><strong class="command">service plc stop</strong></span>
(see <a href="#StartingAndStoppingMyPLC" title="Example 2. Starting and stopping MyPLC.">Example 2, “Starting and stopping MyPLC.”</a>). With a text
editor, open the file
<code class="filename">/etc/planetlab/plc_config.xml</code>. This file is
a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers must be
alphanumeric, plus underscore. A variable is referred to
canonically as the uppercase concatenation of its category
identifier, an underscore, and its variable identifier. Thus, a
variable with an <code class="literal">id</code> of
<code class="literal">slice_prefix</code> in the <code class="literal">plc</code>
category is referred to canonically as
<code class="envar">PLC_SLICE_PREFIX</code>.</p>
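As an illustration of the convention (the element names below are assumptions about the <code class="filename">plc_config.xml</code> schema, not copied from your file), a category/variable pair such as:

```xml
<category id="plc">
  <variablelist>
    <variable id="slice_prefix">
      <value>pl</value>
    </variable>
  </variablelist>
</category>
```

would surface to shell scripts as <code class="envar">PLC_SLICE_PREFIX=pl</code>.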
<p>The reason for this convention is that during MyPLC
startup, <code class="filename">plc_config.xml</code> is translated into
several different languages—shell, PHP, and
Python—so that scripts written in each of these languages
can refer to the same underlying configuration. Most MyPLC
scripts are written in shell, so the convention for shell
variables predominates.</p>
<p>The variables that you should change immediately are:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="envar">PLC_NAME</code>: Change this to the
name of your PLC installation.</p></li>
<li><p><code class="envar">PLC_ROOT_PASSWORD</code>: Change this
to a more secure password.</p></li>
<li><p><code class="envar">PLC_MAIL_SUPPORT_ADDRESS</code>:
Change this to the e-mail address at which you would like to
receive support requests.</p></li>
<li><p><code class="envar">PLC_DB_HOST</code>,
<code class="envar">PLC_DB_IP</code>, <code class="envar">PLC_API_HOST</code>,
<code class="envar">PLC_API_IP</code>, <code class="envar">PLC_WWW_HOST</code>,
<code class="envar">PLC_WWW_IP</code>, <code class="envar">PLC_BOOT_HOST</code>,
<code class="envar">PLC_BOOT_IP</code>: Change all of these to the
preferred FQDN and external IP address of your host
system.</p></li>
</ul></div>
<p>After changing these variables, save the file, then
restart MyPLC with <span><strong class="command">service plc start</strong></span>. You
should notice that the password of the default administrator
account is no longer <code class="literal">root</code>, and that the
default site name includes the name of your PLC installation
instead of PlanetLab.</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id268186"></a>3.2. Installing nodes</h3></div></div></div>
<p>Install your first node by clicking <code class="literal">Add
Node</code> under the <code class="literal">Nodes</code> tab. Fill in
all the appropriate details, then click
<code class="literal">Add</code>. Download the node's configuration file
by clicking <code class="literal">Download configuration file</code> on
the <span class="bold"><strong>Node Details</strong></span> page for the
node. Save it to a floppy disk or USB key as detailed in [<a href="#TechsGuide" title="[TechsGuide]">1</a>].</p>
<p>Follow the rest of the instructions in [<a href="#TechsGuide" title="[TechsGuide]">1</a>] for creating a Boot CD and installing
the node, except download the Boot CD image from the
<code class="filename">/download</code> directory of your PLC
installation, not from PlanetLab Central. The images located
here are customized for your installation. If you change the
hostname of your boot server (<code class="envar">PLC_BOOT_HOST</code>), or
if the SSL certificate of your boot server expires, MyPLC will
regenerate the certificate and rebuild the Boot CD with the new
certificate. If this occurs, you must replace all Boot CDs
created before the certificate was regenerated.</p>
<p>The installation process for a node has significantly
improved since PlanetLab 3.3. It should now take only a few
seconds for a new node to become ready to create slices.</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id268260"></a>3.3. Administering nodes</h3></div></div></div>
<p>You may administer nodes as <code class="literal">root</code> by
using the SSH key stored in
<code class="filename">/etc/planetlab/root_ssh_key.rsa</code>.</p>
<div class="example">
<a name="id268281"></a><p class="title"><b>Example 5. Accessing nodes via SSH. Replace
<code class="literal">node</code> with the hostname of the node.</b></p>
<pre class="programlisting">ssh -i /etc/planetlab/root_ssh_key.rsa root@node</pre>
</div>
<p>Besides the standard Linux log files located in
<code class="filename">/var/log</code>, several other files can give you
clues about any problems with active processes:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">/var/log/pl_nm</code>: The log
file for the Node Manager.</p></li>
<li><p><code class="filename">/vservers/pl_conf/var/log/pl_conf</code>:
The log file for the Slice Creation Service.</p></li>
<li><p><code class="filename">/var/log/propd</code>: The log
file for Proper, the service which allows certain slices to
perform certain privileged operations in the root
context.</p></li>
<li><p><code class="filename">/vservers/pl_netflow/var/log/netflow.log</code>:
The log file for PlanetFlow, the network traffic auditing
service.</p></li>
</ul></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id268354"></a>3.4. Creating a slice</h3></div></div></div>
<p>Create a slice by clicking <code class="literal">Create Slice</code>
under the <code class="literal">Slices</code> tab. Fill in all the
appropriate details, then click <code class="literal">Create</code>. Add
nodes to the slice by clicking <code class="literal">Manage Nodes</code>
on the <span class="bold"><strong>Slice Details</strong></span> page for
the slice.</p>
<p>A <span><strong class="command">cron</strong></span> job runs every five minutes and
updates
<code class="filename">/plc/data/var/www/html/xml/slices-0.5.xml</code>
with information about current slice state. The Slice Creation
Service running on every node polls this file every ten minutes
to determine if it needs to create or delete any slices. You may
accelerate this process manually if desired.</p>
<div class="example">
<a name="id268412"></a><p class="title"><b>Example 6. Forcing slice creation on a node.</b></p>
<pre class="programlisting"># Update slices.xml immediately
service plc start crond

# Kick the Slice Creation Service on a particular node.
ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
vserver pl_conf exec service pl_conf restart</pre>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="id268428"></a>4. Rebuilding and customizing MyPLC</h2></div></div></div>
<p>The MyPLC package, though distributed as an RPM, is not a
traditional package that can be easily rebuilt from an SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that access to a complete Fedora Core 4
mirror is available.</p>
<p>For this reason, it is recommended that you only rebuild
MyPLC (or any of its components) from within the MyPLC development
environment. The MyPLC development environment is similar to MyPLC
itself in that it is a portable filesystem contained within a
<span><strong class="command">chroot</strong></span> jail. The filesystem contains all the
necessary tools required to rebuild MyPLC, as well as a snapshot
of the PlanetLab source code base in the form of a local CVS
repository.</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id268453"></a>4.1. Installation</h3></div></div></div>
<p>Install the MyPLC development environment similarly to how
you would install MyPLC. You may install both packages on the same
host system if you wish. As with MyPLC, the MyPLC development
environment should be treated as a monolithic software
application, and any files present in the
<span><strong class="command">chroot</strong></span> jail should not be modified directly, as
they are subject to upgrade.</p>
<div class="example">
<a name="id268472"></a><p class="title"><b>Example 7. Installing the MyPLC development environment.</b></p>
<pre class="programlisting"># If your distribution supports RPM
rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm

# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu</pre>
</div>
<p>The MyPLC development environment installs the following
files and directories:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">/plc/devel/root.img</code>: The
main root filesystem of the MyPLC development environment. This
file is an uncompressed ext3 filesystem that is loopback mounted
on <code class="filename">/plc/devel/root</code> when the MyPLC
development environment is initialized. This filesystem, even
when mounted, should be treated as an opaque binary that can and
will be replaced in its entirety by any upgrade of the MyPLC
development environment.</p></li>
<li><p><code class="filename">/plc/devel/root</code>: The mount
point for
<code class="filename">/plc/devel/root.img</code>.</p></li>
<li><p><code class="filename">/plc/devel/data</code>: The directory
where user data and generated files are stored. This directory
is bind mounted onto <code class="filename">/plc/devel/root/data</code>
so that it is accessible as <code class="filename">/data</code> from
within the <span><strong class="command">chroot</strong></span> jail. Files in this
directory are marked with
<span><strong class="command">%config(noreplace)</strong></span> in the RPM. Symlinks
ensure that the following directories (relative to
<code class="filename">/plc/devel/root</code>) are stored outside the
root filesystem image:</p>
<div class="itemizedlist"><ul type="circle">
<li><p><code class="filename">/etc/planetlab</code>: This
directory contains the configuration files that define your
MyPLC development environment.</p></li>
<li><p><code class="filename">/cvs</code>: A
snapshot of the PlanetLab source code is stored as a CVS
repository in this directory. Files in this directory will
<span class="bold"><strong>not</strong></span> be updated by an upgrade of
<code class="filename">myplc-devel</code>. See <a href="#UpdatingCVS" title="4.4. Updating CVS">Section 4.4, “Updating CVS”</a> for more information about updating
PlanetLab source code.</p></li>
<li><p><code class="filename">/build</code>:
Builds are stored in this directory. This directory is bind
mounted onto <code class="filename">/plc/devel/root/build</code> so that
it is accessible as <code class="filename">/build</code> from within the
<span><strong class="command">chroot</strong></span> jail. The build scripts in this
directory are themselves source controlled; see <a href="#BuildingMyPLC" title="4.3. Building MyPLC">Section 4.3, “Building MyPLC”</a> for more information about executing
builds.</p></li>
</ul></div></li>
<li><p><code class="filename">/etc/init.d/plc-devel</code>: This file is
a System V init script installed on your host filesystem that
allows you to start up and shut down the MyPLC development
environment with a single command.</p></li>
</ul></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id268661"></a>4.2. Fedora Core 4 mirror requirement</h3></div></div></div>
<p>The MyPLC development environment requires access to a
complete Fedora Core 4 i386 RPM repository, because several
different filesystems based upon Fedora Core 4 are constructed
during the process of building MyPLC. You may configure the
location of this repository via the
<code class="envar">PLC_DEVEL_FEDORA_URL</code> variable in
<code class="filename">/plc/devel/data/etc/planetlab/plc_config.xml</code>. The
value of the variable should be a URL that points to the top
level of a Fedora mirror that provides the
<code class="filename">base</code>, <code class="filename">updates</code>, and
<code class="filename">extras</code> repositories, e.g.,</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">file:///data/fedora</code></p></li>
<li><p><code class="filename">http://coblitz.planet-lab.org/pub/fedora</code></p></li>
<li><p><code class="filename">ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</code></p></li>
<li><p><code class="filename">ftp://mirror.stanford.edu/pub/mirrors/fedora</code></p></li>
<li><p><code class="filename">http://rpmfind.net/linux/fedora</code></p></li>
</ul></div>
<p>As implied by the list, the repository may be located on
the local filesystem, or it may be located on a remote FTP or
HTTP server. URLs beginning with <code class="filename">file://</code>
should exist at the specified location relative to the root of
the <span><strong class="command">chroot</strong></span> jail. For optimum performance and
reproducibility, specify
<code class="envar">PLC_DEVEL_FEDORA_URL=file:///data/fedora</code> and
download all Fedora Core 4 RPMS into
<code class="filename">/plc/devel/data/fedora</code> on the host system
after installing <code class="filename">myplc-devel</code>. Use a tool
such as <span><strong class="command">wget</strong></span> or <span><strong class="command">rsync</strong></span> to
download the RPMS from a public mirror:</p>
<div class="example">
<a name="id268792"></a><p class="title"><b>Example 8. Setting up a local Fedora Core 4 repository.</b></p>
<pre class="programlisting">mkdir -p /plc/devel/data/fedora
cd /plc/devel/data/fedora

for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
    wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
done</pre>
</div>
<p>Change the repository URI and <span><strong class="command">--cut-dirs</strong></span>
level as needed to produce a hierarchy that resembles:</p>
<pre class="programlisting">/plc/devel/data/fedora/core/4/i386/os
/plc/devel/data/fedora/core/updates/4/i386
/plc/devel/data/fedora/extras/4/i386</pre>
<p>A list of additional Fedora Core 4 mirrors is available at
<a href="http://fedora.redhat.com/Download/mirrors.html" target="_top">http://fedora.redhat.com/Download/mirrors.html</a>.</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="BuildingMyPLC"></a>4.3. Building MyPLC</h3></div></div></div>
<p>All PlanetLab source code modules are built and installed
as RPMS. A set of build scripts, checked into the
<code class="filename">build/</code> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</p>
<p>To build MyPLC, or any PlanetLab source code module, from
within the MyPLC development environment, execute the following
commands as root:</p>
<div class="example">
<a name="id268858"></a><p class="title"><b>Example 9. Building MyPLC.</b></p>
<pre class="programlisting"># Initialize MyPLC development environment
service plc-devel start

# Enter development environment
chroot /plc/devel/root su -

# Check out build scripts into a directory named after the current
# date. This is simply a convention; it need not be followed
# exactly. See build/build.sh for an example of a build script that
# names build directories after CVS tags.
DATE=$(date +%Y.%m.%d)
cvs -d /cvs checkout -d $DATE build

# Build the default set of PlanetLab packages
make -C $DATE</pre>
</div>
<p>If the build succeeds, a set of binary RPMS will be
installed in
<code class="filename">/plc/devel/data/build/$DATE/RPMS/</code> that you
may copy to the
<code class="filename">/var/www/html/install-rpms/planetlab</code>
directory of your MyPLC installation (see <a href="#Installation" title="2. Installation">Section 2, “Installation”</a>).</p>
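<p>As a concrete sketch, the copy looks like the following. The
build-directory name is only an example, and the
<code class="filename">/plc/data</code> prefix (the chroot's filesystem as
seen from the host system) is an assumption based on the layout described
in this guide; drop the <span><strong class="command">echo</strong></span>
to perform the copy for real as root:</p>

```shell
# Sketch of copying freshly built RPMS into the MyPLC web directory.
# DATE is the build directory name from Example 9; /plc/data is where
# the MyPLC chroot's filesystem lives on the host system.
DATE=2006.04.07
src=/plc/devel/data/build/$DATE/RPMS
dest=/plc/data/var/www/html/install-rpms/planetlab
cmd="cp $src/*/*.rpm $dest/"
echo "$cmd"
```

<p>After copying, regenerate the repository metadata (for example with
<span><strong class="command">yum-arch</strong></span> or
<span><strong class="command">createrepo</strong></span>, depending on the
yum version your nodes run) so that nodes pick up the new packages.</p>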
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="UpdatingCVS"></a>4.4. Updating CVS</h3></div></div></div>
<p>A complete snapshot of the PlanetLab source code is included
with the MyPLC development environment as a CVS repository in
<code class="filename">/plc/devel/data/cvs</code>. This CVS repository may
be accessed like any other CVS repository. It may be browsed
through an interface such as <a href="http://www.freebsd.org/projects/cvsweb.html" target="_top">CVSweb</a>,
and file permissions may be altered to allow for fine-grained
access control. Although the files are included with the
<code class="filename">myplc-devel</code> RPM, they are <span class="bold"><strong>not</strong></span> subject to upgrade once installed. New
versions of the <code class="filename">myplc-devel</code> RPM will install
updated snapshot repositories in
<code class="filename">/plc/devel/data/cvs-%{version}-%{release}</code>,
where <code class="literal">%{version}-%{release}</code> is replaced with
the version number of the RPM.</p>
<p>Because the CVS repository is not automatically upgraded,
if you wish to keep your local repository synchronized with the
public PlanetLab repository, it is highly recommended that you
use CVS's support for <a href="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources" target="_top">vendor
branches</a> to track changes. Vendor branches ease the task
of merging upstream changes with your local modifications. To
import a new snapshot into your local repository (for example,
if you have just upgraded from
<code class="filename">myplc-devel-0.4-2</code> to
<code class="filename">myplc-devel-0.4-3</code> and you notice the new
repository in <code class="filename">/plc/devel/data/cvs-0.4-3</code>),
execute the following commands as root from within the MyPLC
development environment:</p>
<div class="example">
<a name="id268989"></a><p class="title"><b>Example 10. Updating /data/cvs from /data/cvs-0.4-3.</b></p>
<p><span class="bold"><strong>Warning</strong></span>: This may cause
severe, irreversible changes to be made to your local
repository. Always tag your local repository before
importing.</p>
<pre class="programlisting"># Initialize MyPLC development environment
service plc-devel start

# Enter development environment
chroot /plc/devel/root su -

# Tag the current state of the local repository
cvs -d /cvs rtag before-myplc-0_4-3-merge

# Export the new snapshot into a temporary directory
TMP=$(mktemp -d /data/export.XXXXXX)
cd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .

# Import the snapshot onto the vendor branch of the local repository
cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3</pre>
</div>
<p>If there are any merge conflicts, use the command suggested by
CVS to help with the merge. Explaining how to fix merge conflicts is
beyond the scope of this document; consult the CVS documentation
for more information on how to use CVS.</p>
</div>
</div>
<div class="appendix" lang="en">
<h2 class="title" style="clear: both">
<a name="id269022"></a>A. Configuration variables</h2>
<p>Listed below is the set of standard configuration variables
and their default values, defined in the template
<code class="filename">/etc/planetlab/default_config.xml</code>. Additional
variables and their defaults may be defined in site-specific XML
templates that should be placed in
<code class="filename">/etc/planetlab/configs/</code>.</p>
<div class="variablelist"><dl>
<dt><span class="term">PLC_NAME</span></dt>
<p>Default: PlanetLab Test</p>
<p>The name of this PLC installation. It is used in
the name of the default system site (e.g., PlanetLab Central)
and in the names of various administrative entities (e.g.,
PlanetLab Support).</p>
<dt><span class="term">PLC_SLICE_PREFIX</span></dt>
<p>The abbreviated name of this PLC
installation. It is used as the prefix for system slices
(e.g., pl_conf). Warning: Currently, this variable should
not be changed.</p>
<dt><span class="term">PLC_ROOT_USER</span></dt>
<p>Default: root@localhost.localdomain</p>
<p>The name of the initial administrative
account. We recommend that this account be used only to create
additional accounts associated with real
administrators, then disabled.</p>
<dt><span class="term">PLC_ROOT_PASSWORD</span></dt>
<p>The password of the initial administrative
account. Also the password of the root account on the Boot
CD.</p>
<dt><span class="term">PLC_ROOT_SSH_KEY_PUB</span></dt>
<p>Default: /etc/planetlab/root_ssh_key.pub</p>
<p>The SSH public key used to access the root
account on your nodes.</p>
<dt><span class="term">PLC_ROOT_SSH_KEY</span></dt>
<p>Default: /etc/planetlab/root_ssh_key.rsa</p>
<p>The SSH private key used to access the root
account on your nodes.</p>
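<p>If you wish to supply this keypair yourself rather than rely on the
files shipped with the installation, it can be generated with
<span><strong class="command">ssh-keygen</strong></span>. A minimal
sketch, using a scratch directory; note that
<span><strong class="command">ssh-keygen</strong></span> names the public
half <code class="filename">root_ssh_key.rsa.pub</code>, so it must be
renamed to match the defaults above:</p>

```shell
# Generate a root SSH keypair in a scratch directory; install the two
# files as /etc/planetlab/root_ssh_key.rsa and root_ssh_key.pub.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$dir/root_ssh_key.rsa"
# ssh-keygen writes the public key as root_ssh_key.rsa.pub; rename it
# to match the PLC_ROOT_SSH_KEY_PUB default.
mv "$dir/root_ssh_key.rsa.pub" "$dir/root_ssh_key.pub"
ls "$dir"
```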
<dt><span class="term">PLC_DEBUG_SSH_KEY_PUB</span></dt>
<p>Default: /etc/planetlab/debug_ssh_key.pub</p>
<p>The SSH public key used to access the root
account on your nodes when they are in Debug mode.</p>
<dt><span class="term">PLC_DEBUG_SSH_KEY</span></dt>
<p>Default: /etc/planetlab/debug_ssh_key.rsa</p>
<p>The SSH private key used to access the root
account on your nodes when they are in Debug mode.</p>
<dt><span class="term">PLC_ROOT_GPG_KEY_PUB</span></dt>
<p>Default: /etc/planetlab/pubring.gpg</p>
<p>The GPG public keyring used to sign the Boot
Manager and all node packages.</p>
<dt><span class="term">PLC_ROOT_GPG_KEY</span></dt>
<p>Default: /etc/planetlab/secring.gpg</p>
<p>The GPG private keyring used to sign the Boot
Manager and all node packages.</p>
<dt><span class="term">PLC_MA_SA_NAMESPACE</span></dt>
<p>The namespace of your MA/SA. This should be a
globally unique value assigned by PlanetLab
Central.</p>
<dt><span class="term">PLC_MA_SA_SSL_KEY</span></dt>
<p>Default: /etc/planetlab/ma_sa_ssl.key</p>
<p>The SSL private key used for signing documents
with the signature of your MA/SA. If non-existent, one will
be generated.</p>
<dt><span class="term">PLC_MA_SA_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/ma_sa_ssl.crt</p>
<p>The corresponding SSL public certificate. By
default, this certificate is self-signed. You may replace
the certificate later with one signed by the PLC root
CA.</p>
<dt><span class="term">PLC_MA_SA_CA_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/ma_sa_ca_ssl.crt</p>
<p>If applicable, the certificate of the PLC root
CA. If your MA/SA certificate is self-signed, then this file
is the same as your MA/SA certificate.</p>
<dt><span class="term">PLC_MA_SA_CA_SSL_KEY_PUB</span></dt>
<p>Default: /etc/planetlab/ma_sa_ca_ssl.pub</p>
<p>If applicable, the public key of the PLC root
CA. If your MA/SA certificate is self-signed, then this file
is the same as your MA/SA public key.</p>
<dt><span class="term">PLC_MA_SA_API_CRT</span></dt>
<p>Default: /etc/planetlab/ma_sa_api.xml</p>
<p>The API Certificate is your MA/SA public key
embedded in a digitally signed XML document. By default,
this document is self-signed. You may replace this
certificate later with one signed by the PLC root
CA.</p>
<dt><span class="term">PLC_NET_DNS1</span></dt>
<p>Default: 127.0.0.1</p>
<p>Primary DNS server address.</p>
<dt><span class="term">PLC_NET_DNS2</span></dt>
<p>Secondary DNS server address.</p>
<dt><span class="term">PLC_DNS_ENABLED</span></dt>
<p>Enable the internal DNS server. The server does
not provide reverse resolution and is not a production
quality or scalable DNS solution. Use the internal DNS
server only for small deployments or for
testing.</p>
<dt><span class="term">PLC_MAIL_ENABLED</span></dt>
<p>Set to false to suppress all e-mail notifications
and warnings.</p>
<dt><span class="term">PLC_MAIL_SUPPORT_ADDRESS</span></dt>
<p>Default: root+support@localhost.localdomain</p>
<p>This address is used for support
requests. Support requests may include traffic complaints,
security incident reporting, web site malfunctions, and
general requests for information. We recommend that the
address be aliased to a ticketing system such as Request
Tracker.</p>
<dt><span class="term">PLC_MAIL_BOOT_ADDRESS</span></dt>
<p>Default: root+install-msgs@localhost.localdomain</p>
<p>The API will notify this address when a problem
occurs during node installation or boot.</p>
<dt><span class="term">PLC_MAIL_SLICE_ADDRESS</span></dt>
<p>Default: root+SLICE@localhost.localdomain</p>
<p>This address template is used for sending
e-mail notifications to slices. SLICE will be replaced with
the name of the slice.</p>
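<p>For example, with the default template, mail for the system slice
<code class="literal">pl_conf</code> would be addressed as follows. The
substitution below merely mimics what PLC does internally when it sends
the notification:</p>

```shell
# Expand the PLC_MAIL_SLICE_ADDRESS template for the slice "pl_conf".
template="root+SLICE@localhost.localdomain"
slice="pl_conf"
address=$(echo "$template" | sed "s/SLICE/$slice/")
echo "$address"   # root+pl_conf@localhost.localdomain
```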
<dt><span class="term">PLC_DB_ENABLED</span></dt>
<p>Enable the database server on this
machine.</p>
<dt><span class="term">PLC_DB_TYPE</span></dt>
<p>Default: postgresql</p>
<p>The type of database server. Currently, only
postgresql is supported.</p>
<dt><span class="term">PLC_DB_HOST</span></dt>
<p>Default: localhost.localdomain</p>
<p>The fully qualified hostname of the database
server.</p>
<dt><span class="term">PLC_DB_IP</span></dt>
<p>Default: 127.0.0.1</p>
<p>The IP address of the database server, if not
resolvable by the configured DNS servers.</p>
<dt><span class="term">PLC_DB_PORT</span></dt>
<p>The TCP port number through which the database
server should be accessed.</p>
<dt><span class="term">PLC_DB_NAME</span></dt>
<p>Default: planetlab3</p>
<p>The name of the database to access.</p>
<dt><span class="term">PLC_DB_USER</span></dt>
<p>Default: pgsqluser</p>
<p>The username to use when accessing the
database.</p>
<dt><span class="term">PLC_DB_PASSWORD</span></dt>
<p>The password to use when accessing the
database. If left blank, one will be
generated.</p>
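<p>With the defaults above, the database can be reached by hand from
within the chroot. A sketch that composes the
<span><strong class="command">psql</strong></span> invocation from these
variables (the password to supply is whatever PLC_DB_PASSWORD was set or
generated to):</p>

```shell
# Compose the psql command implied by the PLC_DB_* defaults above.
PLC_DB_HOST="localhost.localdomain"
PLC_DB_USER="pgsqluser"
PLC_DB_NAME="planetlab3"
cmd="psql -h $PLC_DB_HOST -U $PLC_DB_USER $PLC_DB_NAME"
echo "$cmd"   # psql -h localhost.localdomain -U pgsqluser planetlab3
```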
<dt><span class="term">PLC_API_ENABLED</span></dt>
<p>Enable the API server on this
machine.</p>
<dt><span class="term">PLC_API_DEBUG</span></dt>
<p>Enable verbose API debugging. Do not enable on
a production system!</p>
<dt><span class="term">PLC_API_HOST</span></dt>
<p>Default: localhost.localdomain</p>
<p>The fully qualified hostname of the API
server.</p>
<dt><span class="term">PLC_API_IP</span></dt>
<p>Default: 127.0.0.1</p>
<p>The IP address of the API server, if not
resolvable by the configured DNS servers.</p>
<dt><span class="term">PLC_API_PORT</span></dt>
<p>The TCP port number through which the API
should be accessed. Warning: SSL (port 443) access is not
fully supported by the website code yet. We recommend that
port 80 be used for now and that the API server either run
on the same machine as the web server, or that both machines be
on a secure wired network.</p>
<dt><span class="term">PLC_API_PATH</span></dt>
<p>Default: /PLCAPI/</p>
<p>The base path of the API URL.</p>
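<p>Together, PLC_API_HOST, PLC_API_PORT, and PLC_API_PATH determine the
URL at which the API is reached. A sketch using the defaults above and
the recommended port 80:</p>

```shell
# Compose the API URL from the PLC_API_* variables.
PLC_API_HOST="localhost.localdomain"
PLC_API_PORT=80
PLC_API_PATH="/PLCAPI/"
api_url="http://${PLC_API_HOST}:${PLC_API_PORT}${PLC_API_PATH}"
echo "$api_url"   # http://localhost.localdomain:80/PLCAPI/
```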
<dt><span class="term">PLC_API_MAINTENANCE_USER</span></dt>
<p>Default: maint@localhost.localdomain</p>
<p>The username of the maintenance account. This
account is used by local scripts that perform automated
tasks, and cannot be used for normal logins.</p>
<dt><span class="term">PLC_API_MAINTENANCE_PASSWORD</span></dt>
<p>The password of the maintenance account. If
left blank, one will be generated. We recommend that the
password be changed periodically.</p>
<dt><span class="term">PLC_API_MAINTENANCE_SOURCES</span></dt>
<p>A space-separated list of IP addresses allowed
to access the API through the maintenance account. The value
of this variable is set automatically to allow only the API,
web, and boot servers, and should not be
changed.</p>
<dt><span class="term">PLC_API_SSL_KEY</span></dt>
<p>Default: /etc/planetlab/api_ssl.key</p>
<p>The SSL private key to use for encrypting HTTPS
traffic. If non-existent, one will be
generated.</p>
<dt><span class="term">PLC_API_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/api_ssl.crt</p>
<p>The corresponding SSL public certificate. By
default, this certificate is self-signed. You may replace
the certificate later with one signed by a root
CA.</p>
<dt><span class="term">PLC_API_CA_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/api_ca_ssl.crt</p>
<p>The certificate of the root CA, if any, that
signed your server certificate. If your server certificate is
self-signed, then this file is the same as your server
certificate.</p>
<dt><span class="term">PLC_WWW_ENABLED</span></dt>
<p>Enable the web server on this
machine.</p>
<dt><span class="term">PLC_WWW_DEBUG</span></dt>
<p>Enable debugging output on web pages. Do not
enable on a production system!</p>
<dt><span class="term">PLC_WWW_HOST</span></dt>
<p>Default: localhost.localdomain</p>
<p>The fully qualified hostname of the web
server.</p>
<dt><span class="term">PLC_WWW_IP</span></dt>
<p>Default: 127.0.0.1</p>
<p>The IP address of the web server, if not
resolvable by the configured DNS servers.</p>
<dt><span class="term">PLC_WWW_PORT</span></dt>
<p>The TCP port number through which the
unprotected portions of the web site should be
accessed.</p>
<dt><span class="term">PLC_WWW_SSL_PORT</span></dt>
<p>The TCP port number through which the protected
portions of the web site should be accessed.</p>
<dt><span class="term">PLC_WWW_SSL_KEY</span></dt>
<p>Default: /etc/planetlab/www_ssl.key</p>
<p>The SSL private key to use for encrypting HTTPS
traffic. If non-existent, one will be
generated.</p>
<dt><span class="term">PLC_WWW_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/www_ssl.crt</p>
<p>The corresponding SSL public certificate for
the HTTP server. By default, this certificate is
self-signed. You may replace the certificate later with one
signed by a root CA.</p>
<dt><span class="term">PLC_WWW_CA_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/www_ca_ssl.crt</p>
<p>The certificate of the root CA, if any, that
signed your server certificate. If your server certificate is
self-signed, then this file is the same as your server
certificate.</p>
<dt><span class="term">PLC_BOOT_ENABLED</span></dt>
<p>Enable the boot server on this
machine.</p>
<dt><span class="term">PLC_BOOT_HOST</span></dt>
<p>Default: localhost.localdomain</p>
<p>The fully qualified hostname of the boot
server.</p>
<dt><span class="term">PLC_BOOT_IP</span></dt>
<p>Default: 127.0.0.1</p>
<p>The IP address of the boot server, if not
resolvable by the configured DNS servers.</p>
<dt><span class="term">PLC_BOOT_PORT</span></dt>
<p>The TCP port number through which the
unprotected portions of the boot server should be
accessed.</p>
<dt><span class="term">PLC_BOOT_SSL_PORT</span></dt>
<p>The TCP port number through which the protected
portions of the boot server should be
accessed.</p>
<dt><span class="term">PLC_BOOT_SSL_KEY</span></dt>
<p>Default: /etc/planetlab/boot_ssl.key</p>
<p>The SSL private key to use for encrypting HTTPS
traffic.</p>
<dt><span class="term">PLC_BOOT_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/boot_ssl.crt</p>
<p>The corresponding SSL public certificate for
the HTTP server. By default, this certificate is
self-signed. You may replace the certificate later with one
signed by a root CA.</p>
<dt><span class="term">PLC_BOOT_CA_SSL_CRT</span></dt>
<p>Default: /etc/planetlab/boot_ca_ssl.crt</p>
<p>The certificate of the root CA, if any, that
signed your server certificate. If your server certificate is
self-signed, then this file is the same as your server
certificate.</p>
</dl></div>
</div>
<div class="appendix" lang="en">
<h2 class="title" style="clear: both">
<a name="id271727"></a>B. Development environment configuration variables</h2>
<div class="variablelist"><dl>
<dt><span class="term">PLC_DEVEL_FEDORA_RELEASE</span></dt>
<p>Version number of Fedora Core upon which to
base the build environment. Warning: Currently, only Fedora
Core 4 is supported.</p>
<dt><span class="term">PLC_DEVEL_FEDORA_ARCH</span></dt>
<p>Base architecture of the build
environment. Warning: Currently, only i386 is
supported.</p>
<dt><span class="term">PLC_DEVEL_FEDORA_URL</span></dt>
<p>Default: file:///usr/share/mirrors/fedora</p>
<p>Fedora Core mirror from which to
install.</p>
<dt><span class="term">PLC_DEVEL_CVSROOT</span></dt>
<p>CVSROOT to use when checking out code.</p>
<dt><span class="term">PLC_DEVEL_BOOTSTRAP</span></dt>
<p>Controls whether MyPLC should be built inside
of its own development environment.</p>
</dl></div>
</div>
<div class="bibliography">
<div class="titlepage"><div><div><h2 class="title">
<a name="id271809"></a>Bibliography</h2></div></div></div>
<div class="biblioentry">
<a name="TechsGuide"></a><p>[1] <span class="author"><span class="firstname">Mark</span> <span class="surname">Huang</span>. </span><span class="title"><i><a href="http://www.planet-lab.org/doc/TechsGuide.php" target="_top">PlanetLab
Technical Contact's Guide</a></i>. </span></p>
</div>
</div>
</div><?php require('footer.php'); ?>