3 // DO NOT EDIT. This file was automatically generated from
4 // DocBook XML. See plc_www/doc/README.
$_title = "MyPLC User's Guide";
8 require_once('session.php');
9 require_once('header.php');
10 require_once('nav.php');
12 ?><div class="article" lang="en">
13 <div class="titlepage">
15 <div><h1 class="title">
16 <a name="id2612440"></a>MyPLC User's Guide</h1></div>
17 <div><div class="author"><h3 class="author"><span class="firstname">Mark Huang</span></h3></div></div>
18 <div><div class="revhistory"><table border="1" width="100%" summary="Revision history">
19 <tr><th align="left" valign="top" colspan="3"><b>Revision History</b></th></tr>
<tr>
<td align="left">Revision 1.0</td>
<td align="left">April 7, 2006</td>
<td align="left">MLH</td>
</tr>
<tr><td align="left" colspan="3"><p>Initial draft.</p></td></tr>
<tr>
<td align="left">Revision 1.1</td>
<td align="left">July 19, 2006</td>
<td align="left">MLH</td>
</tr>
<tr><td align="left" colspan="3"><p>Add development environment.</p></td></tr>
</table></div></div>
33 <div><div class="abstract">
34 <p class="title"><b>Abstract</b></p>
35 <p>This document describes the design, installation, and
36 administration of MyPLC, a complete PlanetLab Central (PLC)
37 portable installation contained within a
38 <span><strong class="command">chroot</strong></span> jail. This document assumes advanced
knowledge of the PlanetLab architecture and Linux system administration.</p>
46 <p><b>Table of Contents</b></p>
48 <dt><span class="section"><a href="#id2681901">1. Overview</a></span></dt>
49 <dd><dl><dt><span class="section"><a href="#id2659450">1.1. Purpose of the <span class="emphasis"><em> myplc-devel
50 </em></span> package </a></span></dt></dl></dd>
51 <dt><span class="section"><a href="#Requirements">2. Requirements </a></span></dt>
52 <dt><span class="section"><a href="#Installation">3. Installation</a></span></dt>
53 <dt><span class="section"><a href="#id2659848">4. Quickstart</a></span></dt>
55 <dt><span class="section"><a href="#ChangingTheConfiguration">4.1. Changing the configuration</a></span></dt>
56 <dt><span class="section"><a href="#id2711256">4.2. Installing nodes</a></span></dt>
57 <dt><span class="section"><a href="#id2711338">4.3. Administering nodes</a></span></dt>
58 <dt><span class="section"><a href="#id2711439">4.4. Creating a slice</a></span></dt>
60 <dt><span class="section"><a href="#DevelopmentEnvironment">5. Rebuilding and customizing MyPLC</a></span></dt>
62 <dt><span class="section"><a href="#id2711557">5.1. Installation</a></span></dt>
63 <dt><span class="section"><a href="#id2711788">5.2. Fedora Core 4 mirror requirement</a></span></dt>
64 <dt><span class="section"><a href="#BuildingMyPLC">5.3. Building MyPLC</a></span></dt>
65 <dt><span class="section"><a href="#UpdatingCVS">5.4. Updating CVS</a></span></dt>
67 <dt><span class="appendix"><a href="#id2712187">A. Configuration variables (for <span class="emphasis"><em>myplc</em></span>)</a></span></dt>
68 <dt><span class="appendix"><a href="#id2715081">B. Development configuration variables (for <span class="emphasis"><em>myplc-devel</em></span>)</a></span></dt>
69 <dt><span class="bibliography"><a href="#id2715252">Bibliography</a></span></dt>
72 <div class="section" lang="en">
73 <div class="titlepage"><div><div><h2 class="title" style="clear: both">
74 <a name="id2681901"></a>1. Overview</h2></div></div></div>
75 <p>MyPLC is a complete PlanetLab Central (PLC) portable
76 installation contained within a <span><strong class="command">chroot</strong></span>
77 jail. The default installation consists of a web server, an
78 XML-RPC API server, a boot server, and a database server: the core
79 components of PLC. The installation is customized through an
80 easy-to-use graphical interface. All PLC services are started up
81 and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is greatly simplified by containing PLC
services within a virtual filesystem. By packaging it in such a
85 manner, MyPLC may also be run on any modern Linux distribution,
86 and could conceivably even run in a PlanetLab slice.</p>
88 <a name="Architecture"></a><p class="title"><b>Figure 1. MyPLC architecture</b></p>
89 <div class="mediaobject" align="center">
90 <img src="architecture.png" align="middle" width="270" alt="MyPLC architecture"><div class="caption"><p>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host system.</p>
95 <div class="section" lang="en">
96 <div class="titlepage"><div><div><h3 class="title">
97 <a name="id2659450"></a>1.1. Purpose of the <span class="emphasis"><em> myplc-devel
98 </em></span> package </h3></div></div></div>
99 <p> The <span class="emphasis"><em>myplc</em></span> package comes with all
100 required node software, rebuilt from the public PlanetLab CVS
101 repository. If for any reason you need to implement your own
102 customized version of this software, you can use the
<span class="emphasis"><em>myplc-devel</em></span> package instead to set up
104 your own development environment, including a local CVS
105 repository; you can then freely manage your changes and rebuild
106 your customized version of <span class="emphasis"><em>myplc</em></span>. We also
provide recommended practices that let you resynchronize your local
CVS repository with further evolution of the mainstream public
PlanetLab software.</p>
112 <div class="section" lang="en">
113 <div class="titlepage"><div><div><h2 class="title" style="clear: both">
114 <a name="Requirements"></a>2. Requirements </h2></div></div></div>
115 <p> <span class="emphasis"><em>myplc</em></span> and
116 <span class="emphasis"><em>myplc-devel</em></span> were designed as
117 <span><strong class="command">chroot</strong></span> jails so as to reduce the requirements on
118 your host operating system. So in theory, these distributions should
119 work on virtually any Linux 2.6 based distribution, whether it
120 supports rpm or not. </p>
<p> In practice, however, there are some known limitations, so
please read the following notes before you proceed with the
installation.</p>
<p> As of August 9, 2006 (i.e., <span class="emphasis"><em>myplc-0.5</em></span>):</p>
125 <div class="itemizedlist"><ul type="disc">
<li><p> The software is largely based on <span class="emphasis"><em>Fedora
Core 4</em></span>. Please note that the build server at Princeton
runs <span class="emphasis"><em>Fedora Core 2</em></span>, together with an upgraded
132 <p> myplc and myplc-devel are known to work on both
133 <span class="emphasis"><em>Fedora Core 2</em></span> and <span class="emphasis"><em>Fedora Core
4</em></span>. Note, however, that on fc4 at least it is
highly recommended to use the <span class="application">Security Level
Configuration</span> utility and to <span class="emphasis"><em>switch off
SELinux</em></span> on your box, because:</p>
138 <div class="itemizedlist"><ul type="circle">
<li><p> myplc requires you to run SELinux as 'Permissive' at most. </p></li>
<li><p> myplc-devel requires you to turn SELinux off. </p></li>
</ul></div>
<li><p> In addition, as far as myplc is concerned, check your
firewall configuration: you must open the
<span class="emphasis"><em>http</em></span> and
<span class="emphasis"><em>https</em></span> ports so that the managed
nodes and the users' desktops can connect. </p></li>
154 <div class="section" lang="en">
155 <div class="titlepage"><div><div><h2 class="title" style="clear: both">
156 <a name="Installation"></a>3. Installation</h2></div></div></div>
157 <p>Though internally composed of commodity software
158 subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
160 no external dependencies, allowing it to be installed on
161 practically any Linux 2.6 based distribution:</p>
162 <div class="example">
163 <a name="id2658859"></a><p class="title"><b>Example 1. Installing MyPLC.</b></p>
164 <pre class="programlisting"># If your distribution supports RPM
165 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu</pre>
173 <p>MyPLC installs the following files and directories:</p>
174 <div class="itemizedlist"><ul type="disc">
175 <li><p><code class="filename">/plc/root.img</code>: The main
176 root filesystem of the MyPLC application. This file is an
177 uncompressed ext3 filesystem that is loopback mounted on
178 <code class="filename">/plc/root</code> when MyPLC starts. This
179 filesystem, even when mounted, should be treated as an opaque
180 binary that can and will be replaced in its entirety by any
181 upgrade of MyPLC.</p></li>
182 <li><p><code class="filename">/plc/root</code>: The mount point
183 for <code class="filename">/plc/root.img</code>. Once the root filesystem
184 is mounted, all MyPLC services run in a
185 <span><strong class="command">chroot</strong></span> jail based in this
188 <p><code class="filename">/plc/data</code>: The directory where user
189 data and generated files are stored. This directory is bind
190 mounted onto <code class="filename">/plc/root/data</code> so that it is
191 accessible as <code class="filename">/data</code> from within the
192 <span><strong class="command">chroot</strong></span> jail. Files in this directory are
193 marked with <span><strong class="command">%config(noreplace)</strong></span> in the
194 RPM. That is, during an upgrade of MyPLC, if a file has not
195 changed since the last installation or upgrade of MyPLC, it is
196 subject to upgrade and replacement. If the file has changed,
197 the new version of the file will be created with a
198 <code class="filename">.rpmnew</code> extension. Symlinks within the
199 MyPLC root filesystem ensure that the following directories
200 (relative to <code class="filename">/plc/root</code>) are stored
201 outside the MyPLC filesystem image:</p>
202 <div class="itemizedlist"><ul type="circle">
203 <li><p><code class="filename">/etc/planetlab</code>: This
204 directory contains the configuration files, keys, and
205 certificates that define your MyPLC
206 installation.</p></li>
207 <li><p><code class="filename">/var/lib/pgsql</code>: This
directory contains the PostgreSQL database files.</p></li>
210 <li><p><code class="filename">/var/www/html/alpina-logs</code>: This
211 directory contains node installation logs.</p></li>
212 <li><p><code class="filename">/var/www/html/boot</code>: This
213 directory contains the Boot Manager, customized for your MyPLC
214 installation, and its data files.</p></li>
215 <li><p><code class="filename">/var/www/html/download</code>: This
216 directory contains Boot CD images, customized for your MyPLC
217 installation.</p></li>
218 <li><p><code class="filename">/var/www/html/install-rpms</code>: This
219 directory is where you should install node package updates,
220 if any. By default, nodes are installed from the tarball
222 <code class="filename">/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</code>,
223 which is pre-built from the latest PlanetLab Central
224 sources, and installed as part of your MyPLC
225 installation. However, nodes will attempt to install any
226 newer RPMs located in
227 <code class="filename">/var/www/html/install-rpms/planetlab</code>,
228 after initial installation and periodically thereafter. You
229 must run <span><strong class="command">yum-arch</strong></span> and
230 <span><strong class="command">createrepo</strong></span> to update the
231 <span><strong class="command">yum</strong></span> caches in this directory after
232 installing a new RPM. PlanetLab Central cannot support any
233 changes to this directory.</p></li>
234 <li><p><code class="filename">/var/www/html/xml</code>: This
235 directory contains various XML files that the Slice Creation
236 Service uses to determine the state of slices. These XML
237 files are refreshed periodically by <span><strong class="command">cron</strong></span>
238 jobs running in the MyPLC root.</p></li>
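<p>For instance, publishing a node package update might look like this; the package name is hypothetical, and the path follows the layout described above:</p>

```shell
# Sketch: install a node package update and refresh the yum metadata.
# Run these inside the MyPLC chroot (e.g. via "chroot /plc/root") if
# yum-arch and createrepo are not installed on the host system.
cp mypackage-1.0-1.i386.rpm /plc/data/var/www/html/install-rpms/planetlab/
yum-arch /plc/data/var/www/html/install-rpms/planetlab
createrepo /plc/data/var/www/html/install-rpms/planetlab
```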
242 <p><a name="MyplcInitScripts"></a><code class="filename">/etc/init.d/plc</code>: This file
243 is a System V init script installed on your host filesystem,
244 that allows you to start up and shut down MyPLC with a single
245 command. On a Red Hat or Fedora host system, it is customary to
use the <span><strong class="command">service</strong></span> command to invoke System V
init scripts:</p>
248 <div class="example">
249 <a name="StartingAndStoppingMyPLC"></a><p class="title"><b>Example 2. Starting and stopping MyPLC.</b></p>
<pre class="programlisting"># Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop</pre>
256 <p>Like all other registered System V init services, MyPLC is
257 started and shut down automatically when your host system boots
258 and powers off. You may disable automatic startup by invoking
the <span><strong class="command">chkconfig</strong></span> command on a Red Hat or Fedora
host system:</p>
261 <div class="example">
262 <a name="id2659778"></a><p class="title"><b>Example 3. Disabling automatic startup of MyPLC.</b></p>
<pre class="programlisting"># Disable automatic startup
chkconfig plc off

# Enable automatic startup
chkconfig plc on</pre>
270 <li><p><code class="filename">/etc/sysconfig/plc</code>: This
271 file is a shell script fragment that defines the variables
272 <code class="envar">PLC_ROOT</code> and <code class="envar">PLC_DATA</code>. By default,
273 the values of these variables are <code class="filename">/plc/root</code>
274 and <code class="filename">/plc/data</code>, respectively. If you wish,
275 you may move your MyPLC installation to another location on your
276 host filesystem and edit the values of these variables
277 appropriately, but you will break the RPM upgrade
process. PlanetLab Central cannot support any changes to this
file.</p></li>
280 <li><p><code class="filename">/etc/planetlab</code>: This
281 symlink to <code class="filename">/plc/data/etc/planetlab</code> is
282 installed on the host system for convenience.</p></li>
285 <div class="section" lang="en">
286 <div class="titlepage"><div><div><h2 class="title" style="clear: both">
287 <a name="id2659848"></a>4. Quickstart</h2></div></div></div>
288 <p>Once installed, start MyPLC (see <a href="#StartingAndStoppingMyPLC" title="Example 2. Starting and stopping MyPLC.">Example 2, “Starting and stopping MyPLC.”</a>). MyPLC must be started as
289 root. Observe the output of this command for any failures. If no
290 failures occur, you should see output similar to the
292 <div class="example">
293 <a name="id2659969"></a><p class="title"><b>Example 4. A successful MyPLC startup.</b></p>
294 <pre class="programlisting">Mounting PLC: [ OK ]
295 PLC: Generating network files: [ OK ]
296 PLC: Starting system logger: [ OK ]
297 PLC: Starting database server: [ OK ]
298 PLC: Generating SSL certificates: [ OK ]
299 PLC: Configuring the API: [ OK ]
300 PLC: Updating GPG keys: [ OK ]
301 PLC: Generating SSH keys: [ OK ]
302 PLC: Starting web server: [ OK ]
303 PLC: Bootstrapping the database: [ OK ]
304 PLC: Starting DNS server: [ OK ]
305 PLC: Starting crond: [ OK ]
306 PLC: Rebuilding Boot CD: [ OK ]
307 PLC: Rebuilding Boot Manager: [ OK ]
308 PLC: Signing node packages: [ OK ]
311 <p>If <code class="filename">/plc/root</code> is mounted successfully, a
312 complete log file of the startup process may be found at
313 <code class="filename">/plc/root/var/log/boot.log</code>. Possible reasons
314 for failure of each step include:</p>
315 <div class="itemizedlist"><ul type="disc">
316 <li><p><code class="literal">Mounting PLC</code>: If this step
317 fails, first ensure that you started MyPLC as root. Check
318 <code class="filename">/etc/sysconfig/plc</code> to ensure that
319 <code class="envar">PLC_ROOT</code> and <code class="envar">PLC_DATA</code> refer to the
320 right locations. You may also have too many existing loopback
321 mounts, or your kernel may not support loopback mounting, bind
322 mounting, or the ext3 filesystem. Try freeing at least one
323 loopback device, or re-compiling your kernel to support loopback
324 mounting, bind mounting, and the ext3 filesystem. If you see an
325 error similar to <code class="literal">Permission denied while trying to open
326 /plc/root.img</code>, then SELinux may be enabled. See <a href="#Requirements" title="2. Requirements ">Section 2, “ Requirements ”</a> above for details.</p></li>
327 <li><p><code class="literal">Starting database server</code>: If
328 this step fails, check
329 <code class="filename">/plc/root/var/log/pgsql</code> and
330 <code class="filename">/plc/root/var/log/boot.log</code>. The most common
331 reason for failure is that the default PostgreSQL port, TCP port
332 5432, is already in use. Check that you are not running a
333 PostgreSQL server on the host system.</p></li>
<li><p><code class="literal">Starting web server</code>: If this
step fails, check
336 <code class="filename">/plc/root/var/log/httpd/error_log</code> and
337 <code class="filename">/plc/root/var/log/boot.log</code> for obvious
338 errors. The most common reason for failure is that the default
339 web ports, TCP ports 80 and 443, are already in use. Check that
you are not running a web server on the host
system.</p></li>
342 <li><p><code class="literal">Bootstrapping the database</code>:
343 If this step fails, it is likely that the previous step
344 (<code class="literal">Starting web server</code>) also failed. Another
345 reason that it could fail is if <code class="envar">PLC_API_HOST</code> (see
346 <a href="#ChangingTheConfiguration" title="4.1. Changing the configuration">Section 4.1, “Changing the configuration”</a>) does not resolve to
347 the host on which the API server has been enabled. By default,
348 all services, including the API server, are enabled and run on
349 the same host, so check that <code class="envar">PLC_API_HOST</code> is
either <code class="filename">localhost</code> or resolves to a local IP
address.</p></li>
352 <li><p><code class="literal">Starting crond</code>: If this step
353 fails, it is likely that the previous steps (<code class="literal">Starting
354 web server</code> and <code class="literal">Bootstrapping the
355 database</code>) also failed. If not, check
356 <code class="filename">/plc/root/var/log/boot.log</code> for obvious
357 errors. This step starts the <span><strong class="command">cron</strong></span> service and
358 generates the initial set of XML files that the Slice Creation
359 Service uses to determine slice state.</p></li>
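<p>A quick way to check for the port conflicts mentioned above, using standard net-tools and grep flags:</p>

```shell
# Sketch: look for services already bound to the ports MyPLC needs
# (80 and 443 for the web server, 5432 for PostgreSQL).
netstat -tlnp | egrep ':(80|443|5432)\b'

# Inspect the startup log for details on a failed step.
tail -n 50 /plc/root/var/log/boot.log
```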
361 <p>If no failures occur, then MyPLC should be active with a
362 default configuration. Open a web browser on the host system and
363 visit <code class="literal">http://localhost/</code>, which should bring you
364 to the front page of your PLC installation. The password of the
365 default administrator account
366 <code class="literal">root@localhost.localdomain</code> (set by
367 <code class="envar">PLC_ROOT_USER</code>) is <code class="literal">root</code> (set by
368 <code class="envar">PLC_ROOT_PASSWORD</code>).</p>
369 <div class="section" lang="en">
370 <div class="titlepage"><div><div><h3 class="title">
371 <a name="ChangingTheConfiguration"></a>4.1. Changing the configuration</h3></div></div></div>
372 <p>After verifying that MyPLC is working correctly, shut it
373 down and begin changing some of the default variable
374 values. Shut down MyPLC with <span><strong class="command">service plc stop</strong></span>
375 (see <a href="#StartingAndStoppingMyPLC" title="Example 2. Starting and stopping MyPLC.">Example 2, “Starting and stopping MyPLC.”</a>). With a text
376 editor, open the file
377 <code class="filename">/etc/planetlab/plc_config.xml</code>. This file is
378 a self-documenting configuration file written in XML. Variables
379 are divided into categories. Variable identifiers must be
380 alphanumeric, plus underscore. A variable is referred to
381 canonically as the uppercase concatenation of its category
382 identifier, an underscore, and its variable identifier. Thus, a
383 variable with an <code class="literal">id</code> of
384 <code class="literal">slice_prefix</code> in the <code class="literal">plc</code>
385 category is referred to canonically as
386 <code class="envar">PLC_SLICE_PREFIX</code>.</p>
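<p>To make the convention concrete, here is a hypothetical fragment in the spirit of <code class="filename">plc_config.xml</code>; the element layout and the <code class="literal">pl</code> value are illustrative only, not the exact schema:</p>

```xml
<!-- Hypothetical fragment: a "slice_prefix" variable in the "plc"
     category, referred to canonically as PLC_SLICE_PREFIX. -->
<category id="plc">
  <variable id="slice_prefix">
    <value>pl</value>
  </variable>
</category>
```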
387 <p>The reason for this convention is that during MyPLC
388 startup, <code class="filename">plc_config.xml</code> is translated into
389 several different languages—shell, PHP, and
390 Python—so that scripts written in each of these languages
391 can refer to the same underlying configuration. Most MyPLC
392 scripts are written in shell, so the convention for shell
393 variables predominates.</p>
394 <p>The variables that you should change immediately are:</p>
395 <div class="itemizedlist"><ul type="disc">
396 <li><p><code class="envar">PLC_NAME</code>: Change this to the
397 name of your PLC installation.</p></li>
398 <li><p><code class="envar">PLC_ROOT_PASSWORD</code>: Change this
399 to a more secure password.</p></li>
400 <li><p><code class="envar">PLC_MAIL_SUPPORT_ADDRESS</code>:
401 Change this to the e-mail address at which you would like to
402 receive support requests.</p></li>
403 <li><p><code class="envar">PLC_DB_HOST</code>,
404 <code class="envar">PLC_DB_IP</code>, <code class="envar">PLC_API_HOST</code>,
405 <code class="envar">PLC_API_IP</code>, <code class="envar">PLC_WWW_HOST</code>,
406 <code class="envar">PLC_WWW_IP</code>, <code class="envar">PLC_BOOT_HOST</code>,
407 <code class="envar">PLC_BOOT_IP</code>: Change all of these to the
408 preferred FQDN and external IP address of your host
411 <p>After changing these variables, save the file, then
412 restart MyPLC with <span><strong class="command">service plc start</strong></span>. You
413 should notice that the password of the default administrator
414 account is no longer <code class="literal">root</code>, and that the
415 default site name includes the name of your PLC installation
416 instead of PlanetLab.</p>
418 <div class="section" lang="en">
419 <div class="titlepage"><div><div><h3 class="title">
420 <a name="id2711256"></a>4.2. Installing nodes</h3></div></div></div>
421 <p>Install your first node by clicking <code class="literal">Add
422 Node</code> under the <code class="literal">Nodes</code> tab. Fill in
423 all the appropriate details, then click
424 <code class="literal">Add</code>. Download the node's configuration file
425 by clicking <code class="literal">Download configuration file</code> on
426 the <span class="bold"><strong>Node Details</strong></span> page for the
427 node. Save it to a floppy disk or USB key as detailed in [<a href="#TechsGuide" title="[TechsGuide]">1</a>].</p>
428 <p>Follow the rest of the instructions in [<a href="#TechsGuide" title="[TechsGuide]">1</a>] for creating a Boot CD and installing
429 the node, except download the Boot CD image from the
430 <code class="filename">/download</code> directory of your PLC
431 installation, not from PlanetLab Central. The images located
432 here are customized for your installation. If you change the
433 hostname of your boot server (<code class="envar">PLC_BOOT_HOST</code>), or
434 if the SSL certificate of your boot server expires, MyPLC will
435 regenerate it and rebuild the Boot CD with the new
436 certificate. If this occurs, you must replace all Boot CDs
437 created before the certificate was regenerated.</p>
438 <p>The installation process for a node has significantly
439 improved since PlanetLab 3.3. It should now take only a few
440 seconds for a new node to become ready to create slices.</p>
442 <div class="section" lang="en">
443 <div class="titlepage"><div><div><h3 class="title">
444 <a name="id2711338"></a>4.3. Administering nodes</h3></div></div></div>
445 <p>You may administer nodes as <code class="literal">root</code> by
446 using the SSH key stored in
447 <code class="filename">/etc/planetlab/root_ssh_key.rsa</code>.</p>
448 <div class="example">
449 <a name="id2711361"></a><p class="title"><b>Example 5. Accessing nodes via SSH. Replace
450 <code class="literal">node</code> with the hostname of the node.</b></p>
451 <pre class="programlisting">ssh -i /etc/planetlab/root_ssh_key.rsa root@node</pre>
453 <p>Besides the standard Linux log files located in
454 <code class="filename">/var/log</code>, several other files can give you
455 clues about any problems with active processes:</p>
456 <div class="itemizedlist"><ul type="disc">
457 <li><p><code class="filename">/var/log/pl_nm</code>: The log
458 file for the Node Manager.</p></li>
459 <li><p><code class="filename">/vservers/pl_conf/var/log/pl_conf</code>:
460 The log file for the Slice Creation Service.</p></li>
461 <li><p><code class="filename">/var/log/propd</code>: The log
462 file for Proper, the service which allows certain slices to
perform certain privileged operations in the root
context.</p></li>
465 <li><p><code class="filename">/vservers/pl_netflow/var/log/netflow.log</code>:
The log file for PlanetFlow, the network traffic auditing
service.</p></li>
470 <div class="section" lang="en">
471 <div class="titlepage"><div><div><h3 class="title">
472 <a name="id2711439"></a>4.4. Creating a slice</h3></div></div></div>
473 <p>Create a slice by clicking <code class="literal">Create Slice</code>
474 under the <code class="literal">Slices</code> tab. Fill in all the
475 appropriate details, then click <code class="literal">Create</code>. Add
476 nodes to the slice by clicking <code class="literal">Manage Nodes</code>
on the <span class="bold"><strong>Slice Details</strong></span> page for
the slice.</p>
<p>A <span><strong class="command">cron</strong></span> job runs every five minutes and
updates
481 <code class="filename">/plc/data/var/www/html/xml/slices-0.5.xml</code>
482 with information about current slice state. The Slice Creation
483 Service running on every node polls this file every ten minutes
484 to determine if it needs to create or delete any slices. You may
485 accelerate this process manually if desired.</p>
486 <div class="example">
487 <a name="id2711502"></a><p class="title"><b>Example 6. Forcing slice creation on a node.</b></p>
488 <pre class="programlisting"># Update slices.xml immediately
489 service plc start crond
491 # Kick the Slice Creation Service on a particular node.
492 ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
493 vserver pl_conf exec service pl_conf restart</pre>
497 <div class="section" lang="en">
498 <div class="titlepage"><div><div><h2 class="title" style="clear: both">
499 <a name="DevelopmentEnvironment"></a>5. Rebuilding and customizing MyPLC</h2></div></div></div>
500 <p>The MyPLC package, though distributed as an RPM, is not a
traditional package that can easily be rebuilt from an SRPM. The
requisite build environment is quite extensive, and the PlanetLab
source code base assumes throughout that the build environment is
based on Fedora Core 4 and that access to a complete Fedora Core 4
mirror is available.</p>
506 <p>For this reason, it is recommended that you only rebuild
507 MyPLC (or any of its components) from within the MyPLC development
508 environment. The MyPLC development environment is similar to MyPLC
509 itself in that it is a portable filesystem contained within a
510 <span><strong class="command">chroot</strong></span> jail. The filesystem contains all the
511 necessary tools required to rebuild MyPLC, as well as a snapshot
of the PlanetLab source code base in the form of a local CVS
repository.</p>
514 <div class="section" lang="en">
515 <div class="titlepage"><div><div><h3 class="title">
516 <a name="id2711557"></a>5.1. Installation</h3></div></div></div>
517 <p>Install the MyPLC development environment similarly to how
518 you would install MyPLC. You may install both packages on the same
519 host system if you wish. As with MyPLC, the MyPLC development
520 environment should be treated as a monolithic software
521 application, and any files present in the
522 <span><strong class="command">chroot</strong></span> jail should not be modified directly, as
523 they are subject to upgrade.</p>
524 <div class="example">
525 <a name="id2711579"></a><p class="title"><b>Example 7. Installing the MyPLC development environment.</b></p>
526 <pre class="programlisting"># If your distribution supports RPM
527 rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu</pre>
535 <p>The MyPLC development environment installs the following
536 files and directories:</p>
537 <div class="itemizedlist"><ul type="disc">
538 <li><p><code class="filename">/plc/devel/root.img</code>: The
539 main root filesystem of the MyPLC development environment. This
540 file is an uncompressed ext3 filesystem that is loopback mounted
541 on <code class="filename">/plc/devel/root</code> when the MyPLC
542 development environment is initialized. This filesystem, even
543 when mounted, should be treated as an opaque binary that can and
544 will be replaced in its entirety by any upgrade of the MyPLC
545 development environment.</p></li>
<li><p><code class="filename">/plc/devel/root</code>: The mount
point for
548 <code class="filename">/plc/devel/root.img</code>.</p></li>
550 <p><code class="filename">/plc/devel/data</code>: The directory
551 where user data and generated files are stored. This directory
552 is bind mounted onto <code class="filename">/plc/devel/root/data</code>
553 so that it is accessible as <code class="filename">/data</code> from
554 within the <span><strong class="command">chroot</strong></span> jail. Files in this
555 directory are marked with
556 <span><strong class="command">%config(noreplace)</strong></span> in the RPM. Symlinks
557 ensure that the following directories (relative to
558 <code class="filename">/plc/devel/root</code>) are stored outside the
559 root filesystem image:</p>
560 <div class="itemizedlist"><ul type="circle">
561 <li><p><code class="filename">/etc/planetlab</code>: This
562 directory contains the configuration files that define your
563 MyPLC development environment.</p></li>
564 <li><p><code class="filename">/cvs</code>: A
565 snapshot of the PlanetLab source code is stored as a CVS
566 repository in this directory. Files in this directory will
567 <span class="bold"><strong>not</strong></span> be updated by an upgrade of
568 <code class="filename">myplc-devel</code>. See <a href="#UpdatingCVS" title="5.4. Updating CVS">Section 5.4, “Updating CVS”</a> for more information about updating
569 PlanetLab source code.</p></li>
570 <li><p><code class="filename">/build</code>:
571 Builds are stored in this directory. This directory is bind
572 mounted onto <code class="filename">/plc/devel/root/build</code> so that
573 it is accessible as <code class="filename">/build</code> from within the
574 <span><strong class="command">chroot</strong></span> jail. The build scripts in this
directory are themselves source controlled; see <a href="#BuildingMyPLC" title="5.3. Building MyPLC">Section 5.3, “Building MyPLC”</a> for more information about executing
builds.</p></li>
579 <li><p><code class="filename">/etc/init.d/plc-devel</code>: This file is
580 a System V init script installed on your host filesystem, that
581 allows you to start up and shut down the MyPLC development
582 environment with a single command.</p></li>
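<p>Assuming this script follows the same conventions as <code class="filename">/etc/init.d/plc</code>, the development environment can be managed the same way:</p>

```shell
# Sketch: start and stop the MyPLC development environment.
service plc-devel start
service plc-devel stop
```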
585 <div class="section" lang="en">
586 <div class="titlepage"><div><div><h3 class="title">
587 <a name="id2711788"></a>5.2. Fedora Core 4 mirror requirement</h3></div></div></div>
588 <p>The MyPLC development environment requires access to a
589 complete Fedora Core 4 i386 RPM repository, because several
590 different filesystems based upon Fedora Core 4 are constructed
591 during the process of building MyPLC. You may configure the
592 location of this repository via the
593 <code class="envar">PLC_DEVEL_FEDORA_URL</code> variable in
594 <code class="filename">/plc/devel/data/etc/planetlab/plc_config.xml</code>. The
595 value of the variable should be a URL that points to the top
596 level of a Fedora mirror that provides the
597 <code class="filename">base</code>, <code class="filename">updates</code>, and
598 <code class="filename">extras</code> repositories, e.g.,</p>
599 <div class="itemizedlist"><ul type="disc">
600 <li><p><code class="filename">file:///data/fedora</code></p></li>
601 <li><p><code class="filename">http://coblitz.planet-lab.org/pub/fedora</code></p></li>
602 <li><p><code class="filename">ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</code></p></li>
603 <li><p><code class="filename">ftp://mirror.stanford.edu/pub/mirrors/fedora</code></p></li>
<li><p><code class="filename">http://rpmfind.net/linux/fedora</code></p></li>
</ul></div>
606 <p>As implied by the list, the repository may be located on
607 the local filesystem, or it may be located on a remote FTP or
HTTP server. A URL beginning with <code class="filename">file://</code>
must point to a location that exists relative to the root of
the <span><strong class="command">chroot</strong></span> jail. For optimum performance and
611 reproducibility, specify
612 <code class="envar">PLC_DEVEL_FEDORA_URL=file:///data/fedora</code> and
613 download all Fedora Core 4 RPMS into
614 <code class="filename">/plc/devel/data/fedora</code> on the host system
615 after installing <code class="filename">myplc-devel</code>. Use a tool
616 such as <span><strong class="command">wget</strong></span> or <span><strong class="command">rsync</strong></span> to
617 download the RPMS from a public mirror:</p>
618 <div class="example">
619 <a name="id2711929"></a><p class="title"><b>Example 8. Setting up a local Fedora Core 4 repository.</b></p>
620 <pre class="programlisting">mkdir -p /plc/devel/data/fedora
621 cd /plc/devel/data/fedora
for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
    wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
done</pre>
</div>
627 <p>Change the repository URI and <span><strong class="command">--cut-dirs</strong></span>
628 level as needed to produce a hierarchy that resembles:</p>
629 <pre class="programlisting">/plc/devel/data/fedora/core/4/i386/os
630 /plc/devel/data/fedora/core/updates/4/i386
631 /plc/devel/data/fedora/extras/4/i386</pre>
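<p>Before pointing <code class="envar">PLC_DEVEL_FEDORA_URL</code> at a local mirror, it is worth verifying that all three repositories are present. The following is an illustrative sketch (not part of MyPLC itself); <code class="literal">MIRROR</code> defaults to the path suggested above:</p>

```shell
# Sanity-check that a local mirror provides the three repositories
# (base, updates, extras) that the MyPLC build expects.
MIRROR=${MIRROR:-/plc/devel/data/fedora}
for repo in core/4/i386/os core/updates/4/i386 extras/4/i386; do
    if [ -d "$MIRROR/$repo" ]; then
        echo "ok: $repo"
    else
        echo "MISSING: $repo"
    fi
done
```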
632 <p>A list of additional Fedora Core 4 mirrors is available at
<a href="http://fedora.redhat.com/Download/mirrors.html" target="_top">http://fedora.redhat.com/Download/mirrors.html</a>.</p>
</div>
635 <div class="section" lang="en">
636 <div class="titlepage"><div><div><h3 class="title">
637 <a name="BuildingMyPLC"></a>5.3. Building MyPLC</h3></div></div></div>
638 <p>All PlanetLab source code modules are built and installed
639 as RPMS. A set of build scripts, checked into the
640 <code class="filename">build/</code> directory of the PlanetLab CVS
repository, eases the task of rebuilding PlanetLab source
code.</p>
643 <p>To build MyPLC, or any PlanetLab source code module, from
644 within the MyPLC development environment, execute the following
645 commands as root:</p>
646 <div class="example">
647 <a name="id2712005"></a><p class="title"><b>Example 9. Building MyPLC.</b></p>
648 <pre class="programlisting"># Initialize MyPLC development environment
649 service plc-devel start
651 # Enter development environment
652 chroot /plc/devel/root su -
654 # Check out build scripts into a directory named after the current
# date. This is simply a convention; it need not be followed
656 # exactly. See build/build.sh for an example of a build script that
657 # names build directories after CVS tags.
658 DATE=$(date +%Y.%m.%d)
cvs -d /cvs checkout -d $DATE build

# Build everything
make -C $DATE</pre>
</div>
<p>If the build succeeds, a set of binary RPMS will be
installed in
<code class="filename">/plc/devel/data/build/$DATE/RPMS/</code> that you
may copy to the
<code class="filename">/var/www/html/install-rpms/planetlab</code>
directory of your MyPLC installation (see <a href="#Installation" title="3. Installation">Section 3, “Installation”</a>).</p>
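<p>Once built, the RPMS can be published by copying them into the repository served by your MyPLC web server. A minimal sketch, assuming the conventional host-side paths (the <code class="filename">/plc/data</code> prefix for the MyPLC root filesystem is an assumption; adjust to your installation):</p>

```shell
# Copy freshly built RPMS into the node installation repository.
# SRC and DST are overridable so the snippet can be tried safely.
SRC=${SRC:-/plc/devel/data/build/$(date +%Y.%m.%d)/RPMS}
DST=${DST:-/plc/data/var/www/html/install-rpms/planetlab}
mkdir -p "$DST" 2>/dev/null || true
find "$SRC" -name '*.rpm' -exec cp {} "$DST" \; 2>/dev/null || true
```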
672 <div class="section" lang="en">
673 <div class="titlepage"><div><div><h3 class="title">
674 <a name="UpdatingCVS"></a>5.4. Updating CVS</h3></div></div></div>
675 <p>A complete snapshot of the PlanetLab source code is included
676 with the MyPLC development environment as a CVS repository in
677 <code class="filename">/plc/devel/data/cvs</code>. This CVS repository may
678 be accessed like any other CVS repository. It may be accessed
679 using an interface such as <a href="http://www.freebsd.org/projects/cvsweb.html" target="_top">CVSweb</a>,
680 and file permissions may be altered to allow for fine-grained
681 access control. Although the files are included with the
682 <code class="filename">myplc-devel</code> RPM, they are <span class="bold"><strong>not</strong></span> subject to upgrade once installed. New
683 versions of the <code class="filename">myplc-devel</code> RPM will install
684 updated snapshot repositories in
685 <code class="filename">/plc/devel/data/cvs-%{version}-%{release}</code>,
686 where <code class="literal">%{version}-%{release}</code> is replaced with
the version and release number of the RPM.</p>
688 <p>Because the CVS repository is not automatically upgraded,
689 if you wish to keep your local repository synchronized with the
690 public PlanetLab repository, it is highly recommended that you
691 use CVS's support for <a href="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources" target="_top">vendor
692 branches</a> to track changes. Vendor branches ease the task
693 of merging upstream changes with your local modifications. To
694 import a new snapshot into your local repository (for example,
695 if you have just upgraded from
696 <code class="filename">myplc-devel-0.4-2</code> to
697 <code class="filename">myplc-devel-0.4-3</code> and you notice the new
698 repository in <code class="filename">/plc/devel/data/cvs-0.4-3</code>),
699 execute the following commands as root from within the MyPLC
700 development environment:</p>
701 <div class="example">
702 <a name="id2712156"></a><p class="title"><b>Example 10. Updating /data/cvs from /data/cvs-0.4-3.</b></p>
703 <p><span class="bold"><strong>Warning</strong></span>: This may cause
704 severe, irreversible changes to be made to your local
705 repository. Always tag your local repository before
707 <pre class="programlisting"># Initialize MyPLC development environment
708 service plc-devel start
710 # Enter development environment
711 chroot /plc/devel/root su -
# Tag current state
cvs -d /cvs rtag before-myplc-0_4-3-merge

# Export new snapshot
TMP=$(mktemp -d /data/export.XXXXXX)
pushd $TMP
cvs -d /data/cvs-0.4-3 export -r HEAD .
cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
popd
rm -rf $TMP</pre>
</div>
<p>If there are any merge conflicts, use the command suggested by
CVS to help resolve the merge. Explaining how to fix merge conflicts is
726 beyond the scope of this document; consult the CVS documentation
for more information on how to use CVS.</p>
</div>
</div>
730 <div class="appendix" lang="en">
731 <h2 class="title" style="clear: both">
732 <a name="id2712187"></a>A. Configuration variables (for <span class="emphasis"><em>myplc</em></span>)</h2>
733 <p>Listed below is the set of standard configuration variables
734 and their default values, defined in the template
735 <code class="filename">/etc/planetlab/default_config.xml</code>. Additional
736 variables and their defaults may be defined in site-specific XML
737 templates that should be placed in
738 <code class="filename">/etc/planetlab/configs/</code>.</p>
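<p>For illustration only, a site-specific template that overrides <code class="envar">PLC_NAME</code> might look roughly like the following. The element names follow the general category/variable shape of <code class="filename">default_config.xml</code>, but the authoritative schema is the one shipped with MyPLC; treat this as a sketch, not a verbatim excerpt:</p>

```xml
<!-- /etc/planetlab/configs/site.xml (hypothetical file name) -->
<configuration>
  <variables>
    <category id="plc">
      <variablelist>
        <!-- PLC_NAME = category "plc" + variable "name" -->
        <variable id="name">
          <value>My Private PLC</value>
        </variable>
      </variablelist>
    </category>
  </variables>
</configuration>
```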
739 <div class="variablelist"><dl>
740 <dt><span class="term">PLC_NAME</span></dt>
745 Default: PlanetLab Test</p>
746 <p>The name of this PLC installation. It is used in
747 the name of the default system site (e.g., PlanetLab Central)
748 and in the names of various administrative entities (e.g.,
749 PlanetLab Support).</p>
751 <dt><span class="term">PLC_SLICE_PREFIX</span></dt>
757 <p>The abbreviated name of this PLC
758 installation. It is used as the prefix for system slices
(e.g., pl_conf). Warning: Currently, this variable should
not be changed.</p>
762 <dt><span class="term">PLC_ROOT_USER</span></dt>
767 Default: root@localhost.localdomain</p>
768 <p>The name of the initial administrative
769 account. We recommend that this account be used only to create
770 additional accounts associated with real
771 administrators, then disabled.</p>
773 <dt><span class="term">PLC_ROOT_PASSWORD</span></dt>
779 <p>The password of the initial administrative
account. Also the password of the root account on the Boot
CD.</p>
783 <dt><span class="term">PLC_ROOT_SSH_KEY_PUB</span></dt>
788 Default: /etc/planetlab/root_ssh_key.pub</p>
789 <p>The SSH public key used to access the root
790 account on your nodes.</p>
792 <dt><span class="term">PLC_ROOT_SSH_KEY</span></dt>
797 Default: /etc/planetlab/root_ssh_key.rsa</p>
798 <p>The SSH private key used to access the root
799 account on your nodes.</p>
801 <dt><span class="term">PLC_DEBUG_SSH_KEY_PUB</span></dt>
806 Default: /etc/planetlab/debug_ssh_key.pub</p>
807 <p>The SSH public key used to access the root
808 account on your nodes when they are in Debug mode.</p>
810 <dt><span class="term">PLC_DEBUG_SSH_KEY</span></dt>
815 Default: /etc/planetlab/debug_ssh_key.rsa</p>
816 <p>The SSH private key used to access the root
817 account on your nodes when they are in Debug mode.</p>
819 <dt><span class="term">PLC_ROOT_GPG_KEY_PUB</span></dt>
824 Default: /etc/planetlab/pubring.gpg</p>
825 <p>The GPG public keyring used to sign the Boot
826 Manager and all node packages.</p>
828 <dt><span class="term">PLC_ROOT_GPG_KEY</span></dt>
833 Default: /etc/planetlab/secring.gpg</p>
<p>The GPG private keyring used to sign the Boot
Manager and all node packages.</p>
837 <dt><span class="term">PLC_MA_SA_NAMESPACE</span></dt>
843 <p>The namespace of your MA/SA. This should be a
globally unique value assigned by PlanetLab
Central.</p>
847 <dt><span class="term">PLC_MA_SA_SSL_KEY</span></dt>
852 Default: /etc/planetlab/ma_sa_ssl.key</p>
853 <p>The SSL private key used for signing documents
with the signature of your MA/SA. If non-existent, one will
be generated.</p>
857 <dt><span class="term">PLC_MA_SA_SSL_CRT</span></dt>
862 Default: /etc/planetlab/ma_sa_ssl.crt</p>
863 <p>The corresponding SSL public certificate. By
864 default, this certificate is self-signed. You may replace
the certificate later with one signed by the PLC root
CA.</p>
868 <dt><span class="term">PLC_MA_SA_CA_SSL_CRT</span></dt>
873 Default: /etc/planetlab/ma_sa_ca_ssl.crt</p>
874 <p>If applicable, the certificate of the PLC root
875 CA. If your MA/SA certificate is self-signed, then this file
876 is the same as your MA/SA certificate.</p>
878 <dt><span class="term">PLC_MA_SA_CA_SSL_KEY_PUB</span></dt>
883 Default: /etc/planetlab/ma_sa_ca_ssl.pub</p>
884 <p>If applicable, the public key of the PLC root
885 CA. If your MA/SA certificate is self-signed, then this file
886 is the same as your MA/SA public key.</p>
888 <dt><span class="term">PLC_MA_SA_API_CRT</span></dt>
893 Default: /etc/planetlab/ma_sa_api.xml</p>
894 <p>The API Certificate is your MA/SA public key
895 embedded in a digitally signed XML document. By default,
896 this document is self-signed. You may replace this
certificate later with one signed by the PLC root
CA.</p>
900 <dt><span class="term">PLC_NET_DNS1</span></dt>
905 Default: 127.0.0.1</p>
906 <p>Primary DNS server address.</p>
908 <dt><span class="term">PLC_NET_DNS2</span></dt>
914 <p>Secondary DNS server address.</p>
916 <dt><span class="term">PLC_DNS_ENABLED</span></dt>
922 <p>Enable the internal DNS server. The server does
923 not provide reverse resolution and is not a production
924 quality or scalable DNS solution. Use the internal DNS
server only for small deployments or for
testing.</p>
928 <dt><span class="term">PLC_MAIL_ENABLED</span></dt>
<p>Set to false to suppress all e-mail notifications
and warnings.</p>
937 <dt><span class="term">PLC_MAIL_SUPPORT_ADDRESS</span></dt>
942 Default: root+support@localhost.localdomain</p>
943 <p>This address is used for support
944 requests. Support requests may include traffic complaints,
945 security incident reporting, web site malfunctions, and
946 general requests for information. We recommend that the
address be aliased to a ticketing system such as Request
Tracker.</p>
950 <dt><span class="term">PLC_MAIL_BOOT_ADDRESS</span></dt>
955 Default: root+install-msgs@localhost.localdomain</p>
956 <p>The API will notify this address when a problem
957 occurs during node installation or boot.</p>
959 <dt><span class="term">PLC_MAIL_SLICE_ADDRESS</span></dt>
964 Default: root+SLICE@localhost.localdomain</p>
965 <p>This address template is used for sending
966 e-mail notifications to slices. SLICE will be replaced with
967 the name of the slice.</p>
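<p>The substitution is a plain string replacement. For example (the slice name is hypothetical), the default template expands as follows:</p>

```shell
# Expand the PLC_MAIL_SLICE_ADDRESS template for a given slice.
template="root+SLICE@localhost.localdomain"  # default value above
slice="princeton_test1"                      # hypothetical slice name
echo "$template" | sed "s/SLICE/$slice/"
# -> root+princeton_test1@localhost.localdomain
```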
969 <dt><span class="term">PLC_DB_ENABLED</span></dt>
<p>Enable the database server on this
machine.</p>
978 <dt><span class="term">PLC_DB_TYPE</span></dt>
983 Default: postgresql</p>
984 <p>The type of database server. Currently, only
985 postgresql is supported.</p>
987 <dt><span class="term">PLC_DB_HOST</span></dt>
992 Default: localhost.localdomain</p>
<p>The fully qualified hostname of the database
server.</p>
996 <dt><span class="term">PLC_DB_IP</span></dt>
1001 Default: 127.0.0.1</p>
1002 <p>The IP address of the database server, if not
1003 resolvable by the configured DNS servers.</p>
1005 <dt><span class="term">PLC_DB_PORT</span></dt>
1011 <p>The TCP port number through which the database
1012 server should be accessed.</p>
1014 <dt><span class="term">PLC_DB_NAME</span></dt>
1019 Default: planetlab3</p>
1020 <p>The name of the database to access.</p>
1022 <dt><span class="term">PLC_DB_USER</span></dt>
1027 Default: pgsqluser</p>
<p>The username to use when accessing the
database.</p>
1031 <dt><span class="term">PLC_DB_PASSWORD</span></dt>
1037 <p>The password to use when accessing the
database. If left blank, one will be
generated.</p>
1041 <dt><span class="term">PLC_API_ENABLED</span></dt>
<p>Enable the API server on this
machine.</p>
1050 <dt><span class="term">PLC_API_DEBUG</span></dt>
1056 <p>Enable verbose API debugging. Do not enable on
1057 a production system!</p>
1059 <dt><span class="term">PLC_API_HOST</span></dt>
1064 Default: localhost.localdomain</p>
<p>The fully qualified hostname of the API
server.</p>
1068 <dt><span class="term">PLC_API_IP</span></dt>
1073 Default: 127.0.0.1</p>
1074 <p>The IP address of the API server, if not
1075 resolvable by the configured DNS servers.</p>
1077 <dt><span class="term">PLC_API_PORT</span></dt>
1083 <p>The TCP port number through which the API
1084 should be accessed. Warning: SSL (port 443) access is not
1085 fully supported by the website code yet. We recommend that
1086 port 80 be used for now and that the API server either run
1087 on the same machine as the web server, or that they both be
1088 on a secure wired network.</p>
1090 <dt><span class="term">PLC_API_PATH</span></dt>
1095 Default: /PLCAPI/</p>
1096 <p>The base path of the API URL.</p>
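<p>Together with <code class="envar">PLC_API_HOST</code> and <code class="envar">PLC_API_PORT</code>, this path determines the full API endpoint. A sketch of how the pieces combine, using the defaults and the port 80 recommendation above:</p>

```shell
# Compose the API endpoint URL from its configuration pieces.
PLC_API_HOST=localhost.localdomain
PLC_API_PORT=80        # per the recommendation above
PLC_API_PATH=/PLCAPI/
echo "http://${PLC_API_HOST}:${PLC_API_PORT}${PLC_API_PATH}"
# -> http://localhost.localdomain:80/PLCAPI/
```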
1098 <dt><span class="term">PLC_API_MAINTENANCE_USER</span></dt>
1103 Default: maint@localhost.localdomain</p>
1104 <p>The username of the maintenance account. This
1105 account is used by local scripts that perform automated
1106 tasks, and cannot be used for normal logins.</p>
1108 <dt><span class="term">PLC_API_MAINTENANCE_PASSWORD</span></dt>
1114 <p>The password of the maintenance account. If
1115 left blank, one will be generated. We recommend that the
1116 password be changed periodically.</p>
1118 <dt><span class="term">PLC_API_MAINTENANCE_SOURCES</span></dt>
1124 <p>A space-separated list of IP addresses allowed
1125 to access the API through the maintenance account. The value
1126 of this variable is set automatically to allow only the API,
web, and boot servers, and should not be
changed.</p>
1130 <dt><span class="term">PLC_API_SSL_KEY</span></dt>
1135 Default: /etc/planetlab/api_ssl.key</p>
1136 <p>The SSL private key to use for encrypting HTTPS
traffic. If non-existent, one will be
generated.</p>
1140 <dt><span class="term">PLC_API_SSL_CRT</span></dt>
1145 Default: /etc/planetlab/api_ssl.crt</p>
1146 <p>The corresponding SSL public certificate. By
1147 default, this certificate is self-signed. You may replace
the certificate later with one signed by a root
CA.</p>
1151 <dt><span class="term">PLC_API_CA_SSL_CRT</span></dt>
1156 Default: /etc/planetlab/api_ca_ssl.crt</p>
1157 <p>The certificate of the root CA, if any, that
1158 signed your server certificate. If your server certificate is
self-signed, then this file is the same as your server
certificate.</p>
1162 <dt><span class="term">PLC_WWW_ENABLED</span></dt>
<p>Enable the web server on this
machine.</p>
1171 <dt><span class="term">PLC_WWW_DEBUG</span></dt>
1177 <p>Enable debugging output on web pages. Do not
1178 enable on a production system!</p>
1180 <dt><span class="term">PLC_WWW_HOST</span></dt>
1185 Default: localhost.localdomain</p>
<p>The fully qualified hostname of the web
server.</p>
1189 <dt><span class="term">PLC_WWW_IP</span></dt>
1194 Default: 127.0.0.1</p>
1195 <p>The IP address of the web server, if not
1196 resolvable by the configured DNS servers.</p>
1198 <dt><span class="term">PLC_WWW_PORT</span></dt>
1204 <p>The TCP port number through which the
unprotected portions of the web site should be
accessed.</p>
1208 <dt><span class="term">PLC_WWW_SSL_PORT</span></dt>
1214 <p>The TCP port number through which the protected
1215 portions of the web site should be accessed.</p>
1217 <dt><span class="term">PLC_WWW_SSL_KEY</span></dt>
1222 Default: /etc/planetlab/www_ssl.key</p>
1223 <p>The SSL private key to use for encrypting HTTPS
traffic. If non-existent, one will be
generated.</p>
1227 <dt><span class="term">PLC_WWW_SSL_CRT</span></dt>
1232 Default: /etc/planetlab/www_ssl.crt</p>
1233 <p>The corresponding SSL public certificate for
1234 the HTTP server. By default, this certificate is
1235 self-signed. You may replace the certificate later with one
1236 signed by a root CA.</p>
1238 <dt><span class="term">PLC_WWW_CA_SSL_CRT</span></dt>
1243 Default: /etc/planetlab/www_ca_ssl.crt</p>
1244 <p>The certificate of the root CA, if any, that
1245 signed your server certificate. If your server certificate is
self-signed, then this file is the same as your server
certificate.</p>
1249 <dt><span class="term">PLC_BOOT_ENABLED</span></dt>
<p>Enable the boot server on this
machine.</p>
1258 <dt><span class="term">PLC_BOOT_HOST</span></dt>
1263 Default: localhost.localdomain</p>
<p>The fully qualified hostname of the boot
server.</p>
1267 <dt><span class="term">PLC_BOOT_IP</span></dt>
1272 Default: 127.0.0.1</p>
1273 <p>The IP address of the boot server, if not
1274 resolvable by the configured DNS servers.</p>
1276 <dt><span class="term">PLC_BOOT_PORT</span></dt>
1282 <p>The TCP port number through which the
unprotected portions of the boot server should be
accessed.</p>
1286 <dt><span class="term">PLC_BOOT_SSL_PORT</span></dt>
1292 <p>The TCP port number through which the protected
portions of the boot server should be
accessed.</p>
1296 <dt><span class="term">PLC_BOOT_SSL_KEY</span></dt>
1301 Default: /etc/planetlab/boot_ssl.key</p>
<p>The SSL private key to use for encrypting HTTPS
traffic. If non-existent, one will be
generated.</p>
1305 <dt><span class="term">PLC_BOOT_SSL_CRT</span></dt>
1310 Default: /etc/planetlab/boot_ssl.crt</p>
1311 <p>The corresponding SSL public certificate for
1312 the HTTP server. By default, this certificate is
1313 self-signed. You may replace the certificate later with one
1314 signed by a root CA.</p>
1316 <dt><span class="term">PLC_BOOT_CA_SSL_CRT</span></dt>
1321 Default: /etc/planetlab/boot_ca_ssl.crt</p>
1322 <p>The certificate of the root CA, if any, that
1323 signed your server certificate. If your server certificate is
self-signed, then this file is the same as your server
certificate.</p>
</dl></div>
</div>
1329 <div class="appendix" lang="en">
1330 <h2 class="title" style="clear: both">
1331 <a name="id2715081"></a>B. Development configuration variables (for <span class="emphasis"><em>myplc-devel</em></span>)</h2>
1332 <div class="variablelist"><dl>
1333 <dt><span class="term">PLC_DEVEL_FEDORA_RELEASE</span></dt>
1339 <p>Version number of Fedora Core upon which to
1340 base the build environment. Warning: Currently, only Fedora
1341 Core 4 is supported.</p>
1343 <dt><span class="term">PLC_DEVEL_FEDORA_ARCH</span></dt>
1349 <p>Base architecture of the build
environment. Warning: Currently, only i386 is
supported.</p>
1353 <dt><span class="term">PLC_DEVEL_FEDORA_URL</span></dt>
1358 Default: file:///usr/share/mirrors/fedora</p>
<p>Fedora Core mirror from which to install
filesystems.</p>
1362 <dt><span class="term">PLC_DEVEL_CVSROOT</span></dt>
1368 <p>CVSROOT to use when checking out code.</p>
1370 <dt><span class="term">PLC_DEVEL_BOOTSTRAP</span></dt>
1376 <p>Controls whether MyPLC should be built inside
of its own development environment.</p>
</dl></div>
</div>
1381 <div class="bibliography">
1382 <div class="titlepage"><div><div><h2 class="title">
1383 <a name="id2715252"></a>Bibliography</h2></div></div></div>
1384 <div class="biblioentry">
1385 <a name="TechsGuide"></a><p>[1] <span class="author"><span class="firstname">Mark</span> <span class="surname">Huang</span>. </span><span class="title"><i><a href="http://www.planet-lab.org/doc/TechsGuide.php" target="_top">PlanetLab
Technical Contact's Guide</a></i>. </span></p>
</div>
</div>
1389 </div><?php require('footer.php'); ?>