// DO NOT EDIT. This file was automatically generated from
// DocBook XML. See plc_www/doc/README.

$_title = "MyPLC User's Guide";

require_once('session.php');
require_once('header.php');
require_once('nav.php');
?><div class="article" lang="en">
<div class="titlepage">
<div><h1 class="title">
<a name="id2414672"></a>MyPLC User's Guide</h1></div>
<div><div class="author"><h3 class="author"><span class="firstname">Mark Huang</span></h3></div></div>
<div><div class="revhistory"><table border="1" width="100%" summary="Revision history">
<tr><th align="left" valign="top" colspan="3"><b>Revision History</b></th></tr>
<tr>
<td align="left">Revision 1.0</td>
<td align="left">April 7, 2006</td>
<td align="left">MLH</td>
</tr>
<tr><td align="left" colspan="3"><p>Initial draft.</p></td></tr>
<tr>
<td align="left">Revision 1.1</td>
<td align="left">July 19, 2006</td>
<td align="left">MLH</td>
</tr>
<tr><td align="left" colspan="3"><p>Add development environment.</p></td></tr>
</table></div></div>
<div><div class="abstract">
<p class="title"><b>Abstract</b></p>
<p>This document describes the design, installation, and
administration of MyPLC, a complete PlanetLab Central (PLC)
portable installation contained within a
<span><strong class="command">chroot</strong></span> jail. This document assumes advanced
knowledge of the PlanetLab architecture and Linux system
administration.</p>
</div></div>
<p><b>Table of Contents</b></p>
<dl>
<dt><span class="section"><a href="#id2484133">1. Overview</a></span></dt>
<dd><dl><dt><span class="section"><a href="#id2461682">1.1. Purpose of the <span class="emphasis"><em> myplc-devel
</em></span> package </a></span></dt></dl></dd>
<dt><span class="section"><a href="#Requirements">2. Requirements </a></span></dt>
<dt><span class="section"><a href="#Installation">3. Installation</a></span></dt>
<dt><span class="section"><a href="#id2462055">4. Quickstart</a></span></dt>
<dd><dl>
<dt><span class="section"><a href="#ChangingTheConfiguration">4.1. Changing the configuration</a></span></dt>
<dt><span class="section"><a href="#id2513463">4.2. Installing nodes</a></span></dt>
<dt><span class="section"><a href="#id2513546">4.3. Administering nodes</a></span></dt>
<dt><span class="section"><a href="#id2513647">4.4. Creating a slice</a></span></dt>
</dl></dd>
<dt><span class="section"><a href="#DevelopmentEnvironment">5. Rebuilding and customizing MyPLC</a></span></dt>
<dd><dl>
<dt><span class="section"><a href="#id2513765">5.1. Installation</a></span></dt>
<dt><span class="section"><a href="#id2513996">5.2. Fedora Core 4 mirror requirement</a></span></dt>
<dt><span class="section"><a href="#BuildingMyPLC">5.3. Building MyPLC</a></span></dt>
<dt><span class="section"><a href="#UpdatingCVS">5.4. Updating CVS</a></span></dt>
</dl></dd>
<dt><span class="appendix"><a href="#id2514395">A. Configuration variables (for <span class="emphasis"><em>myplc</em></span>)</a></span></dt>
<dt><span class="appendix"><a href="#id2517288">B. Development configuration variables (for <span class="emphasis"><em>myplc-devel</em></span>)</a></span></dt>
<dt><span class="bibliography"><a href="#id2517460">Bibliography</a></span></dt>
</dl>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="id2484133"></a>1. Overview</h2></div></div></div>
<p>MyPLC is a complete PlanetLab Central (PLC) portable
installation contained within a <span><strong class="command">chroot</strong></span>
jail. The default installation consists of a web server, an
XML-RPC API server, a boot server, and a database server: the core
components of PLC. The installation is customized through an
easy-to-use graphical interface. All PLC services are started up
and shut down through a single script installed on the host
system. The usually complex process of installing and
administering the PlanetLab backend is reduced by containing PLC
services within a virtual filesystem. By packaging it in such a
manner, MyPLC may also be run on any modern Linux distribution,
and could conceivably even run in a PlanetLab slice.</p>
<a name="Architecture"></a><p class="title"><b>Figure 1. MyPLC architecture</b></p>
<div class="mediaobject" align="center">
<img src="architecture.png" align="middle" width="270" alt="MyPLC architecture"><div class="caption"><p>MyPLC should be viewed as a single application that
provides multiple functions and can run on any host
system.</p></div></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id2461682"></a>1.1. Purpose of the <span class="emphasis"><em> myplc-devel
</em></span> package </h3></div></div></div>
<p> The <span class="emphasis"><em>myplc</em></span> package comes with all
required node software, rebuilt from the public PlanetLab CVS
repository. If for any reason you need to implement your own
customized version of this software, you can use the
<span class="emphasis"><em>myplc-devel</em></span> package instead to set up
your own development environment, including a local CVS
repository; you can then freely manage your changes and rebuild
your customized version of <span class="emphasis"><em>myplc</em></span>. We also
describe good practices that will later allow you to resynchronize
your local CVS repository with any further evolution of the
mainstream public PlanetLab software. </p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="Requirements"></a>2. Requirements </h2></div></div></div>
<p> <span class="emphasis"><em>myplc</em></span> and
<span class="emphasis"><em>myplc-devel</em></span> were designed as
<span><strong class="command">chroot</strong></span> jails so as to reduce the requirements on
your host operating system. So in theory, these distributions should
work on virtually any Linux 2.6 based distribution, whether it
supports rpm or not. </p>
<p> However, things are never that simple, and there are indeed
some known limitations; here are a few notes recommended
reading before you proceed with the installation.</p>
<p> As of August 2006, the following applies:</p>
<div class="itemizedlist"><ul type="disc">
<li><p> The software is largely based on <span class="emphasis"><em>Fedora
Core 4</em></span>. Please note that the build server at Princeton
runs <span class="emphasis"><em>Fedora Core 2</em></span>, together with an upgraded
<p> myplc and myplc-devel are known to work on both
<span class="emphasis"><em>Fedora Core 2</em></span> and <span class="emphasis"><em>Fedora Core
4</em></span>. Please note, however, that on fc4 at least it is
highly recommended to use the <span class="application">Security Level
Configuration</span> utility and to <span class="emphasis"><em>switch off
SELinux</em></span> on your box, because: </p>
<div class="itemizedlist"><ul type="circle">
<li><p> myplc requires you to run SELinux as 'Permissive' at most. </p></li>
<li><p> myplc-devel requires you to turn SELinux off. </p></li>
</ul></div>
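<p>As a hedged illustration of the advice above (assuming a
Fedora-style host; the exact commands may differ on your
distribution), you can inspect the SELinux mode before
installing:</p>
<pre class="programlisting"># Report the current SELinux mode; getenforce is absent on
# non-SELinux distributions, which we treat here as Disabled.
mode=$(getenforce 2>/dev/null || echo Disabled)
echo "SELinux mode: $mode"

# For myplc, 'Permissive' suffices:  setenforce 0
# For myplc-devel, set SELINUX=disabled in /etc/selinux/config
# and reboot to turn SELinux off entirely.</pre>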
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="Installation"></a>3. Installation</h2></div></div></div>
<p>Though internally composed of commodity software
subpackages, MyPLC should be treated as a monolithic software
application. MyPLC is distributed as a single RPM package that has
no external dependencies, allowing it to be installed on
practically any Linux 2.6 based distribution:</p>
<div class="example">
<a name="id2461069"></a><p class="title"><b>Example 1. Installing MyPLC.</b></p>
<pre class="programlisting"># If your distribution supports RPM
rpm -U http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm

# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc1/RPMS/i386/myplc-0.4-1.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-0.4-1.planetlab.i386.rpm | cpio -diu</pre>
</div>
<p>MyPLC installs the following files and directories:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">/plc/root.img</code>: The main
root filesystem of the MyPLC application. This file is an
uncompressed ext3 filesystem that is loopback mounted on
<code class="filename">/plc/root</code> when MyPLC starts. This
filesystem, even when mounted, should be treated as an opaque
binary that can and will be replaced in its entirety by any
upgrade of MyPLC.</p></li>
<li><p><code class="filename">/plc/root</code>: The mount point
for <code class="filename">/plc/root.img</code>. Once the root filesystem
is mounted, all MyPLC services run in a
<span><strong class="command">chroot</strong></span> jail based in this
directory.</p></li>
<p><code class="filename">/plc/data</code>: The directory where user
data and generated files are stored. This directory is bind
mounted onto <code class="filename">/plc/root/data</code> so that it is
accessible as <code class="filename">/data</code> from within the
<span><strong class="command">chroot</strong></span> jail. Files in this directory are
marked with <span><strong class="command">%config(noreplace)</strong></span> in the
RPM. That is, during an upgrade of MyPLC, if a file has not
changed since the last installation or upgrade of MyPLC, it is
subject to upgrade and replacement. If the file has changed,
the new version of the file will be created with a
<code class="filename">.rpmnew</code> extension. Symlinks within the
MyPLC root filesystem ensure that the following directories
(relative to <code class="filename">/plc/root</code>) are stored
outside the MyPLC filesystem image:</p>
<div class="itemizedlist"><ul type="circle">
<li><p><code class="filename">/etc/planetlab</code>: This
directory contains the configuration files, keys, and
certificates that define your MyPLC
installation.</p></li>
<li><p><code class="filename">/var/lib/pgsql</code>: This
directory contains PostgreSQL database
files.</p></li>
<li><p><code class="filename">/var/www/html/alpina-logs</code>: This
directory contains node installation logs.</p></li>
<li><p><code class="filename">/var/www/html/boot</code>: This
directory contains the Boot Manager, customized for your MyPLC
installation, and its data files.</p></li>
<li><p><code class="filename">/var/www/html/download</code>: This
directory contains Boot CD images, customized for your MyPLC
installation.</p></li>
<li><p><code class="filename">/var/www/html/install-rpms</code>: This
directory is where you should install node package updates,
if any. By default, nodes are installed from the tarball
<code class="filename">/var/www/html/boot/PlanetLab-Bootstrap.tar.bz2</code>,
which is pre-built from the latest PlanetLab Central
sources, and installed as part of your MyPLC
installation. However, nodes will attempt to install any
newer RPMs located in
<code class="filename">/var/www/html/install-rpms/planetlab</code>,
after initial installation and periodically thereafter. You
must run <span><strong class="command">yum-arch</strong></span> and
<span><strong class="command">createrepo</strong></span> to update the
<span><strong class="command">yum</strong></span> caches in this directory after
installing a new RPM. PlanetLab Central cannot support any
changes to this directory.</p></li>
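<p>For instance, after copying a new node RPM into this
directory, the yum metadata can be refreshed from the host as
follows (a sketch, run as root; <span><strong class="command">yum-arch</strong></span>
and <span><strong class="command">createrepo</strong></span> live inside the chroot
jail):</p>
<pre class="programlisting"># Rebuild the yum metadata for the node update directory
chroot /plc/root yum-arch /var/www/html/install-rpms/planetlab
chroot /plc/root createrepo /var/www/html/install-rpms/planetlab</pre>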
<li><p><code class="filename">/var/www/html/xml</code>: This
directory contains various XML files that the Slice Creation
Service uses to determine the state of slices. These XML
files are refreshed periodically by <span><strong class="command">cron</strong></span>
jobs running in the MyPLC root.</p></li>
</ul></div>
<p><code class="filename">/etc/init.d/plc</code>: This file
is a System V init script installed on your host filesystem
that allows you to start up and shut down MyPLC with a single
command. On a Red Hat or Fedora host system, it is customary to
use the <span><strong class="command">service</strong></span> command to invoke System V
init scripts:</p>
<div class="example">
<a name="StartingAndStoppingMyPLC"></a><p class="title"><b>Example 2. Starting and stopping MyPLC.</b></p>
<pre class="programlisting"># Starting MyPLC
service plc start

# Stopping MyPLC
service plc stop</pre>
</div>
<p>Like all other registered System V init services, MyPLC is
started and shut down automatically when your host system boots
and powers off. You may disable automatic startup by invoking
the <span><strong class="command">chkconfig</strong></span> command on a Red Hat or Fedora
host system:</p>
<div class="example">
<a name="id2461985"></a><p class="title"><b>Example 3. Disabling automatic startup of MyPLC.</b></p>
<pre class="programlisting"># Disable automatic startup
chkconfig plc off

# Enable automatic startup
chkconfig plc on</pre>
</div>
<li><p><code class="filename">/etc/sysconfig/plc</code>: This
file is a shell script fragment that defines the variables
<code class="envar">PLC_ROOT</code> and <code class="envar">PLC_DATA</code>. By default,
the values of these variables are <code class="filename">/plc/root</code>
and <code class="filename">/plc/data</code>, respectively. If you wish,
you may move your MyPLC installation to another location on your
host filesystem and edit the values of these variables
appropriately, but you will break the RPM upgrade
process. PlanetLab Central cannot support any changes to this
file.</p></li>
<li><p><code class="filename">/etc/planetlab</code>: This
symlink to <code class="filename">/plc/data/etc/planetlab</code> is
installed on the host system for convenience.</p></li>
</ul></div>
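<p>The defaults described above amount to the following sketch
of <code class="filename">/etc/sysconfig/plc</code> (illustrative, not a
verbatim copy of the shipped file):</p>
<pre class="programlisting"># Shell fragment sourced by /etc/init.d/plc
PLC_ROOT=/plc/root
PLC_DATA=/plc/data</pre>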
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="id2462055"></a>4. Quickstart</h2></div></div></div>
<p>Once installed, start MyPLC (see <a href="#StartingAndStoppingMyPLC" title="Example 2. Starting and stopping MyPLC.">Example 2, “Starting and stopping MyPLC.”</a>). MyPLC must be started as
root. Observe the output of this command for any failures. If no
failures occur, you should see output similar to the
following:</p>
<div class="example">
<a name="id2462176"></a><p class="title"><b>Example 4. A successful MyPLC startup.</b></p>
<pre class="programlisting">Mounting PLC: [ OK ]
PLC: Generating network files: [ OK ]
PLC: Starting system logger: [ OK ]
PLC: Starting database server: [ OK ]
PLC: Generating SSL certificates: [ OK ]
PLC: Configuring the API: [ OK ]
PLC: Updating GPG keys: [ OK ]
PLC: Generating SSH keys: [ OK ]
PLC: Starting web server: [ OK ]
PLC: Bootstrapping the database: [ OK ]
PLC: Starting DNS server: [ OK ]
PLC: Starting crond: [ OK ]
PLC: Rebuilding Boot CD: [ OK ]
PLC: Rebuilding Boot Manager: [ OK ]
PLC: Signing node packages: [ OK ]</pre>
</div>
<p>If <code class="filename">/plc/root</code> is mounted successfully, a
complete log file of the startup process may be found at
<code class="filename">/plc/root/var/log/boot.log</code>. Possible reasons
for failure of each step include:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="literal">Mounting PLC</code>: If this step
fails, first ensure that you started MyPLC as root. Check
<code class="filename">/etc/sysconfig/plc</code> to ensure that
<code class="envar">PLC_ROOT</code> and <code class="envar">PLC_DATA</code> refer to the
right locations. You may also have too many existing loopback
mounts, or your kernel may not support loopback mounting, bind
mounting, or the ext3 filesystem. Try freeing at least one
loopback device, or re-compiling your kernel to support loopback
mounting, bind mounting, and the ext3 filesystem. If you see an
error similar to <code class="literal">Permission denied while trying to open
/plc/root.img</code>, then SELinux may be enabled. See <a href="#Requirements" title="2. Requirements ">Section 2, “ Requirements ”</a> above for details.</p></li>
<li><p><code class="literal">Starting database server</code>: If
this step fails, check
<code class="filename">/plc/root/var/log/pgsql</code> and
<code class="filename">/plc/root/var/log/boot.log</code>. The most common
reason for failure is that the default PostgreSQL port, TCP port
5432, is already in use. Check that you are not running a
PostgreSQL server on the host system.</p></li>
<li><p><code class="literal">Starting web server</code>: If this
step fails, check
<code class="filename">/plc/root/var/log/httpd/error_log</code> and
<code class="filename">/plc/root/var/log/boot.log</code> for obvious
errors. The most common reason for failure is that the default
web ports, TCP ports 80 and 443, are already in use. Check that
you are not running a web server on the host
system.</p></li>
<li><p><code class="literal">Bootstrapping the database</code>:
If this step fails, it is likely that the previous step
(<code class="literal">Starting web server</code>) also failed. Another
reason that it could fail is if <code class="envar">PLC_API_HOST</code> (see
<a href="#ChangingTheConfiguration" title="4.1. Changing the configuration">Section 4.1, “Changing the configuration”</a>) does not resolve to
the host on which the API server has been enabled. By default,
all services, including the API server, are enabled and run on
the same host, so check that <code class="envar">PLC_API_HOST</code> is
either <code class="filename">localhost</code> or resolves to a local IP
address.</p></li>
<li><p><code class="literal">Starting crond</code>: If this step
fails, it is likely that the previous steps (<code class="literal">Starting
web server</code> and <code class="literal">Bootstrapping the
database</code>) also failed. If not, check
<code class="filename">/plc/root/var/log/boot.log</code> for obvious
errors. This step starts the <span><strong class="command">cron</strong></span> service and
generates the initial set of XML files that the Slice Creation
Service uses to determine slice state.</p></li>
</ul></div>
<p>If no failures occur, then MyPLC should be active with a
default configuration. Open a web browser on the host system and
visit <code class="literal">http://localhost/</code>, which should bring you
to the front page of your PLC installation. The password of the
default administrator account
<code class="literal">root@localhost.localdomain</code> (set by
<code class="envar">PLC_ROOT_USER</code>) is <code class="literal">root</code> (set by
<code class="envar">PLC_ROOT_PASSWORD</code>).</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="ChangingTheConfiguration"></a>4.1. Changing the configuration</h3></div></div></div>
<p>After verifying that MyPLC is working correctly, shut it
down and begin changing some of the default variable
values. Shut down MyPLC with <span><strong class="command">service plc stop</strong></span>
(see <a href="#StartingAndStoppingMyPLC" title="Example 2. Starting and stopping MyPLC.">Example 2, “Starting and stopping MyPLC.”</a>). With a text
editor, open the file
<code class="filename">/etc/planetlab/plc_config.xml</code>. This file is
a self-documenting configuration file written in XML. Variables
are divided into categories. Variable identifiers must be
alphanumeric, plus underscore. A variable is referred to
canonically as the uppercase concatenation of its category
identifier, an underscore, and its variable identifier. Thus, a
variable with an <code class="literal">id</code> of
<code class="literal">slice_prefix</code> in the <code class="literal">plc</code>
category is referred to canonically as
<code class="envar">PLC_SLICE_PREFIX</code>.</p>
<p>The reason for this convention is that during MyPLC
startup, <code class="filename">plc_config.xml</code> is translated into
several different languages&mdash;shell, PHP, and
Python&mdash;so that scripts written in each of these languages
can refer to the same underlying configuration. Most MyPLC
scripts are written in shell, so the convention for shell
variables predominates.</p>
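<p>The convention is easy to reproduce in shell, the language
most MyPLC scripts are written in. Using the
<code class="literal">slice_prefix</code> example from the text:</p>
<pre class="programlisting">category=plc
id=slice_prefix
echo "${category}_${id}" | tr '[:lower:]' '[:upper:]'   # prints PLC_SLICE_PREFIX</pre>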
<p>The variables that you should change immediately are:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="envar">PLC_NAME</code>: Change this to the
name of your PLC installation.</p></li>
<li><p><code class="envar">PLC_ROOT_PASSWORD</code>: Change this
to a more secure password.</p></li>
<li><p><code class="envar">PLC_MAIL_SUPPORT_ADDRESS</code>:
Change this to the e-mail address at which you would like to
receive support requests.</p></li>
<li><p><code class="envar">PLC_DB_HOST</code>,
<code class="envar">PLC_DB_IP</code>, <code class="envar">PLC_API_HOST</code>,
<code class="envar">PLC_API_IP</code>, <code class="envar">PLC_WWW_HOST</code>,
<code class="envar">PLC_WWW_IP</code>, <code class="envar">PLC_BOOT_HOST</code>,
<code class="envar">PLC_BOOT_IP</code>: Change all of these to the
preferred FQDN and external IP address of your host
system.</p></li>
</ul></div>
<p>After changing these variables, save the file, then
restart MyPLC with <span><strong class="command">service plc start</strong></span>. You
should notice that the password of the default administrator
account is no longer <code class="literal">root</code>, and that the
default site name includes the name of your PLC installation
instead of PlanetLab.</p>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id2513463"></a>4.2. Installing nodes</h3></div></div></div>
<p>Install your first node by clicking <code class="literal">Add
Node</code> under the <code class="literal">Nodes</code> tab. Fill in
all the appropriate details, then click
<code class="literal">Add</code>. Download the node's configuration file
by clicking <code class="literal">Download configuration file</code> on
the <span class="bold"><strong>Node Details</strong></span> page for the
node. Save it to a floppy disk or USB key as detailed in [<a href="#TechsGuide" title="[TechsGuide]">1</a>].</p>
<p>Follow the rest of the instructions in [<a href="#TechsGuide" title="[TechsGuide]">1</a>] for creating a Boot CD and installing
the node, except download the Boot CD image from the
<code class="filename">/download</code> directory of your PLC
installation, not from PlanetLab Central. The images located
here are customized for your installation. If you change the
hostname of your boot server (<code class="envar">PLC_BOOT_HOST</code>), or
if the SSL certificate of your boot server expires, MyPLC will
regenerate it and rebuild the Boot CD with the new
certificate. If this occurs, you must replace all Boot CDs
created before the certificate was regenerated.</p>
<p>The installation process for a node has significantly
improved since PlanetLab 3.3. It should now take only a few
seconds for a new node to become ready to create slices.</p>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id2513546"></a>4.3. Administering nodes</h3></div></div></div>
<p>You may administer nodes as <code class="literal">root</code> by
using the SSH key stored in
<code class="filename">/etc/planetlab/root_ssh_key.rsa</code>.</p>
<div class="example">
<a name="id2513569"></a><p class="title"><b>Example 5. Accessing nodes via SSH. Replace
<code class="literal">node</code> with the hostname of the node.</b></p>
<pre class="programlisting">ssh -i /etc/planetlab/root_ssh_key.rsa root@node</pre>
</div>
<p>Besides the standard Linux log files located in
<code class="filename">/var/log</code>, several other files can give you
clues about any problems with active processes:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">/var/log/pl_nm</code>: The log
file for the Node Manager.</p></li>
<li><p><code class="filename">/vservers/pl_conf/var/log/pl_conf</code>:
The log file for the Slice Creation Service.</p></li>
<li><p><code class="filename">/var/log/propd</code>: The log
file for Proper, the service which allows certain slices to
perform certain privileged operations in the root
context.</p></li>
<li><p><code class="filename">/vservers/pl_netflow/var/log/netflow.log</code>:
The log file for PlanetFlow, the network traffic auditing
service.</p></li>
</ul></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id2513647"></a>4.4. Creating a slice</h3></div></div></div>
<p>Create a slice by clicking <code class="literal">Create Slice</code>
under the <code class="literal">Slices</code> tab. Fill in all the
appropriate details, then click <code class="literal">Create</code>. Add
nodes to the slice by clicking <code class="literal">Manage Nodes</code>
on the <span class="bold"><strong>Slice Details</strong></span> page for
the slice.</p>
<p>A <span><strong class="command">cron</strong></span> job runs every five minutes and
updates
<code class="filename">/plc/data/var/www/html/xml/slices-0.5.xml</code>
with information about current slice state. The Slice Creation
Service running on every node polls this file every ten minutes
to determine if it needs to create or delete any slices. You may
accelerate this process manually if desired.</p>
<div class="example">
<a name="id2513710"></a><p class="title"><b>Example 6. Forcing slice creation on a node.</b></p>
<pre class="programlisting"># Update slices.xml immediately
service plc start crond

# Kick the Slice Creation Service on a particular node.
ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
vserver pl_conf exec service pl_conf restart</pre>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="DevelopmentEnvironment"></a>5. Rebuilding and customizing MyPLC</h2></div></div></div>
<p>The MyPLC package, though distributed as an RPM, is not a
traditional package that can be easily rebuilt from SRPM. The
requisite build environment is quite extensive, and numerous
assumptions are made throughout the PlanetLab source code base
that the build environment is based on Fedora Core 4 and that
access to a complete Fedora Core 4 mirror is available.</p>
<p>For this reason, it is recommended that you only rebuild
MyPLC (or any of its components) from within the MyPLC development
environment. The MyPLC development environment is similar to MyPLC
itself in that it is a portable filesystem contained within a
<span><strong class="command">chroot</strong></span> jail. The filesystem contains all the
necessary tools required to rebuild MyPLC, as well as a snapshot
of the PlanetLab source code base in the form of a local CVS
repository.</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
<a name="id2513765"></a>5.1. Installation</h3></div></div></div>
<p>Install the MyPLC development environment similarly to how
you would install MyPLC. You may install both packages on the same
host system if you wish. As with MyPLC, the MyPLC development
environment should be treated as a monolithic software
application, and any files present in the
<span><strong class="command">chroot</strong></span> jail should not be modified directly, as
they are subject to upgrade.</p>
<div class="example">
<a name="id2513786"></a><p class="title"><b>Example 7. Installing the MyPLC development environment.</b></p>
<pre class="programlisting"># If your distribution supports RPM
rpm -U http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm

# If your distribution does not support RPM
cd /tmp
wget http://build.planet-lab.org/build/myplc-0_4-rc2/RPMS/i386/myplc-devel-0.4-2.planetlab.i386.rpm
cd /
rpm2cpio /tmp/myplc-devel-0.4-2.planetlab.i386.rpm | cpio -diu</pre>
</div>
<p>The MyPLC development environment installs the following
files and directories:</p>
<div class="itemizedlist"><ul type="disc">
<li><p><code class="filename">/plc/devel/root.img</code>: The
main root filesystem of the MyPLC development environment. This
file is an uncompressed ext3 filesystem that is loopback mounted
on <code class="filename">/plc/devel/root</code> when the MyPLC
development environment is initialized. This filesystem, even
when mounted, should be treated as an opaque binary that can and
will be replaced in its entirety by any upgrade of the MyPLC
development environment.</p></li>
<li><p><code class="filename">/plc/devel/root</code>: The mount
point for
<code class="filename">/plc/devel/root.img</code>.</p></li>
<p><code class="filename">/plc/devel/data</code>: The directory
where user data and generated files are stored. This directory
is bind mounted onto <code class="filename">/plc/devel/root/data</code>
so that it is accessible as <code class="filename">/data</code> from
within the <span><strong class="command">chroot</strong></span> jail. Files in this
directory are marked with
<span><strong class="command">%config(noreplace)</strong></span> in the RPM. Symlinks
ensure that the following directories (relative to
<code class="filename">/plc/devel/root</code>) are stored outside the
root filesystem image:</p>
<div class="itemizedlist"><ul type="circle">
<li><p><code class="filename">/etc/planetlab</code>: This
directory contains the configuration files that define your
MyPLC development environment.</p></li>
<li><p><code class="filename">/cvs</code>: A
snapshot of the PlanetLab source code is stored as a CVS
repository in this directory. Files in this directory will
<span class="bold"><strong>not</strong></span> be updated by an upgrade of
<code class="filename">myplc-devel</code>. See <a href="#UpdatingCVS" title="5.4. Updating CVS">Section 5.4, “Updating CVS”</a> for more information about updating
PlanetLab source code.</p></li>
<li><p><code class="filename">/build</code>:
Builds are stored in this directory. This directory is bind
mounted onto <code class="filename">/plc/devel/root/build</code> so that
it is accessible as <code class="filename">/build</code> from within the
<span><strong class="command">chroot</strong></span> jail. The build scripts in this
directory are themselves source controlled; see <a href="#BuildingMyPLC" title="5.3. Building MyPLC">Section 5.3, “Building MyPLC”</a> for more information about executing
builds.</p></li>
</ul></div>
<li><p><code class="filename">/etc/init.d/plc-devel</code>: This file is
a System V init script installed on your host filesystem that
allows you to start up and shut down the MyPLC development
environment with a single command.</p></li>
</ul></div>
581 <div class="section" lang="en">
582 <div class="titlepage"><div><div><h3 class="title">
583 <a name="id2513996"></a>5.2. Fedora Core 4 mirror requirement</h3></div></div></div>
584 <p>The MyPLC development environment requires access to a
585 complete Fedora Core 4 i386 RPM repository, because several
586 different filesystems based upon Fedora Core 4 are constructed
587 during the process of building MyPLC. You may configure the
588 location of this repository via the
589 <code class="envar">PLC_DEVEL_FEDORA_URL</code> variable in
590 <code class="filename">/plc/devel/data/etc/planetlab/plc_config.xml</code>. The
591 value of the variable should be a URL that points to the top
592 level of a Fedora mirror that provides the
593 <code class="filename">base</code>, <code class="filename">updates</code>, and
594 <code class="filename">extras</code> repositories, e.g.,</p>
595 <div class="itemizedlist"><ul type="disc">
596 <li><p><code class="filename">file:///data/fedora</code></p></li>
597 <li><p><code class="filename">http://coblitz.planet-lab.org/pub/fedora</code></p></li>
598 <li><p><code class="filename">ftp://mirror.cs.princeton.edu/pub/mirrors/fedora</code></p></li>
599 <li><p><code class="filename">ftp://mirror.stanford.edu/pub/mirrors/fedora</code></p></li>
600 <li><p><code class="filename">http://rpmfind.net/linux/fedora</code></p></li>
601 </ul></div>
602 <p>As the list implies, the repository may be located on
603 the local filesystem or on a remote FTP or
604 HTTP server. Files referenced by <code class="filename">file://</code>
605 URLs must exist at the specified location relative to the root of
606 the <span><strong class="command">chroot</strong></span> jail. For optimum performance and
607 reproducibility, specify
608 <code class="envar">PLC_DEVEL_FEDORA_URL=file:///data/fedora</code> and
609 download all Fedora Core 4 RPMS into
610 <code class="filename">/plc/devel/data/fedora</code> on the host system
611 after installing <code class="filename">myplc-devel</code>. Use a tool
612 such as <span><strong class="command">wget</strong></span> or <span><strong class="command">rsync</strong></span> to
613 download the RPMS from a public mirror:</p>
614 <div class="example">
615 <a name="id2514137"></a><p class="title"><b>Example 8. Setting up a local Fedora Core 4 repository.</b></p>
616 <pre class="programlisting">mkdir -p /plc/devel/data/fedora
617 cd /plc/devel/data/fedora
619 for repo in core/4/i386/os core/updates/4/i386 extras/4/i386 ; do
620 wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo
621 done</pre>
622 </div>
623 <p>Change the repository URI and <span><strong class="command">--cut-dirs</strong></span>
624 level as needed to produce a hierarchy that resembles:</p>
625 <pre class="programlisting">/plc/devel/data/fedora/core/4/i386/os
626 /plc/devel/data/fedora/core/updates/4/i386
627 /plc/devel/data/fedora/extras/4/i386</pre>
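<p>The effect of <span><strong class="command">-nH --cut-dirs=3</strong></span> can be sanity-checked without downloading anything: <span><strong class="command">-nH</strong></span> drops the hostname and <span><strong class="command">--cut-dirs=3</strong></span> drops the first three directory components of the URL. A minimal shell sketch of the resulting layout, using the mirror URL from Example 8:</p>

```shell
# Compute where wget -m -nH --cut-dirs=3 places files for a given URL:
# -nH drops the hostname, --cut-dirs=3 drops the first three directory
# components (pub/fedora/linux in this case).
url=http://coblitz.planet-lab.org/pub/fedora/linux/core/4/i386/os
path=${url#*://*/}                       # pub/fedora/linux/core/4/i386/os
local_dir=$(echo "$path" | cut -d/ -f4-) # core/4/i386/os
echo "/plc/devel/data/fedora/$local_dir" # /plc/devel/data/fedora/core/4/i386/os
```

<p>If your mirror's URL has a different number of leading directory components, adjust the <span><strong class="command">--cut-dirs</strong></span> level accordingly.</p>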
628 <p>A list of additional Fedora Core 4 mirrors is available at
629 <a href="http://fedora.redhat.com/Download/mirrors.html" target="_top">http://fedora.redhat.com/Download/mirrors.html</a>.</p>
630 </div>
631 <div class="section" lang="en">
632 <div class="titlepage"><div><div><h3 class="title">
633 <a name="BuildingMyPLC"></a>5.3. Building MyPLC</h3></div></div></div>
634 <p>All PlanetLab source code modules are built and installed
635 as RPMS. A set of build scripts, checked into the
636 <code class="filename">build/</code> directory of the PlanetLab CVS
637 repository, eases the task of rebuilding PlanetLab source
638 code.</p>
639 <p>To build MyPLC, or any PlanetLab source code module, from
640 within the MyPLC development environment, execute the following
641 commands as root:</p>
642 <div class="example">
643 <a name="id2514212"></a><p class="title"><b>Example 9. Building MyPLC.</b></p>
644 <pre class="programlisting"># Initialize MyPLC development environment
645 service plc-devel start
647 # Enter development environment
648 chroot /plc/devel/root su -
650 # Check out build scripts into a directory named after the current
651 # date. This is simply a convention; it need not be followed
652 # exactly. See build/build.sh for an example of a build script that
653 # names build directories after CVS tags.
654 DATE=$(date +%Y.%m.%d)
656 cvs -d /cvs checkout -d $DATE build
657 
658 # Build the entire distribution
659 make -C $DATE</pre>
660 </div>
661 <p>If the build succeeds, a set of binary RPMS will be
662 produced in
663 <code class="filename">/plc/devel/data/build/$DATE/RPMS/</code> that you
664 may then copy to the
665 <code class="filename">/var/www/html/install-rpms/planetlab</code>
666 directory of your MyPLC installation (see <a href="#Installation" title="3. Installation">Section 3, “Installation”</a>).</p>
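<p>The copy itself can be scripted. The helper below is a sketch only (the function name is ours, not part of MyPLC); run it from within your MyPLC installation with the paths given above, adjusting them if you invoke it from the host:</p>

```shell
# publish_rpms SRC DST: copy the RPMS produced by a build into the
# directory served to nodes, creating the destination if needed.
# Illustrative helper; not part of MyPLC itself.
publish_rpms() {
    src=$1
    dst=$2
    mkdir -p "$dst" || return 1
    cp -a "$src"/. "$dst"/
}

# Example invocation (paths from this section; $DATE names the build
# directory created in Example 9):
# publish_rpms /plc/devel/data/build/$DATE/RPMS \
#              /var/www/html/install-rpms/planetlab
```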
668 <div class="section" lang="en">
669 <div class="titlepage"><div><div><h3 class="title">
670 <a name="UpdatingCVS"></a>5.4. Updating CVS</h3></div></div></div>
671 <p>A complete snapshot of the PlanetLab source code is included
672 with the MyPLC development environment as a CVS repository in
673 <code class="filename">/plc/devel/data/cvs</code>. This CVS repository may
674 be accessed like any other CVS repository: it may be browsed
675 through an interface such as <a href="http://www.freebsd.org/projects/cvsweb.html" target="_top">CVSweb</a>,
676 and file permissions may be altered to allow for fine-grained
677 access control. Although the files are included with the
678 <code class="filename">myplc-devel</code> RPM, they are <span class="bold"><strong>not</strong></span> subject to upgrade once installed. New
679 versions of the <code class="filename">myplc-devel</code> RPM will install
680 updated snapshot repositories in
681 <code class="filename">/plc/devel/data/cvs-%{version}-%{release}</code>,
682 where <code class="literal">%{version}-%{release}</code> is replaced with
683 the version number of the RPM.</p>
684 <p>Because the CVS repository is not automatically upgraded,
685 if you wish to keep your local repository synchronized with the
686 public PlanetLab repository, it is highly recommended that you
687 use CVS's support for <a href="http://ximbiot.com/cvs/wiki/index.php?title=CVS--Concurrent_Versions_System_v1.12.12.1:_Tracking_third-party_sources" target="_top">vendor
688 branches</a> to track changes. Vendor branches ease the task
689 of merging upstream changes with your local modifications. To
690 import a new snapshot into your local repository (for example,
691 if you have just upgraded from
692 <code class="filename">myplc-devel-0.4-2</code> to
693 <code class="filename">myplc-devel-0.4-3</code> and you notice the new
694 repository in <code class="filename">/plc/devel/data/cvs-0.4-3</code>),
695 execute the following commands as root from within the MyPLC
696 development environment:</p>
697 <div class="example">
698 <a name="id2514363"></a><p class="title"><b>Example 10. Updating /data/cvs from /data/cvs-0.4-3.</b></p>
699 <p><span class="bold"><strong>Warning</strong></span>: This may cause
700 severe, irreversible changes to be made to your local
701 repository. Always tag your local repository before
702 merging.</p>
703 <pre class="programlisting"># Initialize MyPLC development environment
704 service plc-devel start
706 # Enter development environment
707 chroot /plc/devel/root su -
708 
709 # Tag the current state of your repository before merging
710 cvs -d /cvs rtag before-myplc-0_4-3-merge
711 
712 # Export the new snapshot into a scratch directory
713 TMP=$(mktemp -d /data/export.XXXXXX)
714 cd $TMP
715 cvs -d /data/cvs-0.4-3 export -r HEAD .
716 cvs -d /cvs import -m "PlanetLab sources from myplc-0.4-3" -ko -I ! . planetlab myplc-0_4-3
717 </pre>
718 </div>
720 <p>If there are any merge conflicts, use the command suggested by
721 CVS to help resolve them. Explaining how to fix merge conflicts is
722 beyond the scope of this document; consult the CVS documentation
723 for more information on how to use CVS.</p>
724 </div>
725 </div>
726 <div class="appendix" lang="en">
727 <h2 class="title" style="clear: both">
728 <a name="id2514395"></a>A. Configuration variables (for <span class="emphasis"><em>myplc</em></span>)</h2>
729 <p>Listed below is the set of standard configuration variables
730 and their default values, defined in the template
731 <code class="filename">/etc/planetlab/default_config.xml</code>. Additional
732 variables and their defaults may be defined in site-specific XML
733 templates that should be placed in
734 <code class="filename">/etc/planetlab/configs/</code>.</p>
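<p>For example, a site-specific template that overrides <code class="literal">PLC_NAME</code> might look like the following sketch. The element layout imitates the <code class="filename">default_config.xml</code> template; verify the exact structure against your installed copy of that file before use:</p>

```xml
<!-- Hypothetical /etc/planetlab/configs/site.xml -->
<!-- Overrides PLC_NAME (category "plc", variable "name"); the element
     names below are assumptions modeled on default_config.xml. -->
<configuration>
  <variables>
    <category id="plc">
      <variablelist>
        <variable id="name">
          <value>My Private PlanetLab</value>
        </variable>
      </variablelist>
    </category>
  </variables>
</configuration>
```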
735 <div class="variablelist"><dl>
736 <dt><span class="term">PLC_NAME</span></dt>
741 Default: PlanetLab Test</p>
742 <p>The name of this PLC installation. It is used in
743 the name of the default system site (e.g., PlanetLab Central)
744 and in the names of various administrative entities (e.g.,
745 PlanetLab Support).</p>
747 <dt><span class="term">PLC_SLICE_PREFIX</span></dt>
753 <p>The abbreviated name of this PLC
754 installation. It is used as the prefix for system slices
755 (e.g., pl_conf). Warning: Currently, this variable should
756 not be changed.</p>
758 <dt><span class="term">PLC_ROOT_USER</span></dt>
763 Default: root@localhost.localdomain</p>
764 <p>The name of the initial administrative
765 account. We recommend that this account be used only to create
766 additional accounts associated with real
767 administrators, then disabled.</p>
769 <dt><span class="term">PLC_ROOT_PASSWORD</span></dt>
775 <p>The password of the initial administrative
776 account. Also the password of the root account on the Boot
777 CD.</p>
779 <dt><span class="term">PLC_ROOT_SSH_KEY_PUB</span></dt>
784 Default: /etc/planetlab/root_ssh_key.pub</p>
785 <p>The SSH public key used to access the root
786 account on your nodes.</p>
788 <dt><span class="term">PLC_ROOT_SSH_KEY</span></dt>
793 Default: /etc/planetlab/root_ssh_key.rsa</p>
794 <p>The SSH private key used to access the root
795 account on your nodes.</p>
797 <dt><span class="term">PLC_DEBUG_SSH_KEY_PUB</span></dt>
802 Default: /etc/planetlab/debug_ssh_key.pub</p>
803 <p>The SSH public key used to access the root
804 account on your nodes when they are in Debug mode.</p>
806 <dt><span class="term">PLC_DEBUG_SSH_KEY</span></dt>
811 Default: /etc/planetlab/debug_ssh_key.rsa</p>
812 <p>The SSH private key used to access the root
813 account on your nodes when they are in Debug mode.</p>
815 <dt><span class="term">PLC_ROOT_GPG_KEY_PUB</span></dt>
820 Default: /etc/planetlab/pubring.gpg</p>
821 <p>The GPG public keyring used to sign the Boot
822 Manager and all node packages.</p>
824 <dt><span class="term">PLC_ROOT_GPG_KEY</span></dt>
829 Default: /etc/planetlab/secring.gpg</p>
830 <p>The GPG private keyring used to sign the Boot
831 Manager and all node packages.</p>
833 <dt><span class="term">PLC_MA_SA_NAMESPACE</span></dt>
839 <p>The namespace of your MA/SA. This should be a
840 globally unique value assigned by PlanetLab
841 Central.</p>
843 <dt><span class="term">PLC_MA_SA_SSL_KEY</span></dt>
848 Default: /etc/planetlab/ma_sa_ssl.key</p>
849 <p>The SSL private key used for signing documents
850 with the signature of your MA/SA. If non-existent, one will
851 be generated.</p>
853 <dt><span class="term">PLC_MA_SA_SSL_CRT</span></dt>
858 Default: /etc/planetlab/ma_sa_ssl.crt</p>
859 <p>The corresponding SSL public certificate. By
860 default, this certificate is self-signed. You may replace
861 the certificate later with one signed by the PLC root
862 CA.</p>
864 <dt><span class="term">PLC_MA_SA_CA_SSL_CRT</span></dt>
869 Default: /etc/planetlab/ma_sa_ca_ssl.crt</p>
870 <p>If applicable, the certificate of the PLC root
871 CA. If your MA/SA certificate is self-signed, then this file
872 is the same as your MA/SA certificate.</p>
874 <dt><span class="term">PLC_MA_SA_CA_SSL_KEY_PUB</span></dt>
879 Default: /etc/planetlab/ma_sa_ca_ssl.pub</p>
880 <p>If applicable, the public key of the PLC root
881 CA. If your MA/SA certificate is self-signed, then this file
882 is the same as your MA/SA public key.</p>
884 <dt><span class="term">PLC_MA_SA_API_CRT</span></dt>
889 Default: /etc/planetlab/ma_sa_api.xml</p>
890 <p>The API Certificate is your MA/SA public key
891 embedded in a digitally signed XML document. By default,
892 this document is self-signed. You may replace this
893 certificate later with one signed by the PLC root
894 CA.</p>
896 <dt><span class="term">PLC_NET_DNS1</span></dt>
901 Default: 127.0.0.1</p>
902 <p>Primary DNS server address.</p>
904 <dt><span class="term">PLC_NET_DNS2</span></dt>
910 <p>Secondary DNS server address.</p>
912 <dt><span class="term">PLC_DNS_ENABLED</span></dt>
918 <p>Enable the internal DNS server. The server does
919 not provide reverse resolution and is not a production
920 quality or scalable DNS solution. Use the internal DNS
921 server only for small deployments or for
922 testing.</p>
924 <dt><span class="term">PLC_MAIL_ENABLED</span></dt>
930 <p>Set to false to suppress all e-mail notifications
931 and warnings.</p>
933 <dt><span class="term">PLC_MAIL_SUPPORT_ADDRESS</span></dt>
938 Default: root+support@localhost.localdomain</p>
939 <p>This address is used for support
940 requests. Support requests may include traffic complaints,
941 security incident reporting, web site malfunctions, and
942 general requests for information. We recommend that the
943 address be aliased to a ticketing system such as Request
944 Tracker.</p>
946 <dt><span class="term">PLC_MAIL_BOOT_ADDRESS</span></dt>
951 Default: root+install-msgs@localhost.localdomain</p>
952 <p>The API will notify this address when a problem
953 occurs during node installation or boot.</p>
955 <dt><span class="term">PLC_MAIL_SLICE_ADDRESS</span></dt>
960 Default: root+SLICE@localhost.localdomain</p>
961 <p>This address template is used for sending
962 e-mail notifications to slices. SLICE will be replaced with
963 the name of the slice.</p>
965 <dt><span class="term">PLC_DB_ENABLED</span></dt>
971 <p>Enable the database server on this
972 machine.</p>
974 <dt><span class="term">PLC_DB_TYPE</span></dt>
979 Default: postgresql</p>
980 <p>The type of database server. Currently, only
981 postgresql is supported.</p>
983 <dt><span class="term">PLC_DB_HOST</span></dt>
988 Default: localhost.localdomain</p>
989 <p>The fully qualified hostname of the database
990 server.</p>
992 <dt><span class="term">PLC_DB_IP</span></dt>
997 Default: 127.0.0.1</p>
998 <p>The IP address of the database server, if not
999 resolvable by the configured DNS servers.</p>
1001 <dt><span class="term">PLC_DB_PORT</span></dt>
1007 <p>The TCP port number through which the database
1008 server should be accessed.</p>
1010 <dt><span class="term">PLC_DB_NAME</span></dt>
1015 Default: planetlab3</p>
1016 <p>The name of the database to access.</p>
1018 <dt><span class="term">PLC_DB_USER</span></dt>
1023 Default: pgsqluser</p>
1024 <p>The username to use when accessing the
1025 database.</p>
1027 <dt><span class="term">PLC_DB_PASSWORD</span></dt>
1033 <p>The password to use when accessing the
1034 database. If left blank, one will be
1035 generated.</p>
1037 <dt><span class="term">PLC_API_ENABLED</span></dt>
1043 <p>Enable the API server on this
1044 machine.</p>
1046 <dt><span class="term">PLC_API_DEBUG</span></dt>
1052 <p>Enable verbose API debugging. Do not enable on
1053 a production system!</p>
1055 <dt><span class="term">PLC_API_HOST</span></dt>
1060 Default: localhost.localdomain</p>
1061 <p>The fully qualified hostname of the API
1062 server.</p>
1064 <dt><span class="term">PLC_API_IP</span></dt>
1069 Default: 127.0.0.1</p>
1070 <p>The IP address of the API server, if not
1071 resolvable by the configured DNS servers.</p>
1073 <dt><span class="term">PLC_API_PORT</span></dt>
1079 <p>The TCP port number through which the API
1080 should be accessed. Warning: SSL (port 443) access is not
1081 fully supported by the website code yet. We recommend that
1082 port 80 be used for now and that the API server either run
1083 on the same machine as the web server, or that they both be
1084 on a secure wired network.</p>
1086 <dt><span class="term">PLC_API_PATH</span></dt>
1091 Default: /PLCAPI/</p>
1092 <p>The base path of the API URL.</p>
1094 <dt><span class="term">PLC_API_MAINTENANCE_USER</span></dt>
1099 Default: maint@localhost.localdomain</p>
1100 <p>The username of the maintenance account. This
1101 account is used by local scripts that perform automated
1102 tasks, and cannot be used for normal logins.</p>
1104 <dt><span class="term">PLC_API_MAINTENANCE_PASSWORD</span></dt>
1110 <p>The password of the maintenance account. If
1111 left blank, one will be generated. We recommend that the
1112 password be changed periodically.</p>
1114 <dt><span class="term">PLC_API_MAINTENANCE_SOURCES</span></dt>
1120 <p>A space-separated list of IP addresses allowed
1121 to access the API through the maintenance account. The value
1122 of this variable is set automatically to allow only the API,
1123 web, and boot servers, and should not be
1124 changed.</p>
1126 <dt><span class="term">PLC_API_SSL_KEY</span></dt>
1131 Default: /etc/planetlab/api_ssl.key</p>
1132 <p>The SSL private key to use for encrypting HTTPS
1133 traffic. If non-existent, one will be
1134 generated.</p>
1136 <dt><span class="term">PLC_API_SSL_CRT</span></dt>
1141 Default: /etc/planetlab/api_ssl.crt</p>
1142 <p>The corresponding SSL public certificate. By
1143 default, this certificate is self-signed. You may replace
1144 the certificate later with one signed by a root
1145 CA.</p>
1147 <dt><span class="term">PLC_API_CA_SSL_CRT</span></dt>
1152 Default: /etc/planetlab/api_ca_ssl.crt</p>
1153 <p>The certificate of the root CA, if any, that
1154 signed your server certificate. If your server certificate is
1155 self-signed, then this file is the same as your server
1156 certificate.</p>
1158 <dt><span class="term">PLC_WWW_ENABLED</span></dt>
1164 <p>Enable the web server on this
1165 machine.</p>
1167 <dt><span class="term">PLC_WWW_DEBUG</span></dt>
1173 <p>Enable debugging output on web pages. Do not
1174 enable on a production system!</p>
1176 <dt><span class="term">PLC_WWW_HOST</span></dt>
1181 Default: localhost.localdomain</p>
1182 <p>The fully qualified hostname of the web
1183 server.</p>
1185 <dt><span class="term">PLC_WWW_IP</span></dt>
1190 Default: 127.0.0.1</p>
1191 <p>The IP address of the web server, if not
1192 resolvable by the configured DNS servers.</p>
1194 <dt><span class="term">PLC_WWW_PORT</span></dt>
1200 <p>The TCP port number through which the
1201 unprotected portions of the web site should be
1202 accessed.</p>
1204 <dt><span class="term">PLC_WWW_SSL_PORT</span></dt>
1210 <p>The TCP port number through which the protected
1211 portions of the web site should be accessed.</p>
1213 <dt><span class="term">PLC_WWW_SSL_KEY</span></dt>
1218 Default: /etc/planetlab/www_ssl.key</p>
1219 <p>The SSL private key to use for encrypting HTTPS
1220 traffic. If non-existent, one will be
1221 generated.</p>
1223 <dt><span class="term">PLC_WWW_SSL_CRT</span></dt>
1228 Default: /etc/planetlab/www_ssl.crt</p>
1229 <p>The corresponding SSL public certificate for
1230 the HTTP server. By default, this certificate is
1231 self-signed. You may replace the certificate later with one
1232 signed by a root CA.</p>
1234 <dt><span class="term">PLC_WWW_CA_SSL_CRT</span></dt>
1239 Default: /etc/planetlab/www_ca_ssl.crt</p>
1240 <p>The certificate of the root CA, if any, that
1241 signed your server certificate. If your server certificate is
1242 self-signed, then this file is the same as your server
1243 certificate.</p>
1245 <dt><span class="term">PLC_BOOT_ENABLED</span></dt>
1251 <p>Enable the boot server on this
1252 machine.</p>
1254 <dt><span class="term">PLC_BOOT_HOST</span></dt>
1259 Default: localhost.localdomain</p>
1260 <p>The fully qualified hostname of the boot
1261 server.</p>
1263 <dt><span class="term">PLC_BOOT_IP</span></dt>
1268 Default: 127.0.0.1</p>
1269 <p>The IP address of the boot server, if not
1270 resolvable by the configured DNS servers.</p>
1272 <dt><span class="term">PLC_BOOT_PORT</span></dt>
1278 <p>The TCP port number through which the
1279 unprotected portions of the boot server should be
1280 accessed.</p>
1282 <dt><span class="term">PLC_BOOT_SSL_PORT</span></dt>
1288 <p>The TCP port number through which the protected
1289 portions of the boot server should be
1290 accessed.</p>
1292 <dt><span class="term">PLC_BOOT_SSL_KEY</span></dt>
1297 Default: /etc/planetlab/boot_ssl.key</p>
1298 <p>The SSL private key to use for encrypting HTTPS
1299 traffic.</p>
1301 <dt><span class="term">PLC_BOOT_SSL_CRT</span></dt>
1306 Default: /etc/planetlab/boot_ssl.crt</p>
1307 <p>The corresponding SSL public certificate for
1308 the HTTP server. By default, this certificate is
1309 self-signed. You may replace the certificate later with one
1310 signed by a root CA.</p>
1312 <dt><span class="term">PLC_BOOT_CA_SSL_CRT</span></dt>
1317 Default: /etc/planetlab/boot_ca_ssl.crt</p>
1318 <p>The certificate of the root CA, if any, that
1319 signed your server certificate. If your server certificate is
1320 self-signed, then this file is the same as your server
1321 certificate.</p>
1325 <div class="appendix" lang="en">
1326 <h2 class="title" style="clear: both">
1327 <a name="id2517288"></a>B. Development configuration variables (for <span class="emphasis"><em>myplc-devel</em></span>)</h2>
1328 <div class="variablelist"><dl>
1329 <dt><span class="term">PLC_DEVEL_FEDORA_RELEASE</span></dt>
1335 <p>Version number of Fedora Core upon which to
1336 base the build environment. Warning: Currently, only Fedora
1337 Core 4 is supported.</p>
1339 <dt><span class="term">PLC_DEVEL_FEDORA_ARCH</span></dt>
1345 <p>Base architecture of the build
1346 environment. Warning: Currently, only i386 is
1347 supported.</p>
1349 <dt><span class="term">PLC_DEVEL_FEDORA_URL</span></dt>
1354 Default: file:///usr/share/mirrors/fedora</p>
1355 <p>Fedora Core mirror from which to install
1356 filesystems.</p>
1358 <dt><span class="term">PLC_DEVEL_CVSROOT</span></dt>
1364 <p>CVSROOT to use when checking out code.</p>
1366 <dt><span class="term">PLC_DEVEL_BOOTSTRAP</span></dt>
1372 <p>Controls whether MyPLC should be built inside
1373 of its own development environment.</p>
1385 </div><?php require('footer.php'); ?>