From: Thierry Parmentelat Date: Wed, 7 May 2008 16:20:25 +0000 (+0000) Subject: * reviewed myplc doc and variables layout X-Git-Tag: MyPLC-4.2-10~2 X-Git-Url: http://git.onelab.eu/?p=myplc.git;a=commitdiff_plain;h=34d010f83bce84a4d33b77b1e6674502847f262c * reviewed myplc doc and variables layout * deprecated everything related to myplc-devel * svn-removed intermediate files --- diff --git a/doc/Makefile b/doc/Makefile index 2ab87e3..cfa763f 100644 --- a/doc/Makefile +++ b/doc/Makefile @@ -7,23 +7,14 @@ # $Id$ # -vpath GenDoc.xsl ../../plc_www/doc -vpath %_config.xml .. - -# dont redo php by default, this requires plc_www (see above) -# that we build separately (has no doc/ subdir anyway) -# note that the build host (myplc-devel) needs ghostscript -# that we added only on 20 july 2007 all: myplc.pdf myplc.html -static: pyplc.php - .PHONY: all # Dependencies -.myplc.xml.valid: architecture.eps architecture.png plc_variables.xml plc_devel_variables.xml +.myplc.xml.valid: architecture.eps architecture.png plc_variables.xml -%_variables.xml: variables.xsl %_config.xml +plc_variables.xml: variables.xsl ../default_config.xml xsltproc $(XSLFLAGS) --output $@ $^ # Validate the XML diff --git a/doc/myplc.pdf b/doc/myplc.pdf deleted file mode 100644 index 3314015..0000000 Binary files a/doc/myplc.pdf and /dev/null differ diff --git a/doc/myplc.xml b/doc/myplc.xml index 4522b6b..9ce2cd9 100644 --- a/doc/myplc.xml +++ b/doc/myplc.xml @@ -2,15 +2,15 @@ - ]>
MyPLC User's Guide - - Mark Huang - + + Mark Huang + Thierry Parmentelat + Princeton University @@ -19,8 +19,7 @@ This document describes the design, installation, and administration of MyPLC, a complete PlanetLab Central (PLC) - portable installation contained within a - chroot jail. This document assumes advanced + portable installation. This document assumes advanced knowledge of the PlanetLab architecture and Linux system administration. @@ -47,6 +46,19 @@ Present implementation details last. + + 1.3 + May 9, 2008 + TPT + + + Review for 4.2 : focus on new packaging myplc-native. + + + Removed deprecated myplc-devel. + + + @@ -54,17 +66,12 @@ Overview MyPLC is a complete PlanetLab Central (PLC) portable - installation contained within a chroot - jail. The default installation consists of a web server, an + installation. The default installation consists of a web server, an XML-RPC API server, a boot server, and a database server: the core components of PLC. The installation is customized through an easy-to-use graphical interface. All PLC services are started up and shut down through a single script installed on the host - system. The usually complex process of installing and - administering the PlanetLab backend is reduced by containing PLC - services within a virtual filesystem. By packaging it in such a - manner, MyPLC may also be run on any modern Linux distribution, - and could conceivably even run in a PlanetLab slice. + system.
MyPLC architecture @@ -78,105 +85,152 @@ MyPLC architecture - - MyPLC should be viewed as a single application that - provides multiple functions and can run on any host - system. -
+ -
Purpose of the <emphasis> myplc-devel - </emphasis> package - The myplc package comes with all - required node software, rebuilt from the public PlanetLab CVS - repository. If for any reason you need to implement your own - customized version of this software, you can use the - myplc-devel package instead, for setting up - your own development environment, including a local CVS - repository; you can then freely manage your changes and rebuild - your customized version of myplc. We also - provide good practices, that will then allow you to resync your local - CVS repository with any further evolution on the mainstream public - PlanetLab software.
- +
Historical Notes

    This document focuses on the new packaging, named
    myplc-native, introduced in the 4.2 release
    of PlanetLab.

    The former chroot-based packaging, known simply as
    myplc, may still be present in this release,
    but its use is no longer recommended.
Requirements + The former, chroot-based, packaging known as simply + myplc might still be present in this release + but its usage is not recommended anymore. - myplc and - myplc-devel were designed as - chroot jails so as to reduce the requirements on - your host operating system. So in theory, these distributions should - work on virtually any Linux 2.6 based distribution, whether it - supports rpm or not. - - However, things are never that simple and there indeed are - some known limitations to this, so here are a couple notes as a - recommended reading before you proceed with the installation. - - As of 17 August 2006 (i.e myplc-0.5-2) : - - - The software is vastly based on Fedora - Core 4. Please note that the build server at Princeton - runs Fedora Core 2, togother with a upgraded - version of yum. - - - myplc and myplc-devel are known to work on both - Fedora Core 2 and Fedora Core - 4. Please note however that, on fc4 at least, it is - highly recommended to use the Security Level - Configuration utility and to switch off - SElinux on your box because : - - - - myplc requires you to run SElinux as 'Permissive' at most - - - myplc-devel requires you to turn SElinux Off. - - - + With 4.2, the general architecture of the build system has + drastically changed as well. Rather than providing a static chroot + image for building the software, that was formerly known as + myplc-devel, the current paradigm is to + create a fresh vserver and to rely on yum to install all needed + development tools. More details on how to set up such an + environment can be found at that + describes how to turn a CentOS5 box into a vserver-capable host + system. + + +
+ +
Requirements

    The recommended way to deploy MyPLC relies on
    vserver. Here again, please refer to for how to set
    up such an environment. As of PlanetLab 4.2, the recommended Linux
    distribution here is CentOS5, because there are publicly available
    resources that allow a smooth setup.

    As of PlanetLab 4.2, the current focus is on Fedora 8. This
    means that you should create a fresh Fedora 8 vserver in your
    vserver-capable CentOS box, and perform all subsequent installations
    from that place as described below. Although you might find builds
    for other Linux distributions, new users are advised to stick to
    this particular variant.

    It is also possible to perform these installations from a
    fresh Fedora 8 installation. However, a vserver-capable box
    provides much more flexibility and is thus recommended, in
    particular with respect to future upgrades of the system.

    In addition, there have been numerous reports that SELinux
    should be turned off for running myplc in former releases. This is
    part of the instructions provided for setting up vserver; please
    keep it in mind if you plan on running MyPLC on a dedicated Fedora
    8 box.

    Last, you need to check your firewall configuration(s), since
    it is of course required to open up the http
    and https ports, so as to accept connections
    from the managed nodes and from the users' desktops, and possibly
    ssh as well.
- Installating and using MyPLC + Installing and using MyPLC + +
+ Locating a build. + The following locations are entry points for locating the + build you plan on using. + + + is maintained by the PlanetLab team at Princeton + University. + + is + maintained by the OneLab team at INRIA. + + + There are currently two so-called PlanetLab + Distributions known as planetlab and + onelab. planet-lab.org generally builds + only the planetlab flavour, while both + flavours are generally available at one-lab.org. +
+ +
Note on IP addressing
    Once you've located the build you want to use, it is
    strongly advised to assign this vserver a unique IP address,
    rather than sharing the hosting box's address. To that end, a
    typical command for creating such a vserver would be:

    Creating the vserver

    In this example, we have chosen the planetlab flavour,
    based on rc2.1lab, built for i386 (this is what the final 32
    stands for).
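As a concrete sketch of such a creation step: every value below (guest name, hostname, address, and the util-vserver flags themselves) is an assumption to adapt to your site, and the script only composes and saves the command line rather than running it.

```shell
#!/bin/sh
# Illustrative only: compose a util-vserver build command for a MyPLC guest
# and save it as a one-line script. Guest name, hostname, IP and flags are
# all assumptions patterned after the planetlab-4.2-rc2.1lab-f8-32 build name.
NAME=myplc
IP=138.96.250.141/24          # dedicated address for the guest, not the host's
DIST=f8                       # Fedora 8, per the recommendation above

cat > build-vserver.sh <<EOF
vserver $NAME build -m yum --hostname $NAME.example.org --interface eth0:$IP -- -d $DIST
EOF
echo "wrote build-vserver.sh"
```

Running the generated script requires a vserver-capable host, as discussed in the Requirements section.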
+ +
Setting up yum

    Though internally composed of commodity software
    subpackages, MyPLC should be treated as a monolithic software
    application. MyPLC is distributed as a single RPM package that has
    no external dependencies, allowing it to be installed on
    practically any Linux 2.6 based distribution.

    If you do not use the convenience script mentioned above, you need to
    create an entry in your yum configuration:

    Setting up yum repository

    myplc.repo
[myplc]
name= MyPLC
baseurl=http://build.one-lab.org/4.2/planetlab-4.2-rc2.1lab-f8-32/RPMS
enabled=1
gpgcheck=0
^D
[myvserver] #
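The interactive cat session above can equally be scripted. This sketch writes the same stanza to the current directory for illustration; inside the vserver, the file belongs under /etc/yum.repos.d/.

```shell
#!/bin/sh
# Write the MyPLC yum repository stanza shown above, non-interactively.
# Target path is the current directory here; use /etc/yum.repos.d/myplc.repo
# inside the vserver.
cat > myplc.repo <<'EOF'
[myplc]
name=MyPLC
baseurl=http://build.one-lab.org/4.2/planetlab-4.2-rc2.1lab-f8-32/RPMS
enabled=1
gpgcheck=0
EOF
grep -q '^baseurl=' myplc.repo && echo "repo entry written"
```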
Installing MyPLC.

    If your distribution supports RPM:

    If your distribution does not support RPM:

    To actually install myplc at that stage, just run:

    Installing MyPLC

    The section below explains in detail the installation
    strategy and the miscellaneous files and directories involved.
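The install step itself can be sketched as follows; the exact package name is an assumption based on the myplc-native packaging named earlier in this guide.

```
[myvserver] # yum install myplc-native
```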
@@ -187,10 +241,10 @@ scripts. As the examples suggest, the service must be started as root: Starting MyPLC: - + Stopping MyPLC: - + In , we provide greater @@ -219,26 +273,15 @@ (see ). The preferred option for changing the configuration is to - use the plc-config-tty tool. This tool comes - with the root image, so you need to have it mounted first. The + use the plc-config-tty tool. The full set of applicable variables is described in , but using the u - guides you to the most useful ones. Note that if you - plan on federating with other PLCs, it is strongly - recommended that you change the PLC_NAME - and PLC_SLICE_PREFIX - settings. + linkend="VariablesRuntime"/>, but using the u + guides you to the most useful ones. Here is sample session: Using plc-config-tty for configuration: - # plc-config-tty -Config file /etc/planetlab/configs/site.xml located under a non-existing directory -Want to create /etc/planetlab/configs [y]/n ? y -Created directory /etc/planetlab/configs + # plc-config-tty Enter command (u for usual changes, w to save, ? for help) u == PLC_NAME : [PlanetLab Test] OneLab == PLC_SLICE_PREFIX : [pl] thone @@ -262,42 +305,79 @@ Enter command (u for usual changes, w to save, ? for help) r ==================== Starting plc ... Enter command (u for usual changes, w to save, ? for help) q - # exit -# -]]> +[myvserver] # ]]> - If you used this method for configuring, you can skip to - the . As an alternative to using - plc-config-tty, you may also use a text - editor, but this requires some understanding on how the + The variables that you should change immediately are: + + + + PLC_NAME: Change this to the name of your PLC installation. + + + PLC_SLICE_PREFIX: Pick some + reasonable, short value; this is especially crucial if you + plan on federating with other PLCs. + + + PLC_ROOT_PASSWORD: Change this to a more + secure password. + + + PLC_MAIL_SUPPORT_ADDRESS: + Change this to the e-mail address at which you would like to + receive support requests. 
PLC_DB_HOST, PLC_API_HOST,
    PLC_WWW_HOST, PLC_BOOT_HOST:
    Change all of these to the preferred FQDN address of your
    host system. The corresponding *_IP values can
    be safely ignored if the FQDN can be resolved through DNS.

    After changing these variables, make sure that you save
    (w) and restart your plc (r), as shown in the above example.
    You should notice that the password of the default administrator
    account is no longer root, and that the
    default site name includes the name of your PLC installation
    instead of PlanetLab. As a side effect of these changes, the ISO
    images for the boot CDs now have new names, so that you can
    freely remove the ones named after 'PlanetLab Test', which is
    the default value of PLC_NAME.

    If you used the above method for configuring, you
    can skip to . As an alternative
    to using plc-config-tty, you may also use a
    text editor, but this requires some understanding of how the
    configuration files are used within myplc.

    The default configuration is stored in a file named
    /etc/planetlab/default_config.xml, which is
    designed to remain intact. You may store your local
    changes in a file named
    /etc/planetlab/configs/site.xml, which gets
    loaded on top of the defaults. The resulting complete
    configuration is stored in the file
    /etc/planetlab/plc_config.xml, which is used
    as the reference. If you use this strategy, be sure to issue the
    following command to refresh this file:

    Refreshing <filename> plc_config.xml
    </filename> after a manual change in <filename>site.xml</filename>

    The default configuration file is a self-documenting
    file written in XML. Variables are divided into
    categories. Variable identifiers must be alphanumeric, plus
    underscore.
A variable is - referred to canonically as the uppercase concatenation of its - category identifier, an underscore, and its variable - identifier. Thus, a variable with an id of - slice_prefix in the plc - category is referred to canonically as - PLC_SLICE_PREFIX. + changes in a file named + /etc/planetlab/configs/site.xml, that gets + loaded on top of the defaults. The resulting complete + configuration is stored in the file + /etc/planetlab/plc_config.xml that is used + as the reference. If you use this strategy, be sure to issue the + following command to refresh this file: + + Refreshing <filename> plc_config.xml + </filename> after a manual change in<filename> + site.xml</filename> + + + + The defaults configuration file is a self-documenting + configuration file written in XML. Variables are divided into + categories. Variable identifiers must be alphanumeric, plus + underscore. A variable is referred to canonically as the + uppercase concatenation of its category identifier, an + underscore, and its variable identifier. Thus, a variable with + an id of slice_prefix in + the plc category is referred to canonically + as PLC_SLICE_PREFIX. The reason for this convention is that during MyPLC startup, plc_config.xml is translated into @@ -307,37 +387,6 @@ Enter command (u for usual changes, w to save, ? for help) q scripts are written in shell, so the convention for shell variables predominates. - The variables that you should change immediately are: - - - PLC_NAME: Change this to the - name of your PLC installation. - PLC_ROOT_PASSWORD: Change this - to a more secure password. - - PLC_MAIL_SUPPORT_ADDRESS: - Change this to the e-mail address at which you would like to - receive support requests. - - PLC_DB_HOST, - PLC_DB_IP, PLC_API_HOST, - PLC_API_IP, PLC_WWW_HOST, - PLC_WWW_IP, PLC_BOOT_HOST, - PLC_BOOT_IP: Change all of these to the - preferred FQDN and external IP address of your host - system. 
- - - After changing these variables, - save the file, then restart MyPLC with service plc - start. You should notice that the password of the - default administrator account is no longer - root, and that the default site name includes - the name of your PLC installation instead of PlanetLab. As a - side effect of these changes, the ISO images for the boot CDs - now have new names, so that you can freely remove the ones names - after 'PlanetLab Test', which is the default value of - PLC_NAME
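To illustrate the configuration layout discussed in this section, here is a small sketch. The site.xml structure below is an assumption modeled on the description of default_config.xml, while the canonical-name rule (uppercase category, underscore, uppercase id) comes straight from the text.

```shell
#!/bin/sh
# Sketch of a local override file (assumed schema) and of the canonical
# variable-name rule: category "plc" + id "slice_prefix" -> PLC_SLICE_PREFIX.
cat > site.xml <<'EOF'
<configuration>
  <variables>
    <category id="plc">
      <variablelist>
        <variable id="slice_prefix">
          <value>thone</value>
        </variable>
      </variablelist>
    </category>
  </variables>
</configuration>
EOF

# Canonical shell name: uppercase(category), underscore, uppercase(id)
category=plc
id=slice_prefix
printf '%s_%s\n' "$category" "$id" | tr '[:lower:]' '[:upper:]' > canonical.txt
cat canonical.txt
```

With such a file installed under /etc/planetlab/configs/, the override ends up in plc_config.xml as PLC_SLICE_PREFIX.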
Login as a real user

    @@ -359,27 +408,17 @@ Enter command (u for usual changes, w to save, ? for help) q

    Install your first node by clicking Add
    Node under the Nodes tab. Fill in
    all the appropriate details, then click
    Add. Download the node's configuration file
    by clicking Download configuration file on
    the Node Details page for the
    node. Save it to a floppy disk or USB key as detailed in .

    Follow the rest of the instructions in for creating a Boot CD and installing
    the node, except download the Boot CD image from the
    /download directory of your PLC
    installation, not from PlanetLab Central. The images located
    here are customized for your installation. If you change the
    hostname of your boot server (PLC_BOOT_HOST), or
    if the SSL certificate of your boot server expires, MyPLC will
    regenerate it and rebuild the Boot CD with the new
    certificate. If this occurs, you must replace all Boot CDs
    created before the certificate was regenerated.

    The installation process for a node has significantly
    improved since PlanetLab 3.3. It should now take only a few
    seconds for a new node to become ready to create slices.

    Add. Download the node's boot material;
    please refer to for more details
    about this stage.

    Please keep in mind that this boot medium is customized
    for your particular instance, and contains details such as the
    host that you configured as PLC_BOOT_HOST, or
    the SSL certificate of your boot server, which might expire. So
    changes in your configuration may require you to replace all
    your boot CDs.
@@ -393,28 +432,22 @@ Enter command (u for usual changes, w to save, ? for help) q Accessing nodes via SSH. Replace <literal>node</literal> with the hostname of the node. - ssh -i /etc/planetlab/root_ssh_key.rsa root@node + [myvserver] # ssh -i /etc/planetlab/root_ssh_key.rsa root@node - Besides the standard Linux log files located in - /var/log, several other files can give you - clues about any problems with active processes: + From the node's root context, besides the standard Linux + log files located in /var/log, several + other files can give you clues about any problems with active + processes: - /var/log/pl_nm: The log + /var/log/nm: The log file for the Node Manager. - /vservers/pl_conf/var/log/pl_conf: - The log file for the Slice Creation Service. - - /var/log/propd: The log - file for Proper, the service which allows certain slices to - perform certain privileged operations in the root - context. + /vservers/slicename/var/log/nm: + The log file for the Node Manager operations that perform + within the slice's vserver. - /vservers/pl_netflow/var/log/netflow.log: - The log file for PlanetFlow, the network traffic auditing - service.
@@ -425,26 +458,20 @@ Enter command (u for usual changes, w to save, ? for help) q under the Slices tab. Fill in all the appropriate details, then click Create. Add nodes to the slice by clicking Manage Nodes - on the Slice Details page for + on the Slice Details page for the slice. - A cron job runs every five minutes and - updates the file - /plc/data/var/www/html/xml/slices-0.5.xml - with information about current slice state. The Slice Creation - Service running on every node polls this file every ten minutes - to determine if it needs to create or delete any slices. You may - accelerate this process manually if desired. + + Slice creation is performed by the NodeManager. In some + particular cases you may wish to restart it manually, here is + how to do this: + Forcing slice creation on a node. - + +
@@ -454,12 +481,13 @@ vserver pl_conf exec service pl_conf restart]]> During service startup described in , observe the output of this command for any failures. If no failures occur, you should see output similar - to the following: + to the following. Please note that as of this writing, with 4.2 + the system logger step might fail, this is harmless! A successful MyPLC startup. - - If /plc/root is mounted successfully, a - complete log file of the startup process may be found at - /plc/root/var/log/boot.log. Possible reasons + A complete log file of the startup process may be found at + /var/log/boot.log. Possible reasons for failure of each step include: - Mounting PLC: If this step - fails, first ensure that you started MyPLC as root. Check - /etc/sysconfig/plc to ensure that - PLC_ROOT and PLC_DATA refer to the - right locations. You may also have too many existing loopback - mounts, or your kernel may not support loopback mounting, bind - mounting, or the ext3 filesystem. Try freeing at least one - loopback device, or re-compiling your kernel to support loopback - mounting, bind mounting, and the ext3 filesystem. If you see an - error similar to Permission denied while trying to open - /plc/root.img, then SELinux may be enabled. See above for details. - Starting database server: If this step fails, check - /plc/root/var/log/pgsql and - /plc/root/var/log/boot.log. The most common + /var/log/pgsql and + /var/log/boot.log. The most common reason for failure is that the default PostgreSQL port, TCP port 5432, is already in use. Check that you are not running a PostgreSQL server on the host system. Starting web server: If this step fails, check - /plc/root/var/log/httpd/error_log and - /plc/root/var/log/boot.log for obvious + /var/log/httpd/error_log and + /var/log/boot.log for obvious errors. The most common reason for failure is that the default web ports, TCP ports 80 and 443, are already in use. 
Check that you are not running a web server on the host @@ -529,7 +543,7 @@ PLC: Signing node packages: [ OK ] fails, it is likely that the previous steps (Starting web server and Bootstrapping the database) also failed. If not, check - /plc/root/var/log/boot.log for obvious + /var/log/boot.log for obvious errors. This step starts the cron service and generates the initial set of XML files that the Slice Creation Service uses to determine slice state. @@ -538,453 +552,110 @@ PLC: Signing node packages: [ OK ] If no failures occur, then MyPLC should be active with a default configuration. Open a web browser on the host system and visit http://localhost/, which should bring you - to the front page of your PLC installation. The password of the - default administrator account + to the front page of your PLC installation. The default password + for the administrator account root@localhost.localdomain (set by PLC_ROOT_USER) is root (set by PLC_ROOT_PASSWORD). -
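Most of the failures above boil down to port conflicts. Before starting plc, a quick check of whether anything in the host context already holds the ports involved can be sketched as follows; the ss/netstat choice is environment-dependent.

```shell
#!/bin/sh
# List any listener already bound to the ports plc needs:
# http (80), https (443) and PostgreSQL (5432).
(ss -ltn 2>/dev/null || netstat -ltn 2>/dev/null) \
  | grep -E ':(80|443|5432) ' > conflicts.txt || true
if [ -s conflicts.txt ]; then
  echo "port conflict detected:"
  cat conflicts.txt
else
  echo "ports 80/443/5432 look free"
fi
```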
Files and directories - involved in <emphasis>myplc</emphasis> - MyPLC installs the following files and directories: - - - - /plc/root.img: The main - root filesystem of the MyPLC application. This file is an - uncompressed ext3 filesystem that is loopback mounted on - /plc/root when MyPLC starts. This - filesystem, even when mounted, should be treated as an opaque - binary that can and will be replaced in its entirety by any - upgrade of MyPLC. - - /plc/root: The mount point - for /plc/root.img. Once the root filesystem - is mounted, all MyPLC services run in a - chroot jail based in this - directory. +
- - /plc/data: The directory where user - data and generated files are stored. This directory is bind - mounted onto /plc/root/data so that it is - accessible as /data from within the - chroot jail. Files in this directory are - marked with %config(noreplace) in the - RPM. That is, during an upgrade of MyPLC, if a file has not - changed since the last installation or upgrade of MyPLC, it is - subject to upgrade and replacement. If the file has changed, - the new version of the file will be created with a - .rpmnew extension. Symlinks within the - MyPLC root filesystem ensure that the following directories - (relative to /plc/root) are stored - outside the MyPLC filesystem image: - - - /etc/planetlab: This - directory contains the configuration files, keys, and - certificates that define your MyPLC - installation. - - /var/lib/pgsql: This - directory contains PostgreSQL database - files. - - /var/www/html/alpina-logs: This - directory contains node installation logs. - - /var/www/html/boot: This - directory contains the Boot Manager, customized for your MyPLC - installation, and its data files. - - /var/www/html/download: This - directory contains Boot CD images, customized for your MyPLC - installation. - - /var/www/html/install-rpms: This - directory is where you should install node package updates, - if any. By default, nodes are installed from the tarball - located at - /var/www/html/boot/PlanetLab-Bootstrap.tar.bz2, - which is pre-built from the latest PlanetLab Central - sources, and installed as part of your MyPLC - installation. However, nodes will attempt to install any - newer RPMs located in - /var/www/html/install-rpms/planetlab, - after initial installation and periodically thereafter. You - must run yum-arch and - createrepo to update the - yum caches in this directory after - installing a new RPM. PlanetLab Central cannot support any - changes to this directory. 
- - /var/www/html/xml: This - directory contains various XML files that the Slice Creation - Service uses to determine the state of slices. These XML - files are refreshed periodically by cron - jobs running in the MyPLC root. - - /root: this is the - location of the root-user's homedir, and for your - convenience is stored under /data so - that your local customizations survive across - updates - this feature is inherited from the - myplc-devel package, where it is probably - more useful. - - - - - - /etc/init.d/plc: This file - is a System V init script installed on your host filesystem, - that allows you to start up and shut down MyPLC with a single - command, as described in . - + + Files and directories involved in <emphasis>myplc</emphasis> + + + The various places where is stored the persistent + information pertaining to your own deployment are + + + /etc/planetlab: This + directory contains the configuration files, keys, and + certificates that define your MyPLC + installation. + + /var/lib/pgsql: This + directory contains PostgreSQL database + files. + + /var/www/html/boot: This + directory contains the Boot Manager, customized for your MyPLC + installation, and its data files. + + /var/www/html/download: + This directory contains Boot CD images, customized for your + MyPLC installation. + + /var/www/html/install-rpms: + This directory is where you should install node package + updates, if any. By default, nodes are installed from the + tarball located at + /var/www/html/boot/PlanetLab-Bootstrap.tar.bz2, + which is pre-built from the latest PlanetLab Central sources, + and installed as part of your MyPLC installation. However, + nodes will attempt to install any newer RPMs located in + /var/www/html/install-rpms/planetlab, + after initial installation and periodically thereafter. You + must run + + command to update the + yum caches in this directory after + installing a new RPM. 
If you wish to upgrade all your nodes' RPMs from a more
    recent build, you should take advantage of the
    noderepo RPM, as described in

    /etc/sysconfig/plc: This
    file is a shell script fragment that defines the variables
    PLC_ROOT and PLC_DATA. By default,
    the values of these variables are /plc/root
    and /plc/data, respectively. If you wish,
    you may move your MyPLC installation to another location on your
    host filesystem and edit the values of these variables
    appropriately, but you will break the RPM upgrade
    process. PlanetLab Central cannot support any changes to this
    file.

    /etc/planetlab: This
    symlink to /plc/data/etc/planetlab is
    installed on the host system for convenience.
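The yum-cache refresh mentioned above can be sketched as follows, assuming createrepo (named earlier in this section) is the tool in use:

```
[myvserver] # createrepo /var/www/html/install-rpms/planetlab
```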
- Rebuilding and customizing MyPLC - - The MyPLC package, though distributed as an RPM, is not a - traditional package that can be easily rebuilt from SRPM. The - requisite build environment is quite extensive and numerous - assumptions are made throughout the PlanetLab source code base, - that the build environment is based on Fedora Core 4 and that - access to a complete Fedora Core 4 mirror is available. - - For this reason, it is recommended that you only rebuild - MyPLC (or any of its components) from within the MyPLC development - environment. The MyPLC development environment is similar to MyPLC - itself in that it is a portable filesystem contained within a - chroot jail. The filesystem contains all the - necessary tools required to rebuild MyPLC, as well as a snapshot - of the PlanetLab source code base in the form of a local CVS - repository. -
- Installation + Rebuilding and customizing MyPLC - Install the MyPLC development environment similarly to how - you would install MyPLC. You may install both packages on the same - host system if you wish. As with MyPLC, the MyPLC development - environment should be treated as a monolithic software - application, and any files present in the - chroot jail should not be modified directly, as - they are subject to upgrade. + + Please refer to the following resources for setting up a build environment: + - - If your distribution supports RPM: - - - If your distribution does not support RPM: - + + + + + will get you started for setting up vserver, launching a + nightly build or running the build manually. + + + + + + and in particular the various README files, provide some + help on how to use advanced features of the build. + + -
- -
- Configuration - - The default configuration should work as-is on most - sites. Configuring the development package can be achieved in a - similar way as for myplc, as described in - . plc-config-tty supports a - -d option for supporting the - myplc-devel case, that can be useful in a - context where it would not guess it by itself. Refer to for a list of variables. -
- -
Files and directories - involved in <emphasis>myplc-devl</emphasis> - - The MyPLC development environment installs the following - files and directories: - - - /plc/devel/root.img: The - main root filesystem of the MyPLC development environment. This - file is an uncompressed ext3 filesystem that is loopback mounted - on /plc/devel/root when the MyPLC - development environment is initialized. This filesystem, even - when mounted, should be treated as an opaque binary that can and - will be replaced in its entirety by any upgrade of the MyPLC - development environment. - - /plc/devel/root: The mount - point for - /plc/devel/root.img. - - - /plc/devel/data: The directory - where user data and generated files are stored. This directory - is bind mounted onto /plc/devel/root/data - so that it is accessible as /data from - within the chroot jail. Files in this - directory are marked with - %config(noreplace) in the RPM. Symlinks - ensure that the following directories (relative to - /plc/devel/root) are stored outside the - root filesystem image: - - - /etc/planetlab: This - directory contains the configuration files that define your - MyPLC development environment. - - /cvs: A - snapshot of the PlanetLab source code is stored as a CVS - repository in this directory. Files in this directory will - not be updated by an upgrade of - myplc-devel. See for more information about updating - PlanetLab source code. - - /build: - Builds are stored in this directory. This directory is bind - mounted onto /plc/devel/root/build so that - it is accessible as /build from within the - chroot jail. The build scripts in this - directory are themselves source controlled; see for more information about executing - builds. - - /root: this is the - location of the root-user's homedir, and for your - convenience is stored under /data so - that your local customizations survive across - updates. 
- - - /etc/init.d/plc-devel: This file is - a System V init script installed on your host filesystem, that - allows you to start up and shut down the MyPLC development - environment with a single command. - - -
- -
- Fedora Core 4 mirror requirement - - The MyPLC development environment requires access to a - complete Fedora Core 4 i386 RPM repository, because several - different filesystems based upon Fedora Core 4 are constructed - during the process of building MyPLC. You may configure the - location of this repository via the - PLC_DEVEL_FEDORA_URL variable in - /plc/devel/data/etc/planetlab/plc_config.xml. The - value of the variable should be a URL that points to the top - level of a Fedora mirror that provides the - base, updates, and - extras repositories, e.g., - - - file:///data/fedora - http://coblitz.planet-lab.org/pub/fedora - ftp://mirror.cs.princeton.edu/pub/mirrors/fedora - ftp://mirror.stanford.edu/pub/mirrors/fedora - http://rpmfind.net/linux/fedora - - - As implied by the list, the repository may be located on - the local filesystem, or it may be located on a remote FTP or - HTTP server. URLs beginning with file:// - should exist at the specified location relative to the root of - the chroot jail. For optimum performance and - reproducibility, specify - PLC_DEVEL_FEDORA_URL=file:///data/fedora and - download all Fedora Core 4 RPMS into - /plc/devel/data/fedora on the host system - after installing myplc-devel. Use a tool - such as wget or rsync to - download the RPMS from a public mirror: - - - Setting up a local Fedora Core 4 repository. - - wget -m -nH --cut-dirs=3 http://coblitz.planet-lab.org/pub/fedora/linux/$repo -> done]]> - - - Change the repository URI and --cut-dirs - level as needed to produce a hierarchy that resembles: - - - - A list of additional Fedora Core 4 mirrors is available at - http://fedora.redhat.com/Download/mirrors.html. -
- -
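The mirroring example above survives only in fragments (the `$repo` reference and the closing `done` indicate a shell `for` loop). The dry-run sketch below reconstructs such a loop; the three repository sub-paths are assumptions based on the usual Fedora Core 4 mirror layout, so check them against your chosen mirror before running for real:

```shell
#!/bin/sh
# Dry-run sketch: print the wget commands that would mirror the three
# Fedora Core 4 repositories into the current directory (intended to
# be /plc/devel/data/fedora). The repository sub-paths below are
# assumptions, not values recovered from the original document.
MIRROR=http://coblitz.planet-lab.org/pub/fedora/linux
for repo in core/4/i386/os core/updates/4/i386 extras/4/i386; do
    # Drop the leading 'echo' to actually download.
    echo wget -m -nH --cut-dirs=3 "$MIRROR/$repo"
done
```

With `echo` removed, `-nH --cut-dirs=3` strips the host name and the `pub/fedora/linux` prefix, leaving the local hierarchy the text describes.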
- Building MyPLC
-
- All PlanetLab source code modules are built and installed
- as RPMs. A set of build scripts, checked into the
- build/ directory of the PlanetLab CVS
- repository, eases the task of rebuilding PlanetLab source
- code.
-
- Before building MyPLC, check the
- configuration in the file
- plc_config.xml, which follows the same model
- as MyPLC's own configuration; it is located in
- /etc/planetlab within the chroot jail, or
- in /plc/devel/data/etc/planetlab from the
- root context. The set of applicable variables is described in
- .
-
- To build MyPLC, or any PlanetLab source code module, from
- within the MyPLC development environment, execute the following
- commands as root:
-
-
- Building MyPLC.
-
-
-
- If the build succeeds, a set of binary RPMs will be
- installed under
- /plc/devel/data/build/$DATE/RPMS/, which you
- may copy to the
- /var/www/html/install-rpms/planetlab
- directory of your MyPLC installation (see ).
- -
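As a sketch of the build-then-publish flow described above: the paths come from the surrounding text, but the exact make invocation inside the chroot is an assumption, so the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch of the build-then-publish flow. 'run' only prints each
# command; swap in eval "$@" to execute for real.
run() { echo "would run: $*"; }

DATE=2008-05-07   # builds land in a per-date directory under /data/build

# 1. Build inside the MyPLC development chroot (exact target assumed).
run chroot /plc/devel/root sh -c 'cd /build && make'

# 2. Publish the resulting RPMS to the MyPLC installation.
run cp -r /plc/devel/data/build/$DATE/RPMS/. \
    /var/www/html/install-rpms/planetlab/
```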
- Updating CVS
-
- A complete snapshot of the PlanetLab source code is included
- with the MyPLC development environment as a CVS repository in
- /plc/devel/data/cvs. This repository may be
- accessed like any other CVS repository: for example,
- through an interface such as CVSweb,
- and file permissions may be altered to allow fine-grained
- access control. Although the files are included with the
- myplc-devel RPM, they are not subject to upgrade once installed. New
- versions of the myplc-devel RPM will install
- updated snapshot repositories in
- /plc/devel/data/cvs-%{version}-%{release},
- where %{version}-%{release} is replaced with
- the version number of the RPM.
-
- Because the CVS repository is not automatically upgraded,
- if you wish to keep your local repository synchronized with the
- public PlanetLab repository, it is highly recommended that you
- use CVS's support for vendor branches to track changes, as
- described here
- and here.
- Vendor branches ease the task of merging upstream changes with
- your local modifications. To import a new snapshot into your
- local repository (for example, if you have just upgraded from
- myplc-devel-0.4-2 to
- myplc-devel-0.4-3 and you notice the new
- repository in /plc/devel/data/cvs-0.4-3),
- execute the following commands as root from within the MyPLC
- development environment:
-
-
- Updating /data/cvs from /data/cvs-0.4-3.
-
- Warning: This may cause
- severe, irreversible changes to be made to your local
- repository. Always tag your local repository before
- importing.
-
-
-
- If there are any merge conflicts, use the command
- suggested by CVS to assist the merge. Explaining how to resolve
- merge conflicts is beyond the scope of this document; consult the CVS
- documentation for more information on how to use CVS.
- -
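As a dry-run sketch of the vendor-branch import described above: the repository paths come from the text, while the tag, module, vendor, and release names are illustrative placeholders (see the CVS manual's vendor-branch chapter for the authoritative procedure).

```shell
#!/bin/sh
# Dry-run sketch of importing a new source snapshot onto a vendor
# branch. 'run' prints commands instead of executing them; tag and
# module names below are assumptions, not values from the document.
run() { echo "would run: $*"; }

OLD=/data/cvs           # your working repository
NEW=/data/cvs-0.4-3     # snapshot installed by the new myplc-devel RPM

# Always tag first, so a bad merge can be rolled back.
run cvs -d "$OLD" rtag before-myplc-0_4-3-import .

# Export a clean copy of the new snapshot; cvs import is then run
# from within the exported tree.
run cvs -d "$NEW" export -r HEAD -d /tmp/myplc-0_4-3 .
run cd /tmp/myplc-0_4-3
run cvs -d "$OLD" import -m 'myplc-devel 0.4-3' planetlab PLANETLAB MYPLC_0_4_3
```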
- More information: the FAQ wiki page
-
- Please refer to, and feel free to contribute to,
- the FAQ page on Princeton's wiki.
+ - Configuration variables (for <emphasis>myplc</emphasis>) - - Listed below is the set of standard configuration variables - and their default values, defined in the template - /etc/planetlab/default_config.xml. Additional - variables and their defaults may be defined in site-specific XML - templates that should be placed in - /etc/planetlab/configs/. - - This information is available online within - plc-config-tty, e.g.: - -Advanced usage of plc-config-tty - # plc-config-tty + Configuration variables + + + Listed below is the set of standard configuration variables + together with their default values, as defined in the template + /etc/planetlab/default_config.xml. + + + This information is available online within + plc-config-tty, e.g.: + + + + Advanced usage of plc-config-tty + - - Development configuration variables (for <emphasis>myplc-devel</emphasis>) - - &DevelVariables; - - Bibliography + + <ulink + url="http://www.planet-lab.org/doc/guides/user">PlanetLab + User's Guide</ulink> + + + + <ulink + url="http://www.planet-lab.org/doc/guides/pi">PlanetLab + Principal Investigator's Guide</ulink> + + MarkHuang <ulink - url="http://www.planet-lab.org/doc/TechsGuide.php">PlanetLab + url="http://www.planet-lab.org/doc/guides/tech">PlanetLab Technical Contact's Guide</ulink> +
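The deleted paragraph above notes that site-specific variable settings go in XML templates under /etc/planetlab/configs/. As a sketch, the fragment below overrides only PLC_NAME; the category/variable schema is inferred from default_config.xml and the variables.xsl naming (category id plc + variable id name maps to PLC_NAME), so compare it with your default_config.xml before use.

```shell
#!/bin/sh
# Write a minimal site-specific override file. The schema is an
# assumption inferred from default_config.xml; only PLC_NAME
# (category 'plc', variable 'name') is overridden here.
cat > site.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <variables>
    <category id="plc">
      <variablelist>
        <variable id="name">
          <value>My Private PLC</value>
        </variable>
      </variablelist>
    </category>
  </variables>
</configuration>
EOF
# One variable overridden; everything else keeps its default.
grep -c '<variable id=' site.xml
```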
diff --git a/doc/plc_devel_variables.xml b/doc/plc_devel_variables.xml deleted file mode 100644 index 4dacd1c..0000000 --- a/doc/plc_devel_variables.xml +++ /dev/null @@ -1,58 +0,0 @@ - - - PLC_DEVEL_FEDORA_RELEASE - - - Type: string - - Default: 4 - Version number of Fedora Core upon which to - base the build environment. Warning: Currently, only Fedora - Core 4 is supported. - - - - PLC_DEVEL_FEDORA_ARCH - - - Type: string - - Default: i386 - Base architecture of the build - environment. Warning: Currently, only i386 is - supported. - - - - PLC_DEVEL_FEDORA_URL - - - Type: string - - Default: file:///data/fedora - Fedora Core mirror from which to install - filesystems. - - - - PLC_DEVEL_CVSROOT - - - Type: string - - Default: /cvs - CVSROOT to use when checking out code. - - - - PLC_DEVEL_BOOTSTRAP - - - Type: boolean - - Default: false - Controls whether MyPLC should be built inside - of its own development environment. - - - diff --git a/doc/plc_variables.xml b/doc/plc_variables.xml deleted file mode 100644 index d4a9218..0000000 --- a/doc/plc_variables.xml +++ /dev/null @@ -1,630 +0,0 @@ - - - PLC_NAME - - - Type: string - - Default: PlanetLab Test - The name of this PLC installation. It is used in - the name of the default system site (e.g., PlanetLab Central) - and in the names of various administrative entities (e.g., - PlanetLab Support). - - - - PLC_SLICE_PREFIX - - - Type: string - - Default: pl - The abbreviated name of this PLC - installation. It is used as the prefix for system slices - (e.g., pl_conf). Warning: Currently, this variable should - not be changed. - - - - PLC_ROOT_USER - - - Type: email - - Default: root@localhost.localdomain - The name of the initial administrative - account. We recommend that this account be used only to create - additional accounts associated with real - administrators, then disabled. - - - - PLC_ROOT_PASSWORD - - - Type: password - - Default: root - The password of the initial administrative - account. 
Also the password of the root account on the Boot - CD. - - - - PLC_ROOT_SSH_KEY_PUB - - - Type: file - - Default: /etc/planetlab/root_ssh_key.pub - The SSH public key used to access the root - account on your nodes. - - - - PLC_ROOT_SSH_KEY - - - Type: file - - Default: /etc/planetlab/root_ssh_key.rsa - The SSH private key used to access the root - account on your nodes. - - - - PLC_DEBUG_SSH_KEY_PUB - - - Type: file - - Default: /etc/planetlab/debug_ssh_key.pub - The SSH public key used to access the root - account on your nodes when they are in Debug mode. - - - - PLC_DEBUG_SSH_KEY - - - Type: file - - Default: /etc/planetlab/debug_ssh_key.rsa - The SSH private key used to access the root - account on your nodes when they are in Debug mode. - - - - PLC_ROOT_GPG_KEY_PUB - - - Type: file - - Default: /etc/planetlab/pubring.gpg - The GPG public keyring used to sign the Boot - Manager and all node packages. - - - - PLC_ROOT_GPG_KEY - - - Type: file - - Default: /etc/planetlab/secring.gpg - The GPG private keyring used to sign the Boot - Manager and all node packages. - - - - PLC_NET_DNS1 - - - Type: ip - - Default: 127.0.0.1 - Primary DNS server address. - - - - PLC_NET_DNS2 - - - Type: ip - - Default: - Secondary DNS server address. - - - - PLC_DNS_ENABLED - - - Type: boolean - - Default: true - Enable the internal DNS server. The server does - not provide reverse resolution and is not a production - quality or scalable DNS solution. Use the internal DNS - server only for small deployments or for - testing. - - - - PLC_MAIL_ENABLED - - - Type: boolean - - Default: false - Set to false to suppress all e-mail notifications - and warnings. - - - - PLC_MAIL_SUPPORT_ADDRESS - - - Type: email - - Default: root+support@localhost.localdomain - This address is used for support - requests. Support requests may include traffic complaints, - security incident reporting, web site malfunctions, and - general requests for information.
We recommend that the - address be aliased to a ticketing system such as Request - Tracker. - - - - PLC_MAIL_BOOT_ADDRESS - - - Type: email - - Default: root+install-msgs@localhost.localdomain - The API will notify this address when a problem - occurs during node installation or boot. - - - - PLC_MAIL_SLICE_ADDRESS - - - Type: email - - Default: root+SLICE@localhost.localdomain - This address template is used for sending - e-mail notifications to slices. SLICE will be replaced with - the name of the slice. - - - - PLC_DB_ENABLED - - - Type: boolean - - Default: true - Enable the database server on this - machine. - - - - PLC_DB_TYPE - - - Type: string - - Default: postgresql - The type of database server. Currently, only - postgresql is supported. - - - - PLC_DB_HOST - - - Type: hostname - - Default: localhost.localdomain - The fully qualified hostname of the database - server. - - - - PLC_DB_IP - - - Type: ip - - Default: 127.0.0.1 - The IP address of the database server, if not - resolvable by the configured DNS servers. - - - - PLC_DB_PORT - - - Type: int - - Default: 5432 - The TCP port number through which the database - server should be accessed. - - - - PLC_DB_NAME - - - Type: string - - Default: planetlab4 - The name of the database to access. - - - - PLC_DB_USER - - - Type: string - - Default: pgsqluser - The username to use when accessing the - database. - - - - PLC_DB_PASSWORD - - - Type: password - - Default: - The password to use when accessing the - database. If left blank, one will be - generated. - - - - PLC_API_ENABLED - - - Type: boolean - - Default: true - Enable the API server on this - machine. - - - - PLC_API_DEBUG - - - Type: boolean - - Default: false - Enable verbose API debugging. Do not enable on - a production system! - - - - PLC_API_HOST - - - Type: hostname - - Default: localhost.localdomain - The fully qualified hostname of the API - server. 
- - - - PLC_API_IP - - - Type: ip - - Default: 127.0.0.1 - The IP address of the API server, if not - resolvable by the configured DNS servers. - - - - PLC_API_PORT - - - Type: int - - Default: 443 - The TCP port number through which the API - should be accessed. - - - - PLC_API_PATH - - - Type: string - - Default: /PLCAPI/ - The base path of the API URL. - - - - PLC_API_MAINTENANCE_USER - - - Type: string - - Default: maint@localhost.localdomain - The username of the maintenance account. This - account is used by local scripts that perform automated - tasks, and cannot be used for normal logins. - - - - PLC_API_MAINTENANCE_PASSWORD - - - Type: password - - Default: - The password of the maintenance account. If - left blank, one will be generated. We recommend that the - password be changed periodically. - - - - PLC_API_MAINTENANCE_SOURCES - - - Type: hostname - - Default: - A space-separated list of IP addresses allowed - to access the API through the maintenance account. The value - of this variable is set automatically to allow only the API, - web, and boot servers, and should not be - changed. - - - - PLC_API_SSL_KEY - - - Type: file - - Default: /etc/planetlab/api_ssl.key - The SSL private key to use for encrypting HTTPS - traffic. If non-existent, one will be - generated. - - - - PLC_API_SSL_CRT - - - Type: file - - Default: /etc/planetlab/api_ssl.crt - The corresponding SSL public certificate. By - default, this certificate is self-signed. You may replace - the certificate later with one signed by a root - CA. - - - - PLC_API_CA_SSL_CRT - - - Type: file - - Default: /etc/planetlab/api_ca_ssl.crt - The certificate of the root CA, if any, that - signed your server certificate. If your server certificate is - self-signed, then this file is the same as your server - certificate. - - - - PLC_WWW_ENABLED - - - Type: boolean - - Default: true - Enable the web server on this - machine. 
- - - - PLC_WWW_DEBUG - - - Type: boolean - - Default: false - Enable debugging output on web pages. Do not - enable on a production system! - - - - PLC_WWW_HOST - - - Type: hostname - - Default: localhost.localdomain - The fully qualified hostname of the web - server. - - - - PLC_WWW_IP - - - Type: ip - - Default: 127.0.0.1 - The IP address of the web server, if not - resolvable by the configured DNS servers. - - - - PLC_WWW_PORT - - - Type: int - - Default: 80 - The TCP port number through which the - unprotected portions of the web site should be - accessed. - - - - PLC_WWW_SSL_PORT - - - Type: int - - Default: 443 - The TCP port number through which the protected - portions of the web site should be accessed. - - - - PLC_WWW_SSL_KEY - - - Type: file - - Default: /etc/planetlab/www_ssl.key - The SSL private key to use for encrypting HTTPS - traffic. If non-existent, one will be - generated. - - - - PLC_WWW_SSL_CRT - - - Type: file - - Default: /etc/planetlab/www_ssl.crt - The corresponding SSL public certificate for - the HTTP server. By default, this certificate is - self-signed. You may replace the certificate later with one - signed by a root CA. - - - - PLC_WWW_CA_SSL_CRT - - - Type: file - - Default: /etc/planetlab/www_ca_ssl.crt - The certificate of the root CA, if any, that - signed your server certificate. If your server certificate is - self-signed, then this file is the same as your server - certificate. - - - - PLC_BOOT_ENABLED - - - Type: boolean - - Default: true - Enable the boot server on this - machine. - - - - PLC_BOOT_HOST - - - Type: hostname - - Default: localhost.localdomain - The fully qualified hostname of the boot - server. - - - - PLC_BOOT_IP - - - Type: ip - - Default: 127.0.0.1 - The IP address of the boot server, if not - resolvable by the configured DNS servers. - - - - PLC_BOOT_PORT - - - Type: int - - Default: 80 - The TCP port number through which the - unprotected portions of the boot server should be - accessed. 
- - - - PLC_BOOT_SSL_PORT - - - Type: int - - Default: 443 - The TCP port number through which the protected - portions of the boot server should be - accessed. - - - - PLC_BOOT_SSL_KEY - - - Type: file - - Default: /etc/planetlab/boot_ssl.key - The SSL private key to use for encrypting HTTPS - traffic. - - - - PLC_BOOT_SSL_CRT - - - Type: file - - Default: /etc/planetlab/boot_ssl.crt - The corresponding SSL public certificate for - the HTTP server. By default, this certificate is - self-signed. You may replace the certificate later with one - signed by a root CA. - - - - PLC_BOOT_CA_SSL_CRT - - - Type: file - - Default: /etc/planetlab/boot_ca_ssl.crt - The certificate of the root CA, if any, that - signed your server certificate. If your server certificate is - self-signed, then this file is the same as your server - certificate. - - - diff --git a/doc/variables.xsl b/doc/variables.xsl index a00bdb4..cc6084c 100644 --- a/doc/variables.xsl +++ b/doc/variables.xsl @@ -11,40 +11,45 @@ $Id$ - - - - - - - + + + + + + + +
+ + Category + <filename><xsl:value-of select="$category_id" /> </filename> + + + + + - _ + + _ + + (Default:) - - Type: - - - Default: - - - - - - + +
+
+
diff --git a/myplc-native.spec b/myplc-native.spec index 06d3fcc..981e431 100644 --- a/myplc-native.spec +++ b/myplc-native.spec @@ -161,7 +161,7 @@ if [ -f /usr/share/plc_api/doc/PLCAPI.html ] ; then cp /usr/share/plc_api/doc/PLCAPI.{html,pdf} /var/www/html/planetlab/doc ./docbook2drupal.sh "PLCAPI Documentation" \ /var/www/html/planetlab/doc/PLCAPI.html \ - /var/www/html/planetlab/doc/plcapi.php + /var/www/html/planetlab/doc/PLCAPI.php fi || : # same for the PLCAPI doc if [ -f /usr/share/myplc/doc/myplc.html ] ; then
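The updated variables.xsl above derives each documented variable name by joining the category id and the variable id with an underscore. A minimal sketch of that naming rule follows; the uppercasing step is inferred from documented names such as PLC_NAME, not stated in the stylesheet excerpt:

```shell
#!/bin/sh
# Sketch of the naming rule applied by variables.xsl:
#   <category id>_<variable id>, uppercased
# e.g. category 'plc' + variable 'name' gives PLC_NAME.
category=plc
variable=name
name=$(echo "${category}_${variable}" | tr '[:lower:]' '[:upper:]')
echo "$name"   # -> PLC_NAME
```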