From: Mark Huang
Date: Thu, 13 Apr 2006 15:11:39 +0000 (+0000)
Subject: generate php for the website
X-Git-Tag: myplc-0_4-rc1~69
X-Git-Url: http://git.onelab.eu/?a=commitdiff_plain;h=9270478495cb3377fd91aee4f0d8f43cddf40f3e;p=myplc.git

generate php for the website
---

diff --git a/doc/Makefile b/doc/Makefile
index a756198..da88c87 100644
--- a/doc/Makefile
+++ b/doc/Makefile
@@ -4,11 +4,16 @@
 # Mark Huang
 # Copyright (C) 2006 The Trustees of Princeton University
 #
-# $Id: Makefile.in,v 1.6 2005/09/07 22:05:20 mlhuang Exp $
+# $Id: Makefile,v 1.1 2006/04/12 21:21:36 mlhuang Exp $
 #
+vpath GenDoc.xsl ../../plc_www/doc
+
 all: myplc.pdf
 
+# Dependencies
+.myplc.xml.valid: architecture.eps architecture.png
+
 # Validate the XML
 .%.xml.valid: %.xml
 	xmllint --valid --output $@ $<
@@ -27,6 +32,10 @@ endef
 %.pdf: %.ps
 	ps2pdf $< $@
 
+# PHP for the website
+%.php: GenDoc.xsl .%.xml.valid
+	xsltproc $(XSLFLAGS) --output $@ $^
+
 $(foreach format,$(FORMATS),$(eval $(call docbook2,$(format))))
 
 docclean:

diff --git a/doc/myplc.pdf b/doc/myplc.pdf
new file mode 100644
index 0000000..0114bba
Binary files /dev/null and b/doc/myplc.pdf differ

diff --git a/doc/myplc.php b/doc/myplc.php
new file mode 100644
index 0000000..efcba75
--- /dev/null
+++ b/doc/myplc.php
@@ -0,0 +1,425 @@
MyPLC User's Guide

Mark Huang

Revision History
Revision 1.0    April 7, 2006    MLH
    Initial draft.
Abstract

This document describes the design, installation, and administration of MyPLC, a complete PlanetLab Central (PLC) portable installation contained within a chroot jail. This document assumes advanced knowledge of the PlanetLab architecture and Linux system administration.
1. Overview

MyPLC is a complete PlanetLab Central (PLC) portable installation contained within a chroot jail. The default installation consists of a web server, an XML-RPC API server, a boot server, and a database server: the core components of PLC. The installation is customized through an easy-to-use graphical interface. All PLC services are started up and shut down through a single script installed on the host system. The usually complex process of installing and administering the PlanetLab backend is reduced by containing PLC services within a virtual filesystem. Packaged in this manner, MyPLC can also run on any modern Linux distribution, and could conceivably even run in a PlanetLab slice.
Figure 1. MyPLC architecture

[Figure: MyPLC architecture diagram]

MyPLC should be viewed as a single application that provides multiple functions and can run on any host system.
2. Installation

Though internally composed of commodity software subpackages, MyPLC should be treated as a monolithic software application. MyPLC is distributed as a single RPM package that has no external dependencies, allowing it to be installed on practically any Linux 2.6-based distribution:
Example 1. Installing MyPLC.

# If your distribution supports RPM
rpm -U myplc-0.3-1.planetlab.i386.rpm

# If your distribution does not support RPM
cd /
rpm2cpio myplc-0.3-1.planetlab.i386.rpm | cpio -diu
MyPLC installs the following files and directories:
• /plc/root.img: The main root filesystem of the MyPLC application. This file is an uncompressed ext3 filesystem that is loopback mounted on /plc/root when MyPLC starts (an illustrative mount sketch appears after this list). The filesystem, even when mounted, should be treated as an opaque binary that can and will be replaced in its entirety by any upgrade of MyPLC.
• /plc/root: The mount point for /plc/root.img. Once the root filesystem is mounted, all MyPLC services run in a chroot jail based in this directory.

• /plc/data: The directory where user data and generated files are stored. This directory is bind mounted into the chroot jail on /data. Files in this directory are marked with %config(noreplace) in the RPM. That is, during an upgrade of MyPLC, if a file has not changed since the last installation or upgrade of MyPLC, it is subject to upgrade and replacement. If the file has changed, the new version of the file will be created with a .rpmnew extension. Symlinks within the MyPLC root filesystem ensure that the following directories (relative to /plc/root) are stored outside the MyPLC filesystem image:
    • /etc/planetlab: This directory contains the configuration files, keys, and certificates that define your MyPLC installation.
    • /var/lib/pgsql: This directory contains PostgreSQL database files.
    • /var/www/html/alpina-logs: This directory contains node installation logs.
    • /var/www/html/boot: This directory contains the Boot Manager, customized for your MyPLC installation, and its data files.
    • /var/www/html/download: This directory contains Boot CD images, customized for your MyPLC installation.
    • /var/www/html/install-rpms: This directory is where you should install node package updates, if any. By default, nodes are installed from the tarball located at /var/www/html/boot/PlanetLab-Bootstrap.tar.bz2, which is pre-built from the latest PlanetLab Central sources and installed as part of your MyPLC installation. However, nodes will attempt to install any newer RPMs located in /var/www/html/install-rpms/planetlab, after initial installation and periodically thereafter. You must run yum-arch and createrepo to update the yum caches in this directory after installing a new RPM (a sketch appears after this list). PlanetLab Central cannot support any changes to this file.
    • /var/www/html/xml: This directory contains various XML files that the Slice Creation Service uses to determine the state of slices. These XML files are refreshed periodically by cron jobs running in the MyPLC root.

• /etc/init.d/plc: This file is a System V init script installed on your host filesystem that allows you to start up and shut down MyPLC with a single command. On a Red Hat or Fedora host system, it is customary to use the service command to invoke System V init scripts:
  Example 2. Starting and stopping MyPLC.

    # Starting MyPLC
    service plc start

    # Stopping MyPLC
    service plc stop

  Like all other registered System V init services, MyPLC is started and shut down automatically when your host system boots and powers off. You may disable automatic startup by invoking the chkconfig command on a Red Hat or Fedora host system:
  Example 3. Disabling automatic startup of MyPLC.

    # Disable automatic startup
    chkconfig plc off

    # Enable automatic startup
    chkconfig plc on
• /etc/sysconfig/plc: This file is a shell script fragment that defines the variables PLC_ROOT and PLC_DATA. By default, the values of these variables are /plc/root and /plc/data, respectively. If you wish, you may move your MyPLC installation to another location on your host filesystem and edit the values of these variables appropriately, but you will break the RPM upgrade process. PlanetLab Central cannot support any changes to this file. A sketch of this fragment appears after this list.
• /etc/planetlab: This symlink to /plc/data/etc/planetlab is installed on the host system for convenience.
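As referenced in the /plc/root.img entry above, the image is loopback mounted and /plc/data is bind mounted into the jail. The following is an illustrative sketch of those mount operations, not MyPLC's actual startup code, which the init script runs for you:

# Illustrative sketch only; the MyPLC init script performs these steps itself.
mount -o loop /plc/root.img /plc/root    # loopback mount the root image
mount --bind /plc/data /plc/root/data    # bind mount user data into the jail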
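As referenced in the /var/www/html/install-rpms entry above, the yum metadata must be regenerated after adding node packages. A hedged sketch follows: the package name is a placeholder, and running the tools through chroot assumes yum-arch and createrepo are available inside the MyPLC root.

# Copy a new node package into the repository (package name is a placeholder)
cp mypackage-1.0-1.i386.rpm /plc/data/var/www/html/install-rpms/planetlab/

# Regenerate the yum metadata inside the MyPLC root
chroot /plc/root sh -c 'cd /var/www/html/install-rpms/planetlab && yum-arch . && createrepo .'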
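As referenced in the /etc/sysconfig/plc entry above, that file is a shell fragment defining PLC_ROOT and PLC_DATA. Based on the defaults described there, it might look like the following sketch; the actual file may define more than these two variables:

# /etc/sysconfig/plc (sketch based on the documented defaults)
PLC_ROOT=/plc/root
PLC_DATA=/plc/data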
3. Quickstart

Once installed, start MyPLC (see Example 2, “Starting and stopping MyPLC.”). MyPLC must be started as root. Observe the output of this command for any failures. If no failures occur, you should see output similar to the following:

Example 4. A successful MyPLC startup.

Mounting PLC:                                              [  OK  ]
PLC: Generating network files:                             [  OK  ]
PLC: Starting system logger:                               [  OK  ]
PLC: Starting database server:                             [  OK  ]
PLC: Generating SSL certificates:                          [  OK  ]
PLC: Generating SSH keys:                                  [  OK  ]
PLC: Starting web server:                                  [  OK  ]
PLC: Bootstrapping the database:                           [  OK  ]
PLC: Starting crond:                                       [  OK  ]
PLC: Rebuilding Boot CD:                                   [  OK  ]
PLC: Rebuilding Boot Manager:                              [  OK  ]
If /plc/root is mounted successfully, a complete log file of the startup process may be found at /plc/root/var/log/boot.log. Possible reasons for failure of each step include the following (a few illustrative diagnostic commands follow this list):
• Mounting PLC: If this step fails, first ensure that you started MyPLC as root. Check /etc/sysconfig/plc to ensure that PLC_ROOT and PLC_DATA refer to the right locations. You may also have too many existing loopback mounts, or your kernel may not support loopback mounting, bind mounting, or the ext3 filesystem. Try freeing at least one loopback device, or re-compiling your kernel to support loopback mounting, bind mounting, and the ext3 filesystem.
• Starting database server: If this step fails, check /plc/root/var/log/pgsql and /plc/root/var/log/boot.log. The most common reason for failure is that the default PostgreSQL port, TCP port 5432, is already in use. Check that you are not running a PostgreSQL server on the host system.
• Starting web server: If this step fails, check /plc/root/var/log/httpd/error_log and /plc/root/var/log/boot.log for obvious errors. The most common reason for failure is that the default web ports, TCP ports 80 and 443, are already in use. Check that you are not running a web server on the host system.
• Bootstrapping the database: If this step fails, it is likely that the previous step (Starting web server) also failed. Another reason that it could fail is if PLC_API_HOST (see Section 3.1, “Changing the configuration”) does not resolve to the host on which the API server has been enabled. By default, all services, including the API server, are enabled and run on the same host, so check that PLC_API_HOST is either localhost or resolves to a local IP address.
• Starting crond: If this step fails, it is likely that the previous steps (Starting web server and Bootstrapping the database) also failed. If not, check /plc/root/var/log/boot.log for obvious errors. This step starts the cron service and generates the initial set of XML files that the Slice Creation Service uses to determine slice state.
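The checks below illustrate the failure modes above. They are generic commands rather than part of MyPLC; the netstat flags assume the classic net-tools package is installed.

# Watch the startup log as MyPLC boots
tail -f /plc/root/var/log/boot.log

# List loopback devices in use (Mounting PLC failures)
losetup -a

# Look for processes already bound to the PostgreSQL and web ports
netstat -tlnp | grep -E ':(5432|80|443) '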
If no failures occur, then MyPLC should be active with a default configuration. Open a web browser on the host system and visit http://localhost/, which should bring you to the front page of your PLC installation. The password of the default administrator account root@test.planet-lab.org (set by PLC_ROOT_USER) is root (set by PLC_ROOT_PASSWORD).
3.1. Changing the configuration

After verifying that MyPLC is working correctly, shut it down and begin changing some of the default variable values. Shut down MyPLC with service plc stop (see Example 2, “Starting and stopping MyPLC.”). With a text editor, open the file /etc/planetlab/plc_config.xml. This file is a self-documenting configuration file written in XML. Variables are divided into categories. Variable identifiers must be alphanumeric, plus underscore. A variable is referred to canonically as the uppercase concatenation of its category identifier, an underscore, and its variable identifier. Thus, a variable with an id of slice_prefix in the plc category is referred to canonically as PLC_SLICE_PREFIX.
The reason for this convention is that during MyPLC startup, plc_config.xml is translated into several different languages—shell, PHP, and Python—so that scripts written in each of these languages can refer to the same underlying configuration. Most MyPLC scripts are written in shell, so the convention for shell variables predominates.
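As an illustration, the shell rendering of the PLC_SLICE_PREFIX example above might look like the following. The default value shown and the exact output file are assumptions, not taken from this guide:

# Hypothetical shell rendering of a plc_config.xml variable: category id
# "plc" plus variable id "slice_prefix" become the canonical name
# PLC_SLICE_PREFIX. The value is illustrative only.
PLC_SLICE_PREFIX='pl'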

The variables that you should change immediately are:
• PLC_NAME: Change this to the name of your PLC installation.

• PLC_ROOT_PASSWORD: Change this to a more secure password.

• PLC_NET_DNS1, PLC_NET_DNS2: Change these to the IP addresses of your primary and secondary DNS servers. Check /etc/resolv.conf on your host filesystem.

• PLC_MAIL_SUPPORT_ADDRESS: Change this to the e-mail address at which you would like to receive support requests.

• PLC_DB_HOST, PLC_API_HOST, PLC_WWW_HOST, PLC_BOOT_HOST: Change all of these to the preferred FQDN of your host system.

After changing these variables, save the file, then restart MyPLC with service plc start. You should notice that the password of the default administrator account is no longer root, and that the default site name includes the name of your PLC installation instead of PlanetLab.
3.2. Installing nodes

Install your first node by clicking Add Node under the Nodes tab. Fill in all the appropriate details, then click Add. Download the node's configuration file by clicking Download configuration file on the Node Details page for the node. Save it to a floppy disk or USB key as detailed in [1].

Follow the rest of the instructions in [1] for creating a Boot CD and installing the node, except download the Boot CD image from the /download directory of your PLC installation, not from PlanetLab Central. The images located here are customized for your installation. If you change the hostname of your boot server (PLC_BOOT_HOST), or if the SSL certificate of your boot server expires, MyPLC will regenerate the certificate and rebuild the Boot CD with it. If this occurs, you must replace all Boot CDs created before the certificate was regenerated.

The installation process for a node has significantly improved since PlanetLab 3.3. It should now take only a few seconds for a new node to become ready to create slices.
3.3. Administering nodes

You may administer nodes as root by using the SSH key stored in /etc/planetlab/root_ssh_key.rsa.

Example 5. Accessing nodes via SSH. Replace node with the hostname of the node.

ssh -i /etc/planetlab/root_ssh_key.rsa root@node

Besides the standard Linux log files located in /var/log, several other files can give you clues about any problems with active processes (an example of inspecting them remotely follows this list):
• /var/log/pl_nm: The log file for the Node Manager.
• /vservers/pl_conf/var/log/pl_conf: The log file for the Slice Creation Service.
• /var/log/propd: The log file for Proper, the service which allows certain slices to perform certain privileged operations in the root context.
• /vservers/pl_netflow/var/log/netflow.log: The log file for PlanetFlow, the network traffic auditing service.
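A hedged sketch of checking these logs remotely, reusing the SSH key from Example 5; the hostname node is a placeholder, as above:

# Inspect the Node Manager and Slice Creation Service logs on a node
ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
    tail /var/log/pl_nm /vservers/pl_conf/var/log/pl_conf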
3.4. Creating a slice

Create a slice by clicking Create Slice under the Slices tab. Fill in all the appropriate details, then click Create. Add nodes to the slice by clicking Manage Nodes on the Slice Details page for the slice.

A cron job runs every five minutes and updates the file /plc/data/var/www/html/xml/slices-0.5.xml with information about current slice state. The Slice Creation Service running on every node polls this file every ten minutes to determine if it needs to create or delete any slices. You may accelerate this process manually if desired.

Example 6. Forcing slice creation on a node.

# Update slices.xml immediately
service plc start crond

# Kick the Slice Creation Service on a particular node.
ssh -i /etc/planetlab/root_ssh_key.rsa root@node \
vserver pl_conf exec service pl_conf restart
Bibliography

[1]