Revision 3 - 2010-02-15 - crlb

-- ColinLeavettBrown - 2010-02-11

Nimbus Install with just ws-core

Create the nimbus account on the elephant head node (elephant01) and propagate it to all nodes in the cluster.
[crlb@elephant01 ~]$ sudo adduser nimbus
[crlb@elephant01 ~]$ sudo /usr/local/sbin/usync

Download everything that the nimbus user will need.

[crlb@elephant01 ~]$ sudo su - nimbus
[nimbus@elephant01 ~]$ mkdir Downloads
[nimbus@elephant01 ~]$ cd Downloads
[nimbus@elephant01 Downloads]$ wget http://www.nimbusproject.org/downloads/nimbus-2.3.tar.gz
[nimbus@elephant01 Downloads]$ wget http://www.nimbusproject.org/downloads/nimbus-controls-2.3.tar.gz
[nimbus@elephant01 Downloads]$ wget http://www-unix.globus.org/ftppub/gt4/4.0/4.0.8/ws-core/bin/ws-core-4.0.8-bin.tar.gz
[nimbus@elephant01 Downloads]$ wget http://mirror.csclub.uwaterloo.ca/apache/ant/binaries/apache-ant-1.8.0-bin.tar.bz2
[nimbus@elephant01 Downloads]$ exit
[crlb@elephant01 ~]$ 
 

Switch to the interim cloud cluster head node, elephant11, and install java-1.6.0-sun-compat:

 

Install Apache Ant

[crlb@elephant11 ~]$ cd /usr/local
[crlb@elephant11 local]$ sudo tar -xjvf ~nimbus/Downloads/apache-ant-1.8.0-bin.tar.bz2
 
Create home for nimbus/globus ws-core
 
[crlb@elephant11 local]$ sudo mkdir nimbus-2.3
[crlb@elephant11 local]$ sudo chown nimbus.nimbus nimbus-2.3
[crlb@elephant11 local]$ sudo ln -s nimbus-2.3 nimbus
 

and for Nimbus worker node control software

[crlb@elephant11 local]$ sudo mkdir -p /opt/nimbus-2.3
[crlb@elephant11 local]$ sudo chown nimbus.nimbus /opt/nimbus-2.3
 
 

Installing the Webservice Core

First we set up the basic globus webservice core: download and install the basic core tools.
 
[nimbus@elephant11 ~]$ cd /usr/local/nimbus
[nimbus@elephant11 nimbus]$ tar -xzf ~/Downloads/ws-core-4.0.8-bin.tar.gz
[nimbus@elephant11 nimbus]$ mv ws-core-4.0.8/* .
[nimbus@elephant11 nimbus]$ rmdir ws-core-4.0.8
 

Create an empty grid-mapfile. This file will contain the certificate subjects of the users of your cloud-enabled cluster.

[nimbus@elephant11 nimbus]$ touch /usr/local/nimbus/share/grid-mapfile
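The file starts empty; each line you add later maps one user's certificate subject (DN) to a local account. A sketch of the format, using a made-up DN and a scratch file rather than the live grid-mapfile:

```shell
# Each grid-mapfile line is: a quoted certificate subject, whitespace, a
# local user name. The DN below is invented for illustration only.
GRIDMAP=$(mktemp)
echo '"/C=CA/O=Grid/OU=phys.uvic.ca/CN=Jane Doe" nimbus' >> "$GRIDMAP"
# Count entries mapped to the nimbus account:
entries=$(grep -c '" nimbus$' "$GRIDMAP")
rm -f "$GRIDMAP"
```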
  Now set our environment variables. I'm assuming bash is your nimbus user's shell. If you're using csh or ksh, you might want to try substituting .profile for .bashrc:
[nimbus@elephant11 nimbus]$ cd
[nimbus@elephant11 ~]$ echo "export GLOBUS_LOCATION=/usr/local/nimbus" >> .bashrc
[nimbus@elephant11 ~]$ echo "export X509_CERT_DIR=/usr/local/nimbus/share/certificates" >> .bashrc
[nimbus@elephant11 ~]$ echo "export PATH=$PATH:/usr/local/apache-ant-1.8.0/bin" >> .bashrc
[nimbus@elephant11 ~]$ . .bashrc
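One quoting detail worth noting: because the echo commands above use double quotes, $PATH is expanded when the line is written, not each time .bashrc is sourced. A small demonstration of the difference, using a scratch file:

```shell
# Double quotes expand the variable now, baking the current value into the
# file; single quotes write the literal text, so expansion is deferred to
# the moment the file is sourced.
RCFILE=$(mktemp)
echo "export DEMO=$HOME/bin" >> "$RCFILE"   # value expanded immediately
echo 'export DEMO=$HOME/bin' >> "$RCFILE"   # literal $HOME preserved
# Only the single-quoted line still contains the unexpanded variable:
deferred=$(grep -c 'DEMO=\$HOME' "$RCFILE")
rm -f "$RCFILE"
```

Either behaviour works here, but the single-quoted form keeps .bashrc valid even if the login PATH changes later.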
 

Certificates

Now we can set up the certificates. We're going to put them in our $X509_CERT_DIR. First, we make our certificates directory and put the Grid Canada root certificate in there.

[nimbus@elephant11 ~]$ mkdir -p $X509_CERT_DIR
[nimbus@elephant11 ~]$ cd $X509_CERT_DIR
[nimbus@elephant11 ~]$ wget http://www.gridcanada.ca/ca/bffbd7d0.0
 

Then create a host certificate request to send to our CA.

[nimbus@elephant11 ~]$ $GLOBUS_LOCATION/bin/grid-cert-request -int -host `hostname -f` -dir $X509_CERT_DIR -caEmail ca@gridcanada.ca -force
 
You are about to be asked to enter information that will be incorporated into your certificate request.
  to point to your new certificates and modify the gridmap value:
[nimbus@elephant11 ~]$ vim $GLOBUS_LOCATION/etc/globus_wsrf_core/global_security_descriptor.xml
 
 

Now we'll activate our security configuration by adding a parameter element under the @CONTAINER_SECURITY_DESCRIPTOR@ comment:

[nimbus@elephant11 ~]$ vim $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd
 
<!-- @CONTAINER_SECURITY_DESCRIPTOR@ -->
<parameter name="containerSecDesc" value="etc/globus_wsrf_core/global_security_descriptor.xml"/>
 Now that we've set up security, we can try starting our container for the first time. To do so, run globus-start-container. You should see something like the following:
[nimbus@elephant11 ~]$ $GLOBUS_LOCATION/bin/globus-start-container
Starting SOAP server at: https://204.174.103.121:8443/wsrf/services/
With the following services:
  Then mark it as executable:
[nimbus@elephant11 ~]$ chmod 744 $GLOBUS_LOCATION/bin/globus-start-stop
 

We can now try starting and stopping the container with this script, and see if we're listening on 8443:

[nimbus@elephant11 ~]$ $GLOBUS_LOCATION/bin/globus-start-stop start
$ netstat -an | grep 8443
tcp        0      0 0.0.0.0:8443       0.0.0.0:*          LISTEN
 Great! Now we have a running container. Let's stop it before we carry on with our installation.
[nimbus@elephant11 ~]$ $GLOBUS_LOCATION/bin/globus-start-stop stop
 

Installing Nimbus

Unpack the nimbus package and run the install script.

[nimbus@elephant11 ~]$

Get Nimbus from the Nimbus website. You'll need the Nimbus package.
If you encounter an ebtables problem, you can try a patched version of ebtables; see this page for details.

Setting Up Worker Nodes

Setting up passwordless access to worker nodes

Nimbus needs to be able to ssh without a password from the head node to the worker nodes and vice versa. This is for sending commands back and forth. The following setup assumes you have the nimbus home directory mounted over NFS between the head node and the worker nodes. If you don't, you'll need to copy the .ssh directory from the head node to the nimbus home directory on each worker.

$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/nimbus/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/nimbus/.ssh/id_rsa.
Your public key has been saved in /home/nimbus/.ssh/id_rsa.pub.
The key fingerprint is:
9c:75:52:2f:d9:bd:5a:05:43:ee:3f:b2:83:cc:f2:0b nimbus@canfardev.dao.nrc.ca
$ cd ~/.ssh
$ cp id_rsa.pub authorized_keys
$ chmod 600 authorized_keys

Now test it:

nimbus@canfardev $ ssh gildor
nimbus@gildor $ ssh canfardev.dao.nrc.ca
nimbus@canfardev $

Great. It works. You may be asked to authorize a new host key. If so, just answer "yes".
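If you'd rather script this check, ssh's BatchMode option makes the command fail fast instead of prompting when key authentication isn't working yet. A sketch (the host name is whatever worker you are testing; localhost stands in here):

```shell
# Non-interactive test of passwordless ssh. BatchMode=yes disables password
# prompts, so a broken key setup produces a clean failure we can report on.
check_passwordless() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true \
    && echo "passwordless OK: $1" \
    || echo "passwordless FAILED: $1"
}

check_passwordless localhost
```

Run it once per worker node before continuing; any FAILED line means the .ssh directory didn't make it to that host.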

Setting up Xen, ebtables and dhcpd

First, make sure Xen is installed. If it is, you should see something like the following when you run these commands:

# which xm
/usr/sbin/xm
# uname -r
2.6.18-128.1.1.el5xen
$ ps aux | grep xen
root        21  0.0  0.0      0     0 ?        S<   16:34   0:00 [xenwatch]
root        22  0.0  0.0      0     0 ?        S<   16:34   0:00 [xenbus]
root      2549  0.0  0.0   2188   956 ?        S    16:35   0:00 xenstored --pid-file /var/run/xenstore.pid
root      2554  0.0  0.1  12176  3924 ?        S    16:35   0:00 python /usr/sbin/xend start
root      2555  0.0  0.1  63484  4836 ?        Sl   16:35   0:00 python /usr/sbin/xend start
root      2557  0.0  0.0  12212   364 ?        Sl   16:35   0:00 xenconsoled --log none --timestamp none --log-dir /var/log/xen/console

If it's not installed, you can do so with:

# yum install xen kernel-xen
# chkconfig xend on

Then reboot.

You'll also need to install ebtables (not currently used) and dhcp. Do this by first enabling the DAG repository, then installing with yum:

# rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
# yum install ebtables dhcp

Now, edit the dhcpd config file. Make sure it looks something like this:

# vim /etc/dhcpd.conf
# dhcpd.conf
#
# Configuration file for ISC dhcpd for workspaces


#################
## GLOBAL OPTS ##
#################

# Option definitions common or default to all supported networks

# Keep this:
ddns-update-style none;

# Can be overridden in host entry:
default-lease-time 120;
max-lease-time 240;


#############
## SUBNETS ##
#############

# Make an entry like this for each supported subnet.  Otherwise, the DHCP
# daemon will not listen for requests on the interface of that subnet.

subnet 172.21.0.0 netmask 255.255.0.0 {
}

### DO NOT EDIT BELOW, the following entries are added and 
### removed programmatically.

### DHCP-CONFIG-AUTOMATIC-BEGINS ###
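The markers at the bottom matter: Nimbus appends a host entry below DHCP-CONFIG-AUTOMATIC-BEGINS for each running VM and removes it again at shutdown, which is why that region must be left alone. A generated entry looks roughly like this (host name, MAC, and address are invented for illustration):

```
host vm-example {
  hardware ethernet A2:AA:BB:CC:DD:EE;
  fixed-address 172.21.0.50;
}
```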


Setting up Sudo

The nimbus user needs a few sudo rules to be able to run the xm and control scripts. Add the following rules to sudoers:

nimbus ALL=(root) NOPASSWD: /opt/nimbus/bin/mount-alter.sh
nimbus ALL=(root) NOPASSWD: /opt/nimbus/bin/dhcp-config.sh
nimbus ALL=(root) NOPASSWD: /usr/sbin/xm
nimbus ALL=(root) NOPASSWD: /usr/sbin/xend
 
And set requiretty to false in sudoers.
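In sudoers syntax the requiretty change can be scoped to just the nimbus user, so tty enforcement stays on for everyone else (a sketch; adjust to your local policy, and edit with visudo):

```
Defaults:nimbus !requiretty
```

This is needed because workspace-control invokes the sudo rules above from a daemon context with no controlling terminal.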

Now that we've set up our pre-requisites, we can install the worker node tools.

Setting Up Control Agents

The Nimbus Control Agents are the binaries on the worker node that act on behalf of the head node. They need to be installed on each worker node.

If you've already set up the control agents on one node, you shouldn't need to do the following steps on the other nodes. Just make sure the install directory is NFS mounted.

First, make sure we have the install directory:

# ls /opt/nimbus
/opt/nimbus

Now do the install:

# wget http://workspace.globus.org/downloads/nimbus-controls-TP2.2.tar.gz
# tar xzf nimbus-controls-TP2.2.tar.gz
# cd nimbus-controls-TP2.2/workspace-control
# cp worksp.conf.example /opt/nimbus/worksp.conf
# python install.py -i -c /opt/nimbus/worksp.conf -a nimbus -g nimbus

The installer will ask you a bunch of questions. Answer them out to the best of your knowledge, and don't worry too much if you're not sure of the answers to some of the questions. Chances are though, you will just answer yes to all of them.

Adding Node to Nimbus Config

This should be done after you've already installed Nimbus on the head node. If you haven't done that yet, come back to this section.

Edit $GLOBUS_LOCATION/etc/nimbus/workspace-service/vmm-pools/canfardevpool to add the new node. Your file should look something like this:

#Some comments up here
gildor 3072
guilin 3072
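Each non-comment line of a vmm-pool file is a host name followed by the memory, in MB, that Nimbus may allocate to VMs on that node. A quick sanity check of the format, run against a scratch copy mirroring the example above:

```shell
# Validate a vmm-pool file: every non-comment line should be
# "hostname memory_MB". Counts lines that fail that shape.
POOL=$(mktemp)
printf '#Some comments up here\ngildor 3072\nguilin 3072\n' > "$POOL"
malformed=$(grep -v '^#' "$POOL" \
  | grep -cv '^[A-Za-z0-9.-][A-Za-z0-9.-]* [0-9][0-9]*$')
rm -f "$POOL"
```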

Your worker node should now be ready!

 

-- PatrickArmstrong - 16 Jul 2009

 