Cloud Scheduler Test Drive

You can use the software provided in this RPI project to easily run your batch jobs on the DAIR cloud. The following example lets you test drive this functionality quickly. In summary, you will start a Cloud Scheduler VM (the NEP52-cloud-scheduler image), log into it, and submit a batch job. The batch job will trigger Cloud Scheduler to boot a VM on the DAIR OpenStack cloud to run it.

Step 1: Log into DAIR and boot a Cloud Scheduler instance

Log into the DAIR OpenStack Dashboard: https://nova-ab.dair-atir.canarie.ca . Refer to the OpenStack documentation for the details of booting and managing VMs via the dashboard.

Go to the 'Images and Snapshots' tab on the left of the page, then click the 'Launch' button next to the "NEP52-cloud-scheduler" image.

Fill in the form to match the screenshot below, substituting your username wherever you see the string "hepnet".

launch.png

Now select an SSH key to associate with the instance so that you can log in to it. Click the "Access & Security" tab, pick a key, click "Launch" (see the screenshot below), and wait for the instance to become active.

select_key.png

Step 2: Log into the Cloud Scheduler instance and configure it

Now associate an external IP address (a "floating IP" in OpenStack terminology) with the machine. Click the Instances tab on the left. From the "Actions" column beside your newly started Cloud Scheduler instance, choose "Associate Floating IP", complete the dialog, and click "Associate".

Now ssh into the box as root (you can find the IP of the machine from the dashboard):

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@208.75.74.18
%ENDCONSOLE%

Edit the Cloud Scheduler configuration file to contain your DAIR EC2 credentials (specifically access_key_id and secret_access_key), then start the Cloud Scheduler service:

%STARTCONSOLE%
nano /etc/cloudscheduler/cloud_resources.conf
service cloud_scheduler start
%ENDCONSOLE%
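The two credential settings in /etc/cloudscheduler/cloud_resources.conf look roughly like the lines below. The placeholder values are illustrative only; keep the rest of the file as shipped with the appliance and just fill in your own DAIR EC2 keys:

%STARTCONSOLE%
# Illustrative excerpt of /etc/cloudscheduler/cloud_resources.conf.
# Only the two credential lines need to change; the other settings
# come preconfigured on the NEP52-cloud-scheduler image.
access_key_id: <your DAIR EC2 access key>
secret_access_key: <your DAIR EC2 secret key>
%ENDCONSOLE%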

If you don't have your credentials, follow this video to see how to obtain them. Your credentials will be used by Cloud Scheduler to boot VMs on your behalf.

Step 3: Run a job and be amazed

Switch to the guest user on the VM and submit a 'hello world' style demo job. You can then see what is happening with cloud_status and condor_q, or issue the two commands periodically through "watch" to monitor the job's progress:

%STARTCONSOLE%
su - guest
condor_submit demo-1.job
cloud_status -m
condor_q
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

When the job completes, it disappears from the queue. The primary output for the job will be contained in the file 'demo-1.out', errors will be reported in 'demo-1.err', and the HTCondor job log is saved in 'demo-1.log'. All these file names are user defined in the job description file 'demo-1.job'.

%STARTCONSOLE%
cat demo-1.out
%ENDCONSOLE%

You have just run a demonstration job on a dynamically created Virtual Machine.
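For reference, demo-1.job is an ordinary HTCondor job description. A minimal file of this kind looks roughly like the sketch below; the executable name and any Cloud Scheduler specific attributes preconfigured on the appliance may differ, so treat this as illustrative rather than a copy of the shipped file:

%STARTCONSOLE%
# Illustrative HTCondor job description (the shipped demo-1.job may differ).
universe   = vanilla
executable = demo-1.sh
output     = demo-1.out
error      = demo-1.err
log        = demo-1.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
%ENDCONSOLE%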

Running a Job which uses CVMFS

CVMFS is a read-only network file system designed for distributing software to VMs. It provides a secure and very fast way to mount a POSIX network file system that can be shared with hundreds of running VMs.

We provide a VM appliance preconfigured with CVMFS that allows you to share your software with multiple running VMs.
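On a worker VM whose CVMFS client is configured for this appliance, the repository simply appears as a directory under /cvmfs. For example, the skeleton repository used in the demo jobs contains just two files, matching the listing shown in the demo-2 output below:

%STARTCONSOLE%
ls /cvmfs/dair.cvmfs.server
Hello  empty
%ENDCONSOLE%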

Step 1:

Using the OpenStack dashboard, launch an instance of NEP52-cvmfs-server, set a keypair and external IP as before, and name the instance "{username}-cvmfs" (replacing "{username}" with your own username).

Step 2:

Log into the Cloud Scheduler VM you already launched, edit the job description file, and replace the string "{username}" with your username.

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@208.75.74.18
su - guest
nano demo-2.job
%ENDCONSOLE%
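If you prefer a one-line substitution to editing the file in nano, the same change can be made with sed (replace "myname" with your own DAIR username):

%STARTCONSOLE%
sed -i 's/{username}/myname/g' demo-2.job
%ENDCONSOLE%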

Now submit the job and watch it like we did before:

%STARTCONSOLE%
condor_submit demo-2.job
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

Once the job finishes you should see something like this in the file "demo-2.out":

%STARTCONSOLE%
cat demo-2.out
Job started at Tue May 28 15:40:22 PDT 2013
=> demo-2.sh <=
Simple wait script for testing the default CVMFS application.

Shutting down CernVM-FS:  [ OK ]
Stopping automount:       [ OK ]
Starting automount:       [ OK ]
Starting CernVM-FS:       [ OK ]

-rwxr-xr-x 1 cvmfs cvmfs 110 Mar 28 16:00 /cvmfs/dair.cvmfs.server/Hello
-rw-r--r-- 1 cvmfs cvmfs 47 Mar 28 16:00 /cvmfs/dair.cvmfs.server/empty

Hello! You have successfully connected to the skeleton CVMFS server and run its software.

Job finished at Tue May 28 15:40:27 PDT 2013
%ENDCONSOLE%

Adding an application to the CVMFS server

In this section we will show you how to modify the CVMFS server to distribute your own software.

Step 1:

Log into the CVMFS server and copy the "Hello" bash script to a new file called "Goodbye". Then edit the new script to say something different.

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@208.75.74.80    <---- IP of CVMFS server
cd /cvmfs/dair.cvmfs.server
cp Hello Goodbye
nano Goodbye
%ENDCONSOLE%
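As an illustration, the edited "Goodbye" script can be as simple as the sketch below; the actual "Hello" script shipped on the appliance may look slightly different:

%STARTCONSOLE%
cat Goodbye
#!/bin/bash
# Illustrative replacement message; edit it to say whatever you like.
echo "Goodbye! This file was published from my own CVMFS server."
%ENDCONSOLE%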

Publish your newly changed script to the world via CVMFS:

%STARTCONSOLE%
chown -R cvmfs.cvmfs /cvmfs/dair.cvmfs.server
cvmfs-sync
cvmfs_server publish
%ENDCONSOLE%

Step 2:

Now run a job that calls your newly created "Goodbye" script:

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@208.75.74.18    <--- IP of Cloud Scheduler server
su - guest
nano demo-3.job
condor_submit demo-3.job
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

The "demo-3" job runs the "Goodbye" application and the its output file, "demo-3.out", should contain your message.

You can publish arbitrary files using this method. We use CVMFS to publish our 7 GB ATLAS software distributions, which dramatically reduces the size of the VM and allows us to change the software without changing the VM images.

Take snapshots of your customized setup

If you followed all the steps above, you now have customized versions of both the Cloud Scheduler and CVMFS appliances running. Use the OpenStack dashboard to snapshot these two servers so you don't have to customize them again.
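If you prefer the command line and have the nova client installed with your OpenStack credentials sourced, an equivalent snapshot can be taken roughly as follows (the instance and snapshot names below are examples; use the names of your own instances):

%STARTCONSOLE%
nova image-create hepnet-cloud-scheduler hepnet-cloud-scheduler-snap
nova image-create hepnet-cvmfs hepnet-cvmfs-snap
%ENDCONSOLE%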
