Cloud Scheduler Test Drive

You can use the software provided in this RPI project to easily run your batch jobs on the DAIR cloud. The following examples will allow you to test drive this functionality quickly. In summary:

  • "Running your first batch job" will have you launch an instance of the Cloud Scheduler image (NEP52-cloud-scheduler), configure and start Cloud Scheduler, and submit a batch job. The batch job will trigger Cloud Scheduler to boot a VM on the DAIR OpenStack Cloud automatically, the job will run, and you can monitor its progress and check the job output. At the end of the job, when there are no more jobs in the queue, Cloud Scheduler will automatically remove idle batch VMs.
  • "Running a batch job which uses CVMFS" will have you launch an instance of the CVMFS image (NEP52-cvmfs-server), submit a batch job, and check the output of the distributed application.
  • And finally, "Adding an application to the CVMFS server" will have you log into the CVMFS server, add a new application, and then run another batch job to exercise the new application.

In order to try the Cloud Scheduler Test Drive, you will need the following:

  • A DAIR login ID with a large enough quota to run the three concurrent demonstration instances.
  • To create your own keypair and save the PEM file locally (see the OpenStack dashboard/documentation).
  • To retrieve your EC2_ACCESS_KEY and EC2_SECRET_KEY from the OpenStack dashboard.
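Note that ssh refuses private keys that are readable by other users, so after saving the PEM file (the examples below assume it is stored as ~/.ssh/MyKey.pem; adjust to your own file name), restrict its permissions:

```shell
# Restrict the PEM file to its owner; ssh rejects group/world-readable private keys
chmod 600 ~/.ssh/MyKey.pem
```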

Running your first batch job

Step 1: Log into DAIR and boot a Cloud Scheduler instance

Log in to the DAIR OpenStack Dashboard and select the Alberta region. Refer to the OpenStack documentation for the details of booting and managing VMs via the dashboard.

Go to the 'Images & Snapshots' tab on the left of the page, then click the 'Launch' button next to the "NEP52-cloud-scheduler" image.

Fill in the form to look the same as the screenshot below, substituting your username where you see the string "hepnet".


Now select the SSH key to associate with the instance so that you can log into it. Click the 'Access & Security' tab, pick your key, click "Launch" (see the screenshot below), and wait for the instance to become active.


Step 2: Log into the Cloud Scheduler instance and configure it

Now associate a floating IP with the machine. Click on the 'Instances' tab on the left. From the "Actions" menu beside your newly started Cloud Scheduler instance, choose "Associate Floating IP", complete the dialog, and click "Associate".

Now ssh into the box as root (you can find the IP of the machine from the dashboard):

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@{floating_ip}
%ENDCONSOLE%

Use your favourite editor (e.g. nano, vi, or vim) to edit the Cloud Scheduler configuration file so that it contains your DAIR EC2 credentials, specifically "{keypair_name}", "{EC2_ACCESS_KEY}", and "{EC2_SECRET_KEY}", for both the Alberta and Quebec DAIR clouds. Then start the Cloud Scheduler service:

%STARTCONSOLE%
vi /etc/cloudscheduler/cloud_resources.conf
service cloud_scheduler start
%ENDCONSOLE%
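The precise option names depend on your Cloud Scheduler release, but an Alberta entry in cloud_resources.conf has roughly this shape. This is an illustrative sketch only; the field names and the endpoint placeholder are assumptions to verify against the comments in the file itself:

```
[alberta]
cloud_type:        AmazonEC2
host:              {alberta_ec2_endpoint}
key_name:          {keypair_name}
access_key_id:     {EC2_ACCESS_KEY}
secret_access_key: {EC2_SECRET_KEY}
```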

If you don't have your credentials, follow the video to see how to retrieve them. Your credentials will be used by Cloud Scheduler to boot VMs on your behalf.

Step 3: Run a job and be amazed

Switch to the guest user on the VM, then submit the first demonstration job, which calculates pi to 1000 decimal places. You can see what's happening with cloud_status and condor_q, or issue the two commands periodically through "watch" to monitor the job's progress:

%STARTCONSOLE%
su - guest
condor_submit demo-1.job
cloud_status -m
condor_q
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

When the job completes, it disappears from the queue. The primary output for the job will be contained in the file 'demo-1.out', errors will be reported in 'demo-1.err', and the HTCondor job log is saved in 'demo-1.log'. All these file names are user defined in the job description file 'demo-1.job'.
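For orientation, a Cloud Scheduler job description is a standard HTCondor submit file plus a few custom VM attributes. The following is only an illustrative sketch of what demo-1.job might contain; the executable name, the values in {braces}, and the +VM* attributes are assumptions, not the real file:

```
# Illustrative sketch of a Cloud Scheduler job description (not the real demo-1.job)
Universe     = vanilla
Executable   = demo-1.sh
Output       = demo-1.out
Error        = demo-1.err
Log          = demo-1.log
Requirements = VMType =?= "{vm_type}"
+VMType      = "{vm_type}"
+VMAMI       = "{image_id}"
Queue
```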


You have just run a demonstration job on a dynamically created Virtual Machine.

Running a batch job which uses CVMFS

CVMFS is a read-only network file system designed for distributing software to VMs. It is a secure and very fast POSIX network file system that can be mounted by hundreds of running VMs simultaneously.

We provide a VM appliance preconfigured with CVMFS that allows you to share your software with multiple running VMs.
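For context, a CVMFS client mounts a repository based on a few lines of configuration. The following is a hedged sketch of what the appliance's client-side settings might look like; the file path, variable names, and values are assumptions to check against your image:

```
# /etc/cvmfs/default.local -- illustrative sketch only
CVMFS_REPOSITORIES=dair.cvmfs.server
CVMFS_SERVER_URL=http://{cvmfs_server_ip}/cvmfs/dair.cvmfs.server
CVMFS_HTTP_PROXY=DIRECT
```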

Step 1:

Using the OpenStack dashboard and the same launch procedure as for the Cloud Scheduler image, launch an instance of NEP52-cvmfs-server. You must set the instance name to "{username}-cvmfs" (replacing "{username}" with your own username) and assign the instance your keypair. Once the instance has launched, associate a floating IP with it.

Step 2:

If you are not already logged into the Cloud Scheduler VM, log in and switch to the guest account:

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@{floating_ip}
su - guest
%ENDCONSOLE%

Edit the second demonstration job description file, demo-2.job, and replace the string "{username}" with your username.


The line you must change looks like this:

%STARTCONSOLE%
Arguments = {username}
%ENDCONSOLE%

Now submit the job and watch it like we did before:

%STARTCONSOLE%
condor_submit demo-2.job
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

Once the job finishes you should see something like this in the file "demo-2.out":

%STARTCONSOLE%
cat demo-2.out
Job started at Tue May 28 15:40:22 PDT 2013
=> <= Simple wait script for testing the default CVMFS application.

Shutting down CernVM-FS:  [  OK  ]
Stopping automount:       [  OK  ]
Starting automount:       [  OK  ]
Starting CernVM-FS:       [  OK  ]

-rwxr-xr-x 1 cvmfs cvmfs 110 Mar 28 16:00 /cvmfs/dair.cvmfs.server/Hello
-rw-r--r-- 1 cvmfs cvmfs  47 Mar 28 16:00 /cvmfs/dair.cvmfs.server/empty

Hello! You have successfully connected to the skeleton CVMFS server and run its software.

Job finished at Tue May 28 15:40:27 PDT 2013
%ENDCONSOLE%

Adding an application to the CVMFS server

In this section we will show you how to modify the CVMFS server to distribute your own software.

Step 1:

Log into the CVMFS server (if you didn't already assign a floating IP, associate one now), switch to the distributed software directory, and copy the "Hello" bash script to the file "Goodbye". Then edit the "Goodbye" script to echo a different message.

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@{cvmfs_server_ip}
cd /cvmfs/dair.cvmfs.server
cp Hello Goodbye
vi Goodbye
%ENDCONSOLE%

When you have saved your changes to the Goodbye script, publish your newly changed script to the world via CVMFS:

%STARTCONSOLE%
chown -R cvmfs.cvmfs /cvmfs/dair.cvmfs.server
cvmfs-sync
cvmfs_server publish
%ENDCONSOLE%

Step 2:

Now run a job that calls your newly created "Goodbye" script. Make sure to edit the demo-3.job script, substituting your "{username}" as you did for the demo-2 job:

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@{cloud_scheduler_ip}
su - guest
vi demo-3.job
condor_submit demo-3.job
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

The "demo-3" job runs the "Goodbye" application and its output file, "demo-3.out", should contain your message.

You can publish arbitrary files using this method. We use CVMFS to publish our 7 GB ATLAS software distributions, which dramatically reduces the size of the VM and allows us to change the software without changing the VM images.

Take snapshots of your customized images

If you followed all the steps above you have customized versions of both the Cloud Scheduler and the CVMFS appliances running. You can now use the OpenStack dashboard to snapshot these two servers to save yourself the work of customizing them again.

Also, you may wish to create a customized batch client image which is permanently configured to talk to a particular CVMFS server. This can be done by:

  • Launching and logging into an instance of NEP52-batch-cvmfs-client,
  • modifying the CVMFS client configuration (the "" and "" scripts both contain code to modify the CVMFS client configuration which demonstrates the modifications that you need to make),
  • and using the OpenStack dashboard to take a snapshot of your modified image.
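As a rough sketch of the kind of change those scripts make (the file path and variable names here are assumptions; consult the scripts themselves for the authoritative version), pinning the client to a particular CVMFS server amounts to something like:

```
# Per-repository client configuration -- illustrative sketch only
# e.g. /etc/cvmfs/config.d/dair.cvmfs.server.conf
CVMFS_SERVER_URL=http://{your_cvmfs_server_ip}/cvmfs/dair.cvmfs.server
```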

Topic revision: r26 - 2013-08-23 - crlb