Cloud Scheduler Test Drive

You can use the software provided in this RPI project to easily run your batch jobs on the DAIR cloud. The following example lets you test drive this functionality quickly. In summary, you will start the Cloud Scheduler VM (NEP52-cloud-scheduler image), log into it, and submit a batch job. The batch job will trigger Cloud Scheduler to boot a VM on the DAIR OpenStack cloud.

Step 1: Log into DAIR and boot a Cloud Scheduler instance

Log in to the DAIR OpenStack dashboard. Refer to the OpenStack documentation for the details of booting and managing VMs via the dashboard.

Go to the 'Images & Snapshots' tab on the left of the page, then click the 'Launch' button next to the "NEP52-cloud-scheduler" image.

Fill in the form to look the same as the screenshot below, substituting your username where you see the string "hepnet".


Now select an SSH key to associate with the instance so that you can log in to it. Click the 'Access & Security' tab, pick a key, click 'Launch' (see the screenshot below), and wait for the instance to become active.


Step 2: Log into the Cloud Scheduler instance and configure it

Now associate an external IP address (a "floating IP" in OpenStack terminology) with the machine. Click the 'Instances' tab on the left. From the 'Actions' menu beside your newly started Cloud Scheduler instance, choose "Associate Floating IP", complete the dialog, and click "Associate".

Now ssh into the instance as root (you can find its floating IP on the dashboard):

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@
%ENDCONSOLE%

Edit the Cloud Scheduler configuration file to contain your DAIR EC2 credentials, specifically access_key_id and secret_access_key, then start the Cloud Scheduler service:

%STARTCONSOLE%
nano /etc/cloudscheduler/cloud_resources.conf
service cloud_scheduler start
%ENDCONSOLE%
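
For orientation, the credential entries in 'cloud_resources.conf' look roughly like the sketch below. The section name and the cloud_type value are illustrative assumptions; only access_key_id and secret_access_key need your DAIR EC2 values:

%STARTCONSOLE%
# /etc/cloudscheduler/cloud_resources.conf (illustrative sketch)
[dair]
cloud_type        = AmazonEC2
access_key_id     = YOUR_EC2_ACCESS_KEY
secret_access_key = YOUR_EC2_SECRET_KEY
%ENDCONSOLE%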

If you don't have your credentials, follow this video to see how to obtain them. Your credentials will be used by Cloud Scheduler to boot VMs on your behalf.

Step 3: Run a job and be amazed

Switch to the guest user on the VM and submit a 'hello world' style demo job. You can see what is happening with cloud_status and condor_q, or run both commands periodically through "watch" to monitor the job's progress:

%STARTCONSOLE%
su - guest
condor_submit demo-1.job
cloud_status -m
condor_q
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

When the job completes, it disappears from the queue. The job's primary output is written to the file 'demo-1.out', errors are reported in 'demo-1.err', and the HTCondor job log is saved in 'demo-1.log'. All of these file names are defined by the user in the job description file 'demo-1.job'.
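
For illustration, a minimal HTCondor submit description consistent with those file names might look like the sketch below; the executable name 'demo-1.sh' is an assumption, and the actual demo-1.job shipped on the appliance may contain additional Cloud Scheduler-specific attributes:

%STARTCONSOLE%
# demo-1.job (illustrative sketch; the file on the appliance may differ)
Universe   = vanilla
Executable = demo-1.sh
Output     = demo-1.out
Error      = demo-1.err
Log        = demo-1.log
Queue
%ENDCONSOLE%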


You have just run a demonstration job on a dynamically created Virtual Machine.

Running a Job which uses CVMFS

CVMFS is a read-only network file system designed for distributing software to VMs. It is a secure and very fast way to mount a POSIX network file system that can be shared by hundreds of running VMs.

We provide a VM appliance preconfigured with CVMFS that allows you to share your software with multiple running VMs.
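
For context, a worker VM typically mounts a CVMFS repository through a small client configuration file; a hedged sketch, using the repository name 'dair.cvmfs.server' that appears later in this guide (the appliance ships preconfigured, so this is shown only for orientation):

%STARTCONSOLE%
# /etc/cvmfs/default.local (illustrative sketch)
CVMFS_REPOSITORIES=dair.cvmfs.server
CVMFS_HTTP_PROXY=DIRECT
%ENDCONSOLE%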

Step 1:

Using the OpenStack dashboard, launch an instance of NEP52-cvmfs-server setting the instance name to "{username}-cvmfs" (obviously replacing "{username}" with your own username).

Step 2:

Log into the Cloud Scheduler VM you launched earlier, edit the job description file, and replace the string "{username}" with your username.

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@
su - guest
nano demo-2.job
%ENDCONSOLE%

Now submit the job and watch it like we did before:

%STARTCONSOLE%
condor_submit demo-2.job
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

Once the job finishes you should see something like this in the file "demo-2.out":

%STARTCONSOLE%
cat demo-2.out
Hello! You have successfully connected to the skeleton CVMFS server and run its software.
%ENDCONSOLE%

In the next section we will show you how to modify the CVMFS server to distribute your own software.

Adding an application to the CVMFS server

In the previous section, you saw how to boot a CVMFS server and run the applications it provides. Now we are going to show you how to change the software that this CVMFS server is hosting.

Step 1:

Log into the CVMFS server and copy the "Hello" bash script to a new file named "Goodbye". Then edit that script to say something different.

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@    <---- IP of the CVMFS server
cd /cvmfs/dair.cvmfs.server
cp Hello Goodbye
nano Goodbye
%ENDCONSOLE%

Publish your newly changed script to the world via CVMFS:

%STARTCONSOLE%
chown -R cvmfs.cvmfs /cvmfs/dair.cvmfs.server
cvmfs-sync
cvmfs_server publish
%ENDCONSOLE%

Step 2:

Now run a job that calls your newly created "Goodbye" script:

%STARTCONSOLE%
ssh -i ~/.ssh/MyKey.pem root@    <---- IP of the Cloud Scheduler server
su - guest
nano demo-3.job
condor_submit demo-3.job
watch 'cloud_status -m; condor_q'
%ENDCONSOLE%

The "demo-3" job runs the "Goodbye" application and the its output file, "demo-3.out", should contain your message.

You can publish any arbitrary files using this method. We use CVMFS to publish our 7 GB ATLAS software distribution, which dramatically reduces the size of the VM and allows us to change the software without changing the VM images.

Take snapshots of your now customized setup.

If you followed all the steps above you have customized versions of both the Cloud Scheduler and the CVMFS appliances running. You can now use the OpenStack dashboard to snapshot these two servers to save yourself the work of customizing them again.

Topic attachments:
- launch.png (57.9 K, 2013-05-28)
- select_key.png (23.6 K, 2013-05-28)
Topic revision: r19 - 2013-05-28 - crlb