Monday, February 25, 2013

Tips: Source installation in your home directory

Hi, this is my first post. I'd like to share some tips on this blog.

The first tip is installing software from source. Sometimes I want to try a new version of a piece of software for fun or for study, and install it in my home directory. Source installation is basically 5 steps.

1. Download
 wget http://www.url.com/path/to/software.tar.gz
 or
 wget http://www.url.com/path/to/software.tar.bz2
2. Uncompress
 tar zxvf software.tar.gz
 or
 tar jxvf software.tar.bz2
3. Setup
 ./configure --prefix=/path/to/install
4. Build
 make
5. Install
 make install

So here's an example: installing the pre-release OpenMPI version 1.7rc6 in my home directory for use on bravo.

First, log in to india and submit an interactive job to the bravo queue.
 ssh myaccount@india.futuregrid.org
 qsub -I -l nodes=1:ppn=8 -q bravo
"-I" = interactive mode
"-l nodes=1:ppn=8" = reserve 1 node and 8 processors per node
"-q bravo" = submit a job on bravo cluster

Download OpenMPI 1.7rc6 from the website (http://www.open-mpi.org/software/ompi/v1.7/).
 wget http://www.open-mpi.org/software/ompi/v1.7/downloads/openmpi-1.7rc6.tar.bz2

Uncompress
 tar jxvf openmpi-1.7rc6.tar.bz2

Create a directory for the software, and set up the installation.
 mkdir -p /N/u/myaccount/opt/test
 cd openmpi-1.7rc6
 ./configure --help
 ./configure --prefix=/N/u/myaccount/opt/test/openmpi-1.7rc6

Build
 make
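Since the interactive job reserved 8 processors, the build can optionally be parallelized (assuming GNU make, which supports the -j flag):
 make -j 8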

Then, install
 make install

Installation is done.

This step is optional: I usually add the binary and library paths by putting these lines at the bottom of my .bashrc.
 # OpenMPI-1.7rc6
 export OPENMPI=/N/u/myaccount/opt/test/openmpi-1.7rc6
 export PATH=$OPENMPI/bin:$PATH
 export LD_LIBRARY_PATH=$OPENMPI/lib:$LD_LIBRARY_PATH
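After sourcing the updated .bashrc, a quick check (a minimal sketch, assuming the interactive bravo job is still open) confirms that the new installation is picked up first on the path and actually runs:
 source ~/.bashrc
 which mpicc
 mpirun --version
 mpirun -np 8 hostname
which mpicc should point into the new prefix, mpirun --version should report 1.7rc6, and the last command should print the node name 8 times.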

Now I have this new version of OpenMPI available just for me. So an MPI benchmark would be a good topic for the next post.

Tuesday, February 19, 2013

Announcing Nimbus Phantom Alpha on FutureGrid


We recently released Nimbus Phantom as a hosted service running on FutureGrid.

Nimbus Phantom is a hosted service, running on FutureGrid, that makes it easy to leverage on-demand resources provided by infrastructure clouds. Phantom allows the user to deploy a set of virtual machines over multiple private, community, and commercial clouds and then automatically grows or shrinks this set based on policies defined by the user. This elastic set of virtual machines can then be used to implement scalable and highly available services. An example of such a service is a caching service that stands up more workers on more resources as the number of requests to the service increases. Another example is a scheduler that grows its set of resources as demand grows.

Currently Phantom works with all FutureGrid Nimbus and OpenStack clouds as well as Amazon and the XSEDE wispy cloud (the only XSEDE cloud for now). A user can access it via two types of clients: an easy-to-use web application and a scripting client. The scripting client is the boto autoscale client, as Phantom currently implements the Amazon Autoscaling API – so you can think of it as Amazon Autoscale for FutureGrid clouds that also allows for cloudbursting to XSEDE and commercial clouds and is easy to extend with your own policies and sensors.

The Nimbus Phantom web interface in action
The simplest scenario for using Phantom is as a gateway for deploying and monitoring groups of virtual machines spread over multiple FutureGrid clouds. In a more complex scenario you can use it to cloudburst from FutureGrid clouds to Amazon. Finally, you can use it to explore policies that will automate cloudbursting and VM allocations between multiple clouds.

For more information and/or to use the service, go to www.nimbusproject.org/phantom. It should take no more than 10-15 minutes to start your own VMs. Happy scaling, and we would appreciate your feedback!

Monday, February 4, 2013

CM - Cloud mesh, a simple tool to manage multiple virtual machines


Although the euca2ools and nova clients provide a simple interface to IaaS frameworks, they do not provide the convenience I needed to start, manage, and experiment with many virtual machines in parallel.

Hence I wrote a little tool for FutureGrid called "cloud mesh", or cm for short. A small video about some of its functionality is shown here:

The code and installation instructions can be found here:

FutureGrid and ConPaaS

I am visiting Vrije Universiteit in Amsterdam for a sabbatical, and a major collaboration has been with the ConPaaS team led by Thilo Kielmann and Guillaume Pierre (who is now at Rennes). One of the main activities in our collaboration has been the integration of technologies that are part of the FutureGrid software stack - the IPOP virtual network and the Grid appliance "bag-of-tasks" middleware - with the ConPaaS Platform-as-a-Service system. This has been a very interesting activity, as I get a chance to learn more about various PaaS technologies and to see first-hand the benefits that the IPOP/Grid appliance technologies can bring to other projects.

First, a bit of background. IPOP is an "IP-over-P2P" overlay that self-organizes virtual private networks across the Internet; the Grid appliance is a virtual machine appliance that encapsulates IPOP and high-throughput computing middleware (HTCondor) to enable easy-to-deploy, self-configuring virtual clusters for "bag-of-tasks" applications. FutureGrid users can run Grid appliances across its infrastructure for research and educational projects (check out our tutorials for details on how to run the Grid appliance on FutureGrid). ConPaaS is part of the EU Contrail project; it provides a runtime environment to easily run applications on the cloud by deploying full-fledged platforms as a service. Its applications include Web hosting, task farming, and Map/Reduce distributed computing.

What IPOP can bring to ConPaaS is the ability to create virtual private clouds that span multiple providers, as well as traversal of network address translation (NAT) devices and firewalls. Essentially, IPOP-enabled ConPaaS services can communicate over a VPN that is decoupled from the public Internet. The concepts behind the Grid appliance are applicable to one of the services that ConPaaS offers - task farming. By encapsulating the IPOP virtual network in virtual machine images, a task-farming service can transparently aggregate resources across multiple cloud providers into a single virtual resource pool that is scheduled by HTC middleware - for independent tasks as well as for workflows.

So, there is a nice synergy between these systems, but getting complex software systems to work together is where the rubber meets the road. As a first concrete step in the integration, we had a productive hands-on meeting this month, where we installed and tested IPOP in the ConPaaS image, shook out a couple of bugs and configuration issues, and worked out a path towards integrating IPOP as a virtual network service that a ConPaaS user can optionally enable. The initial tests went well, and we were able to connect ConPaaS VMs running across Amazon EC2 and the DAS cluster at Vrije.

We hope to have the integration ready by the next release of ConPaaS. In the meantime, to celebrate the release of version 1.1, the ConPaaS team is showing that we "eat our own dog food": the project's web site, www.conpaas.eu, is now hosted on ConPaaS itself. Cheers!