Elastic Cloud Computing Cluster (EC3) is a tool to create elastic virtual clusters on top of Infrastructure as a Service (IaaS) providers, either public (such as Amazon Web Services, Google Cloud or Microsoft Azure) or on-premises (such as OpenNebula and OpenStack). We offer recipes to deploy TORQUE (optionally with MAUI), SLURM, SGE, HTCondor, Mesos, Nomad and Kubernetes clusters that can be self-managed with CLUES: the cluster starts with a single front-end node, working nodes are dynamically deployed and provisioned to fit the increasing load (i.e. the number of jobs at the LRMS), and they are undeployed again when they become idle. This results in a cost-efficient approach to cluster-based computing.
EC3 requires the following dependencies:

- PyYAML: usually available in distribution repositories (python-yaml in Debian; PyYAML in Red Hat; PyYAML in pip).
- PLY: usually available in distribution repositories (ply in pip).
- Requests: usually available in distribution repositories (requests in pip).
- jsonschema: usually available in distribution repositories (jsonschema in pip).
- sshpass: this command is required to provide the user with ssh access to the cluster.
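If some of these packages are not available in your distribution's repositories, the Python dependencies can be installed with pip once it is available (see below), and sshpass with your package manager. A minimal sketch for a Debian/Ubuntu system; note that installing the ec3-cli package with pip normally pulls in the Python dependencies automatically:

sudo pip3 install PyYAML ply requests jsonschema
sudo apt install -y sshpass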
As Python 2 is no longer supported, we recommend installing ec3 with Python 3.
First you need to install the pip tool. To install it on Debian- and Ubuntu-based distributions, run:
sudo apt update
sudo apt install -y python3-pip
On Red Hat-based distributions (RHEL, CentOS, Amazon Linux, Oracle Linux, Fedora, etc.), run:
sudo yum install -y epel-release
sudo yum install -y which python3-pip
Then you only have to install the ec3-cli package with pip:
sudo pip3 install ec3-cli
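To verify the installation, you can list the recipes bundled with EC3 using the templates subcommand (the same subcommand is used later in this guide):

ec3 templates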
You can also download the latest ec3 version from the git repository:
git clone https://github.com/grycap/ec3
Then you can install it by calling pip on the cloned ec3 directory:
sudo pip3 install ./ec3
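If you plan to modify the recipes or the code itself, you can instead use a standard pip editable install (the -e flag), so that the installed ec3 command follows your local checkout:

sudo pip3 install -e ./ec3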
Basic example with Amazon EC2
First create a file auth.txt with a single line like this:
id = provider ; type = EC2 ; username = <<Access Key ID>> ; password = <<Secret Access Key>>
Replace <<Access Key ID>> and <<Secret Access Key>> with the corresponding values for the AWS account where the cluster will be deployed. It is safer to use the credentials of an IAM user created within your AWS account.
This file is the authorization file (see Authorization file), and can have more than one set of credentials.
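For example, a file that combines the EC2 credentials with the entry commonly used to authenticate against an IM service could look like the following sketch (the InfrastructureManager line and its username/password are illustrative placeholders, not values you can use as-is):

type = InfrastructureManager ; username = imuser ; password = impass
id = provider ; type = EC2 ; username = <<Access Key ID>> ; password = <<Secret Access Key>>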
Now we are going to deploy a cluster in Amazon EC2 with a maximum of 10 nodes. The parameter that indicates the maximum size of the cluster is called ec3_max_instances, and it has to be set in the RADL file that describes the infrastructure to deploy; a sketch is shown below. In our case, we are going to use the ubuntu-ec2 recipe, available in our GitHub repository.
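To illustrate, the limit is placed in the description of the working nodes of the RADL recipe. The following is only a sketch: the resource sizes, region and AMI identifier are placeholders, not the actual contents of the ubuntu-ec2 recipe:

# sketch of a working-node description (placeholders, not the real recipe)
system wn (
    # upper bound on the number of working nodes EC3/CLUES may deploy
    ec3_max_instances = 10 and
    cpu.count >= 1 and
    memory.size >= 1024m and
    disk.0.os.name = 'linux' and
    # placeholder region and AMI identifier
    disk.0.image.url = 'aws://us-east-1/ami-0123456789abcdef0'
)

With this limit in place, the next command deploys a TORQUE cluster based on an Ubuntu image: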
$ ec3 launch mycluster torque ubuntu-ec2 -a auth.txt -y
WARNING: you are not using a secure connection and this can compromise the secrecy of the passwords and private keys available in the authorization file.
Infrastructure successfully created with ID: 60
Front-end state: running, IP: 22.214.171.124
If you deployed a local IM server, use the next command instead:
$ ec3 launch mycluster torque ubuntu-ec2 -a auth.txt -u http://localhost:8899
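If you do not have an IM server deployed yet, one quick option is to run one locally with the Docker image published by the IM developers; a sketch, assuming the grycap/im image and 8899 as the default port of the IM REST API:

$ sudo docker run -d -p 8899:8899 --name im grycap/im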
This can take several minutes. After that, open an ssh session to the front-end:
$ ec3 ssh mycluster
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-24-generic x86_64)
* Documentation: https://help.ubuntu.com/
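Once inside the front-end you can exercise the elasticity described above, for example by submitting a trivial job; these are standard TORQUE commands, shown here as a sketch of what you might run:

$ echo "sleep 60" | qsub    # submit a dummy job to the queue
$ qstat                     # the job waits while CLUES powers on a working node
$ pbsnodes -a               # list the working nodes registered in TORQUE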
You can also show basic information about the deployed clusters by executing:
$ ec3 list
name       state       IP             nodes
mycluster  configured  126.96.36.199  0
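For more detail on a particular cluster, EC3 also provides a show subcommand that prints the stored description of the cluster (a sketch; run ec3 --help to see the exact options available in your version):

$ ec3 show mycluster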
EC3 in Docker Hub

EC3 is also distributed as a Docker image. You can pull it from Docker Hub:

$ sudo docker pull grycap/ec3

or, alternatively, from the GitHub Container Registry:

$ sudo docker pull ghcr.io/grycap/ec3
You can exploit all the potential of EC3 as if you had downloaded the CLI and run it on your computer:
$ sudo docker run grycap/ec3 list
$ sudo docker run grycap/ec3 templates
To launch a cluster, you can use the recipes that you have locally by mounting the folder as a volume. It is also recommended to maintain the data of active clusters locally, by mounting a volume as follows:
$ sudo docker run -v /home/user/:/tmp/ -v /home/user/ec3/templates/:/etc/ec3/templates -v /home/user/.ec3/clusters:/root/.ec3/clusters grycap/ec3 launch mycluster torque ubuntu16 -a /tmp/auth.dat
Notice that you need to change the local paths to the paths where you store the auth file, the templates folder and the .ec3/clusters folder. Once the front-end is deployed and configured, you can connect to it by using:
$ sudo docker run -ti -v /home/user/.ec3/clusters:/root/.ec3/clusters grycap/ec3 ssh mycluster
Later on, when you need to destroy the cluster, you can type:
$ sudo docker run -ti -v /home/user/.ec3/clusters:/root/.ec3/clusters grycap/ec3 destroy mycluster