distributed computing

Distributed computing on clusters of workstations, server farms, and grids
Higher Edu - Research dev card
Development from the higher education and research community
  • Creation or important update: 02/08/12
  • Minor correction: 02/08/12

StratusLab : complete IaaS cloud distribution

This software was developed (or is under development) within the higher education and research community. Its stability can vary (see fields below) and its working state is not guaranteed.
  • Web site
  • System:
  • Current version: v2.0 - 25 June 2012
  • License(s): Other - Apache-2, AGPL
  • Status: stable release
  • Support: maintained, ongoing development
  • Designer(s): StratusLab Collaboration (CNRS, UCM, GRNET, SixSq, TID, and TCD)
  • Contact designer(s): support@stratuslab.eu
  • Laboratory, service: Universidad Complutense de Madrid (Madrid, Spain), GRNET (Athens, Greece), SixSq (Geneva, Switzerland), Telefónica I+D (Madrid, Spain), Trinity College Dublin (Dublin, Ireland)

 

General software features

The distribution contains everything necessary to deploy an Infrastructure-as-a-Service (IaaS) cloud: network, storage, and virtual machine management. Moreover, it provides innovative features such as the Marketplace, which facilitates the sharing of virtual appliances, service management that allows deployment and autoscaling of multi-machine services, and support for multi-cloud scenarios. The distribution supports multiple operating systems (CentOS 6.2, Fedora 16, and OpenSuSE 12.1) and is suitable for both public and private cloud deployments. The StratusLab client, written in Python, provides a simple command line interface for accessing StratusLab cloud infrastructures from GNU/Linux, Mac OS X, and Windows machines.

Source code for the distribution can be found on GitHub.
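
To give a feel for the client, here is a minimal sketch of driving it from a Python script. The command names (stratus-run-instance, stratus-describe-instance) and the Marketplace identifier are assumptions made for illustration and should be checked against the release documentation.

    # Hypothetical sketch: calling the StratusLab command-line client from Python.
    # The command names and the Marketplace image identifier are assumptions,
    # not a verified API; adapt them to the installed release.
    import subprocess

    IMAGE_ID = "EXAMPLE_MARKETPLACE_ID"  # placeholder Marketplace identifier

    # Start one virtual machine from a Marketplace image (assumed command name).
    subprocess.run(["stratus-run-instance", IMAGE_ID], check=True)

    # List the instances visible to the current credentials (assumed command name).
    subprocess.run(["stratus-describe-instance"], check=True)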

Context in which the software is used

StratusLab is used at LAL to provide one of the StratusLab reference cloud infrastructures, a public cloud open to anyone for non-commercial use. The other StratusLab reference cloud is operated by GRNET in Greece. LAL also operates a second, private cloud infrastructure for deployment of laboratory services; existing services are gradually being migrated to this cloud infrastructure.

CNRS/IBCP operates a StratusLab cloud to support bioinformatics research and services. This public cloud infrastructure is available to users of the ReNaBi network. A portal, customized for bioinformatics users, facilitates use of the IBCP cloud and simplifies access to relevant virtual appliances and databases.

There are also commercial deployments of the StratusLab cloud distribution that support software engineering processes (such as deployment of ESA's SCOS-2000 platform) and scientific use (like the Atos Helix Nebula cloud infrastructure).

Publications related to the software

All of the project's publications are available from the project web site.

The project's chapter in the book "European Research Activities in Cloud Computing" gives a general description of the StratusLab project and its cloud distribution.

Higher Edu - Research dev card
Development from the higher education and research community
  • Creation or important update: 03/01/12
  • Minor correction: 03/01/12

XtremWeb-HEP : middleware for distributed data processing

This software was developed (or is under development) within the higher education and research community. Its stability can vary (see fields below) and its working state is not guaranteed.
  • Web site
  • System:
  • Current version: 7.6.4 - 12/12/2011
  • License(s): GPL
  • Status: validated (according to PLUME)
  • Support: maintained, ongoing development
  • Designer(s): Oleg LODYGENSKY
  • Contact designer(s): xtremweb (at) lal.in2p3.fr
  • Laboratory, service:

 

General software features
  • XtremWeb-HEP is a middleware for Distributed Data Processing (grids):
    –  It permits Administrators to:
        - manage Users and Applications, by granting them appropriate access rights,
        - catalog Data and Computing Resources:
           · PC farms managed by an IT department,
           · PC grids contributed by volunteer citizens,
    –  It permits Users to submit Jobs referencing these Applications,
    –  From the Job descriptions, it dynamically deploys and executes the Applications on available Computing Resources, then provides the results to authorized Users,
    –  It protects Computing Resources running Mac OS X by starting the Application inside the Mac OS X sandbox,
    –  For data access, it supports HTTP, HTTPS, and any URI scheme for which the User provides a driver.

Secure three-tier architecture: the scheduler and data repository are managed by a software administrator on a server; a client is installed on each User's machine (e.g. a scientist's workstation); a worker is installed on each contributor's resource.
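
The sketch below is a toy Python model of this three-tier flow (a client submits, the scheduler queues, a worker executes, and only the submitting user may read the result). All names in it are hypothetical illustrations; it is not the XtremWeb-HEP API.

    # Toy model of the client / scheduler / worker tiers described above.
    # Purely illustrative: class and method names are hypothetical, not XtremWeb-HEP's.
    from collections import deque

    class Scheduler:
        """Server tier: holds the job queue and the collected results."""
        def __init__(self):
            self.jobs = deque()
            self.results = {}
            self.counter = 0

        def submit(self, owner, app, args):
            self.counter += 1
            self.jobs.append((self.counter, owner, app, args))
            return self.counter

        def pull(self):
            # Called by a worker to fetch the next pending job, if any.
            return self.jobs.popleft() if self.jobs else None

        def push_result(self, job_id, owner, value):
            self.results[job_id] = (owner, value)

        def fetch(self, job_id, requester):
            owner, value = self.results[job_id]
            if requester != owner:              # only the submitting user may read it
                raise PermissionError("not authorized")
            return value

    def worker(scheduler):
        """Contributor tier: executes whatever job the scheduler hands out."""
        job = scheduler.pull()
        if job is not None:
            job_id, owner, app, args = job
            scheduler.push_result(job_id, owner, app(*args))

    # Client tier: a user submits a job and later retrieves its result.
    sched = Scheduler()
    jid = sched.submit("alice", pow, (2, 16))
    worker(sched)                               # a volunteer resource runs the job
    print(sched.fetch(jid, "alice"))            # -> 65536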

  • A version of XtremWeb-HEP that additionally manages the submission of complete virtual machines for execution inside VirtualBox will soon enter production.

  • Interoperability with other grid middleware stacks:
    –  XtremWeb-HEP accepts X509 certificates and proxies for user management, in particular those of the DEGISCO international project.
    –  XtremWeb-HEP integrates a bridge permitting suitable XtremWeb-HEP jobs to be accepted by the gLite middleware in order to be executed by the EGI European infrastructure.
    –  Conversely, thanks to the 3G Bridge of the EDGI European project, the resources gathered by XtremWeb-HEP are available to the many users of the EGI infrastructure (gLite, ARC, and Unicore middleware stacks).

  • Domain, infrastructures, documentation, and maintenance:
    –  In spite of its name, XtremWeb-HEP is used well beyond High Energy Physics: Biology, DNA Research, Mathematics, Solid-State Physics, Signal Processing.
    –  XtremWeb-HEP powers at least two production grids (for each grid, see the 'Statistics' page):
        - http://www.xtremweb-hep.org/lal/xw_lal/
        - http://xw.lri.fr:4330/XWHEP
    –  XtremWeb-HEP has a complete, up-to-date set of user manuals, available at http://www.xtremweb-hep.org/spip.php?rubrique16
    –  XtremWeb-HEP is maintained by the software team presented at http://www.xtremweb-hep.org/spip.php?rubrique35 and is strongly supported by the Institut des Grilles et du Cloud, INRIA, ENS Lyon, and GRID5000.

Context in which the software is used
  • Distributed Data Processing
  • Distributed Computing
  • Resource Sharing
  • Computing Grid (PC Grid)
  • Job Submission
Publications related to the software
  • Hybrid Distributed Computing Infrastructure Experiments in Grid5000 : Supporting QoS in Desktop Grids with Cloud Resources   http://users.lal.in2p3.fr/lodygens/gc/g5k.pdf
    G. Fedak, S. Delamare, O. Lodygensky.   Grid 5000 School, Reims, France - April 18-21, 2011

  • Extending the EGEE grid with XtremWeb-HEP Desktop Grid   http://users.lal.in2p3.fr/lodygens/gc/PCGrid2010.pdf
    H. He, G. Fedak, P. Kacsuk, Z. Farkas, Z. Balaton, O. Lodygensky, E. Urbah, G. Caillat, F. Aurajo, A. Emmen.   4th Workshop on Desktop Grids and Volunteer Computing Systems, Melbourne, Australia - May 17-20, 2010

  • EDGeS : Bridging EGEE to BOINC and XtremWeb   http://users.lal.in2p3.fr/lodygens/gc/EDGeS-Bridgi...
    E. Urbah, P. Kacsuk, Z. Farkas, G. Fedak, G. Kecskemeti, O. Lodygensky, A. Marosi, Z. Balaton, G. Caillat, G. Gombas, A. Kornafeld, J. Kovacs, H. He, and R. Lovas.   Journal of Grid Computing, Volume 7, Number 3, 2009.

Higher Edu - Research dev card
Development from the higher education and research community
  • Creation or important update: 16/08/10
  • Minor correction: 01/06/11

CiGri : lightweight computing grid

This software was developed (or is under development) within the higher education and research community. Its stability can vary (see fields below) and its working state is not guaranteed.
  • Web site
  • System:
  • Current version: 1.3 - August 2009
  • License(s): GPL - v2
  • Status: validated (according to PLUME), stable release, under development
  • Support: maintained, ongoing development
  • Designer(s): Bruno Bzeznik, Nicolas Capit, Olivier Richard, Elton Nicoletti Mathias, Yiannis Georgiou, and various contributors (internships, Google Summer of Code)
  • Contact designer(s): Bruno.Bzeznik@imag.fr
  • Laboratory, service: CIMENT (University of Grenoble Computing center)

 

General software features

The CiGri software makes it possible to set up a grid centre that exploits a pre-existing set of supercomputers. It specialises in the management of "bag-of-tasks" jobs: it gathers unused computing resources from an intranet infrastructure and makes them available for large sets of tasks.
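
As a rough illustration of the bag-of-tasks idea, the Python sketch below fills the idle slots of a few clusters from a single bag of independent tasks. The cluster names and free-slot counts are invented; this is only a conceptual picture, not CiGri code.

    # Conceptual sketch: placing a bag of independent tasks on whatever
    # resources happen to be idle. Names and numbers are invented.
    bag = [("task", i) for i in range(10)]             # 10 independent tasks
    idle_slots = {"cluster-a": 3, "cluster-b": 2}      # hypothetical free resources

    assignments = {name: [] for name in idle_slots}
    while bag and any(idle_slots.values()):
        for name in idle_slots:
            if bag and idle_slots[name] > 0:
                assignments[name].append(bag.pop())
                idle_slots[name] -= 1

    print(assignments)   # placed tasks; anything left in the bag waits for free slots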

More information (in French) is available in the PLUME software card (fiche logiciel).

Context in which the software is used

The CiGri software has been in use at the Joseph Fourier University of Grenoble computing centre (CIMENT) since 2002.

Publications related to the software
  • Yiannis Georgiou, Olivier Richard, and Nicolas Capit.
    Evaluations of the lightweight grid cigri upon the grid5000 platform. In E-SCIENCE '07: Proceedings of the Third IEEE International Conference on e-Science and Grid Computing, pages 279-286, Washington, DC, USA, 2007. IEEE Computer Society.
  • Yiannis Georgiou, Nicolas Capit, Bruno Bzeznik, and Olivier Richard.
    Simple, fault tolerant, lightweight grid computing approach for bag-of-tasks applications. 3rd EGEE User Forum, 2008.
    http://indico.cern.ch/contributionDisplay.py?contr....
  • Yvan Calas, Nicolas Capit, and Estelle Gabarron.
    Cigri : Expériences autour de l’exploitation d’une grille légère. JRES, 2005.
    http://2005.jres.org/paper/90.pdf.
  • F. Dupros, F. Boulahya, J. Vairon, P. Lombard, N. Capit, and J-F. Méhaut.
    Iggi, a computing framework for large scale parametric simulations: Application to uncertainty analysis with toughreact. TOUGH Symposium, 2006.
    http://esd.lbl.gov/TOUGHsymposium/pdf/Dupros_IGGI.pdf.
  • J. Aoun, V. Breton, L. Desbat, B. Bzeznik, M. Leabadand, and J. Dimastromatteo.
    Validation of the Small Animal Biospace Gamma Imager Model Using GATE Monte Carlo Simulations on the Grid. In J. Montagnat, S. D. Olabarriaga, and D. Lingrand, editors, Proceedings of the MICCAI-Grid Workshop "Medical imaging on grids: achievements and perspectives", New York, United States, 2008.
Higher Edu - Research dev card
Development from the higher education and research community
  • Creation or important update: 17/05/10
  • Minor correction: 17/05/10

Vador : Vlasov approximation

This software was developed (or is under development) within the higher education and research community. Its stability can vary (see fields below) and its working state is not guaranteed.
  • Web site
  • System:
  • Current version: version 4.1 - 01/10/08
  • License(s): GPL
  • Status: beta release
  • Support: maintained, ongoing development
  • Designer(s): Francis Filbet (with E. Sonnendrucker)
  • Contact designer(s): filbet@math.univ-lyon1.fr
  • Laboratory, service:

 

General software features

In general, Particle-In-Cell (PIC) methods have proven to be a very efficient tool for the numerical simulation of charged-particle systems. Their main advantage is that they give very good results for the low-order moments of primary interest in particle beam transport with a fairly small number of particles, and thus make it possible to follow a particle beam for a long time. However, in some cases one may be interested in more detailed collective wave phenomena, which may occur over shorter time scales. In that case, the statistical noise inherent in the PIC method can make it difficult to obtain an accurate description of the phenomenon. A better option for such problems can be to use Vlasov solvers, which discretize the full phase space on a multi-dimensional grid. This approach is intrinsically free of statistical noise, and numerical errors arise only from the discretization of the Vlasov equation on the phase-space grid.

Context in which the software is used

The Vlasov equation describes the evolution of a system of particles under the effect of self-consistent electromagnetic fields. The unknown f(t,x,v) depends on time t, position x, and velocity v. It represents the distribution function of particles (electrons, protons, ions, ...) in phase space. This model can be used to study beam propagation or a collisionless plasma.
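
For reference, in its standard form the Vlasov equation for the distribution function f(t,x,v), with particle charge q, mass m, and self-consistent fields E and B, reads:

    \frac{\partial f}{\partial t} + v \cdot \nabla_x f + \frac{q}{m}\,(E + v \times B) \cdot \nabla_v f = 0

In the electrostatic (Vlasov-Poisson) case relevant to beam propagation, B is dropped and E derives from the potential given by Poisson's equation.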

Publications related to the software
  • Convergence of a Finite Volume Scheme for the One Dimensional Vlasov-Poisson System, SIAM J. Numer. Analysis, 39 (2001), no. 4, 1146--1169.
  • Conservative Numerical Schemes for the Vlasov Equation, J. Comput. Physics 172 (2001), no. 1, 166--187, with E. Sonnendrucker and P. Bertrand.
  • Comparison of Eulerian Vlasov Solvers, Comput. Phys. Communications, 150 (2003), no. 3, 247--266, with E. Sonnendrucker.
Higher Edu - Research dev card
Development from the higher education and research community
  • Creation or important update: 17/05/10
  • Minor correction: 17/05/10

Fast Boltzmann : solving the Boltzmann equation in N log N

This software was developed (or is under development) within the higher education and research community. Its stability can vary (see fields below) and its working state is not guaranteed.
  • Web site
  • System:
  • Current version: 1.0 - 01/10/07
  • License(s): GPL
  • Status: under development
  • Support: maintained, ongoing development
  • Designer(s): Francis Filbet
  • Contact designer(s): filbet@math.univ-lyon1.fr
  • Laboratory, service:

 

General software features

We propose fast deterministic algorithms based on spectral methods derived for the Boltzmann collision operator, for a class of interactions including the hard-sphere model in dimension 3. These algorithms are implemented for the solution of the Boltzmann equation in dimensions 2 and 3, first for homogeneous solutions, then for general non-homogeneous solutions. The results are compared to explicit solutions, when available, and to Monte Carlo methods.
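
For context, the space non-homogeneous Boltzmann equation targeted here can be written, for the distribution function f(t,x,v), as:

    \frac{\partial f}{\partial t} + v \cdot \nabla_x f = Q(f, f)

where Q(f,f) is the quadratic collision operator; the N log(N) cost quoted in the title refers to the fast spectral evaluation of this operator.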

Context in which the software is used

The construction of approximate methods of solution for the Boltzmann equation has a long history, tracing back to D. Hilbert, S. Chapman, and D. Enskog [see Cercignani's book] at the beginning of the last century. The mathematical difficulties associated with the Boltzmann equation make it extremely difficult, if not impossible, to determine analytic solutions in most physically relevant situations.

Publications related to the software
  • High order Numerical Methods for the Space Non Homogeneous Boltzmann Equation, J. Comput. Physics, 186 (2003), no. 2, 457--480, with G. Russo.
  • Solving the Boltzmann Equation in N log(N) with Deterministic Methods, SIAM J. Scientific Computing, vol. 28, no. 3 (2006) 1029--1053 with C. Mouhot and L. Pareschi.
Higher Edu - Research dev card
Development from the higher education and research community
  • Creation or important update: 05/04/10
  • Minor correction: 04/04/13

Paraloop : distributing parallel jobs

This software was developed (or is under development) within the higher education and research community. Its stability can vary (see fields below) and its working state is not guaranteed.
  • Web site
  • System:
  • Current version: 1.3 - September 2008
  • License(s): CeCILL
  • Status: stable release
  • Support: maintained, ongoing development
  • Designer(s): Emmanuel COURCELLE
  • Contact designer(s): emmanuel.courcelle at toulouse.inra.fr
  • Laboratory, service: Service Bioinformatique

 

General software features

Paraloop distributes your jobs over several processors, independently of the underlying architecture: it may be a single shared-memory SMP computer, a cluster, or even a network of workstations.

Paraloop is best suited to use cases with a large number of independent tasks to execute, as is often the case in the data-processing pipelines found in bioinformatics projects.

Paraloop is a tool for programmers: it lets them distribute their jobs easily while using the same script whatever machine they run on. It is an object-oriented Perl program: data treatment is wrapped inside one object (called a "plugin"), and the code responsible for interacting with the machine is wrapped inside another (called a "scheduler"). Adapting Paraloop to a new architecture is therefore relatively easy: it just means writing a new scheduler (in fact, only a few methods). The same holds for plugins, which read and treat data in a particular format.
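
To make the plugin/scheduler separation concrete, here is a minimal sketch written in Python rather than Paraloop's Perl; all class names are hypothetical and only mirror the idea that data treatment and machine interaction live in two separate objects.

    # Minimal sketch of the plugin/scheduler separation described above.
    # Python is used for brevity (Paraloop itself is written in Perl); the
    # class names are hypothetical and do not match Paraloop's actual code.

    class TextFilePlugin:
        """'Plugin' role: knows how to read and treat one kind of data."""
        def __init__(self, path):
            self.path = path

        def tasks(self):
            with open(self.path) as fh:
                for line in fh:
                    yield line.strip()

        def process(self, task):
            return task.upper()        # placeholder data treatment

    class LocalScheduler:
        """'Scheduler' role: knows how to run the work on a given machine."""
        def run(self, plugin):
            return [plugin.process(t) for t in plugin.tasks()]

    # Supporting a new architecture means writing another scheduler class
    # (e.g. one that submits to a batch queue) while reusing the same plugin.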

A few plugins are delivered with Paraloop: some are specific to bioinformatics (one of them runs BLAST in parallel, for instance), while others are completely generic (reading a text file, ...). Writing plugins dedicated to other fields would nevertheless be a quite useful contribution.

When used with a batch queue that limits CPU time per job, Paraloop can be configured so that the current job is interrupted before the system kills it; the job is resubmitted to the queue just before the interruption, so that it resumes as soon as the system allows.

Paraloop also includes a command that prints a progress report for each job.

Finally, a "load balancing mode" is available: it may be used to insure that all the jobs take approximately the same time to execute.

Context in which the software is used

We currently use Paraloop for our bioinformatics computations, on both SMP servers and compute clusters. Paraloop is also integrated into our bioinformatics projects (eugene, LeARN, Narcisse, ...).

Publications related to the software

Paraloop was described in a poster session at JRES 2005 (http://2005.jres.org/resume/poster/138.pdf) and at JOBIM 2005 (http://pbil.univ-lyon1.fr/events/jobim2005/proceedings/P64Courcelle.pdf).

Paraloop is currently hosted on the SourceSup forge: http://sourcesup.cru.fr/projects/paraloop
