
Are Qarnot heaters as efficient as traditional HPC machines?

Qarnot: HPC Everywhere?

 

Cloud computing is changing the way we execute intensive computations. It could prove to be a very interesting alternative to a classical supercomputing approach.

Qarnot offers a particularly innovative HPC cloud solution: its infrastructure is based on “computer heaters” installed in offices, social housing, and other buildings, which reuse the heat generated by their microprocessors.

With this approach, the compute services operated by Qarnot have a carbon footprint reduced by 75%.

As a partner of Qarnot, Aneo has led a study on the performance of this solution for the processing of typical HPC workloads.

In this article, we will first present Qarnot’s infrastructure and features and then the results of our analysis.

 

Infrastructure

 

Qarnot’s main offering is the QH-1 (formerly Q.rad) heater. Installed in individual homes or in offices, a QH-1 contains 4 Intel Core i7 processors (4 cores each, between 3.5 and 4 GHz, Ivy Bridge or Haswell architecture) and 16 GB of RAM. QH-1s have no local storage but can access disks located in the same building. The Qarnot platform currently totals about 5,000 cores and will grow to 12,000 by the end of 2018.

Qarnot is particularly well suited to batch HPC processing and to so-called “embarrassingly parallel” applications, i.e. applications with little or no dependency or communication between tasks, such as image processing or financial calculations, within the limits of the available memory.
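As a minimal, Qarnot-agnostic illustration of this pattern, each input can be handled as a fully independent task, for example with Python’s standard multiprocessing module (the process_image function is a hypothetical placeholder):

```python
# Minimal sketch of an "embarrassingly parallel" batch: every input is
# processed independently, with no communication between tasks.
from multiprocessing import Pool

def process_image(path):
    # Hypothetical placeholder for one independent unit of work,
    # e.g. filtering a single image or pricing one financial instrument.
    return f"processed {path}"

if __name__ == "__main__":
    inputs = [f"frame_{i:04d}.png" for i in range(100)]
    with Pool() as pool:                      # workers run the tasks in parallel
        results = pool.map(process_image, inputs)
    print(len(results), "independent tasks completed")
```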

Qarnot is also developing solutions to access interconnected nodes using MPI:

  • A “QH-1” profile, made of standard QH-1s that are interconnected but with no guarantee on their geographic location. Network performance on these nodes is variable (latency in the 1 ms range) and generally not suited to low-latency, high-throughput, data-intensive applications.
  • An “O.mar” profile, based on a dedicated infrastructure where nodes are physically close and better connected (1 Gbps Ethernet, 20 µs latency). These nodes are much better suited to HPC-type workloads. An O.mar currently holds at most 64 nodes (256 cores), but this should change soon.

Interface and API

 

Qarnot offers different “profiles” depending on the intended use, ranging from SaaS (Software as a Service) to PaaS (Platform as a Service). Nodes provide a Docker environment so that calculations run in an isolated “container”.

In this study, we used a specific profile created by Qarnot that allows nodes to be interconnected with MPI.

Qarnot also provides a REST API and a Python SDK to facilitate the allocation of nodes and disks and to specify job parameters (input and output files, etc.).
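The snippet below is a minimal sketch of what a submission script might look like with such a Python SDK; the class, method, and profile names used here (Connection, create_task, submit, the “docker-batch” profile) are illustrative assumptions rather than verified API:

```python
# Hypothetical sketch of a job submission through Qarnot's Python SDK.
# The names below (Connection, create_task, submit, wait, download_results,
# "docker-batch") are assumptions for illustration, not verified API.
import qarnot

conn = qarnot.Connection(client_token="MY_API_TOKEN")    # authenticate against the REST API
task = conn.create_task("sewas-run", "docker-batch", 4)  # task name, profile, number of nodes
task.constants["DOCKER_CMD"] = "./run_simulation.sh"     # command executed inside the container
task.submit()                                            # allocate nodes and start the job
task.wait()                                              # block until the job completes
task.download_results("./output")                        # retrieve the output files
```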

 

The workflow and billing can be monitored in real time through a web interface.

Benchmarks

 

Network performance is an essential criterion for HPC. This is all the more important here because Qarnot devices pack relatively few cores, so a large number of nodes is needed.

To this end, we measured the performance of a communication-intensive distributed application on both profiles offered by Qarnot: QH-1 and O.mar.

The measurements are first run on a single node to evaluate the raw performance of the machines, then on multiple nodes (using MPI) to evaluate network capacity. Each test uses one MPI process per node and as many threads as there are physical cores.

The code used in our benchmarks is SeWaS (Seismic Wave Simulator), an application developed by Aneo that simulates the propagation of seismic waves, inspired by another application used by engineers at the BRGM (Bureau de recherches géologiques et minières). A key characteristic of SeWaS is that it exchanges data between neighbouring cells at every iteration, making the application very sensitive to network latency.

SeWaS is implemented following a task-based model on top of the PaRSEC runtime (Parallel Runtime Scheduling and Execution Controller). This framework schedules a task graph on distributed-memory architectures and can automatically overlap communications with computations.
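PaRSEC itself is not shown here, but the general idea of overlapping communications with computations can be sketched with non-blocking MPI calls; the toy halo exchange below uses mpi4py and NumPy purely for illustration and is not SeWaS code:

```python
# Toy halo exchange overlapped with computation (mpi4py + NumPy).
# Illustrates the overlap technique that the PaRSEC runtime automates;
# this is not SeWaS code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

cells = np.random.rand(1_000_000)        # local slice of the simulation domain
halo = np.empty(1, dtype=cells.dtype)    # ghost cell expected from the left neighbour

# Start exchanging boundary cells without waiting for completion...
requests = [comm.Isend(cells[-1:], dest=right),
            comm.Irecv(halo, source=left)]

interior = np.sin(cells[1:-1]).sum()     # ...and compute on interior cells in the meantime

MPI.Request.Waitall(requests)            # the halo must have arrived before it is used
edges = np.sin(cells[:1]).sum() + np.sin(cells[-1:]).sum() + np.sin(halo).sum()
print(f"rank {rank}: partial result {interior + edges:.3f}")
```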

 

1. Mono-node performance

We start by comparing a Qarnot node with a standard HPC socket, on a simple test case (16 million cells and 100 time steps).

Results are presented in millions of cells processed per second (Mcells/sec).
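For reference, such a throughput figure can be derived from the wall-clock time of a run; the convention assumed below (cell updates, i.e. cells × time steps, per second) and the timing value are illustrative only:

```python
# Illustrative Mcells/sec computation; the exact convention used in the
# article is not stated, and the elapsed time below is a made-up value.
n_cells = 16_000_000   # simple test case: 16 million cells
n_steps = 100          # 100 time steps
elapsed_s = 40.0       # hypothetical wall-clock time in seconds

mcells_per_sec = n_cells * n_steps / elapsed_s / 1e6
print(f"{mcells_per_sec:.1f} Mcells/sec")   # -> 40.0 Mcells/sec for these numbers
```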

  • For the same number of cores, the performance of a Qarnot node is comparable to that of an HPC node of the same generation (Haswell). On a single node, SeWaS is limited by memory bandwidth, so it is not surprising to observe a factor of about 2 between the two despite the slightly higher clock frequency of the Qarnot nodes.
  • On a single task, the standard QH-1 profile delivers the same performance as the O.mar profile, which is expected since the CPU specifications are identical.

 

2. Inter-node performance

To assess the network’s performance in more detail, we measured both strong scaling and weak scaling. Strong scaling tracks how performance evolves as the number of nodes increases for a fixed problem size. Weak scaling increases the problem size along with the node count; in our case, the number of cells in the test case grows proportionally to the number of nodes. A sketch of how both efficiencies can be computed is given below.
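The sketch below shows how the two efficiencies are typically computed from measured run times; the timing values are made-up examples, not measurements from this study:

```python
# Strong- and weak-scaling efficiency from run times (illustrative values only).

def strong_scaling_efficiency(t1, tn, n):
    """Fixed problem size: the ideal time on n nodes is t1 / n."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Problem size grows with n: the ideal time stays equal to t1."""
    return t1 / tn

t1 = 120.0  # hypothetical time on 1 node, in seconds
print(strong_scaling_efficiency(t1, tn=9.0, n=16))   # 0.83 -> ~83% efficiency
print(weak_scaling_efficiency(t1, tn=130.0))         # 0.92 -> ~92% efficiency
```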

 

Strong scaling

 

The following results were obtained on a medium test case sized for 64 cores (92 million cells, 100 time steps), involving intensive computations and communications.

With the O.mar profile, the results are quite convincing at scale. The performance on 2 nodes is comparable to that of a Xeon.

With the standard profile, the high latency proves to be a major issue: the total time actually increases with the number of nodes. This confirms the benefit of using the O.mar profile for this kind of workload.

 

Weak scaling

 

If the scalability were perfect, the curve would be a straight line. Our study shows that the efficiency is good: above 90% up to 16 nodes and above 80% up to 48 nodes.

The final performance ratio relative to the Xeon node is presented below:

Conclusion

 

The study shows that the Qarnot approach is an interesting solution for HPC workloads, provided that:

  • The scale requirements match the Qarnot infrastructure
  • The application’s requirements fit within the platform’s limitations (memory)
  • The inter-task communication requirements stay within the limits of the networking technology (1 Gbps interconnect for O.mar). In that case, it is possible to reach an efficiency of about 90%.

However, note that access to MPI-interconnected nodes is currently not a public feature; the company offers it on a case-by-case basis depending on the client’s projects and needs.

In addition, Qarnot is working on new solutions such as a “computing boiler”, which should bring more possibilities in terms of hardware (600 AMD Ryzen 7 cores) and network configuration (InfiniBand). Qarnot already offers this type of hardware through its partner Asperitas and will launch a first prototype in December 2018.

 

[Click here for the French version!]

 

Credit: Louis STUBER

 
