
The First Exascale Computer

Exascale computing refers to supercomputers capable of at least one exaFLOPS – a billion billion calculations per second. Such capacity represents a thousandfold increase over the first petascale computer, which came into operation in 2008. (One exaflops is a thousand petaflops, or a quintillion – 10^18 – floating-point operations per second.)

Exascale computing is considered potentially the most significant achievement in computer engineering, as it is believed to match the order of processing power of the human brain at the neural level (the functional level might be lower). It is, for instance, the target power of the Human Brain Project.

Titan, Oak Ridge National Laboratory: 20+ petaflops; 299,008 Opteron cores and 18,688 NVIDIA GPUs; more than 20,000,000,000,000,000 floating-point operations per second.

Currently, the fastest systems in the world perform between 10 and 33 petaflops – ten to 33 million billion calculations per second – roughly one to three percent of exascale speed. Put into context, if exascale computing is the equivalent of an automobile reaching 1,000 miles per hour, today’s fastest systems are running at between ten and 33 miles per hour.
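As a back-of-the-envelope check on those figures, here is a minimal sketch in plain C. The petaflops values are the ones quoted above, hard-coded as assumptions; the sketch walks the FLOPS unit ladder and reproduces both the one-to-three-percent figure and the miles-per-hour analogy.

```c
#include <stdio.h>

int main(void) {
    /* FLOPS unit ladder: 1 exaflops = 1,000 petaflops = 10^18 flop/s. */
    const double petaflops = 1e15;
    const double exaflops  = 1e18;

    /* Speeds quoted in the article above, taken as given. */
    const double slow_top = 10.0  * petaflops;
    const double fast_top = 33.86 * petaflops;

    printf("1 exaflops = %.0f petaflops\n", exaflops / petaflops);
    printf("10.00 PF = %.1f%% of exascale (~%.0f of 1000 mph)\n",
           100.0 * slow_top / exaflops, 1000.0 * slow_top / exaflops);
    printf("33.86 PF = %.1f%% of exascale (~%.0f of 1000 mph)\n",
           100.0 * fast_top / exaflops, 1000.0 * fast_top / exaflops);
    return 0;
}
```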

The History of the Exascale Computer

In January 2012 Intel purchased the InfiniBand product line from QLogic for 125 million US dollars in order to fulfil its promise of developing exascale technology by 2018.

Then, in February 2013, the Intelligence Advanced Research Projects Activity (IARPA) started the Cryogenic Computing Complexity (C3) program, which envisions a new generation of superconducting supercomputers that operate at exascale speeds using superconducting logic. In December 2014, it announced a multi-year contract with International Business Machines, Raytheon BBN Technologies and Northrop Grumman to develop the technologies for the C3 program.

At the end of July 2015, US President Obama issued a new executive order calling for a new initiative dedicated exclusively to supercomputing research. Titled “Creating a National Strategic Computing Initiative,” the president’s order outlined plans to create the world’s first exascale computer system in order to secure the country’s position in high-performance computing (HPC) research and development.


The National Strategic Computing Initiative (NSCI) is a whole-of-government effort designed to create a cohesive, multi-agency Federal investment strategy, executed in collaboration with industry and academia.

By “whole-of-government effort” it is understood that the initiative will primarily be a partnership between the US Department of Energy (DOE), the Department of Defense (DOD) and the National Science Foundation (NSF), although the private sector will also be consulted.

More recently (today, actually – 28 August 2015), IBM, together with GENCI – the high-performance computing agency in France – announced a collaboration aimed at speeding up the path to exascale. The collaboration, planned to run for at least 18 months, will focus on preparing complex scientific applications for systems under development that are expected to achieve more than 100 petaflops – a solid step forward on the path to the exascale computer.

Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem. Supported by more than 140 OpenPOWER Foundation members and thousands of developers worldwide, the OpenPOWER ecosystem includes a wide variety of computing solutions that use IBM’s licensable and open POWER processor technology.

Do Current Architectures Matter?

We are currently following three different paths. The multicore path is built around high-end CPUs, such as Intel’s x86, SPARC and IBM’s POWER7. The manycore/embedded path uses many simpler, low-power cores from embedded systems. Finally, there is the GPU/accelerator path, which uses highly specialised processors from the gaming/graphics market space, such as NVIDIA’s Fermi, the Cell processor and Intel’s Xeon Phi (MIC).
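To make the lanes concrete, here is a minimal sketch in C of the same vector-update kernel written for two of them: the multicore path as a plain OpenMP parallel loop, and the GPU/accelerator path using OpenMP 4.0’s target-offload directives as a stand-in for vendor-specific CUDA or OpenCL code. The kernel and problem size are illustrative, not taken from any system named above.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L

/* Multicore path: spread one loop across high-end CPU cores. */
static void axpy_multicore(double a, const double *x, double *y) {
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        y[i] += a * x[i];
}

/* GPU/accelerator path: the same kernel offloaded to an attached
   device via OpenMP 4.0 target directives; CUDA or OpenCL would be
   the vendor-specific alternatives on NVIDIA or Xeon Phi hardware. */
static void axpy_accelerator(double a, const double *x, double *y) {
    #pragma omp target teams distribute parallel for \
        map(to: x[0:N]) map(tofrom: y[0:N])
    for (long i = 0; i < N; i++)
        y[i] += a * x[i];
}

int main(void) {
    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    axpy_multicore(3.0, x, y);   /* y becomes 5.0 everywhere */
    axpy_accelerator(3.0, x, y); /* y becomes 8.0 everywhere */

    printf("y[0] = %.1f\n", y[0]);
    free(x); free(y);
    return 0;
}
```

Which version wins on a given machine – and how painful it is to move code between them – is precisely the lane-choice risk Horst Simon describes below.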

Faster than 50 million laptops – the race to achieve exascale computing by 2020 is on.

One way to look at the race to exascale is as a swim meet. “There are three swim lanes, each heading toward the same goal. But who do you bet on to win the race? If you choose too soon, your users cannot follow you. If you choose too late, you fall behind on the performance curve. And if you choose incorrectly, your users face multiple disruptive changes in the technology they rely on,” said Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory.

The Leader Is China (for the time being)

The supercomputer that holds the current (mid-2014) speed record is Tianhe-2 at the National Supercomputer Center in Guangzhou, China. Tianhe – “Milky Way” in English – has a top speed of 33,860,000,000,000,000 computations per second: in computer speak, 33.86 petaflops. It is a supercomputer of the petascale generation, a successor to IBM’s Roadrunner, the first petascale computer, built in 2008.

If the growth curve of supercomputers doesn’t flatten out, we can expect to see the first exascale computer around 2020. This “dinosaur” will be 1,000 times faster than the IBM Roadrunner, and roughly 30 times faster than Tianhe-2.
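The 2020 projection is just an extrapolation of that growth curve. Here is a quick check in C, assuming Roadrunner’s roughly one petaflops in 2008 as the baseline:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Baseline assumed from the article: Roadrunner, ~1 petaflops, 2008. */
    const double speedup = 1000.0;        /* 1 exaflops / 1 petaflops */
    const double years   = 2020 - 2008;

    double doublings = log2(speedup);             /* ~9.97 */
    double months    = 12.0 * years / doublings;  /* ~14.4 */

    printf("%.1f doublings needed in %.0f years: one every %.1f months\n",
           doublings, years, months);
    return 0;
}
```

A doubling roughly every 14 months is close to the historical trend of the TOP500 list, which is why 2020 looked plausible at the time of writing.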

Tianhe-2 has a sustained (Linpack) processing speed of 33.86 quadrillion floating-point operations per second (petaflops), derived from 16,000 compute nodes, while its theoretical peak processing power is 54.9 petaflops.

Tianhe-2 has 3,120,000 cores – the power of a million desktop computers – and when it computes at full power it consumes as much energy as 50,000 households (24 megawatts, including cooling).
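Those two numbers – 33.86 petaflops and 24 megawatts – are exactly why power is the central exascale problem. A quick sketch in C; note that the 20-megawatt exascale power envelope in the last line is the figure commonly attributed to the US DOE and is an assumption here, not a number from the article.

```c
#include <stdio.h>

int main(void) {
    /* Tianhe-2 figures quoted above. */
    const double flops = 33.86e15;  /* sustained flop/s          */
    const double watts = 24.0e6;    /* draw including cooling, W */

    double gf_per_watt = flops / watts / 1e9;
    printf("Tianhe-2 efficiency: %.2f gigaflops/watt\n", gf_per_watt);

    /* Naive scaling: an exaflops machine at the same efficiency. */
    printf("Exascale at that efficiency: %.0f megawatts\n",
           1e18 / (flops / watts) / 1e6);

    /* Efficiency required to fit an assumed 20 MW exascale envelope. */
    printf("Needed for a 20 MW machine: %.0f gigaflops/watt\n",
           1e18 / 20.0e6 / 1e9);
    return 0;
}
```

Scaled naively, an exaflops machine built from Tianhe-2 technology would draw around 700 megawatts – the output of a sizeable power plant – which is the “real challenge” Verachtert describes below.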

“To have this computer work at full power is a real challenge,” says Wilfried Verachtert, lab manager of ExaScience at Intel Labs Europe and project manager for high-performance computing at IMEC. “Already with petascale computers, the fundamental limits of the current technology begin to come to light. We can just about work around them. But for the exascale computers, we’ll need real breakthroughs, both in hardware and software.”

Exascale technologies are the foundation for significant competitive, intellectual, technological and economic future advantages. Achieving the power efficiency and reliability goals we need for exascale will have enormous positive impacts on consumer electronics and business information technologies and facilities.

Exascale Computing, Already Available

The bitcoin network is now more powerful than the top 500 supercomputers combined. Yes, combined. Add up the computing power of the 500 fastest supercomputers in the world – billions upon billions of dollars’ worth of hardware – and stack it up against the raw processing power of every computer currently producing the alternative currency bitcoin, and you’ll find that the bitcoin network is in fact around eight times more powerful (a rough comparison, since bitcoin hardware computes hashes rather than general floating-point operations).


14 Comments

  1. Rowan Gonzalez

    29th August 2015

    I’ve never heard of exascale computers before, but it seems they are similar to what is meant by neuromorphic computers. Or am I wrong?

  2. Clint Salmon

    31st August 2015

    The bitcoin comparison is good – why can’t we use it in medicine or SETI?

    • wblut

      6th September 2015

      Uh, SETI@home has been running since 1999

    • juanohost

      8th September 2015

      There are altcoins using the concept of decentralization to do so – one is RonPaulCoin, for medicine, and Primecoin, for discovering prime numbers beyond human calculation.

  3. nicolas barradeau

    2nd September 2015

    The first picture is a Z machine (https://en.wikipedia.org/wiki/Z_Pulsed_Power_Facility), not a computer…

    • wblut

      6th September 2015

      Yep, but no reasons are needed to haul in a few more clicks, are there now? Also, the whole “exascale computing = equal to neural computing power of human brain” thing, that’s just some blah put together. There’s no magical threshold where neurology will suddenly solve everything; this isn’t high-energy physics, where a fundamental energy level has to be reached to study phenomena (perhaps that’s where the picture of Sandia’s Z machine comes in?).
      “Exascale technologies are the foundation for significant competitive, intellectual, technological and economic future advantages.” Why? Because all we need to solve problems are faster calculations? A bit naive, I find.

    • Joao Reis

      7th September 2015

      Agreed – at least put a reference for where you got the pictures instead of “stealing” them… Shame on you. The picture was taken while the Z Machine was operating – to study clean energy! Do an article about it, go deep, and respect where you get the pictures!

      • Shaqil

        19th September 2015

        dude, that’s your take from the whole article? that the pic is “stolen” from wikipedia lol

    • Raymond

      11th September 2015

      A picture with the queen, a Pentium 4 or a bicycle – same sugar, I can’t see what the issue is here. Anything about the exascale computers?

  4. Marzio Balducci

    7th September 2015

    WOW. Thanks for your work.

  5. Jim Vandiveer

    8th September 2015

    Brace yourself, memes are coming:
    World’s most powerful computer…still crashes when loading Windows.
    World’s most powerful computer…becomes self-aware and addicted to WoW.

  6. jay

    8th September 2015

    @ Clint
    The Bitcoin comparison is bad, because most bitcoin processing is now done by ASICs – specialized hardware that can do exactly one kind of calculation. You can’t compare it to other computers, because ASICs are 100–1,000 times more efficient.

    • Shaqil

      19th September 2015

      he meant the overall network

  7. M. Usman Ashraf

    11th May 2016

    Great information for exascale researchers. I am also working on exascale computing systems in my Ph.D. research, aiming to enhance system performance through massive parallelism.
