
Showing posts with the label Advanced High Performance Computing Systems

Getting the Most From the New Intel® Xeon® Scalable Processors on HPC Workloads

July brought news of the launch of the new Intel® Xeon® Scalable Processors, previously known by the codename Skylake, and with them new features and capabilities that can bring performance enhancements to both legacy codes and new applications. This may leave you wondering how best to get the most out of these new workhorses. In this article, we’ll share a couple of the key insights we’ve gained from running the benchmarks we’ve collected and optimized over the last four decades on the latest in the line of processors that are ubiquitous in high-performance computing (HPC).

HPC codes require balance

Parallel programming is about the balance between computation and data movement. Many of the benchmark codes we use at Cray to optimize systems are limited by memory bandwidth. This balance has become even more important as processor core counts have increased to address the end of Dennard scaling and the subsequent impact on clock frequency. In HPC codes, cores a…
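The compute-versus-bandwidth balance described above is often reasoned about with a simple roofline-style model: a kernel's attainable performance is capped by the lesser of peak compute and memory bandwidth multiplied by its arithmetic intensity (flops per byte moved). A minimal sketch in Python, using hypothetical peak numbers rather than actual Skylake figures:

```python
# Illustrative sketch (not Cray benchmark code): a roofline-style check of
# whether a kernel is compute-bound or memory-bandwidth-bound.
# The peak numbers below are hypothetical placeholders, not real CPU specs.

def attainable_gflops(arith_intensity, peak_gflops, peak_bw_gbs):
    """Roofline model: performance is capped by the lesser of peak compute
    and memory bandwidth * arithmetic intensity (flops per byte)."""
    return min(peak_gflops, peak_bw_gbs * arith_intensity)

# STREAM-triad-like kernel: a[i] = b[i] + s * c[i]
# 2 flops per iteration, 24 bytes moved (3 doubles), so very low intensity.
triad_intensity = 2 / 24

# Hypothetical node: 2000 GF/s peak compute, 200 GB/s memory bandwidth.
perf = attainable_gflops(triad_intensity, peak_gflops=2000, peak_bw_gbs=200)
print(perf)  # bandwidth-bound: roughly 16.7 GF/s, far below the 2000 GF/s peak
```

A kernel whose intensity exceeds the machine balance (here 2000/200 = 10 flops/byte) would instead be limited by peak compute, which is why adding cores alone does little for bandwidth-bound codes.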

How HPC Can Help Tap the Power of Ocean Waves

“What’s amazing about ocean wave energy is the enormity of the resource sitting there,” says Ashkan Rafiee. “Whoever solves this riddle will make a huge impact on the world.”

Dr. Rafiee is the hydrodynamics team leader for Carnegie Clean Energy — an Australian wave, solar and battery energy company well on its way to making wave power a reality. For the last decade, Carnegie has been developing a wave energy device that converts ocean swell into zero-emission, renewable power and desalinated freshwater. Dubbed “CETO,” the device is already in use off Western Australia’s Garden Island, helping power the country’s largest naval base.

But deploying wave energy technology at scale is another matter. “The potential is phenomenal,” says Jonathan Fievez, Carnegie’s chief technology officer. “The amount of energy hitting the coast alone could power half the country. But juxtaposed with that is the challenge of capturing it. We’re dealing with an extremely harsh environment.” Between the salt…

CHIUW 2017: Surveying the Chapel Landscape

CHIUW 2017 — the 4th Annual Chapel Implementers and Users Workshop — was held last month in Orlando, Fla., in conjunction with IEEE IPDPS 2017. Right out of the gate, attendees heard about a number of positive trends in the annual “state of the project” talk, which summarizes Chapel progress over the past year. This year’s highlights included:

- Chapel performance is competitive with hand-coded C+OpenMP for single-node workloads, as demonstrated by benchmarks like the Livermore Compiler Analysis Loop Suite (LCALS).
- For key communication benchmarks like ISx and HPCC RA, Chapel performance is increasingly competitive with the MPI/SHMEM reference versions, and occasionally beats them (see Figure 1 below).
- In May, Chapel became the fifth-fastest language in the Computer Language Benchmarks Game, beating languages like Java, Fortran, Swift and Go, and trailing only C, Rust, C++ and Ada. Chapel’s entries also tend to be far more compact and readable than those of these competitors (see Figure 2 below).

The …

Data-Intensive Computing to Simulate the Brain

Understanding how the human brain works will take more than brains. Along with the planet’s smartest scientific minds, it will take never-before-achieved computing capabilities.

The science and technology required to decode the human brain is a scientific final frontier … and Professor Dr. Dirk Pleiter is on the front lines. The theoretical physics professor and research group leader at the Jülich Supercomputing Centre (JSC) in Jülich, Germany, is part of the Human Brain Project (HBP), a 10-year European research initiative tasked with creating a working simulation of the brain.

“Understanding the human brain is one of the greatest challenges facing 21st century science,” states the HBP’s report to the European Commission. “If we can rise to the challenge, we can gain fundamental insights into what it means to be human, develop new treatments for brain diseases, and build revolutionary new information and communications technologies.”

This ambitious project requires a leap in scien…

AI Meets HPC at ISC 2017

Running astrophysics applications with 200,000 MPI tasks … diagnosing and treating autism spectrum disorders … researching the secure nature of the Advanced Encryption Standard algorithm. Those are among the many research dreams that HPC professionals talked about at the Cray booth at ISC High Performance 2017 in Frankfurt, Germany, last week.

Cray’s mission is to help customers realize their most challenging data science and research goals, and at ISC we took the next step with the introduction of the Cray® Urika®-XC analytics software suite, which embodies the convergence of supercomputing and analytics. With the new Urika-XC software suite, users can now run analytics, graph and AI workloads simultaneously on the same system, benefiting from an unprecedented breadth of results.

Need more info on the AI-HPC link? High-performance computing “helps AI algorithms scale performance with data,” notes a new white paper from research firm Tractica. “Standard techniques in HPC like low-latency…

Bio-IT World 2017: Life Sciences Embrace Cloud

Hope you had a chance to attend this year’s 16th annual, three-day Boston Bio-IT World Conference & Expo. The show planners announced the Best Practices awards, the Benjamin Franklin Award and the 2017 Best of Show winners. Conference tracks included data and storage management, cloud computing, networking hardware, bioinformatics, next-gen sequencing informatics, data security and more.

Face to face

As many times as I’ve walked Bio-IT’s exhibit hall floor, the excitement of seeing in person someone I’ve been interacting with via email and phone – in some cases for years – and hearing their thoughts still pumps me up. True, social media has diminished the impact of conferences, but the floor is crammed with genius – most of it humble and open. Who wouldn’t seize the opportunity to deeply connect with at least a few new great people?

Where’s the hardware?

Five years ago, the Bio-IT floor brimmed with racks of hardware – today they have all …

Cray’s New Urika-XC Suite: the Convergence of Supercomputing and Analytics

Cray today announced the launch of the Cray® Urika®-XC analytics software suite, which brings graph analytics, deep learning and robust big data analytics tools to the company’s flagship line of Cray® XC™ supercomputers. The Cray Urika-XC analytics software suite empowers data scientists to make breakthrough discoveries previously hidden within massive datasets, and to achieve faster time to insight while leveraging the scale and performance of Cray XC supercomputers.

With the Urika-XC software suite, analytics and artificial intelligence (AI) workloads can run alongside scientific modeling and simulations on XC supercomputers, eliminating costly and time-consuming movement of data between systems. Cray XC customers will be able to run converged analytics and simulation workloads that expand the boundaries of discovery across a wide array of scientific and commercial endeavors, such as real-time weather forecasting, predictive maintenance, precision medicine and comprehensive fraud detect…

Data Analytics Rule at Spark Summit West 2017

Spark Summit West is always well attended, and this year was no exception. Data engineers, data scientists, programmers, architects and technology enthusiasts descended on San Francisco’s Moscone Center earlier this month to learn all about the latest developments with Apache Spark™ and its massive ecosystem.

The complexity of analytics use cases and data science was a dominant theme throughout this year’s event. The keynote by the CEO and co-founder of Databricks, Ali Ghodsi, highlighted some of the challenges of implementing large-scale analytics projects. Ghodsi discussed how the continued growth of Apache Spark has resulted in myriad innovative use cases, from churn analytics to genome sequencing. These applications are difficult to develop, as they often involve siloed teams of different domain experts; their complex workflows take too long from data access to insight; and the infrastructure is costly and difficult to manage.

AI, ML, DL

Data scientists like to explore data by transfo…

Hail Powers Precision Medicine on the Urika-GX Platform

Hail is an open-source platform, built on top of Apache Spark™, for analyzing variants in genomic data. It takes advantage of three key elements of Spark’s design:

- Scalability. Datasets are multi-terabyte and growing rapidly.
- Simpler APIs. They hide the complexity of distributed computing and parallel execution, and let biologists explore data using familiar biological terms.
- Algorithms for large-scale linear algebra and ML. Performant code leverages both legacy and custom linear algebra libraries (for example, Cray compilers and tools).

Hail provides a parallel Scala API as well as a parallel Python API, and adds powerful, expressive high-level layers that include: fast, easy data ingest in various data formats, especially if you’re using parallel file I/O from file systems like Lustre®; expressive methods to manipulate and visualize high-dimensional data; and statistics and ML methods specific to applications in genetics.

Figure 1. Summary of the power of three prevalent variant analysis technologies – ge…
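As a toy illustration in plain Python (not Hail's actual API) of the kind of biologist-friendly, high-level filtering such layers provide, consider selecting high-quality, rare variants; the field names and thresholds here are invented for the example:

```python
# Toy sketch, NOT Hail's real API: domain-level variant filtering of the sort
# a platform like Hail lets biologists express without writing Spark code.
# Each record stands in for one variant call; all values are made up.
variants = [
    {"chrom": "1", "pos": 1000, "qual": 95.0, "allele_freq": 0.02},
    {"chrom": "1", "pos": 2000, "qual": 40.0, "allele_freq": 0.30},
    {"chrom": "2", "pos": 1500, "qual": 88.0, "allele_freq": 0.001},
]

def filter_rows(rows, min_qual, max_af):
    """Keep high-quality, rare variants, the sort of predicate a biologist
    states in domain terms rather than as distributed-execution plumbing."""
    return [v for v in rows
            if v["qual"] >= min_qual and v["allele_freq"] <= max_af]

rare = filter_rows(variants, min_qual=80.0, max_af=0.05)
print([v["pos"] for v in rare])  # -> [1000, 1500]
```

In the real platform, the same style of predicate is evaluated in parallel across Spark partitions, which is what makes the "simpler APIs" point above matter at multi-terabyte scale.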

Leidos and Cray to Offer Multi-Level Security Solutions

Cray and Leidos, a global science and technology solutions leader, have signed a strategic alliance agreement to offer Multi-Level Security (MLS) solutions that include the Cray® CS™ series of cluster supercomputers to federal and commercial customers.

Customers are facing rapidly evolving challenges: increasing cyberattacks, a competition-driven need to reduce time to market and a constant focus on increasing efficiency. Through this strategic alliance, Leidos and Cray can now work together to expand current MLS solutions that are designed to give customers the ability to:

- Manage risk and collaborate more efficiently by allowing teams at varying security clearances to access the same system in a single environment, while maintaining data access levels.
- Save money and time by consolidating multiple computing systems.
- Streamline implementation by getting a comprehensive MLS solution from a single vendor.

“We look forward to working with Cray to evolve the capabilities and tech…

Cray System Helps Cut Jet Fuel Burn by 15 Percent

Consider the engine power required to get a plane off the ground. And then consider the engineering that goes into it.

Brian Mitchell does. Every day. The senior principal engineer at GE Global Research leads the development of GE’s in-house computational fluid dynamics (CFD) code. Engineers utilize CFD to design better compressor blades, fan blades, turbine blades — all the stuff of the modern jet engine.

“What we do is write the software that GE Aviation uses to simulate the air flow in jet engines,” says Dr. Mitchell.

In fact, engine design, CFD — and the powerful computers to process it all — have a strongly symbiotic relationship. Market demand for more fuel-efficient engines requires more creative designs. Those designs demand a more detailed understanding of the physics, which in turn demands more advanced CFD. At the end — or beginning — of it all? Computational power.

“As computers have gotten exponentially more powerful, we’ve been able to go after increasingly sophisticated bla…

Cray CEO Discusses New Supercomputing as a Service

Cray’s president and chief executive officer, Peter Ungaro, offers insights into the company’s new supercomputing-as-a-service offering, in partnership with Markley. The first industry offering is the Cray® Urika®-GX agile analytics platform for life sciences organizations.

Q: Pete, tell us about this announcement.

Ungaro: One of the things that customers have been asking us for is the ability to get access to a Cray supercomputer in ways other than purchasing one, putting it into their datacenter and managing it themselves, as not all customers have the facility, staff or budget. Some want the option to use OpEx dollars and to not have to host and manage the system themselves. Our first example of this was our partnership with Deloitte, where we have a Cray supercomputer on the Deloitte cloud and Deloitte offers cybersecurity services to customers with it. This announcement is a further step in this direction, as we are partnering with Markley to provide true supercomputing-as…

Cray Users from Around the World Meet at CUG 2017

Across Lake Washington from Cray’s downtown Seattle headquarters is the small city of Redmond, Wash. Redmond is best known as the home of Microsoft. But last week, it was the location for this year’s Cray User Group (CUG) conference.

Cray CEO Pete Ungaro presents at CUG 2017.

The Cray User Group is an independent, international corporation of member organizations that own Cray Inc. computer systems. The annual CUG conference brings together users from around the world to share several days of tutorials, technical sessions and networking. I serve on the CUG board of directors as the Cray advisor, a role I’ve enjoyed for more than 10 years.

New to the program this year was an analytics track, which included a panel discussion titled “Future Directions of Data Analytics and High Performance Computing.” That was followed by a variety of talks, including one about Argonne National Laboratory’s use of the Cray® Urika®-GX system for high-throughput image analysis and a Cray presentation on quant…

Markley and Cray Offer Supercomputing as a Service

Global supercomputer leader Cray Inc. and Markley, a premier provider of datacenter space and cloud computing services, today announced a partnership to provide supercomputing-as-a-service solutions that combine the power of Cray supercomputers with the premier hosting capabilities of Markley. Through the partnership, Markley will offer Cray supercomputing technologies as a hosted offering, and both companies will collaborate to build and develop industry-specific solutions.

The availability of sought-after supercomputing capabilities both on-premises and in the cloud has become increasingly desirable across a range of industries, including life sciences, bio-pharma, aerospace, government, banking and more – as organizations work to analyze complex datasets and research, and to reduce time to market for new products. Through the new supercomputing-as-a-service offering, Cray and Markley will make it easier and more affordable for research scientists, data scientists and IT executives to …

Cray Announces Production-Ready AI Cluster Supercomputers

Cray today announced the launch of two new Cray® CS-Storm™ accelerated cluster supercomputers – the Cray CS-Storm 500GT and the Cray CS-Storm 500NX. Purpose-built for the most demanding artificial intelligence (AI) workloads, the new Cray systems will provide customers with powerful, accelerator-optimized solutions for running machine learning and deep learning applications.

The new CS-Storm systems are designed for organizations looking for the fastest path to new discoveries, a building-block approach to scalability, and the assurance of collaborating with a trusted partner with a long history of designing and deploying tightly integrated, highly scalable systems. Leveraging NVIDIA® Tesla® GPU accelerators, the new CS-Storm systems expand Cray’s portfolio of integrated systems and will give customers a broader range of accelerated supercomputers for computational and data-intensive applications.

“Customer demand for AI-capable infrastructure is growing quickly, and the introduction of…

Original Cray-1 Supercomputer Unveiled in Seattle

If seeing an original Cray-1™ or Cray-2™ supercomputer is on your bucket list, it’s time to book a trip to Seattle. Living Computers: Museum + Labs, which offers hands-on experiences with all types of computers, has added two of the most important supercomputers ever created to its permanent collection. The Cray-1 supercomputer is now on display at Living Computers. It will be joined later this year by the Cray-2 supercomputer. Living Computers intends to recommission the Cray-2 and make it available to the public.

The world’s first supercomputer, the 1965 Control Data Corporation 6000 series, was designed by the brilliant engineer Seymour Cray and represented a threefold increase in computing power. Living Computers, which has the world’s only operating CDC 6500, wished to add to its collection by obtaining Cray’s next invention, the Cray-1 supercomputer. Released in 1976 by Cray Research Inc. in Seymour’s hometown of Chippewa Falls, Wis., the Cray-1 was a fixture at elite labs …

Cray Analytics and Alluxio – Wrangling Enterprise Storage

For a business not just to survive but to flourish, it’s become imperative to make decisions with near immediacy, continuously pivot strategy and tactics, and merge streams of inquiries into meaningful action. Executing requires high-frequency insights — the competitive advantage in today’s frenetic business landscape. Together with Alluxio, Inc., we enable businesses to gain that advantage through faster time to insight with an integrated solution: the Cray high-performance analytics platform and Alluxio’s memory-speed virtual storage system, Alluxio Enterprise Edition.

For businesses on the cusp of innovation and seeking that information advantage, Cray has fused supercomputing with an open, standards-based framework to deliver an industry first: the Cray® Urika®-GX agile analytics platform. This advanced platform has an unprecedented combination of versatility and speed to tackle the largest problems at super scale and uncover hidden patterns with a fast time to insight.

Al…

SeisSpace and Parallel File Systems: What are You Using?

Here are a few questions for all you SeisSpace® doodlebuggers out there. What parallel file system are you using for your secondary storage requirements? Have you ever thought about using a different file system? Have you ever wondered if you could combine both your primary and secondary storage requirements on the same parallel file system — and if you did, what would happen?

Well, stop wondering. Over the last several months, Cray invested the expertise of its oil & gas performance engineering team in testing those questions, along with a few others you may be interested in. Cray’s team, along with Dan Grygier, CTO of Taming Traces Consulting, used Cray’s CS400™ cluster supercomputer (which we qualified for SeisSpace last year) and three separate storage platforms to look at how workflow performance would be affected if you did a few things:

- Combined primary and secondary data on the same parallel file system
- Used GPFS instead of Lustre®
- Modified your file system block size
- Used L…

Cray HPC Awards Honor Achievements in India

Cray presented its 2017 Dr. A.P.J. Abdul Kalam HPC Awards earlier this month in New Delhi, India. Dr. Kalam was the president of India from 2002 to 2007, and was a world-renowned space scientist.

The 2017 awardees are:

- U.N. Sinha, retired, CSIR – National Aerospace Laboratories, Lifetime Achievement Award, for his contributions to high performance computing.
- Professor Sourendu Gupta, Department of Theoretical Physics, Tata Institute of Fundamental Research, in the Researcher category, for his contributions to quantum field theory, making progress in discovering the phase diagram of QCD at finite temperature and density.

Kalam Award recipients, jury members and Cray representatives.

- Professor Sanjay Mittal, Department of Aerospace Engineering, IIT Kanpur, in the Researcher category, for his contributions to computational fluid dynamics (CFD).
- Professor N. Balakrishnan, Department of Aerospace Engineering and Supercomputer Education Research Centre (SERC), Indian Institute of Science (IISc…

Cray Centers of Excellence Help Advance HPC for All

In 2003 Cray signed a contract with Oak Ridge National Laboratory (ORNL) for the installation of a Cray® X1™ supercomputer. This contract led to the creation of a program that continues to support Cray research and development worldwide.

The agreement with ORNL included funding for a group of Cray experts to help the U.S. Department of Energy’s Office of Science researchers port and optimize their applications for the new system. This group was called the Cray Supercomputing Center of Excellence (COE).

The mission of the Center of Excellence was multifaceted:

- Assist the DOE’s Office of Science researchers in porting their applications from their existing IBM system.
- Train the researchers and members of ORNL’s Scientific Computing group in how best to utilize the system.
- Refactor important applications to improve their performance.
- Communicate DOE requirements to Cray R&D to ensure that future software releases satisfy ORNL’s requirements.
- Give the DOE early experience on future generation…