Posts

Showing posts with the label “Advanced High Performance Computing Systems”

Cray HPC Awards Honor Achievements in India

Cray presented its 2017 Dr. A.P.J. Abdul Kalam HPC Awards earlier this month in New Delhi, India. Dr. Kalam, president of India from 2002 to 2007, was a world-renowned space scientist. The 2017 awardees are:

U.N. Sinha, retired, CSIR – National Aerospace Laboratories, Lifetime Achievement Award, for his contributions to high performance computing.

Professor Sourendu Gupta, Department of Theoretical Physics, Tata Institute of Fundamental Research, in the Researcher category, for his contributions to quantum field theory, making progress in discovering the phase diagram of QCD at finite temperature and density.

(Photo: Kalam Award recipients, jury members and Cray representatives.)

Professor Sanjay Mittal, Department of Aerospace Engineering, IIT Kanpur, in the Researcher category, for his contributions to computational fluid dynamics (CFD).

Professor N. Balakrishnan, Department of Aerospace Engineering and Supercomputer Education and Research Centre (SERC), Indian Institute of Science (IISc…

Cray Centers of Excellence Help Advance HPC for All

In 2003 Cray signed a contract with Oak Ridge National Laboratory (ORNL) for the installation of a Cray® X1™ supercomputer. This contract led to the creation of a program that continues to support Cray research and development worldwide. The agreement with ORNL included funding for a group of Cray experts to help the U.S. Department of Energy’s Office of Science researchers port and optimize their applications for the new system. This group was called the Cray Supercomputing Center of Excellence (COE). The mission of the Center of Excellence was multifaceted:

Assist the DOE’s Office of Science researchers in porting their applications from their existing IBM system.
Train the researchers and members of ORNL’s Scientific Computing group in how best to utilize the system.
Refactor important applications to improve their performance.
Communicate DOE requirements to Cray R&D to ensure that future software releases satisfy ORNL’s requirements.
Give the DOE early experience on future generation…

Australian CIO Honored for Deployment of Cray System

Putting a 1.6 PF supercomputer into production on time and on budget with no interruptions to one of the world’s top meteorological agencies: amazing. And it brought well-earned kudos to Dr. Lesley Seebeck, the chief information officer for Australia’s Bureau of Meteorology (BOM), who was named the 2017 Australian CIO of the Year for the federal government sector.

(Photo: Dr. Seebeck and Tim Pugh of BOM.)

iTnews, a publisher of IT-related news and research, presented the award to Dr. Seebeck at its annual awards event in February. According to iTnews, “Seebeck’s team correctly predicted several years ago that the agency’s high performance computing system would no longer be up to the task of processing BoM’s complex climate modelling by 2016. The switch to a new Cray XC40 facility last September was seamless, and the power offered by the kit cements the bureau within the top ten meteorological agencies in the world.”

“We are in the middle of a technology revolution,” said Dr. Seebeck at the award …

DataWarp™ I/O Accelerator Speeds Scientific Discovery at NERSC

Scientists and researchers worldwide rely upon the scalability of Cray® XC™ supercomputing systems to solve their problems faster than competing systems can. So when the productivity of a highly capable Cray system is throttled by the limited capabilities of the storage system, it has to be aggravating to have so much valuable compute power at their fingertips and then have to … wait … for … the … data … to … arrive.

In a recent podcast led by Addison Snell, CEO of Intersect360 Research, Debbie Bard, a big data analyst at the National Energy Research Scientific Computing Center (NERSC), talks about the fifth-fastest computing system in the world*, the Cray® XC40™-based “Cori” system at NERSC. Six thousand scientists and researchers worldwide use Cori, and the system supports 700 different research projects. In straight talk, Debbie relates the performance challenges of the everyday researcher in a way that all of us can easily understand. Slow I/O causes scientists to modify…

Supercomputing in Oil and Gas: Cray CEO Looks Forward

At the 2017 Rice Oil & Gas HPC Conference, Cray CEO Peter Ungaro provided insight into current and future realities for HPC in oil and gas. His presentation, titled “Supercomputing: Yesterday, Today and Tomorrow,” is now available as a video. According to Ungaro, “What got us here today will not get us where we want to be tomorrow. As we start to think about things going forward, a lot of new possibilities open up to us which weren’t really available before. It’s going to require us to think a little differently.”

The O&G industry, Ungaro said, is at a transition point in HPC deployment that is characterized by:

A transition from a “best-price” mentality to a “best-value” mentality.
New survey techniques and technologies that are driving new geoscience and new workflows.
Deeper memory architectures that are beginning to include new storage options.
Planning for double- and triple-digit petaflops systems utilizing …

Cray CEO to Speak on the Convergence of Big Data and Supercomputing at TechIgnite

Cray president and CEO Peter Ungaro will give a presentation on “The Convergence of Big Data and Supercomputing” at TechIgnite, an IEEE Computer Society conference exploring the trends, threats and truth behind technology. The convergence of artificial intelligence technologies and supercomputing at scale is happening now. As a featured speaker in TechIgnite’s “AI and Machine Learning” track, Ungaro will examine how the convergence of big data with modeling and simulation run on supercomputing platforms at scale is creating new opportunities for organizations to discover innovative ways of extracting value from massive datasets.

Cray’s history in supercomputing and analytics has given the company front-line experience in developing systems, software and toolkits to help organizations accelerate their machine learning and deep learning projects. Forrester, a leading research and advisory firm, recently interviewed Cray and other experts in the field, along with numerous potent…

Big Data Analytics: How Much Will it Cost?

If you’re concerned about how long it will take before you see ROI from an investment in big data analytics technology, you’re not alone. According to research by Enterprise Strategy Group (ESG), 77 percent of individuals who lead their organizations’ big data and analytics strategies believe ROI can take six months to start showing up.

Add to that the fact that data analytics technology is just one part of a much broader environment that cuts across numerous teams, and it’s no wonder that finding the best solution can be a daunting task. To help organizations sift through the challenges, ESG, an IT industry analysis and strategy firm, has published a white paper titled “Improving Analytics Economics with Cray: Comparisons to the Cray® Urika®-GX Agile Analytics Platform.” Authors Nik Rouda and Mike Leone begin with this question:

“Given that 1) analytics is a top priority, 2) time-to-value is generally too long, 3) quality will depend on having a well-defined and tightly integrated stack…

ExxonMobil Sets Record with Cray Supercomputer

ExxonMobil, working with the National Center for Supercomputing Applications (NCSA), has achieved a major breakthrough with proprietary software, using more than four times the previous number of processors on complex oil and gas reservoir simulation models to improve exploration and production results. The breakthrough in parallel simulation used 716,800 processors, the equivalent of harnessing the power of 22,400 computers with 32 processors each. ExxonMobil geoscientists and engineers can now make better investment decisions by more efficiently predicting reservoir performance under geological uncertainty, assessing a higher volume of alternative development plans in less time.

The record run produced data output thousands of times faster than typical oil and gas industry reservoir simulations. It was the largest processor count reported by the oil and gas industry, and one of the largest simulations reported by industry in engineering disciplines such as aero…
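As a quick check of the arithmetic in that claim (a minimal sketch; only the two figures quoted above come from the source):

```python
# Verify the processor-count equivalence quoted above.
nodes = 22_400         # "computers" in the comparison
procs_per_node = 32    # processors per computer
print(nodes * procs_per_node)  # 716800 -> matches the 716,800 processors reported
```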

2017 Spark Summit East Reveals Progress but Not Disruption

This was a conference to marvel at — and not just for the lineup. How many gigs start with a New England Patriots Super Bowl victory parade and end with a quasi-blizzard? Boston’s “ambiance” aside, engaged attendees sank their teeth into a feast of tech, much of it centered on real-time performance of analytics workflows, especially in the context of the latest buzz — machine and deep learning (ML and DL), along with AI. It’s hard to believe we’re in the third wave of AI! I started contributing during the second wave in the late 1980s, doing research with neural networks and heuristic search algorithms to auto-sort packages for shipping companies. Among the challenges in the latest wave, tech like Apache® Spark™ plays a significant role on the software side of performant systems.

Spark having already proven that you can realize revolutionary speedups by running in-memory with fewer lines of code, much of what this conference showcased was evolutionary (vs. disruptive) progress to the Spark ecosystem. Target…
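To make the “in-memory speedups with fewer lines of code” point concrete, here is a minimal PySpark sketch (illustrative only, not from any conference talk; the file name and data are invented):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()

# Hypothetical input; any CSV with a header row works.
df = spark.read.csv("events.csv", header=True, inferSchema=True)
df.cache()  # keep the dataset in memory for reuse across actions

print(df.count())                          # first action materializes the cache
df.groupBy(df.columns[0]).count().show()   # second action reuses the cached data

spark.stop()
```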

Why PGS Made the Move from Clusters to Supercomputers

PGS is a marine seismic company that acquires high-resolution seismic data for use in imaging and modeling the earth’s subsurface. When the company launched its Triton survey in the Gulf of Mexico in November 2013, they knew they’d end up with the largest seismic survey they’d ever collected. When they finished acquiring the data in August 2014, they had the most complex imaging challenge they’d ever faced. The high-fold, long-offset, dual-sensor, broadband survey provided full-azimuth data and covered approximately 10,000 square kilometers in a notoriously difficult-to-image area of the Gulf. The result: a 660-terabyte dataset.

Triton overcomes deepwater challenges

Considered the most revolutionary and technologically advanced survey ever conducted in the region, Triton is impressive not just for its size, but because with it, PGS deployed a breakthrough survey design intended to overcome the challenges of this structurally complex deepwater area. Their specially designed acquisition temp…

Cray “Blue Waters” Supercomputer Tackles Gerrymandering

Redistricting — the process by which congressional and state legislative district boundaries are drawn — sounds like an unremarkable government chore. And, in theory, it should be. But, too often, it is subject to “gerrymandering,” or manipulation, by the majority political party.

Decades ago, University of Illinois political science professor Wendy K. Tam Cho realized that what’s needed is a computational tool that would help the courts objectively measure the fairness of a legislative map. She developed a tool that could generate hundreds of millions of voter district maps to serve as a “comparison set” — a way to measure the level of partisanship exhibited by any particular electoral map. But any further work was stymied by the lack of compute power.

Then she heard about the “Blue Waters” supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. Blue Waters is a Cray® XE™/XK™ hybrid supercomput…
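The comparison-set idea is easy to sketch: generate many random but legal district maps, score each one, and see where the challenged map falls in that distribution. Below is a deliberately toy Python illustration of that statistical logic; the grid size, random-assignment scheme and seat-count metric are hypothetical stand-ins for the far richer constraints (contiguity, population balance, legal requirements) real redistricting tools enforce.

```python
import random

random.seed(0)

PRECINCTS, DISTRICTS = 120, 4
# Invented two-party vote shares (party A) for each toy precinct.
votes = [random.uniform(0.35, 0.65) for _ in range(PRECINCTS)]

def random_map():
    """Assign each precinct to a district at random (toy version only)."""
    return [random.randrange(DISTRICTS) for _ in range(PRECINCTS)]

def seats_won(assignment):
    """Count districts where party A's mean vote share exceeds 50%."""
    shares = [[] for _ in range(DISTRICTS)]
    for precinct, district in enumerate(assignment):
        shares[district].append(votes[precinct])
    return sum(1 for s in shares if s and sum(s) / len(s) > 0.5)

# The "comparison set": distribution of outcomes across many random maps.
outcomes = [seats_won(random_map()) for _ in range(10_000)]
for seats in range(DISTRICTS + 1):
    print(seats, "seat(s):", outcomes.count(seats))
# A challenged map whose outcome sits in the far tail of this distribution
# is statistical evidence of partisan manipulation.
```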

Inventions at Cray: Solving the Hard Problems

In the U.S., Feb. 11 is National Inventors’ Day, timed to coincide with the birthday of Thomas Edison. That’s reason enough to celebrate the inventors among us. The phrase “computer vector register processing” may not sound very inspiring, but that’s what Seymour Cray’s patent for the supercomputer, filed in 1976, was called. Forty years later, his invention still inspires scientists and engineers to change the world. Cray continues to nurture the spirit of invention both internally and among its customers and partners.

For his part, Seymour Cray obtained numerous patents throughout his career, but it was U.S. Patent No. 4,128,880 (“Computer vector register processing”) that got him inducted into the National Inventors Hall of Fame. Today’s Cray engineers are still inventing in many fields to ensure customers have access to the best new technology. Here are just two interesting examples. Larry Kaplan, a senior principal engineer, and his team received three patents for a …

“Titan” Supercomputer Helps Scientists Analyze the Atoms in a Nanoparticle

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum. The Cray supercomputer “Titan” was then used to simulate the nanoparticle’s behavior. The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory an…

Forrester TechRadar™ Reports on Current State of Artificial Intelligence

Artificial intelligence is an idea that’s been around for years – think HAL in “2001: A Space Odyssey” and C-3PO from “Star Wars.” But recent advances in big data, computing and the cloud are enabling a new world of possibilities, according to a new TechRadar from Forrester. Forrester, a leading research and advisory firm, interviewed Cray and other experts in the field, along with numerous potential customers and users, to produce this extensive report. For this report, Forrester talked to experts at 64 vendors in the field, as well as representatives from 12 companies using AI. The authors’ key takeaways are that AI technologies and systems:

Will augment and enhance human work.
Still demand considered design, knowledge engineering and model building.
Demand new skills, not a new team.

But adoption won’t come easy. The report found that some of the key challenges obstructing the adoption of AI technology include the lack of:

A clear business case.
Specialized skills to build, deploy, and manage A…

Behind the Name: “SahasraT,” India’s Fastest Supercomputer

“SahasraT” holds a special place in Indian supercomputing: It is the country’s first petaflops supercomputer. The Cray® XC40™ system is housed at the Supercomputer Education and Research Centre (SERC) at the Indian Institute of Science (IISc) in Bangalore, India. The 1.4 PF system serves scientific and engineering researchers at India’s leading computing center. The SERC leads several national initiatives on HPC, and is actively involved in research projects and consultancy in collaboration with government agencies and private companies. SahasraT serves as the primary system for cutting-edge research in science and engineering at the IISc.

The fact that this system is India’s first petascale supercomputer played a role in its naming. One PF is equal to 1,000 teraflops (TF). “Sahasra” means “1,000 spokes” or “1,000 arms.” The student who won the naming contest started with “Sahasra” and added a “T” for teraflops.

Read more about SahasraT here.

“LEBM”: Cray Creating New Extension to LUBM Benchmark

I’ve written a few posts about the Cray Graph Engine (CGE), a robust, scalable graph database solution. CGE is a graph database that uses Resource Description Framework (RDF) triples to represent the data, SPARQL as the query language, and extensions to call upon a set of “classical” graph algorithms.

There are two main advantages to CGE. One is that it scales a lot better than most other graph databases — because the other ones weren’t designed by supercomputer wizards. (Of course I would say that.) The other advantage is that not only does CGE scale well, it performs unusually well on complex queries on large, complex graphs. A question typical of a lot of complex graph queries: Where are all the places where a given query pattern matches a pattern in the large graph?

We can use existing benchmarks to demonstrate CGE’s performance, including LUBM (the Lehigh University Benchmark), SP2B (the SPARQL Performance Benchmark) and BSBM (the Berlin SPARQL Benchmark). But the problem is that none of them really …
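For readers new to RDF and SPARQL, here is a minimal, self-contained Python sketch of that kind of pattern matching, using the open-source rdflib library rather than CGE itself; the tiny “knows” graph is invented for illustration:

```python
from rdflib import Graph, URIRef

g = Graph()
knows = URIRef("http://example.org/knows")

# A tiny invented graph: each edge is an RDF (subject, predicate, object) triple.
for a, b in [("alice", "bob"), ("bob", "carol"),
             ("carol", "alice"), ("bob", "dave")]:
    g.add((URIRef(f"http://example.org/{a}"), knows,
           URIRef(f"http://example.org/{b}")))

# SPARQL basic graph pattern: find every length-2 "knows" chain.
query = """
PREFIX ex: <http://example.org/>
SELECT ?x ?z WHERE {
    ?x ex:knows ?y .
    ?y ex:knows ?z .
}
"""
for row in g.query(query):
    print(row.x, "->", row.z)
```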

‘Shaheen II’ System Enables World’s First Trillion-Cell Simulation

Saudi Aramco, one of the world’s largest oil and gas companies, has set a record for oil and gas reservoir simulations. The achievement was made possible by “Shaheen II,” a Cray® XC40™ system at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.

“The ‘trillion cell’ designation refers to how dense and high-resolution the imagery is,” explains Saber Feki, KAUST computational scientist lead, in a KAUST report on the breakthrough. “Think of it this way: We all carry megapixel cameras in our pockets. The Saudi Aramco team just simulated subsurface images with six orders of magnitude higher resolution on our machine. It’s research that will yield much more reliable information for oil exploration and production.”

In order to virtually see the reservoir, we have to “model” or “simulate” the reservoir using production data, special applications and high performance computing. Standard modeling today uses around 10 million to 100 million cells, which could run hund…
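The scale comparison in that quote is easy to verify (a minimal sketch using only the figures quoted above):

```python
import math

trillion = 1e12    # cells in the record simulation
megapixel = 1e6    # "megapixel cameras in our pockets"
standard_low, standard_high = 1e7, 1e8  # 10M-100M cells in standard models

print(math.log10(trillion / megapixel))      # 6.0 -> six orders of magnitude
print(math.log10(trillion / standard_low))   # 5.0
print(math.log10(trillion / standard_high))  # 4.0 -> the record sits 4-5 orders
                                             # beyond standard modeling
```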

2017 Supercomputing Forecast: “Partly Cloudy”

Can cloud and supercomputing provide solutions better together than separately? That’s one question Barry Bolding, Cray’s senior vice president and chief strategy officer, addresses in an article on ScientificComputing.com. In the article, “4 Predictions for Supercomputing in 2017,” Bolding gives his predictions related to:

Artificial intelligence and machine learning
China’s strategic and rapidly growing role in the supercomputing market
The incessant demands on processor technology
A “paradigm shift” in thinking about the intersection of cloud and platform

What do you think 2017 holds for the supercomputing industry?

The Other P is Destroying the Ability to Achieve Performance Portability

An important consideration when moving to the next generation of multi/many-core systems is striving to create a refactored application that can run well on all available systems. There are numerous similarities between the multi/many-core systems of today. Today’s systems have very powerful nodes, and the application must exploit a significant amount of parallelism on the node, which is a mixture of MIMD (multiple instruction, multiple data) and SIMD (single instruction, multiple data) parallelism.

The principal difference is how the application utilizes the MIMD parallelism on the node. Multicore Xeon and many-core Intel Phi systems can handle a significant amount of MPI on the node, whereas GPU systems cannot. There is also a difference in the width of the SIMD unit: the multi/many-core systems’ vector units are less than 10 elements wide, while the GPU’s is 32. Since longer vectors on the multi/many-core systems do run better than shorter vectors, this is less of an issue. All systems must have good vectorized code …
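As a loose illustration of the SIMD point, here is a Python/NumPy sketch (a stand-in for the C/Fortran inner loops and compiler vectorization such refactoring actually involves) contrasting a scalar loop with its vectorized equivalent:

```python
import numpy as np

n = 1_000_000
a, b = np.random.rand(n), np.random.rand(n)

# Scalar form: one element per trip through the loop; no SIMD lanes used.
c_scalar = np.empty(n)
for i in range(n):
    c_scalar[i] = 2.0 * a[i] + b[i]

# Vectorized form: one whole-array operation, the shape an inner loop must
# take before a compiler can map it onto the hardware's vector units.
c_vector = 2.0 * a + b

assert np.allclose(c_scalar, c_vector)
```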

Machine Learning at Scale for Full Waveform Inversion at PGS

Seismic imaging in the ultra-deep Gulf of Mexico is incredibly challenging: The hydrocarbon-bearing sedimentary rocks are deep and buried under evaporite salt domes and geological layers that come in complex shapes and feature strong acoustic impedance and velocity contrasts. Proper imaging requires application of advanced velocity model-building techniques to address not only velocity variations, but also seismic velocity anisotropy. Then wave propagation must be performed with numerical schemes that accurately model reflections, refractions and even converted waves. While such features are known (and advanced) textbook examples of wave propagation in complex media, computing them requires massive supercomputer resources and clever algorithms.

It is for that reason that PGS — a Norwegian seismic exploration company with a major presence in Houston — acquired Abel a few years ago. Abel is a Cray® XC40™ system, at one point the largest commercial supercomputer on the Top500 list and sti…
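To give a flavor of the wave-propagation kernels at the heart of these workloads, here is a toy 1-D constant-density acoustic finite-difference sketch in Python/NumPy (all parameters invented; production codes are 3-D, anisotropic and heavily optimized):

```python
import numpy as np

# Toy 1-D acoustic wave equation: u_tt = c(x)^2 * u_xx, leapfrog in time.
nx, nt = 400, 600
dx, dt = 5.0, 0.001              # grid spacing (m) and time step (s)
c = np.full(nx, 2000.0)          # velocity model (m/s)
c[nx // 2:] = 3500.0             # one layer contrast to generate a reflection

r2 = (c * dt / dx) ** 2          # squared Courant number (stable here: <= 1)
u_prev, u = np.zeros(nx), np.zeros(nx)

for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]    # discrete u_xx * dx^2
    u_next = 2.0 * u - u_prev + r2 * lap          # leapfrog time update
    u_next[nx // 4] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)  # source wavelet
    u_prev, u = u, u_next

print("max amplitude:", float(np.abs(u).max()))
```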