Contents
- What is HPC?
- HPC vs Supercomputers
- Key industries that benefit the most from HPC
- A brief history of HPC
- Main competitors in the HPC field
- Microsoft Corporation
- Intel
- Amazon Web Services
- DataDirect Networks
- Penguin Computing
- Dell Technologies
- IBM
- Alphabet
- Atos
- ExaGrid
- Rescale
- Advanced HPC
- HPE
- Storj
- Bacula Enterprise
- The challenges of HPC
- The future of HPC
What is HPC?
HPC stands for High-Performance Computing – the practice of solving extremely complex computational problems with the help of computer clusters and supercomputers. HPC draws on elements such as algorithms, computer architectures, programming languages, digital electronics, and system software to create an infrastructure capable of solving incredibly sophisticated tasks through parallel data processing.
It is a technology that has existed for decades, receiving a relatively recent spike in popularity due to the rapid expansion of AI-related developments and use cases. HPC infrastructures are regularly tasked with storing, analyzing, and transforming the large data masses that companies and governments generate.
HPC systems keep getting faster, performing increasingly complex computations, but the difficulty of optimizing applications and designs can be a strong headwind against this progress. Since data management is essential to HPC calculation efficiency, software and hardware providers try to solve this challenge in many different ways.
Nevertheless, the HPC market keeps growing at an impressive pace. Straits Research projects that the HPC market will grow from $49.99 billion in 2023 to $91.86 billion by 2030, a CAGR of 9.1%.
The fact that daily data generation is still growing all over the world puts even more pressure on HPC environments, motivating operators to look for better and faster options. The rise of cloud HPC deployments is proving to be an effective way to mitigate these issues, potentially offering some users a more efficient and cheaper alternative to on-premise HPC infrastructures.
In this context, it is important to understand how HPC works and what difficulties it faces now (and in the near future).
HPC vs Supercomputers
There is quite a lot of overlap and confusion between High-Performance Computing and Supercomputers. Each term has several definitions, and there are some similarities between the two as well. The biggest difference is that a Supercomputer is a single system that can be several times more powerful than any consumer-grade computer, while HPC tends to be a combination of multiple systems and resources used in parallel.
It is easy to see how Supercomputers and HPC are so similar yet so different. There is also the fact that Supercomputers are usually far more expensive and are custom-tailored for a specific task, while HPC is a much more versatile system that can be configured to perform different tasks, if necessary.
Older software sometimes cannot reap all the benefits of HPC because it cannot use parallel computing features effectively. In these use cases, Supercomputers can have a significant advantage and are often the only option.
Key industries that benefit the most from HPC
Many different industries actively use HPC in their work, be it for engineering, design, modeling, or other tasks. Here are some of the largest use case groups for HPC solutions:
- Oil and gas. HPC is used to analyze potential new well locations and improve the productivity of existing drilling operations.
- Fintech. HPC is capable of performing many different forms of financial modeling, and it can also track stock trends in real time.
- Defense. HPC dramatically improves the capability of government entities to manage massive data pools to perform various security-related actions and operations.
- Entertainment. HPC has many different use cases here, including rendering special effects for videos and movies, creating animations and 3D environments, transcoding, and more.
- Healthcare. HPC is instrumental in the industry's drug development and treatment research processes.
- Research. Science projects are the bread and butter of High-Performance Computing capabilities, offering a quick and convenient way to manage massive data volumes for a specific purpose.
A brief history of HPC
The ongoing overlap between HPC and Supercomputers is the main reason the history of Supercomputers is often treated as the history of HPC, as well. The hardware category dates back to the 1940s, going through several iterations before rising to prominence from the late 1950s onward (IBM 7090, CDC 6600).
Parallel computing rose to prominence in the 1980s, along with the development of computer clusters that could perform complex tasks as a single interconnected environment.
At the same time, the popularity of Personal Computers continued to rise, bringing more and more interest to the industry as a whole. HPC clusters have continued to grow and develop as a concept over the years, with cloud computing being one of the more recent trends that many of the best HPC companies now offer. HPC is an extremely effective concept as it is, and with the future increasingly dependent on technologies such as quantum computing and Artificial Intelligence, the field will only continue to grow and flourish.
Main competitors in the HPC field
The market for HPC solutions is surprisingly large, considering how complex and resource-consuming these solutions can get. Yet overall demand keeps growing across many industries – which is why most HPC companies keep growing into very large businesses. The list below covers 15 companies that offer HPC capabilities in one way or another.
It should be noted that the term "HPC solution" is relatively broad and covers several different groups of companies. Some offer HPC as a cloud service, others provide on-premise HPC deployments, and several are best known for their hardware contributions to the industry.
Microsoft Corporation
Microsoft is a well-known technological giant, and its Azure cloud service is undoubtedly one of the largest competitors in its field. Two different elements contribute to HPC deployments specifically: Azure CycleCloud and Azure Batch.
The former is a comprehensive solution that offers HPC workload management with many valuable features attached. The latter is a scaling and scheduling service that calculates and provisions the resources needed to match the amount of work an HPC environment has to process. It is not uncommon for Microsoft to collaborate with hardware vendors to create custom-fit hardware for its Azure infrastructure so that it can handle HPC workflows.
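To make the scale-with-demand idea concrete, here is a minimal, hedged sketch of queue-driven autoscaling logic in Python. It is purely illustrative – Azure Batch expresses this kind of rule through its own autoscale formulas rather than user code, and the function name and limits here are hypothetical:

```python
# Illustrative only: the scale-with-the-queue idea that Azure Batch
# automates via its autoscale formulas. Names and limits are hypothetical.

def target_node_count(pending_tasks: int,
                      tasks_per_node: int = 4,
                      max_nodes: int = 100) -> int:
    """Provision enough nodes to drain the queue, capped at a budget
    limit, and scale to zero when there is no work at all."""
    if pending_tasks <= 0:
        return 0
    needed = -(-pending_tasks // tasks_per_node)  # ceiling division
    return min(needed, max_nodes)

print(target_node_count(37))  # 10 nodes for 37 queued tasks
print(target_node_count(0))   # 0 nodes when the queue is empty
```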
The combination of hardware and software under Microsoft’s watchful eye creates what is known as Microsoft Azure HPC – a comprehensive purpose-built infrastructure that houses HPC-ready solutions with multiple advantages over traditional on-premise HPC versions.
It is a fast, scalable, and cost-effective solution that greatly reduces the upfront cost for HPC deployment, supports multiple HPC workload types, and can be custom-fit to have just enough capabilities for a client’s specific goals and use cases. It can also be integrated with other Azure products, such as Azure Machine Learning, creating multiple new opportunities in HPC.
Intel
Another famous technological company is Intel Corporation, one of the biggest CPU fabrication companies on the planet. Intel Xeon processors are made specifically for HPC and similar environments, regardless of the industry they are used in. Intel also provides multiple toolkits and documents to simplify the programming process for Xeon-based systems.
Some of the most significant advantages of Intel Xeon processors in the context of HPC are:
- Scalability.
- Core performance.
- Memory performance.
- RAS (Reliability, Availability, and Serviceability).
- ISA.
Xeon processors are known for their multi-core design, purpose-built to distribute load across dozens of processor cores simultaneously. This is a perfect fit for HPC workloads, cutting the time an average calculation run takes for researchers and other HPC experts.
Calculations on Xeon processors are much faster thanks to both the larger number of cores and the higher clock speed of each core, offering significantly higher performance across the board, especially for the complex calculations HPC usually deals with.
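As a generic illustration of this fan-out pattern (not Intel-specific code – the workload function below is a hypothetical stand-in for a real computation), a data-parallel run in plain Python might look like this:

```python
# A generic data-parallel sketch: spread independent chunks of work
# across every available CPU core - the pattern many-core processors
# such as Xeon are designed for. The workload itself is a placeholder.
from multiprocessing import Pool, cpu_count

def simulate_chunk(seed: int) -> float:
    """Hypothetical stand-in for one unit of a real HPC computation."""
    total = 0.0
    for i in range(1, 100_000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    chunks = range(64)  # 64 independent work items
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(simulate_chunk, chunks)  # one chunk per free core
    print(f"{len(results)} chunks processed across {cpu_count()} cores")
```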
Since HPC often works with incredibly large data sets, high memory performance is practically a requirement. Luckily, Xeon processors come with faster memory controllers and wider memory channels to ensure that the memory hardware’s full potential is unlocked and there are no bottlenecks on the CPU side.
Performance is not everything HPC needs from hardware as a whole – stability for prolonged periods of intensive calculations is just as important. The industry refers to this as RAS, or Reliability, Availability, and Serviceability. It is a combination of features such as advanced diagnostics, error correction, and plenty of others that ensure minimal downtime and complete data integrity.
ISA stands for Instruction Set Architecture – the set of instructions a processor can execute. Xeon's ISA includes extensions tailored to mathematical and scientific calculations (the AVX-512 vector instructions, for example), whose main purpose is to maximize convenience and improve the performance of HPC workloads on Intel Xeon processors.
The rest of Intel's capabilities in the field of HPC still revolve around Xeon processors in one way or another. For example, Intel's oneAPI HPC Toolkit combines various development tools that make it easier to improve performance and optimize code that runs on Xeon processors. There is also the HPC Software and Tools package, which provides solutions for system optimization, performance analysis, and workload management for HPC systems running on Xeon processors.
Amazon Web Services
Amazon Web Services is a subsidiary of Amazon, one of the biggest companies in the world. AWS’s primary specialty is cloud computing in different industries and for different target audiences, including regular customers, businesses, and even government agencies. It can also provide cloud-based HPC capabilities to financial institutions, research organizations, engineering businesses, and health-oriented science companies.
AWS works hard to keep up with modern technology trends, its effort to bring the power of AI and ML into its services being the most recent example. Amazon SageMaker, for instance, now lets customers improve their data analysis capabilities by introducing machine learning into the workflow.
That’s not to say that Amazon’s current cloud offering is not amazing in its own right. It offers plenty of customization in terms of how many resources are needed for each client, combining scalability with affordability in a single package. AWS as a whole is relatively easy to manage, and its global infrastructure makes it possible to deploy HPC cloud infrastructures in many different parts of the world with little to no issues.
Since AWS is a massive platform with dozens of different resources and features, it is only wise to mention which of these resources are directly connected with Amazon’s HPC capabilities:
- Amazon FSx is a family of high-performance managed file systems (FSx for Lustre in particular) used to manage the extremely large data sets HPC works with.
- AWS Batch is a dedicated scaling and job-scheduling tool for HPC workloads specifically (see the submission sketch after this list).
- Amazon EC2 provides on-demand virtual infrastructure, including instance types with powerful GPUs, fast CPUs, and other hardware explicitly built for HPC workloads.
- AWS ParallelCluster makes it easier to deploy and control HPC clusters, with the ability to scale them up or down when necessary.
- EFA (Elastic Fabric Adapter) is a low-latency network interface that offers the highest possible communication speed between the nodes of an HPC cluster.
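To give a feel for how AWS Batch is driven in practice, here is a short boto3 sketch that submits a containerized job to a queue. The queue and job-definition names are placeholders for resources you would have created beforehand; only the `submit_job` call itself is standard boto3:

```python
# A hedged sketch of submitting an HPC-style job to AWS Batch via boto3.
# "my-hpc-queue" and "my-hpc-jobdef" are placeholders: the queue and job
# definition must already exist in your AWS account.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="hpc-example-run",
    jobQueue="my-hpc-queue",
    jobDefinition="my-hpc-jobdef",
    containerOverrides={
        "command": ["python", "solve.py", "--case", "1"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "16"},
            {"type": "MEMORY", "value": "65536"},  # MiB
        ],
    },
)
print("Submitted job:", response["jobId"])
```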
DataDirect Networks
Even though DataDirect Networks is not as well-known as some other competitors on this list, it is considered the largest privately held data storage company. It is among the most recognizable names in the HPC market, offering high-performance infrastructures for specific purposes.
DataDirect’s capabilities include improvements in fields such as collaboration (with the help of multi-cloud data management), optimization (with better storage performance), and lower cost (due to scalable and efficient HPC solutions).
Some of the most significant achievements and advantages of DataDirect Networks are:
- Parallel file systems allow many HPC nodes to access the same data simultaneously, improving performance across the board (a minimal sketch of this access pattern follows below).
- DDN's more than 20 years in the business give it deep industry experience and knowledge, enabling it to provide some of the best HPC environments on the market.
- Scalability, security, and stability are just as crucial to DDN, which protects sensitive research data while keeping environments both scalable and stable.
- EXAScaler performance is within the realm of possibility for DataDirect Networks' HPC solutions, significantly improving the performance of research and other HPC-oriented tasks.
Exascale computing refers to supercomputer systems that can perform at least one exaFLOPS – a quintillion (10^18) floating-point operations per second. This is an entirely new level of computing performance, and it requires a specifically designed storage system to be usable at its fullest.
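To illustrate the parallel file system access pattern mentioned in the first bullet above, here is a minimal mpi4py sketch in which every MPI rank writes its own disjoint slice of one shared file at the same time – the workload that parallel file systems (such as the Lustre-based systems DDN builds) are designed to serve. The file name is arbitrary:

```python
# Minimal MPI-IO sketch: all ranks write disjoint regions of ONE shared
# file concurrently - the access pattern parallel file systems serve.
# Run with e.g.: mpiexec -n 8 python shared_write.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_local = 1024                                  # elements per rank
data = np.full(n_local, rank, dtype="f8")       # this rank's slice

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "results.bin", amode)  # every rank, same file
offset = rank * n_local * data.itemsize         # disjoint byte ranges
fh.Write_at_all(offset, data)                   # collective parallel write
fh.Close()
```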
Penguin Computing
Penguin Computing is another privately owned HPC vendor, founded back in 1998. It provides a range of products and services, including Linux servers, cluster management software, cloud computing solutions, AI solutions, and more.
Penguin Computing offers streamlined HPC solutions for its clients, with high performance and low management complexity. These solutions can be scaled easily, combining hardware and software in multiple ways to meet the requirements of every target audience.
Penguin Computing's contribution to the HPC industry is quite significant. It offers cloud-based HPC infrastructures with AI framework support, making it possible to combine the two to improve HPC workloads. Penguin Computing's software also makes it a lot easier to manage complex HPC environments, no matter how large or sophisticated they get.
The company's offering also covers physical HPC environments, including Linux-optimized servers for the same purpose. A combination of fast memory, high-performance processors, and efficient GPU hardware is included in every package. Penguin Computing also supports TrueHPC – a combined initiative with AMD and NVIDIA that fosters collaboration and innovation within the market through best practices and open standards.
Dell Technologies
Dell is another familiar name in the overall technological environment. As a brand, Dell is owned by Dell Technologies, its parent company, which was created in 2016 as a result of the merger between Dell and EMC. Dell Technologies offers many services and solutions, including hardware and software options for different clientele.
This includes HPC capabilities such as production implementation, assessment, testing, proof-of-concept development, and more. Dell's offering in the HPC space is not that different from other companies that provide cloud-based HPC infrastructure on demand: it is a fast and relatively cheap alternative to on-premise HPC deployments that can be easily scaled in either direction and requires much less maintenance. Dell also takes pride in its deployment times, offering extremely fast HPC rollouts for clients with time constraints.
Dell’s expertise as a technological company is backed up by decades of work in the industry. It offers a deep understanding of how HPC works and what it needs to function properly. Dell’s cloud-based HPC solutions are distributed using thousands of Dell EMC servers and three powerful supercomputers connected in a single infrastructure using sophisticated storage management systems.
There is plenty of hardware that Dell can provide as part of its HPC infrastructure, be it networking, storage, or server hardware, all custom-fit for HPC workloads from the start. At the same time, Dell's capabilities don't stop at hardware – there are also services such as:
- Initial assessment
- Proof-of-concept development
- Product implementation
- Ongoing support
IBM
IBM is an American technology company that has been around for over 100 years. Its IBM Spectrum Computing branch was created to provide HPC services to clients in multiple ways. IBM's offerings include:
- High-Performance Services for Analysis, a perfect fit for finance or life science industries – or any other field of work that requires data-intensive workload computations on a regular basis.
- Spectrum HPC, a complete toolset for optimizing and managing existing HPC environments or creating new ones.
- High-Performance Services for HPC, a solution for the entire HPC infrastructure lifecycle, starting with planning and deployment and ending with ongoing support until shutdown.
The company is well-known for its investments in computing technologies over the years – the ATM, DRAM, floppy disks, and hard disk drives are just a few examples of IBM's creations. The long list of inventions attributable directly to IBM is a testament to its capabilities in innovation and technology development.
IBM also supports hybrid HPC deployments with ease, offering the ability to connect its cloud-based HPC capabilities with on-premise hardware the client might already have. IBM’s HPC capabilities are fast and customizable, leveraging decades of experience in the field to create an impressive level of service in the industry.
Alphabet
Alphabet is a massive technology conglomerate based in California; it is often considered one of the most valuable businesses on the planet. Alphabet was created after a restructuring of a well-known company called Google in 2015, remaining Google’s parent company to this day.
It is possible to split Google’s HPC-related capabilities into six categories:
- Google Cloud can provide custom-built infrastructure for very specific and narrow use cases, offering an incredible computing power-storage combination. This is enhanced by cloud computing power, data storage solutions, and high-performance network infrastructure, which are required to maintain all this infrastructure. Google’s Cloud HPC solutions are probably their main direct involvement in the HPC space.
- Google also frequently partners with research organizations and educational institutions to develop new technologies for the HPC market and improve existing ones. This helps companies reach new fields that also need the power of HPC, including climate science, biotechnology, and quantum computing.
- Google’s overall status as one of the biggest technological companies in the world makes it a great choice for HPC services due to its fast networking capabilities, high levels of efficiency, constant availability, and impressive scalability.
- Google’s broad versatility is a massive advantage on its own, offering a solution package that works for both academic and commercial environments when necessary. This allows Alphabet to create value in different markets, improving the overall levels of service in different industries.
- Google’s ability to integrate new technologies into existing solutions drives innovation while also improving the performance and versatility of its HPC service.
- That's not to say that more traditional technologies are not being developed and improved under Alphabet. Far from it – Google's sheer dedication to cutting-edge technologies constantly improves the capabilities of solutions such as HPC across different industries.
Atos
Atos is a large IT services company that focuses largely on providing and managing HPC infrastructures. It can deploy these infrastructures, manage them, and advise users on issues that may arise within them.
Atos offers both on-premise and cloud-based HPC infrastructure options. It also provides HPC management services, lifting the heavy burden of managing complex HPC infrastructures from its users. Atos's other services include advanced HPC training programs, ensuring that clients who want to manage the software and hardware themselves can use it to the fullest.
Atos can provide and manage hybrid HPC deployments, as well as on-premise and cloud infrastructures separately. This is combined with impressive scalability, which is a very valuable capability in a modern-day environment with growing data demands.
For HPC users, Atos is a reliable orchestrator and advisor in the industry, offering a complete package of HPC infrastructures and the capability to manage them within the same company. This leaves the end users with a lot more time to focus on research or other tasks that require HPC in some way, shape, or form.
ExaGrid
ExaGrid is a primarily hardware-oriented backup storage solution built specifically for large data volumes. It relies on a tiered storage model and a clever backup policy that makes the most recent backups always accessible with no compression needed. It is a fast, scalable, and reliable backup solution that can also be excellent for HPC data protection, combining performance and cost-efficiency.
ExaGrid was designed to handle large data masses from the start, making it especially useful for HPC deployments. It also offers extremely quick restores for regular storage and VMs, with no rehydration necessary before the data can be used again.
ExaGrid's other capabilities include impressive cost-efficiency due to its tiered storage architecture and its combination of hardware and software for backup and recovery tasks. Every unit is a standalone system with its own storage, memory, processor, and other necessary elements, which makes scaling much easier in the long run because individual components can be replaced instead of the entire appliance.
Rescale
Rescale is a relatively recent arrival in this industry – a software company founded in 2011 that now offers both cloud services and cloud software. Rescale's offering is called Cloud Intelligent Computing; it can be used to optimize existing HPC workflows (mostly on-premise ones).
The company also regularly introduces new and improved cloud technologies in HPC to connect on-premise HPC workflows with their cloud-centric counterparts. For example, the ability to access HPC resources remotely over a safe, secure connection is a massive advantage for collaboration and innovation in the industry, dramatically improving the mobility of HPC operations.
Additionally, Rescale offers rapid provisioning for cloud-based HPC clusters, solving one of the biggest problems of traditional HPC deployments (long provisioning times). HPC solutions can thus be scaled up or down quickly, significantly improving both the convenience and the performance of these deployments.
Rescale also does not try to lock its clients into a single cloud provider, supporting various cloud HPC providers while letting users manage them all through Rescale.
The company still offers all the basic advantages of a cloud HPC environment – faster deployment than on-premise HPC, faster scaling, easier management, and lower upfront cost. These advantages pair well with Rescale's own improvements to existing HPC workflows, creating a rather interesting package of services and environments.
Advanced HPC
Another relatively small company (compared with Microsoft and Amazon) that specializes in HPC services is called just that – Advanced HPC. Founded back in 2009, it remains one of the top HPC providers on the market. Not only does Advanced HPC offer high-performance servers, networking solutions, and infrastructure offerings, there are also plenty of training opportunities to choose from.
Other capabilities of Advanced HPC include multiple professional services in the market, including the capability to manage:
- HPC clusters,
- NAS solutions,
- Parallel file systems, and more.
AHPC offers the capability to build complete turnkey HPC solutions from scratch instead of just selling separate components for HPC systems. Each of these systems can be customized in a specific fashion necessary for the client’s field of work, creating a unique approach for each client AHPC works with.
The regular package of advantages also applies to AHPC: lower upfront cost, much more flexible infrastructure, easier management, and deployment times leaps and bounds ahead of any on-premise deployment.
HPE
HPE stands for Hewlett Packard Enterprise, a multinational information technology business from the United States, created in 2015 when the Hewlett-Packard company split in two. HPE focuses mostly on the B2B segment of the market, offering networking, servers, storage, containerization, and more.
HPE’s capabilities as one of the more prominent HPC vendors include:
- High-Performance Storage – a storage solution built specifically for the high-volume, high-speed workloads that HPC computations are known to generate.
- HPC-optimized servers – combinations of fast networking capabilities, high-speed processors, and extremely large memory pools.
- HPE Superdome Flex Server – a unique modular server for HPC workloads.
HPE can offer consultation capabilities in the field of HPC (optimal infrastructure design and expected performance goals), cloud-based HPC capabilities (basic cloud HPC deployment capabilities), and comprehensive customer support for existing HPC environments (extensive technical expertise on the topic of HPC, troubleshooting, ongoing maintenance, and more).
HPE’s cloud HPC capabilities offer the same set of benefits that most cloud HPC providers can have, including easier infrastructure management, lower upfront deployment cost, high deployment speed, and even better performance thanks to HPE’s hardware that is custom-built and optimized for HPC workloads.
Storj
Storj is a distributed cloud storage service that uses blockchain technologies to offer a safe and secure home for data, especially sensitive information such as HPC training data. Storj offers highly efficient access to data no matter where it is physically located, and it handles large data volumes with ease, which makes it a prime contender for HPC-oriented use cases.
It is a cost-efficient solution with a decentralized structure, creating an unusual combination of high security and low price in the same package. The structure in question also offers plenty of redundancy by default, making it extremely valuable for all use cases that value high availability and durability (HPC is one of the prime examples of such clients).
Bacula Enterprise
Bacula Enterprise is a highly secure, comprehensive backup and recovery platform that excels in HPC environments, with many HPC-specific capabilities and features to choose from. It supports many different storage types, including physical storage, virtual storage, cloud storage, databases, applications, and so on. The solution itself was designed to handle vast and complex data systems without reliance on capacity-based pricing, making it an exciting option for many industries and fields of specialization, including HPC.
Bacula’s modular system makes it a great choice for practically any complex environment out there due to the ability to expand its original functionality with minimal effort. It is also great at managing and handling large data volumes in different forms – a significant capability that HPC infrastructures are always looking for. Bacula’s software is also highly scalable and customizable, significantly expanding its capabilities in terms of potential clientele. The subscription system that Bacula Enterprise uses is another advantage for industries that work with large data masses regularly, such as the HPC industry. For example, the licensing model is highly modular, which means that users only pay license fees for the modules (or plugins) used. Even better, Bacula does not structure its licensing by data volume, which means the software is not only easily scalable by design, but by price, too.
Another testament to Bacula's capabilities in the field of HPC is the number of clients it has gathered over the years, including organizations that use HPC infrastructures on a regular basis – Queen's University School of Computing, Texas A&M University, the University of Ghent, and even NASA itself. As a result of its higher levels of security, Bacula is relied on by the largest defense organization in the world and the largest non-bank lender in the world.
An important area where Bacula contributes to the HPC world is compliance. Many organizations increasingly need to meet regulatory requirements, and Bacula's extensive reporting across entire HPC environments helps them achieve the standards and certifications they need to operate properly.
The challenges of HPC
HPC can be a very powerful tool in the right circumstances, but the technology has its fair share of disadvantages and challenges. Some of these challenges are relatively common for a rapidly-evolving field such as HPC, while others are somewhat more unusual in comparison.
- On-premise HPC infrastructures are often extremely expensive up front. Cloud HPC services are a much cheaper alternative, but they may not offer the flexibility and control of a self-managed physical infrastructure. As such, plenty of companies try to run some form of hybrid HPC environment – which brings complexity-related challenges of its own.
- HPC systems combine dozens of GPUs and CPUs in a single solution, and compatibility is a very problematic subject for such systems, requiring plenty of knowledge and resources to keep everything operating properly (a major issue for on-premise HPC deployments). The same logic applies to software: parallelization is not a plug-and-play technology, and it requires plenty of optimization and setup to ensure that computational tasks are distributed evenly across HPC resources (a small scheduling sketch follows after this list).
- The rapid advancement of the field, especially in everything AI-related, makes it ever more expensive to keep up with HPC developments in terms of both hardware and software. Granted, this is a much bigger problem for on-premise HPC installations, but cloud-based HPC is susceptible to the same issue – the cost of regular upgrades will show up in the service price sooner or later.
- Managing HPC environments is an extremely difficult task that few IT professionals can handle, and the problem is even worse for hybrid HPC deployments, which combine physical and virtual HPC resources. Finding even one such professional can be quite challenging, and retaining them is an even bigger challenge in a highly competitive job market.
- Data security as a whole remains a significant problem for any modern industry, including HPC – especially when it comes to cloud-based and hybrid HPC environments.
- Energy consumption concerns also apply to HPC, since these solutions run many hardware units at once, driving overall energy use dramatically higher. Energy efficiency is a very important topic in this context.
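Here is the small scheduling sketch promised in the parallelization bullet above. It contrasts static chunking with dynamic (one-at-a-time) scheduling when task costs are uneven; the workload sizes are purely illustrative:

```python
# Why parallel distribution needs tuning: with uneven task costs, static
# chunking can leave cores idle while one block of expensive tasks runs,
# whereas dynamic scheduling keeps every worker busy. Sizes are made up.
from multiprocessing import Pool

def task(n: int) -> int:
    return sum(i * i for i in range(n))

# A mix of cheap and very expensive tasks, as in many real HPC workloads.
jobs = [10_000] * 28 + [5_000_000] * 4

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Static: fixed 8-job blocks assigned up front; the block that gets
        # several expensive jobs becomes a straggler delaying the whole run.
        static = pool.map(task, jobs, chunksize=8)
        # Dynamic: one job at a time; free workers immediately grab the
        # next job, which evens out the load automatically.
        dynamic = pool.map(task, jobs, chunksize=1)
    print(static == dynamic)  # identical results; only the timing differs
```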
The future of HPC
While a lot of HPC's recent popularity can be attributed to the sudden rise of AI as the latest "IT trend", the overall landscape of these high-level technologies is relatively volatile, forcing HPC vendors to adapt and evolve as fast as possible to stay relevant. The HPC industry will continue to exist as long as there is demand for massive computing power – be it for AI, IoT, or 3D imaging, combined with any of many application areas such as particle physics simulation, weather analysis, drug discovery and molecular modeling, genome sequencing, space exploration, oil and gas exploration, and seismic imaging and earthquake modeling. Exciting and significant HPC developments on both the technical and strategic fronts mean that this discipline has a bright future and will certainly contribute a lot to mankind going forward.