Press Center

17 October 2018

Nextron Comments on Claims in Bloomberg Article




Background:
On Thursday 4 October, Bloomberg.com published an article claiming that certain Supermicro motherboards have been modified with a chip that could provide a backdoor into server management (BMC and IPMI). This is alleged to have been done by Chinese intelligence at factories Supermicro uses as subcontractors. The alleged incidents date back three years. The article points to a type of motherboard shipped to customers in the USA, including Apple and Amazon. A follow-up article also claimed that a manipulated RJ45 connector had been discovered on a motherboard.


About Supermicro:
Supermicro is a company headquartered in Silicon Valley, USA. In addition, it has factories in Taiwan. The company is thus not Chinese, but it does to some extent use Chinese subcontractors when it cannot meet demand with its own capacity. It is one of the few companies in the industry that designs, develops and manufactures in-house. It therefore has better control over the supply chain than most others and knows its products and designs very well.


Supermicro's official response:
Supermicro's official response to the article can be read here:
https://www.supermicro.com/en/news/CEO-letter


Supermicro has not been contacted by US security authorities, nor has it been aware of any ongoing investigation. It has never found a chip of the kind referred to in the article, nor received any such report from customers. Amazon and Apple have also categorically denied the claims in the Bloomberg article. Hardware manipulation of the kind alleged would be relatively easy for Supermicro to detect. On this basis, no Supermicro products are known to be affected.


Supermicro has also engaged external security experts to carry out an independent review of its procedures and products.


Response from the authorities:
In a US Senate hearing on Wednesday 10 October, FBI Director Christopher Wray stated: "Be careful what you read. We would not want inaccurate information out there." In the same hearing, DHS Secretary Kirstjen Nielsen said they had no evidence supporting the article. The authorities are thus exercising a caution that suggests the matter is far less clear-cut than the articles have implied. The Norwegian National Security Authority has also stated that it has no information substantiating the claims in the article.


Other reactions:
One of the "witnesses" who came forward after the Bloomberg article gives a somewhat different version of the story here: https://www.servethehome.com/yossi-appleboum-disagrees-bloomberg-is-positioning-his-research-against-supermicro/
He himself states that Supermicro, if anything, is a victim here, like everyone else. He notes that there are countless points in the supply chain where items can be manipulated. Manipulation is a general risk across the entire sector.


Nextron's perspective:
Claims like these should be taken seriously, and appropriate security precautions should be taken. Cases of this kind are a risk for the entire industry, for all vendors and manufacturers. The vast majority manufacture in China, OEM design and production are widespread across the industry, and there are several known cases of similar manipulation, e.g. the Cisco case that came to light after the WikiLeaks disclosures.


Technical assessment:
These cases concern an alleged vulnerability in remote server management. Machines/motherboards without management/IPMI are therefore not affected in any case. If a machine has management and you do not wish to use it, the feature can be hardware-disabled with a jumper on the motherboard.


Remote management can in any case be a security risk if it is not configured correctly. This applies to servers from all manufacturers. A best-practice guide for configuration is available here: https://www.supermicro.com/products/nfo/files/IPMI/Best_Practices_BMC_Security.pdf
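
As a small illustration of the kind of review the guide recommends (a generic sketch, not a Supermicro or Nextron procedure), the following Python snippet shells out to the widely used open-source ipmitool utility to dump the BMC's LAN configuration and user list so they can be checked against your security policy. It assumes ipmitool is installed and that the BMC answers on IPMI channel 1, a common default.

import subprocess

def run_ipmitool(*args: str) -> str:
    """Run an ipmitool subcommand locally and return its stdout."""
    result = subprocess.run(["ipmitool", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # LAN channel settings: verify the IP source and that the BMC is not
    # exposed on an untrusted network.
    print(run_ipmitool("lan", "print", "1"))
    # User accounts: look for default or unused accounts and for
    # unnecessary ADMINISTRATOR privilege levels.
    print(run_ipmitool("user", "list", "1"))

The actual hardening steps (changing default passwords, disabling unused accounts, isolating the BMC on a dedicated management network) should follow the Supermicro guide linked above.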


Please contact your Nextron representative if you have comments, questions or further concerns.


Geir Elstad
Managing Director


Email Geir at nextron.no





4 October 2018

Supermicro Refutes Claims in Bloomberg Article




Supermicro, along with Apple and Amazon, refutes claims in Bloomberg story


SAN JOSE, Calif., October 4, 2018 — Super Micro Computer, Inc. (SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, strongly refutes reports that servers it sold to customers contained malicious microchips in the motherboards of those systems.


An article published today alleges that Supermicro motherboards sold to certain customers contained malicious chips in 2015. Supermicro has never found any malicious chips, nor been informed by any customer that such chips have been found.


Each company mentioned in the article (Supermicro, Apple, Amazon and Elemental) has issued strong statements denying the claims:


Apple stated on CNBC, “We are deeply disappointed that in their dealings with us, Bloomberg’s reporters have not been open to the possibility that they or their sources might be wrong or misinformed. Our best guess is that they are confusing their story with a previously reported 2016 incident in which we discovered an infected driver on a single Supermicro server in one of our labs. That one-time event was determined to be accidental and not a targeted attack against Apple."


Steve Schmidt, Chief Information Security Officer at Amazon Web Services stated, "As we shared with Bloomberg BusinessWeek multiple times over the last couple months, at no time, past or present, have we ever found any issues relating to modified hardware or malicious chips in Supermicro motherboards in any Elemental or Amazon systems."


Supermicro has never been contacted by any government agencies either domestic or foreign regarding the alleged claims.




Supermicro takes all security claims very seriously and makes continuous investments in the security capabilities of their products. The manufacture of motherboards in China is not unique to Supermicro and is a standard industry practice. Nearly all systems providers use the same contract manufacturers. Supermicro qualifies and certifies every contract manufacturer and routinely inspects their facilities and processes closely.





11 October 2018

Supermicro Designs New Open Software-Defined Networking (SDN) Platform Optimized for 5G and Telco Applications and Launches Verified Intel® Select Solution for uCPE




Powerful and Compact Systems Optimized for SD-WAN, uCPE, CDN and Security Applications, Supporting Intel® Xeon® Scalable Processors, Intel® Xeon® D Processors and Intel® QuickAssist Technology, with 1G, 10G and 25G Networking All Available


SAN JOSE, Calif., October 11, 2018 — Super Micro Computer, Inc. (SMCI), a global leader in enterprise and edge computing, storage, networking solutions and green computing technology, announced earlier this week at the SDN & NFV World Congress that it is developing a new modularized 36-port networking platform optimized for a wide range of 5G software-defined networking (SDN) applications. The company also launched the new SuperServer 5019D-FN8TP, a verified Intel® Select Solution for uCPE designed to accelerate infrastructure deployment for a more efficient and future-ready network.


As internet technologies continue to advance and new data-intensive applications are developed, the amount of data generated and sent across today’s networks is escalating exponentially. With so much data going across the network, it is vital that modern data centers have cutting-edge networking infrastructure to support the speeds and reliability needed for these new technologies.


To address this need, Supermicro’s new 5G network edge server design combines best-in-class compute, memory, storage and modular networking interfaces into a compact, short-depth 1U system with redundant power. This flexible new server supports the latest Intel® Xeon® Scalable processors and Intel® Xeon® D processors to deliver more compute performance than previous generations, and also supports integrated Intel® QuickAssist Technology (Intel® QAT), providing cryptography engines for faster (up to 100Gbps) encryption and decryption of information for authorized and intended use.


“As the market moves to 5G, Supermicro is addressing the demand for more intelligent network edge solutions by offering a comprehensive selection of compact server solutions to service a wide range of vertical markets including networking, communications, security, and industrial automation,” said Charles Liang, President and CEO of Supermicro. “Our flexible and powerful new 1U modularized network edge platform supports up to 36 network ports and is built for network function virtualization (NFV) and software-defined networking (SDN) to provide the agility and performance for software-defined wide area network (SD-WAN), universal customer premises equipment (uCPE), 5G cloud and centralized RAN applications.”


Supermicro’s new networking edge platform not only delivers balanced compute, storage and networking for the intelligent edge, but also offers long-life availability. With up to 512GB of DDR4 four-channel memory operating at 2400MHz, WAN and LAN communication support, redundant power and operation across a 0 to 45 degrees C ambient temperature range, this new design is a game changer for SDN and telco companies.
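
For context, the theoretical peak memory bandwidth implied by four DDR4-2400 channels (8 bytes per channel per transfer, the standard DDR4 bus width) is roughly:

4 \times 2400 \times 10^{6}\,\tfrac{\text{transfers}}{\text{s}} \times 8\,\text{B} = 76.8\,\text{GB/s}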






Introducing Supermicro’s Intel® Select Solution for uCPE
In addition, as the market adoption of universal customer premises equipment (uCPE) continues to grow, Supermicro’s verified Intel® Select Solution for uCPE enables service providers to quickly and efficiently deploy various network function virtualization (NFV) applications securely and easily. With the SuperServer 5019D-FN8TP verified as an Intel® Select Solution for uCPE, customers looking to easily adopt SD-WAN can have confidence that this system solution offers not just easy deployment but also verified, workload-optimized performance.


For more details, visit Supermicro Embedded ISS






27 September 2018

Supermicro Celebrates 25-Year Anniversary as Leading USA-based Green Computing Server & Storage System Provider



Supermicro’s Resource-Saving Application-Optimized Systems Share Reaches 10% of Worldwide Data Center Market as Rapid Growth Continues



SAN JOSE, Calif., Sept. 27, 2018 — Super Micro Computer, Inc., the third largest worldwide supplier of servers according to IDC in 2017 and a global leader in enterprise computing, storage, networking solutions and green computing technology, is celebrating the company’s 25th anniversary this week.



Supermicro is a multi-billion dollar Fortune 1000 company. The establishment of the company as a provider of leading enterprise data center solutions has continued to drive Supermicro’s impressive and consistent growth for a quarter of a century. Impressively, Supermicro achieved 6x revenue growth over the past ten years while being profitable every year since its inception.

Starting with founder, CEO and president Charles Liang’s very first system board design in 1993 at Paragon Drive, San Jose, surrounded by huge redwood trees, Supermicro has long promoted solutions that reduce emissions and protect the environment. Supermicro’s US-based focus enables it to design and manufacture industry-leading server and storage products with superior performance, quality and time-to-market for rapid adoption by the industry. Based in the heart of Silicon Valley, the innovation capital of the world, Supermicro has beaten the odds by continuing to engineer and assemble its products at the company’s San Jose headquarters while competitors outsource these functions overseas. By keeping most R&D efforts in-house, Supermicro increases the communication and collaboration between design teams, which streamlines the development process and reduces time-to-market.

Always first-to-market with the latest innovative server technologies, Supermicro’s product innovations include its flagship BigTwin™ four-node 2U systems with maximum compute, memory, NVMe flash storage, improved TCO and TCE (Total Cost to the Environment) leadership. Regarding NVMe, the company was the first to offer hot-plug NVMe storage and now offers over 150 different system models that support NVMe, including new all-flash 1U systems that support 32 NVMe SSDs in U.2, NF1, Ruler or EDSFF form factors. Supermicro innovations enable solutions that deliver maximum density and power efficiency. One Fortune 100 data center that has deployed tens of thousands of Supermicro’s MicroBlade™ nodes is achieving 1.06 Power Usage Effectiveness (PUE) and 280 server nodes per 24-inch rack.

A proven global leader in high-performance, high-efficiency server technology and innovation, Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiatives. The company recently developed Resource-Saving Architecture designs that reduce data center energy consumption and e-waste while saving customers money on both acquisition costs during refresh cycles and TCO. Supermicro’s latest innovative server solutions will not only give customers a competitive edge, but also an ecological one.

“Over the past 25 years, Supermicro has built an extremely strong foundation for rapid expansion and growth, and we expect to continue leading the industry with our innovative architectures and solutions,” said Charles Liang, president and CEO of Supermicro. “As the world transitions to 5G and businesses use more AI, machine learning and cloud applications, the industry demands more and better computation-intensive solutions – especially in data centers. At Supermicro, green computing to reduce the impact to the environment not only inspires our technological innovation and business growth, but also fuels our passion to protect our one and only Mother Earth.”

To learn more about Supermicro’s Resource-Saving innovations and commitment to green computing, please visit Supermicro Resource Savings Architecture.

For more information on Supermicro's complete range of high performance, high-efficiency Server, Storage and Networking solutions, visit Supermicro.

Follow Supermicro on Facebook and Twitter to receive their latest news and announcements.


20 September 2018

Supermicro Introduces AI Inference-optimized New GPU Server with up to 20 NVIDIA Tesla T4 Accelerators in 4U



Inference-optimized system extends Supermicro’s leading portfolio of GPU servers to offer customers an unparalleled selection of AI solutions for Inference, Training, and Deep Learning, including Single-Root, Dual-Root, Scale-Up and Scale-Out designs



SAN JOSE, Calif., Sept. 19, 2018—Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today introduced the latest additions to its extensive line of GPU-optimized servers.

Artificial intelligence (AI) is quickly becoming one of the most crucial components to business success now and in the foreseeable future. Today, the necessity of deploying powerful computing platforms that can accelerate and cost-effectively scale their AI-based products and services has become vital for successful enterprises.

Supermicro’s new SuperServer 6049GP-TRT provides the superior performance required to accelerate the diverse applications of modern AI. For maximum GPU density and performance, this 4U server supports up to 20 NVIDIA® Tesla® T4 Tensor Core GPUs, three terabytes of memory, and 24 hot-swappable 3.5” drives. This system also features four 2000-watt Titanium level efficiency (2+2) redundant power supplies to help optimize the power efficiency, uptime and serviceability.
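
A rough power-budget check (assuming the Tesla T4's published 70-watt TDP, and reading "2+2 redundant" as two supplies carrying the load while two provide redundancy) shows the GPU complement fits comfortably:

20 \times 70\,\text{W} = 1400\,\text{W (GPUs)} \;<\; 2 \times 2000\,\text{W} = 4000\,\text{W (available)}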

“Supermicro is innovating to address the rapidly emerging high-throughput inference market driven by technologies such as 5G, Smart Cities and IoT devices, which are generating huge amounts of data and require real-time decision making,” said Charles Liang, president and CEO of Supermicro. “We see the combination of NVIDIA TensorRT and the new Turing architecture based T4 GPU Accelerator as the ideal combination for these new demanding and latency-sensitive workloads and are aggressively leveraging them in our GPU system product line.”

“Enterprise customers will benefit from a dramatic boost in throughput and power efficiency from the NVIDIA Tesla T4 GPUs in Supermicro’s new high-density servers,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “With AI inference constituting an increasingly large portion of data center workloads, these Tesla T4 GPU platforms provide incredibly efficient real-time and batch inference.”

Supermicro’s performance-optimized 4U SuperServer 6049GP-TRT system can support up to 20 PCI-E NVIDIA Tesla T4 GPU accelerators, which dramatically increases the density of GPU server platforms for wide data center deployment supporting deep learning inference applications. As more and more industries deploy artificial intelligence, they will be looking for high-density servers optimized for inference. The 6049GP-TRT is the optimal platform to lead the transition from training deep learning neural networks to deploying artificial intelligence into real-world applications such as facial recognition and language translation.

Supermicro has an entire family of 4U GPU systems that support the ultra-efficient Tesla T4, which is designed to accelerate inference workloads in any scale-out server. The hardware accelerated transcode engine in Tesla T4 delivers multiple HD video streams in real-time and allows integrating deep learning into the video transcoding pipeline to enable a new class of smart video applications. As deep learning shapes our world like no other computing model in history, deeper and more complex neural networks are trained on exponentially larger volumes of data. To achieve responsiveness, these models are deployed on powerful Supermicro GPU servers to deliver maximum throughput for inference workloads.

For comprehensive information on Supermicro NVIDIA GPU system product lines, please go to Supermicro GPU.



For more information on Supermicro's complete range of high performance, high-efficiency Server, Storage and Networking solutions, visit Supermicro.

Follow Supermicro on Facebook and Twitter to receive their latest news and announcements.


12 September 2018

Supermicro’s New NVIDIA HGX-2-Based SuperServer is World’s Most Powerful Cloud Server Platform for AI and HPC



Designed for the Next Generation of AI, New HGX-2 System with 16 Tesla V100 GPUs and NVSwitch leverages over 80,000 CUDA cores to deliver unmatched performance for deep learning and compute workloads



TOKYO, Japan, September 13, 2018—Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, networking solutions and green computing technology, today announced that the company’s upcoming NVIDIA® HGX-2 cloud server platform will be the world’s most powerful system for artificial intelligence (AI) and high-performance computing (HPC) capable of performing at 2 PetaFLOPS.

Supermicro hosted a Platinum Sponsor Booth at the GPU Technology Conference (GTC) Japan 2018 in Tokyo on September 13-14.

“Supermicro’s new SuperServer based on the HGX-2 platform will deliver more than double the performance of current systems, which will help enterprises address the rapidly expanding size of AI models that sometimes require weeks to train,” said Charles Liang, president and CEO of Supermicro. “Our new HGX-2 system will enable efficient training of complex models. It combines sixteen Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate GPU memory to deliver unmatched compute power.”
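
The headline numbers follow directly from the per-GPU specifications (using the Tesla V100's published peak of 125 TFLOPS for Tensor Core operations):

16 \times 32\,\text{GB} = 512\,\text{GB} \approx \tfrac{1}{2}\,\text{TB aggregate GPU memory}
16 \times 125\,\text{TFLOPS} = 2000\,\text{TFLOPS} = 2\,\text{PetaFLOPS}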

From natural speech by computers to autonomous vehicles, rapid progress in AI has transformed entire industries. To enable these capabilities, AI models are exploding in size. HPC applications are similarly growing in complexity as they unlock new scientific insights. Supermicro’s HGX-2 based SuperServer (SYS-9029GP-TNVRT) will provide a superset design for datacenters accelerating AI and HPC in the cloud. With fine-tuned optimizations, this SuperServer will deliver the highest compute performance and memory for rapid model training.

Supermicro GPU systems also support the ultra-efficient Tesla T4 that is designed to accelerate inference workloads in any scale-out server. The hardware accelerated transcode engine in Tesla T4 delivers multiple HD video streams in real-time and allows integrating deep learning into the video transcoding pipeline to enable a new class of smart video applications. As deep learning shapes our world like no other computing model in history, deeper and more complex neural networks are trained on exponentially larger volumes of data. To achieve responsiveness, these models are deployed on powerful Supermicro GPU servers to deliver maximum throughput for inference workloads.

With the convergence of big data analytics and machine learning, the latest NVIDIA GPU architectures, and improved machine learning algorithms, deep learning applications require the processing power of multiple GPUs that must communicate efficiently and effectively to expand the GPU network. Supermicro’s single-root GPU system allows multiple NVIDIA GPUs to communicate efficiently to minimize latency and maximize throughput as measured by the NCCL P2PBandwidthTest.
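
As an illustration of what peer-to-peer communication means in practice, here is a minimal sketch (assuming a machine with PyTorch and CUDA installed; the bandwidth benchmark named above is a separate dedicated tool) that enumerates GPU pairs and reports whether direct P2P access is possible, the property a single-root PCIe topology is designed to maximize:

import torch

if __name__ == "__main__":
    n = torch.cuda.device_count()
    print(f"Found {n} CUDA device(s)")
    # Check every ordered pair of GPUs for direct peer-to-peer access.
    for i in range(n):
        for j in range(n):
            if i != j:
                ok = torch.cuda.can_device_access_peer(i, j)
                status = "possible" if ok else "NOT possible"
                print(f"GPU {i} -> GPU {j}: peer access {status}")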

For comprehensive information on Supermicro NVIDIA GPU system product lines, please go to Supermicro GPU.



For more information on Supermicro's complete range of high performance, high-efficiency Server, Storage and Networking solutions, visit Supermicro.

Follow Supermicro on Facebook and Twitter to receive their latest news and announcements.


19. mai 2018

NVIDIA Introduces HGX-2, Fusing HPC and AI Computing into Unified Architecture





HGX-2 Cloud-Server Platform Accelerates Multi-Precision Workloads; Its Two Petaflops of Processing Power Sets Record for AI Performance


NVIDIA introduced NVIDIA HGX-2™, the first unified computing platform for both artificial intelligence and high performance computing.

The HGX-2 cloud server platform, with multi-precision computing capabilities, provides unique flexibility to support the future of computing. It allows high-precision calculations using FP64 and FP32 for scientific computing and simulations, while also enabling FP16 and Int8 for AI training and inference. This unprecedented versatility meets the requirements of the growing number of applications that combine HPC with AI.


A number of leading computer makers today shared plans to bring to market systems based on the NVIDIA HGX-2 platform.


“The world of computing has changed,” said Jensen Huang, founder and chief executive officer of NVIDIA, speaking at the GPU Technology Conference Taiwan, which kicked off today. “CPU scaling has slowed at a time when computing demand is skyrocketing. NVIDIA’s HGX-2 with Tensor Core GPUs gives the industry a powerful, versatile computing platform that fuses HPC and AI to solve the world’s grand challenges.”


HGX-2 serves as a “building block” for manufacturers to create some of the most advanced systems for HPC and AI. It has achieved record AI training speeds of 15,500 images per second on the ResNet-50 training benchmark, and can replace up to 300 CPU-only servers.


It incorporates such breakthrough features as NVIDIA NVSwitch™ interconnect fabric, which seamlessly links 16 NVIDIA Tesla® V100 Tensor Core GPUs to work as a single, giant GPU delivering two petaflops of AI performance. The first system built using HGX-2 was the recently announced NVIDIA DGX-2™.


HGX-2 comes a year after the launch of the original NVIDIA HGX-1, at Computex 2017. The HGX-1 reference architecture won broad adoption among the world’s leading server makers and companies operating massive datacenters, including Amazon Web Services, Facebook and Microsoft.


OEM, ODM Systems Expected Later This Year


Four leading server makers — Lenovo, QCT, Supermicro and Wiwynn — announced plans to bring their own HGX-2-based systems to market later this year.


Additionally, four of the world’s top original design manufacturers (ODMs) — Foxconn, Inventec, Quanta and Wistron — are designing HGX-2-based systems, also expected later this year, for use in some of the world’s largest cloud datacenters.


Family of NVIDIA GPU-Accelerated Server Platforms


HGX-2 is a part of the larger family of NVIDIA GPU-Accelerated Server Platforms, an ecosystem of qualified server classes addressing a broad array of AI, HPC and accelerated computing workloads with optimal performance.


Supported by major server manufacturers, the platforms align with the datacenter server ecosystem by offering the optimal mix of GPUs, CPUs and interconnects for diverse training (HGX-T2), inference (HGX-I2) and supercomputing (SCX) applications. Customers can choose a specific server platform to match their accelerated computing workload mix and achieve best-in-class performance.


Broad Industry Support


Top OEMs and ODMs have voiced strong support for HGX-2:


“Foxconn has long been dedicated to hyperscale computing solutions and successfully won customer recognition. We’re glad to work with NVIDIA for the HGX-2 project, which is the most promising solution to fulfill explosive demand from AI/DL.”


— Ed Wu, corporate executive vice president at Foxconn and chairman at Ingrasys                                                                             


“Inventec has a proven history of delivering high-performing and scalable servers with robust innovative designs for our customers who run some of the world’s largest datacenters. By rapidly incorporating HGX-2 into our future designs, we’ll infuse our portfolio with the most powerful AI solution available to companies worldwide.”


— Evan Chien, head of IEC White Box Product Center, China Business Line Director, Inventec


“NVIDIA’s HGX-2 ups the ante with a design capable of delivering two petaflops of performance for AI and HPC-intensive workloads. With the HGX-2 server building block, we’ll be able to quickly develop new systems that can meet the growing needs of our customers who demand the highest performance at scale.”


— Paul Ju, vice president and general manager of Lenovo DCG


“As a leading cloud enabler, Quanta is committed to developing solutions for the next generation of clouds for a variety of innovative use cases. As we have seen a multitude of AI applications on the rise, Quanta works closely with NVIDIA to ensure our clients benefit from the latest and greatest GPU technologies. We are thrilled to broaden our GPU compute portfolio with this critical enabler for AI clouds as an HGX-2 launch partner.”


— Mike Yang, senior vice president, Quanta Computer, and president, QCT


“To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform. The HGX-2 system will enable efficient training of complex models.”


— Charles Liang, president and CEO of Supermicro


“We are very honored to work with NVIDIA as a partner. The demand for AI cloud computing is emerging in today’s modern technology environment. I strongly believe the high performance and modularized flexibility of the HGX-2 system will make great contributions to various computing areas, ranging from academics and science to government applications.”


— Jeff Lin, president of Enterprise Business Group, Wistron


“Wiwynn specializes in delivering hyperscale datacenter and cloud infrastructure solutions. Our collaboration with NVIDIA and the HGX-2 server building block will enable us to provide our customers with two petaflops of computing for computationally intensive AI and HPC workloads.”


— Steven Lu, Vice President, Wiwynn

19 June 2016

NVIDIA Tesla P100 Supercharges HPC Applications by More Than 30X





Powered by Pascal Architecture, Tesla P100 Delivers Massive Leap in Data Center Throughput


ISC16 - To meet the unprecedented computational demands placed on modern data centers, NVIDIA today introduced the NVIDIA® Tesla® P100 GPU accelerator for PCIe servers, which delivers massive leaps in performance and value compared with CPU-based systems.

Demand for supercomputing cycles is higher than ever. The majority of scientists are unable to secure adequate time on supercomputing systems to conduct their research, based on National Science Foundation data.1 In addition, high performance computing (HPC) technologies are increasingly required to power computationally intensive deep learning applications, while researchers are applying AI techniques to drive advances in traditional scientific fields.

The Tesla P100 GPU accelerator for PCIe meets these computational demands through the unmatched performance and efficiency of the NVIDIA Pascal™ GPU architecture. It enables the creation of "super nodes" that provide the throughput of more than 32 commodity CPU-based nodes and deliver up to 70 percent lower capital and operational costs.2

"Accelerated computing is the only path forward to keep up with researchers' insatiable demand for HPC and AI supercomputing," said Ian Buck, vice president of accelerated computing at NVIDIA. "Deploying CPU-only systems to meet this demand would require large numbers of commodity compute nodes, leading to substantially increased costs without proportional performance gains. Dramatically scaling performance with fewer, more powerful Tesla P100-powered nodes puts more dollars into computing instead of vast infrastructure overhead."

The Tesla P100 for PCIe is available in a standard PCIe form factor and is compatible with today's GPU-accelerated servers. It is optimized to power the most computationally intensive AI and HPC data center applications. A single Tesla P100-powered server delivers higher performance than 50 CPU-only server nodes when running the AMBER molecular dynamics code,3 and is faster than 32 CPU-only nodes when running the VASP material science application.4

Later this year, Tesla P100 accelerators for PCIe will power an upgraded version of Europe's fastest supercomputer, the Piz Daint system at the Swiss National Supercomputing Center in Lugano, Switzerland.

"Tesla P100 accelerators deliver new levels of performance and efficiency to address some of the most important computational challenges of our time," said Thomas Schulthess, professor of computational physics at ETH Zurich and director of the Swiss National Supercomputing Center. "The upgrade of 4,500 GPU-accelerated nodes on Piz Daint to Tesla P100 GPUs will more than double the system's performance, enabling researchers to achieve breakthroughs in a range of fields, including cosmology, materials science, seismology and climatology."

The Tesla P100 for PCIe is the latest addition to the NVIDIA Tesla Accelerated Computing Platform. Key features include:



  • Unmatched application performance for mixed-HPC workloads -- Delivering 4.7 teraflops and 9.3 teraflops of double-precision and single-precision peak performance, respectively, a single Pascal-based Tesla P100 node provides the equivalent performance of more than 32 commodity CPU-only servers.
  • CoWoS with HBM2 for unprecedented efficiency -- The Tesla P100 unifies processor and data into a single package to deliver unprecedented compute efficiency. An innovative approach to memory design -- chip on wafer on substrate (CoWoS) with HBM2 -- provides a 3x boost in memory bandwidth performance, or 720GB/sec, compared to the NVIDIA Maxwell™ architecture.
  • Page Migration Engine for simplified parallel programming -- Frees developers to focus on tuning for higher performance and less on managing data movement, and allows applications to scale beyond the GPU physical memory size with support for virtual memory paging. Unified memory technology dramatically improves productivity by enabling developers to see a single memory space for the entire node.
  • Unmatched application support -- With 410 GPU-accelerated applications, including nine of the top 10 HPC applications, the Tesla platform is the world's leading HPC computing platform.


Tesla P100 for PCIe Specifications

  • 4.7 teraflops double-precision performance, 9.3 teraflops single-precision performance and 18.7 teraflops half-precision performance with NVIDIA GPU BOOST™ technology
  • Support for PCIe Gen 3 interconnect (32GB/sec bi-directional bandwidth)
  • Enhanced programmability with Page Migration Engine and unified memory
  • ECC protection for increased reliability
  • Server-optimized for highest data center throughput and reliability
  • Available in two configurations:
    • 16GB of CoWoS HBM2 stacked memory, delivering 720GB/sec of memory bandwidth
    • 12GB of CoWoS HBM2 stacked memory, delivering 540GB/sec of memory bandwidth
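
These peak figures follow Pascal's precision ratios, with throughput roughly doubling at each halving of precision:

4.7 : 9.3 : 18.7\,\text{TFLOPS} \;\approx\; 1 : 2 : 4 \quad (\text{FP64} : \text{FP32} : \text{FP16})

The quoted 32GB/sec PCIe Gen 3 figure is likewise the bidirectional total for an x16 link, about 16GB/sec in each direction.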


Availability
NVIDIA Tesla P100 GPU accelerator for PCIe-based systems is expected to be available beginning in Q4 2016 from NVIDIA reseller partners and server manufacturers, including Cray, Dell, Hewlett Packard Enterprise, IBM, Lenovo and SGI.



Additional Resources
Boost throughput for HPC (video)



Pascal deep dive (blog)



Keep Current on NVIDIA
Subscribe to the NVIDIA blog, follow us on Facebook, Google+, Twitter, LinkedIn and Instagram, and view NVIDIA videos on YouTube and images on Flickr.









