Date: April 2, 2019
Author(s): Rob Williams
Intel’s Xeon Scalable processors enter their second generation, so to complement that, the company has upgraded all models, and added some new SKUs for good measure. Complementing the launch is even more hardware and tech, including Intel’s DL Boost, Xeon D-1600 SoC, more Optane, and even some FPGA action.
In case it hasn’t been obvious enough, Intel has been on a serious tear as of late. With competition having become stronger in recent years, the Santa Clara native has proven that it’s both nimble and quick to turn out new product – either current kit beefed-up to better tackle the new competition, or new tech that will help strengthen the company’s position across many markets.
With its first-gen Xeon Scalable processors, Intel seemed to release a SKU for everyone, but with the new second-gen launch, it’s shown that there are even more gaps that can be filled, at a wide range of price points. That even includes the top-end, which has made room for new processors breaking through the 28-core limit of last-gen. The “big one” becomes the Xeon Platinum 9282, a processor that every power-hungry user will drool at the thought of owning. 56 cores (112 threads) and a 12-channel memory controller can do that.
Today is a big day for Intel, because it’s not only releasing its second-gen Xeon Scalable processors, but many complementary technologies. Everything revolves around Intel’s “Data Centric” focus, which takes care not only of processing data, but moving it around in the most efficient and cost-effective manner possible. There’s so much to talk about – almost too much – so let’s jump right in and tackle the company’s most notable new announcements.
We thought about creating a big table to show all of Intel’s current Xeon Scalable processors, but after taking a look at the slides below, you can probably understand why it felt easier to simply share those. There are a lot of models listed here, and every single one of them is a second-gen chip, denoted by the 2 that occupies the second slot in the model name.
Some of these chips speak for themselves, but with this generation, Intel is taking care of customers with very specific needs. There’s a new SKU dedicated to search-specific applications, and another for improving VM density. Then we have others for network optimizations, and for those who need specific clock control, there are three classes of Speed Select SKUs available. Specific SKUs will help Intel better target edge use cases, and with 5G soon to become a big part of our lives, a lot of focus is of course being given there (it helps that Intel itself is heavily invested in 5G).
On the first slide above, the entire stack is shown off, while the second covers everything but the 9200 series. That leaves pricing for the biggest chips up in the air, but as Intel is promising immediate availability for most chips in this list, it won’t take long to find out. With the 28-core Xeon Platinum 8280 priced at $10,009, it’s safe to say that the 9282, which doubles its number of cores and cache, is going to be priced significantly higher.
With these new 9200 series chips, Intel manages to apply the hurt to the top of AMD’s EPYC stack, which currently tops out with a 32-core model. Of course, AMD teased its second-gen EPYC “Rome” in recent months, and since it will feature a 64-core model, this 56-core chip from Intel may not last as the core leader for long – but what will ultimately matter is total performance and features. Intel could not be more aggressive right now, so it’ll be interesting to see how AMD plans to counter this launch in the future.
Before we move too much further, Intel decided to share some information about Xeon Scalable model names. The first number in the model name represents the series it’s in, where 9 and 8 are Platinum, 6 and 5 are Gold, 4 is Silver, and 3 is Bronze. The third and fourth digits simply position the model within its series, and the final characters, if there are any, represent options; eg: L means 4.5TB memory support, M means 2.0TB memory support, Y means Intel Speed Select, and so on.
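The naming scheme above is regular enough to sketch in a few lines of code. This is purely illustrative, assuming only the rules described here (the suffix list covers just the options the article mentions, and the function name is our own):

```python
# Decode a 2nd-gen Xeon Scalable model name per the scheme described above.
# Illustrative only; the suffixes shown are just those mentioned here.

SERIES = {"9": "Platinum", "8": "Platinum", "6": "Gold",
          "5": "Gold", "4": "Silver", "3": "Bronze"}
SUFFIXES = {"L": "4.5TB memory support",
            "M": "2.0TB memory support",
            "Y": "Intel Speed Select"}

def decode_sku(name):
    """Split e.g. '8280L' into series tier, generation, and option suffixes."""
    digits, suffix = name[:4], name[4:]
    return {
        "series": SERIES.get(digits[0], "unknown"),
        "generation": int(digits[1]),      # '2' marks a second-gen chip
        "sku": digits[2:],                 # positioning within the series
        "options": [SUFFIXES.get(c, c) for c in suffix],
    }

print(decode_sku("8280L"))
```

Feeding it “8280L” reports a Platinum-series, second-gen part with 4.5TB memory support, matching the naming rules laid out above.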
With so much AI/deep-learning talk revolving around GPUs in recent years, it’s refreshing to see that Intel has a heavy AI acceleration push with these latest-gen processors. With Intel DL Boost, the company highly accelerates inference, and gives us an extreme example of how far things have come in the past few years with the slide below.
Here, you can see mammoth growth in performance on the same CPU after a two-year span. Hardware isn’t the only part of the equation that matters; so too does software, and in this particular case, the gains seen are tied to ResNet-50 running on Caffe, a popular framework; the combination also happens to double as a popular performance gauge.
When using INT8 on these new processors, the gains with DL Boost are significant, but even more impressive is that taking an FP32 model and retooling it as INT8 can offer an even greater boost. We owe these improvements to some new AVX-512 instructions, such as AVX512_VNNI (Vector Neural Network Instruction).
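To make the idea concrete, here is a behavioral sketch of what VNNI’s VPDPBUSD instruction fuses into a single operation: multiplying unsigned 8-bit values (activations) by signed 8-bit values (weights), summing groups of four products, and accumulating into 32-bit integers. This is a plain NumPy illustration of the arithmetic, not actual intrinsics:

```python
import numpy as np

def vpdpbusd_like(acc, a_u8, b_s8):
    """Emulate the arithmetic of VNNI's VPDPBUSD: u8 x s8 products,
    summed in groups of four, accumulated into int32."""
    prod = a_u8.astype(np.int32) * b_s8.astype(np.int32)
    return acc + prod.reshape(-1, 4).sum(axis=1)

a = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.uint8)    # activations
b = np.array([1, -1, 2, -2, 1, 1, -1, -1], dtype=np.int8)  # weights
acc = np.zeros(2, dtype=np.int32)
print(vpdpbusd_like(acc, a, b))
```

Before VNNI, this sequence took three separate AVX-512 instructions; collapsing it into one is a big part of why INT8 inference sees the uplift Intel is advertising.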
Like Xeon, Intel continues to innovate with its storage, because again, the company is hugely focused on a “Data Centric” future, and that requires not only processing power, but lots of moving data. The storage triangle seen below is probably one of Intel’s favorite slides to show off, but for fair reason.
At the top of this performance triangle, nothing is going to displace DRAM for those who need the fastest memory bandwidth possible. But for those who don’t mind sacrificing some performance for improved ROI and increased storage space, Optane DC persistent memory is hugely attractive, and we hope its allure is only going to grow with time.
Last year, we chatted to MySQL legend and current primary author of MariaDB Michael “Monty” Widenius while in Singapore for Acronis’ 15th anniversary, and he spoke highly of Optane and its future, noting that the improved latencies over traditional storage, and also increased capacity in general, would make a huge difference in database work. Not surprisingly, databases are a major use case Intel is talking about with these new solutions.
Optane DC memory is designed to fit in between DRAM and fast SSD storage, which would imply PCIe-based devices. Regular SSDs have become part of the “warm tier”, where data isn’t needed constantly, but could be called upon at any given time. Below that, QLC 4-bit-per-cell SSDs fit in, with the slowest of the slow storage, tape and mechanical HDD, dedicated to the cold tier.
Intel says that with its Optane DC persistent memory, up to 36TB of system-level capacity could be added in an eight-socket system, representing a 3x improvement compared to the original Xeon Scalable series.
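That headline figure lines up neatly with the 4.5TB-per-socket ceiling of the “L” SKUs mentioned earlier; a quick sanity check (assuming the per-socket ceiling is what drives the system-level number):

```python
# Sanity-check Intel's 36TB claim: 4.5TB of addressable memory per
# socket (the 'L' SKU ceiling) across an eight-socket system.
per_socket_tb = 4.5
sockets = 8
system_tb = per_socket_tb * sockets
print(system_tb)  # system-level capacity in TB
```

Eight sockets at 4.5TB each gets you exactly to the 36TB Intel is quoting.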
Further Optane love is seen with the new Optane SSD DC D4800X, a dual-port drive that adds some redundancy in the event one of the ports (on the drive or motherboard) decides to die on you. This is quite rare, but when dealing with important drives, any extra redundancy adds another layer of confidence to the operation. Intel is also releasing a new PCIe drive for read-heavy workloads, SSD D5-P4326, which uses QLC NAND to allow it to be offered at a more affordable price. QLC might seem a bit strange in an enterprise environment, but with read-heavy workloads, it can make a ton of sense.
For environments where space is extremely limited, and more modest Xeon performance is acceptable, Intel has a D to give you, in the form of the Xeon D-1600. This SoC best suits limited-space environments, or those where even the power is limited. These chips will offer between 2 and 8 cores, and TDPs of 27-65W.
We remember a story floating around the internet a couple of months ago of someone claiming that quad-core CPUs are effectively dead, but the fact that Intel has just announced a brand-new 2-core Xeon proves that to be very incorrect. Of course, 2-core CPUs are awful by today’s standards for real-world desktop use, but for fixed-focus machines that don’t need a lot of processing power, their huge power and thermal savings make a lot of sense.
These chips tie into Intel’s 5G future, as well. Despite being a more modest platform than Xeon Scalable, Xeon D-1600 still offers 4x 10GbE Ethernet, and a 1.2~1.5x performance increase over the first-gen chips. This all makes these chips great for use in switches, routers, security appliances, and many hardware options involving 5G.
As mentioned earlier, Intel has so much to talk about with these new announcements, it’s hard to process it all, and admittedly, we are just scratching the surface of what some of these products are capable of.
Another product being announced is Agilex FPGA, Intel’s latest solution for those who want to build chips with custom ASICs, as well as a programmable core. The result is a chip with cache-coherency as with Xeons, big bandwidth, and small latencies. Intel is touting flexibility overall, which is pretty much to be expected with FPGAs, but Agilex in particular aims to be as flexible as possible while delivering uncompromising performance. As the slide shows, Agilex is already equipped to handle next-gen technologies, like DDR5 and PCIe 5.0.
To move data across a network faster, Intel is also announcing its new 100Gbps Ethernet 800 series, with application device queues (ADQ) and dynamic device personalization (DDP). Both technologies are meant to optimize the performance of the network adapter, smoothing out performance and avoiding bottlenecks at all costs. Avoiding bottlenecks can ultimately mean improved throughput, and of course, reduced latencies.
You probably couldn’t predict that we were going to talk about prediction, but the Ethernet 800 series adapter can greatly improve latency predictability. Redis is used as an explicit example, which saw a 50% increase in performance, extremely important when dealing with dozens or hundreds of users. DDP allows users to customize the packet pipeline to better suit their particular needs. Unlike ADQ, DDP is also available on the Ethernet 700 series.
If there was any doubt about Intel’s efforts to dominate the enterprise market, today’s announcements should squash it. There’s so much to talk about here, and it’s because Intel is tackling so many parts of the market, from warm-tier storage to close-to-DRAM Optane DIMMs, and from Xeon processors dedicated to simpler edge use cases to those that offer enormous performance in a single chip. Things are exciting right now.
Intel’s DL Boost should prove to be invaluable to those working with AI and deep-learning, at least where inference work is concerned. With 112 threads per Xeon Platinum 9200 CPU, there’s a ton of power here, and there will be even more if you stack them up in a two-socket machine. We just don’t really want to think of the price tag of such a machine, though we can infer that it will be expensive.
Then we have Agilex FPGAs, 100Gbps Ethernet adapters, a new Xeon SoC, specific Xeon Scalable SKUs for super-focused use cases, and more. It will be great to see these products begin to hit the market, and of course see what the competition thinks of these new launches. We also look forward to seeing how AMD will try to counter things in the future, since competition is a very good thing.
Copyright © 2005-2019 Techgage Networks Inc. - All Rights Reserved.