
Battle of the 16-cores: Intel’s Core i9-7960X vs. AMD’s Threadripper 1950X

AMD Ryzen Threadripper 1950X & Intel Core i9-7960X Processors

Date: January 6, 2018
Author(s): Rob Williams

It still feels a little hard to believe, but both AMD and Intel offer the enthusiast market their own take on a 16-core chip. Remember when quad-cores seemed overkill for desktops? At the top-end, the CPU you choose can greatly affect your workload for better or for worse. So, let’s see what these beefy chips are made of.



Battle Of The 16-core Enthusiast CPUs

I feel like I’m not very good at choosing the best time to post content. After completing a huge round of benchmarking on six processors, news hit the web of a massive security hole found in many processors, and at the current time, it seems Intel’s the most affected party.

Currently, it’s expected that a security patch will reduce the performance in some workloads, though how much degradation will occur is yet to be seen. It originally felt like performance could be severely impacted, but initial tests conducted by other outlets around the world haven’t really given much reason for concern (although it does seem like it’s largely I/O-intensive workloads that will be affected).

If it’s discovered that this patch notably decreases performance (as in, it impacts the benchmarks), I’ll revisit testing. At this point, I don’t expect the results in this review to greatly change, though if you want to remain cautious, you’d get no sneer from me.

Intel & AMD 16-Core Systems

Bad timing aside, this article was a fun one to produce, because… how couldn’t benchmarking 16-core chips be enjoyable? With its Core X series, Intel one-upped AMD by adding 2 cores to its top-end model, so for that reason, I wanted to focus this article more on the chips that are equal in cores and threads.

The article could also be read from the standpoint of comparing two $999 chips: the 1950X and the i9-7900X. With a 6-core, 12-thread advantage over Intel in that match-up, don’t be surprised to see impressive gains from AMD in some tests. Currently, AMD offers some excellent value with its Threadripper chips, but Intel remains firm with its pricing thanks to its proven performance and platform dominance. Zen 2 will undoubtedly be interesting to watch for, because it could really shake things up. If this whole security thing doesn’t manage to, that is.

A Look At What AMD & Intel Offer

AMD and Intel are targeting the same crowd with their respective lineups here. If you’re more of a mainstream user, one who doesn’t want to splurge upwards of $2,000 for a CPU, GPU, and motherboard, you’ll want to look towards AMD’s Ryzen, and Intel’s 8th-gen Core. For enthusiasts – those who demand the best out of their rig – there’s Threadripper, and Core X.

The current Threadripper lineup is pretty simple, carrying a mere three SKUs. Considering that EPYC boasts a 32-core model, it seems obvious that AMD could release a 32-core Threadripper if it wanted to, but that would likely eat into its enterprise margins more than it would add to its desktop ones; and based on all of Ryzen up to this point, AMD doesn’t want to rip you off.

AMD Ryzen Threadripper CPUs
Model   Clock    Turbo    Cores     Cache    Memory  Power  Price
1950X   3.4 GHz  4.0 GHz  16 (32T)  8+32MB   Quad    180W   $999
1920X   3.5 GHz  4.0 GHz  12 (24T)  8+32MB   Quad    180W   $799
1900X   3.8 GHz  4.0 GHz  8 (16T)   4+16MB   Quad    180W   $549

Another highlight of Threadripper is that it boasts an industry-leading number of PCIe lanes – 64 on the chip – although I don’t personally treat that as something overly important. I haven’t yet seen a real-world example where that many lanes are needed, but if you do need them, you’re the most hardcore of the hardcore. Even so, it’s appreciated that AMD took one thing we used to be limited on, and gave us more than we’ll ever need (don’t quote me).

Intel’s Core X lineup is grander than AMD’s, but it has a much wider price gap to cover. SRP for SRP, Intel is only offering a 10-core chip to go against AMD’s 16-core, and as the benchmark results will show, the underdog is fierce in that match-up. AMD isn’t touching any of Intel’s top models, though, and as the benchmark results will also show, even when Threadripper should technically dominate, Intel can still burst ahead.

Intel Core X-Series CPUs
Model      Clock    Turbo    Cores     Cache    Memory  Power  Price
i9-7980XE  2.6 GHz  4.2 GHz  18 (36T)  24.75MB  Quad    165W   $1,999
i9-7960X   2.8 GHz  4.2 GHz  16 (32T)  22MB     Quad    165W   $1,699
i9-7940X   3.1 GHz  4.3 GHz  14 (28T)  19.25MB  Quad    165W   $1,399
i9-7920X   2.9 GHz  4.3 GHz  12 (24T)  16.5MB   Quad    140W   $1,199
i9-7900X   3.3 GHz  4.3 GHz  10 (20T)  13.75MB  Quad    140W   $999
i7-7820X   3.6 GHz  4.3 GHz  8 (16T)   11MB     Quad    140W   $599
i7-7800X   3.5 GHz  4.0 GHz  6 (12T)   8.25MB   Quad    140W   $389
i7-7740X   4.3 GHz  4.5 GHz  4 (8T)    8MB      Dual    112W   $339
i5-7640X   4.0 GHz  4.2 GHz  4 (4T)    6MB      Dual    112W   $242

I have to reiterate how bizarre some of the bottom models in this lineup are. The two SKUs at the bottom of the Core X line directly overlap with certain Coffee Lake chips, and in my mind, I wouldn’t choose a Kaby Lake 4-core 7740X at $325 over a Coffee Lake 6-core 8700K at $389. The X299 platform is more expensive than Z370, so you’d need a really good reason to want to go quad-core (and dual-channel) on a platform like this.

I digress. In some match-ups, Intel is very competitive, but at the top-end, AMD makes its pricing look a little extreme. But, to be fair, “X” does in fact imply “extreme”.

AMD Ryzen Threadripper 1950X and Intel Core i9-7960X Processors

If you’ve been following the CPU space since Ryzen launched, you probably already know the conclusion to this story (and if so, cheers for pressing on anyway). Intel still dominates overall, with its strong IPC and residual optimizations from being the performance leader the past decade. Meanwhile, AMD competes hard on price, knowing that it can’t win every match-up.

Where 16-cores are concerned, I’m glad they exist at all. I truly didn’t anticipate benchmarking a 16-core chip from both AMD and Intel at this point. We sat at 8 cores for quite a while, moved up to 10, and all of a sudden, we have 16- and even 18-core chips available.

The last time I felt this excited about benchmarking CPUs was with Intel’s Skulltrail platform, which brought 8 cores to enthusiasts with the help of a dual-socket system. Trying to exercise all of those cores was a challenge, but a fun one. I felt a bit of nostalgia as I set out to find benchmarks that could take advantage of the gargantuan 18-core.

Ahem, enough of that. If you’re curious about the methodologies used for testing, and the PCs used, head to the next page. Otherwise, jump to page 3 to get on with the benchmarks.

Test Systems & Methodologies

Benchmarking a CPU may sound like a simple enough task, but in order to deliver accurate, repeatable results, and not to mention results that don’t favor one vendor over another, strict guidelines need to be adhered to. That in turn makes for rigorous, time-consuming testing, but we feel that the effort is worth it.

This page exists so that we can be open about how we test, and give those who care about testing procedures an opportunity to review our methodology before flaming us in the comments. Here, you can see a breakdown of all of our test machines, specifics about the tests themselves, and other general information that might be useful.

Let’s start with a look at the test platforms, for AMD’s TR4 and Intel’s LGA2066 sockets. We’re still in the process of benchmarking every current-gen chip we have on hand, so for now, only six CPUs are included in these results.

The focus of this article is to compare the 16-core options from AMD and Intel, but since baselines are useful, a quad-, eight-, and ten-core have also been tested to see how our tests scale their way upward. Intel’s 18-core has also been included for good measure – and because it was the first chip tested.

To prevent unexpected performance results, the “Multi-Core Enhancement” optimization (which effectively overclocks all cores to the maximum turbo, instead of just two) offered by ASUS and GIGABYTE on their respective motherboards is disabled. All of the CPUs have been tested with the same memory modules, clocked at DDR4-3200 with 16-18-18 timings. The only caveat is the odd duck i7-7740X, which is stuck with a dual-channel memory controller.

SmartKevin’s CPU Testing Platforms

AMD TR4 Test Platform
Processor     AMD Ryzen Threadripper 1950X (3.4GHz, 16C/32T)
Motherboard   GIGABYTE X399 AORUS Gaming 7 (tested with BIOS F3g, Oct 13, 2017)
Memory        Corsair VENGEANCE RGB (CMU32GX4M4C3200C16) 8GB x 4, at DDR4-3200 16-18-18-36 (1.35V)
Graphics      NVIDIA TITAN Xp (12GB; GeForce 388.13)
Storage       Crucial MX300 525GB (SATA 6Gbps)
Power Supply  Enermax RevoBron 80+ Bronze (600W)
Chassis       Enermax Equilence
Cooling       Enermax Liqtech TR4 AIO (240mm)
Et cetera     Windows 10 Pro (Build 16299), Ubuntu 17.10 (4.13 kernel)
As-tested configuration screenshots: AMD Ryzen Threadripper 1950X (Zen)

There’s not too much to say here, which is a good thing. While we didn’t find the Core optimization made a huge difference when left on (its default), we disabled it to keep things as “reference” as possible. I truly dislike the EFI GIGABYTE has equipped on this board, as it’s pretty clunky and, in my opinion, disorganized. But where it lacks in EFI polish, the board makes up for in stability.


Intel LGA2066 Test Platform
Processors    Intel Core i9-7980XE (2.6GHz, 18C/36T)
              Intel Core i9-7960X (2.8GHz, 16C/32T)
              Intel Core i9-7900X (3.3GHz, 10C/20T)
              Intel Core i7-7820X (3.6GHz, 8C/16T)
              Intel Core i7-7740X (4.3GHz, 4C/8T)
Motherboard   ASUS ROG STRIX X299-E GAMING (tested with BIOS 1004, Nov 14, 2017)
Memory        Corsair VENGEANCE RGB (CMU32GX4M4C3200C16) 8GB x 4, at DDR4-3200 16-18-18-36 (1.35V)
Graphics      NVIDIA TITAN Xp (12GB; GeForce 388.13)
Storage       Crucial MX300 525GB (SATA 6Gbps)
Power Supply  Corsair Professional Series Gold AX1200 (1200W)
Chassis       Corsair Carbide 600C
Cooling       NZXT Kraken X62 AIO (280mm)
Et cetera     Windows 10 Pro (Build 16299), Ubuntu 17.10 (4.13 kernel)
As-tested configuration screenshots: Intel Core i9-7980XE, i9-7960X, i9-7900X, i7-7820X (Skylake-X), and i7-7740X (Kaby Lake-X)

As with the TR4 platform, the Core “Enhancements” option is disabled on the LGA2066 platform.

Windows Benchmarks

For the bulk of our testing, we use Windows 10 build 16299 with full updates as the base. After installation, LAN, audio, and chipset drivers are installed even if they’re not explicitly needed (because Windows can fall back to generic driver versions). Beyond that, the Windows tests used are:

Adobe Lightroom Classic CC: RAW to JPEG Export
Autodesk 3ds Max 2015: SPECapc 3ds Max 2015
Autodesk 3ds Max 2018: Fish Bowl Render (Arnold Renderer)
Adobe Premiere Pro CC 2018: Blu-ray Concert Encode
Adobe Premiere Pro CC 2018: 4K RED Encode
Adobe Premiere Pro CC 2018: 8K RED Encode
Blender: Pavillon Render
Cinebench R15.038
dBpoweramp R15.1: Convert FLAC to MP3
SiSoftware Sandra 2017 SP3

All of the tests shown above are used in their stock configuration. If you’re a Blender user and wish to compare your system’s performance to ours, you can download the project files for free here.
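If you do grab those project files, a minimal way to time a render from a terminal looks like the sketch below (shown with the Unix time utility; the .blend filename is a placeholder for whichever project file you downloaded):

# hypothetical example: render frame 1 of the downloaded project in background mode
time blender -b pavillon_barcelone.blend -f 1

The frame renders with whatever engine and settings are saved in the project, so results are only comparable if the file is left untouched.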

Gaming Benchmarks

Because the biggest bottleneck in a game is the graphics card, the workload needs to be put on the CPU as much as possible in order to better understand the raw performance scaling. As such, some of the games tested here were run at 1080p and 4K, with moderate detail levels.

Ashes of the Singularity Escalation: Benchmark Screenshot & Game Settings
Grand Theft Auto V: Benchmark Screenshot & Game Settings
Futuremark 3DMark
Total War: WARHAMMER II: Benchmark Screenshot & Game Settings
Watch_Dogs 2: Benchmark Screenshot & Game Settings

Because Ashes offers the ability to act only as a CPU benchmark, it makes sense to use that for a CPU performance article. GTA V isn’t very GPU intensive, leading us to see similar results at both 1080p and 4K. As such, only 1080p is used for that title. WARHAMMER II and Watch_Dogs 2 are tested at 4K, as well as 1080p.

Linux Benchmarks

Ubuntu 17.10 is the OS of choice for our test bed, as it’s both simple to set up and so ubiquitous that everyone reading the results should feel at home. The OS is left as stock as possible, with minimal software added, and everything updated.

Before testing begins, we take Phoronix Test Suite’s suggestion of enabling the “performance” CPU governor, something that improved our encode test by roughly 15%. The command run (as root/sudo):

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
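For the cautious, a quick sanity check that the governor actually took effect on every core (a minimal sketch, assuming the usual sysfs layout):

# every core should report "performance" before benchmarking begins
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c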

Blender: Pavillon Render
Phoronix Test Suite 7.0

Both the Blender and HandBrake tests are shared with the Windows testing. Phoronix Test Suite is used for the bulk of our Linux testing, as it supplies the tests we need, and makes them easy to use.
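As a rough sketch of that workflow (the test profile names below are typical PTS profiles and may not match the exact set used here):

# hypothetical example: install and run a couple of CPU-bound test profiles
phoronix-test-suite install pts/compress-7zip pts/c-ray
phoronix-test-suite benchmark pts/compress-7zip pts/c-ray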


If you think there’s some information lacking on this page, or you simply want clarification on anything in particular, don’t hesitate to leave a comment.

Rendering: (3ds Max 2015 & 2018), Blender, Cinebench, POV-Ray & V-Ray

(All of our tests are explained in detail on page 2.)

Design and rendering is one of the best areas of computing to benchmark when highlighting the benefits of faster hardware, whether it be a CPU, GPU, memory, or even storage. On a low-end system, a production render might take hours, for example, whereas on a high-end system, that same render could be shaved down to tens of minutes.

With these results, it’s up to you to gauge where the best value can be found. In some cases, it might be beneficial to go with more modest hardware if the time-to-render isn’t of a great concern; in other cases, spending more on faster hardware might actually save you money in the long-run.

For our rendering tests, we use Autodesk’s 3ds Max (2015, for SPECapc, and 2018, for our real-world model render), the popular open source design suite Blender, as well as Cinebench, POV-Ray, and V-Ray Benchmark for some quick-and-dirty results.

Autodesk 3ds Max 2015 & 2018

AMD & Intel 16-core CPU Performance - SPECapc 3ds Max 2015
AMD & Intel 16-core CPU Performance - Autodesk 3ds Max 2018

Both sets of results are saying similar things here, but Intel enjoyed stronger performance in SPEC’s test, with the i9-7900X placing ahead of AMD’s 1950X. With the real-world render, Threadripper came within striking distance of the 7960X. AMD doesn’t mention Autodesk too often when it comes to performance, not even in relation to its professional GPU lines, but where Threadripper is concerned, it’s no slouch.

Blender

AMD & Intel 16-core CPU Performance - Blender Renders

Results like these highlight the fact that not all projects are built alike. The 16-core Intel chip somehow outperformed the 18-core with the Pavillon render, but it scaled as expected with the more time-intensive Agent 327 (both projects can be downloaded for free here). In the given match-up of 16-core vs. 16-core, Intel’s clearly winning the fight. But don’t worry – AMD has some tricks up its sleeve.

Synthetic Renderers: Cinebench, POV-Ray, V-Ray

AMD & Intel 16-core CPU Performance - Cinebench
AMD & Intel 16-core CPU Performance - POV-Ray
AMD & Intel 16-core CPU Performance - V-Ray Benchmark

Synthetic tests are not entirely representative of the real world in all cases, but when we need reliable, repeatable numbers, they can prove truly useful. That’s especially true when they don’t seem to favor one architecture over another; AMD’s scaling here is what we’d expect given its slightly weaker IPC. Meanwhile, the 7980XE offers modest gains over the 7960X, as its 12.5% increase in core count would suggest.

Media: Adobe Premiere Pro, Adobe Lightroom, dBpoweramp & HandBrake

(All of our tests are explained in detail on page 2.)

As seen on the previous page, rendering can take amazing advantage of even the biggest processors, but video encoding is not far behind – if at all. Even the free conversion tool HandBrake can take advantage of our sixteen-core processors to significantly decrease encode times. For our video encoding purposes, we use Adobe’s Premiere Pro, as well as HandBrake.

To a lesser degree, music conversion and image manipulation can also see benefits on beefier chips, so Adobe’s Lightroom and dBpoweramp will be used to help us gauge that performance.

Adobe Premiere Pro

AMD & Intel 16-core CPU Performance - Adobe Premiere Pro (Blu-ray & 4K RED)
AMD & Intel 16-core CPU Performance - Adobe Premiere Pro (8K RED)

This article marks the first where I’ve been able to include 8K tests, and I’m glad I was able to, because it paints a better picture of performance scaling overall. In many cases, I’ve found 1080p encodes to utilize a many-core chip poorly, but based on the results of the Pixies Blu-ray concert encode here, that kind of job does indeed seem to scale quite well today. We’re talking a straightforward Blu-ray rip – one encode to another. The 4K encode scales pretty much exactly as I’d expect, with Threadripper falling behind a bit, but still proving impressive given Intel’s long-standing prowess in encoding.

At 8K, I decided to test the encode both with the software encoder, and NVIDIA’s CUDA encoder. In the past, I didn’t see notable gains in a GPU-based encode on a beefier CPU, but these results changed that up completely.

When an encode can take advantage of the GPU, it’s like sinking a hot knife into cold butter. That might make it seem like the CPU is irrelevant, but not so. The faster the CPU, the faster the overall encode. In some cases. 8K is a specific beast, and chances are, if you’re dabbling with it, you probably already know that you need a ton of horsepower.

One last thing before moving on:

Adobe Premiere Pro 8K Encode - Software
Adobe Premiere Pro 8K Encode - CUDA

In the first shot above, you’ll see the software-based encode running. In Task Manager, there’s sporadic usage across the cores, hinting at possible inefficiencies. On the bottom, we have the CUDA encode, which doesn’t just encode faster overall, it seems to maximize the CPU better – getting more work done in less time. Since the gains from moving to the GPU encoder are so extreme, it’s hard to gauge the true before/after difference of the CPU with that type of encode. Still, results like these are not just good for NVIDIA, but AMD and Intel, too.

There could be some cases where performance won’t scale like this, because not all projects are alike. But for a straightforward encode of a RED file right out of the camera, it’s clear that managing 8K projects is not for the faint of heart. Or, at least, for those with weak hardware.

HandBrake

AMD & Intel 16-core CPU Performance - HandBrake

Intel’s long-standing encoding dominance can be seen again with HandBrake, although AMD’s Threadripper provides enough brute force to compete hard. At the $1,000 price point, AMD wins a game that Intel technically should. That’s pretty disruptive, but the good kind (for consumers, anyway).

Adobe Lightroom & dBpoweramp

AMD & Intel 16-core CPU Performance - Adobe Lightroom and dBpoweramp

dBpoweramp can’t take advantage of more than 16 threads, but interestingly, the CPUs with more than 16 threads continue to scale decently well. In another oddity, the 18-core chip falls behind Intel’s own 16-core, while the 1950X keeps right up. But, tying into that Intel media prowess again, Intel’s strengths in Lightroom are easy to spot. I’d be remiss not to highlight the fact that Threadripper bests Intel’s same-priced 7900X. This is why competition is so important, and why content like this is so much fun to put together.

SiSoftware Sandra: Computation, Memory & Cache Tests

(All of our tests are explained in detail on page 2.)

SiSoftware’s Sandra needs no introduction, but I’ll give one anyway. It’s been around for as long as the Internet, and has long provided both diagnostic and benchmark features to its users. SiSoftware keeps on top of architectural updates as they’re revealed, and often, the software supports a specific processor feature or design before consumers can even get their hands on the product.

As a synthetic tool, Sandra can give us the best possible look at the top-end performance from the hardware it can benchmark, which is the reason we use it to test CPUs, memory, motherboards, and even graphics cards (for compute). It also allows us to benchmark very specific tests, such as inter-core bandwidth and latency, financial and scientific scenarios, as well as cache performance.

Arithmetic & Multi-Media

AMD & Intel 16-core CPU Performance - SiSoftware Sandra 2016 Arithmetic & Multi-Media

I harped quite a bit on Intel’s strong media performance, and Sandra solidifies that point. As seen on the previous page, AMD’s Threadripper isn’t weak at all when it comes to intensive media tests, so a result like this is a little off-putting. In reality, Intel’s Skylake-X chips were tested with the wide AVX-512 instructions, while AMD’s was tested using AVX2, the same as the Kaby Lake-X 7740X, which likewise shows clearly lower performance here.

In the arithmetic test, AMD’s chip fell short of Intel’s 16-core, but again, this isn’t too surprising given Intel’s known IPC advantage. From a value perspective, AMD’s overall performance is strong (just look elsewhere for AVX-512).

Cryptography

AMD & Intel 16-core CPU Performance - SiSoftware Sandra 2016 Cryptography

Intel’s biggest chips edge ahead of Threadripper, with strong gains seen in hashing. With encryption and decryption, AMD’s high clock speed helps deliver solid performance.

Financial & Scientific Analysis

AMD & Intel 16-core CPU Performance - SiSoftware Sandra 2016 Financial & Scientific Analysis

Here’s where things got weird. I’m not sure if it’s a bug in Sandra, or the fault of my system, but with the default configuration, the Scientific test refused to complete on Threadripper. However, enabling the NUMA memory mode did allow it to run; thus, the Scientific test above for Threadripper is using the NUMA mode (not that I expect much of a real-world difference given other tests conducted).

In the Financial test, AMD impresses. It didn’t beat Intel’s chip by much, but given the blue team’s stronger IPC performance, AMD did well to conquer here.

Memory & Core Bandwidth / Latencies

AMD & Intel 16-core CPU Performance - SiSoftware Sandra 2016 Memory Bandwidth
AMD & Intel 16-core CPU Performance - SiSoftware Sandra 2016 Memory Latency

As a one-off addition, both of the charts above include a NUMA configuration for Threadripper, simply to see what’s possible. In testing, I found little difference in performance in most tests outside of bandwidth and latency; where there was a difference, NUMA actually reduced performance. So if you use the setting, put your benchmarker’s cap on, because you need to know how it affects your workflow.

With NUMA, AMD’s chip wipes the floor with the rest of them, breaking through the 70GB/s mark. Without NUMA, Threadripper falls behind the Intel rigs using the exact same memory, but only the dual-channel 7740X slouch could really be pointed and laughed at here.
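For anyone experimenting with the same memory-mode setting under Linux, it’s easy to confirm how many NUMA nodes the OS actually sees before benchmarking (a quick sketch; numactl may need to be installed from your distro’s repository):

# show the NUMA node count and which CPUs/memory belong to each node
lscpu | grep -i numa
numactl --hardware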

Gaming: 3DMark, Ashes, GTA V, TW: WARHAMMER 2 & Watch_Dogs 2

(All of our tests are explained in detail on page 2.)

It’s been easy to highlight the performance differences across our collection of CPUs on the previous pages, since most of the tests used take advantage of every thread we give them. But now, it’s time to move onto testing that’s a different beast entirely: gaming.

In order for a gaming benchmark to be useful in a CPU review, the workload on the GPU needs to be as mild as possible; otherwise, it could become a bottleneck. Since the entire point of a CPU review is to evaluate the performance of the CPU, running high detail and high resolutions in games won’t give us the most useful results.

As such, our game testing revolves around 1080p, and sometimes 1440p, with games being equipped with moderate graphics detail (not low-end, but not high-end, either). These settings shouldn’t prove to be much of a burden for the TITAN Xp GPU. For those interested in the settings used for each game, hit up page 2 (a link is found at the top of this page).

In addition to 3DMark, our gauntlet of tests includes four games: Ashes of the Singularity: Escalation (CPU test only), Grand Theft Auto V (Fraps), Total War: WARHAMMER II (built-in benchmark), and Watch Dogs 2 (Fraps).

Futuremark 3DMark

AMD & Intel 16-core CPU Performance - Futuremark 3DMark Overall Scores
AMD & Intel 16-core CPU Performance - Futuremark 3DMark Physics Scores

The scaling here isn’t unexpected, but it is worth noting that AMD’s chip performed very well against its equal-priced competition, Intel’s i9-7900X. But how about some actual games?

Ashes of the Singularity: Escalation

AMD & Intel 16-core CPU Performance - Ashes of the Singularity Escalation

Ashes is a very unusual game, in that its built-in benchmark is genuinely useful for gauging the overall performance of a GPU or CPU. The gains at the top are minor here, and would be even less noticeable in the real world, but it does show that if games take proper advantage of all the cores they’re given, we could begin to see some really cool gaming enhancements (not that everyone is going to run out and buy an 18-core CPU).

Grand Theft Auto V

AMD & Intel 16-core CPU Performance - Grand Theft Auto V

I had plans to benchmark GTA V at both 1080p and 4K with the latest suite overhaul, but after testing a couple of CPUs, I found that the results were barely different at either of those resolutions. That’d imply the CPU doesn’t play much of a role in the game’s overall framerate, yet we can still see some clear differences here.

This is one game that doesn’t favor AMD’s hardware too much, and while the game won’t take advantage of too many cores, the beefier chips still somehow manage to deliver better overall framerates.

What’s really interesting is how badly the i7-7740X struggled with this game. Technically, it should be capable, but as soon as I entered the test level with that chip, I’d experience regular stutters every 10 seconds or so. The problem was repeatable, and not seen on the other chips tested.

To give some backstory: when I supply a minimum FPS here, it’s the lowest of the two runs conducted, not an average. If the deltas between the two runs vary too much, a third run is brought in to paint a better picture. With the 7740X and GTA V, it was essentially a coin flip whether the minimum came out low or high. For whatever reason, that chip does not like GTA V, despite delivering the best average FPS of the entire bunch, so I think it’s safer to stick with the lower reported minimum FPS.

Total War: WARHAMMER II

AMD & Intel 16-core CPU Performance - Total War WARHAMMER 2

The results from this game are hard to figure out, based on the testing I’ve done up to this point. It feels like both the clock speed and the number of cores can play a role in this game’s performance, but ultimately, there’s no clear pattern – outside of the fact that the fastest chip here exhibits a strong lead at 1080p.

Threadripper falls behind Intel’s top chips again, but so does the Core i7-7820X. That raises the question of what a 7740X-clocked 7820X could do, which sounds like a fun test once I plop that CPU back into the test rig.

Oh – and in case it isn’t obvious, the CPU doesn’t matter too much for this game at 4K.

Watch Dogs 2

AMD & Intel 16-core CPU Performance - Watch Dogs 2 (1080p)
AMD & Intel 16-core CPU Performance - Watch Dogs 2 (1440p)

With this dual-resolution look, we see a picture that has been painted many times before. AMD’s performance is definitively weaker than Intel’s (not a surprise when Intel’s logo graces the splash screen), but not to a worrying degree. This is one game where a clearer difference is seen at the higher resolution than in many games, though the overall performance of the Intel chips seems to follow no rhyme or reason.

Linux: Blender, HandBrake & Phoronix Test Suite

(All of our tests are explained in detail on page 2.)

To wrap up our performance results, we have a slew of Linux test results to pore over, including two tests identical to those in the Windows suite (HandBrake and Blender).

The OS used in testing is Ubuntu 17.10, using the stock 4.13 kernel. As with the Windows tests, the Linux OS is kept as minimal as possible, with only required software packages installed on top of the stock software. HandBrake is procured through Ubuntu’s repository, while Blender is grabbed from the official source. As of the time of writing, the latest version of Phoronix Test Suite (7.6.0) is also used.
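For anyone who wants to run a roughly comparable encode from the command line, a minimal HandBrakeCLI sketch follows; the source clip, quality target, and preset below are assumptions, not the exact settings used in our suite:

# hypothetical example: CPU-bound x264 encode, timed for comparison
time HandBrakeCLI -i source_clip.mkv -o encode_test.mp4 -e x264 -q 20 --encoder-preset medium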

As for the Phoronix Test Suite: most of the Linux testing is performed with it, as it makes it ridiculously easy to run a huge number of tests in one go, letting us, as Ronco famously said, “set it, and forget it!” Well, “forget it” until the next test needs to be run, anyway.

Blender & HandBrake

AMD & Intel 16-core CPU Performance - Blender & HandBrake (Linux)

Despite being real-world applications, both Blender and HandBrake can deliver benchmark scaling like synthetic benchmarks do (more so with Blender than HandBrake). Intel’s chips take the lead in both tests, but the mighty 1950X doesn’t lag very far behind.

Interestingly, for both architectures, the Pavillon project rendered faster in Linux than in Windows, with the same 2.79 version of Blender. There wasn’t a clear gain like this with the previous test suite, so I’m currently unsure why there’s a fair degree of separation today.

Phoronix Test Suite

AMD & Intel 16-core CPU Performance - Compiler Performance (Linux)
AMD & Intel 16-core CPU Performance - Ray Tracing (Linux)
AMD & Intel 16-core CPU Performance - SciMark (Linux)
AMD & Intel 16-core CPU Performance - OpenSSL (Linux)
AMD & Intel 16-core CPU Performance - HMMer Search (Linux)
AMD & Intel 16-core CPU Performance - 7-Zip (Linux)
AMD & Intel 16-core CPU Performance - Stream (Linux)
John The Ripper (Encryption)
                              Blowfish   MD5       DES
Intel Core i9-7980XE          31.3K      395.0K    101.6M
Intel Core i9-7960X           29.5K      373.1K    95.7M
AMD Ryzen Threadripper 1950X  23.2K      363.0K    26.0M
Intel Core i9-7900X           20.5K      259.1K    66.9M
Intel Core i7-7820X           16.5K      207.5K    53.7M
Intel Core i7-7740X           8.9K       112.0K    26.7M

In almost every single test, AMD’s 1950X slots in at 3rd place, out of the 6 CPUs tested. There are some exceptions, like with SciMark, a single-threaded test, and HMMer, the only multi-threaded test where it fell far behind (Intel optimizations do help there from what I can tell).

Depending on what you want, either Intel’s or AMD’s chips will suit your purpose better, so as I like to say, it pays to know your workload. 16-core vs. 16-core, Intel leads the pack considerably in some tests (e.g., OpenSSL), but dollar-for-dollar, the arrangement of these CPUs would look a little different.

Even without NUMA mode, AMD’s Threadripper gave the highest Copy result of the bunch, but fell behind a wee bit with Add – another pitfall shared by the rather strange quad-core Intel part.

Power Consumption & Final Thoughts

To generate power-draw results for our collection of CPUs, we plug the test PC into a Kill-a-Watt for real-time monitoring, and stress the CPU with POV-Ray’s multi-threaded test (which pegs all cores at 100% in our testing). Idle power consumption is measured about 5 minutes after boot, once Windows calms down and the wattage reading stabilizes.

Because AMD and Intel measure temperatures very differently, and there’s never a guarantee that software applications are reporting accurate temperatures, we forgo that testing. The only reliable method for capturing CPU temperatures is to go the hardware route, which is both very time-consuming, and expensive.

AMD & Intel 16-core CPU Performance - Power Consumption

I could do without oddities creeping into my test results, so naturally, I ran into an awful lot of them. Here, the 18-core fell behind the 16-core overall, which isn’t the scaling other media outlets I’ve checked have reported. My theory is that POV-Ray is a weak test of power consumption, despite proving suitable in initial testing. I feel like the 18-core completes the test so quickly that it doesn’t have time to fully “heat up”, though I could poke holes in that theory myself.

Nonetheless, this is one test where AMD proves extremely impressive. Intel’s 16-core draws some 61W more than AMD’s, which feels pretty damn weird to say after years of the opposite being true. I will give some credit to Intel, though, because all of these CPUs are crazy efficient in the grand scheme of things. Remember when 600W power supplies felt required? Here’s a test PC with a top-of-the-line GPU, top-of-the-line CPUs, and lots of memory – and we’re peaking at 326W.

Final Thoughts

The title of this article doesn’t pose a question as to whether one of these 16-core CPUs is better than the other, because the answer is obvious enough to make that a little dumb. At present, Intel shines in some scenarios and leads in most. But AMD’s no slouch, with super-strong memory bandwidth and impressively low power consumption.

Actually, let’s face it: this article is kind of dumb. Everyone knew coming in that Intel’s Core i9-7960X was going to beat out AMD’s Threadripper 1950X, so really – what is this farce? Well, it acts as a solid set of results to highlight areas where each chip excels, especially where price is concerned.

Intel & AMD 16-Core Systems

As of the time of writing, Intel’s chip can be had for $1,650 over at Amazon, and a higher $1,695 at Newegg. AMD’s Threadripper 1950X, meanwhile, is an attractive $901 at Amazon, and a higher $950 at Newegg.

For those who want the ultimate in gaming and overall performance, and don’t mind spending more for it, Intel is the way to go. That’s a fact that could be a bit of a saving grace for the company, as forthcoming security patches are said to affect certain workloads (namely I/O-related ones, but it’s still early days). I personally believe that Intel’s SRPs for its biggest chips are high, but with the performance crown in hand, it can afford to be stubborn.

Threadripper, overall, is the better value between these 16-cores. However, while most of AMD’s shortcomings were modest, there were some more pronounced differences speckled about. In gaming, AMD continually falls behind, but fortunately, the biggest drops are seen at lower resolutions. It’s safe to assume that if you’re planning to go with a thousand-dollar chip, you’re probably going to opt for something better than 1080p, and even then… we’re talking 100+ FPS to begin with.

And that’s really all that can be said about that. Intel offers the highest-performing desktop chips, and at the top end at least, you’ll be paying for it. Meanwhile, AMD is king if you want the best bang for the buck. That all said: before you settle on either, you need to consider how you plan to take advantage of the power available to you. As clearly seen in the results on the previous pages, not all applications treat AMD and Intel hardware the same. One may have strengths more important to your typical workloads, so I’d caution against being hasty with that credit card.

If you have any reservations or further questions, leave a comment below!
