The “x70” series of Nvidia graphics processors—the GeForce GTX 670, GTX 770, and GTX 970—has long been the “expensive sweet spot” in the company’s lineup, and that tradition continues with today’s launch of the GeForce GTX 1070, which rolls out with cards starting at $379. And what a landmark graphics-processing unit (GPU) the GTX 1070 looks like it will be.
This is the “Pascal”-architecture GPU that people with limited (though still substantial) gaming budgets have been waiting for, as it’s based on the same chip as the big-gun flagship GeForce GTX 1080, but cut down a bit to make it more affordable. Not to give too much away up front, but what the GTX 1070 card boils down to is a part that offers 75 percent of the GTX 1080’s performance at little more than half the price, making it a heck of a bargain.
It’s easy to predict that, like the GeForce GTX 970 that preceded it, the GeForce GTX 1070 will be an immediate best seller. We expect GTX 1070-based video cards will be the ones of choice for a wide swath of gamers who have well-endowed PCs and high-resolution monitors, but can’t quite justify spending $600 or more on a new GeForce GTX 1080 card.
Before we dive into the GeForce GTX 1070, we realize it’s possible you haven’t heard of the Nvidia Pascal architecture or any of Nvidia’s recent hardware news. So let us get you up to speed with a brief synopsis. (The preceding link has more.)
Over the past few years, Nvidia has been working to upgrade its GPU silicon to a new architecture named “Pascal.” The previous architecture, named Maxwell, was built using transistors that were planar (or single-layered) in their orientation and manufactured using a 28-nanometer (nm) process. Pascal is a whole different animal, using “3D” FinFET transistors, whose channels rise vertically from the silicon, and built on a much smaller 16nm process.
What the nanometer numbers mean: Since the transistors are a lot smaller, Nvidia can pack many more of them into its GPUs without increasing power consumption. This means its all-new Pascal GPUs are not only much more powerful than the previous generation, but also a lot more efficient.
The first Pascal-based gaming GPU to launch was the flagship GeForce GTX 1080, which went on sale on May 27. In our review of the GeForce GTX 1080, we saw that it was able to offer performance that exceeded that of the previous-gen flagship GPU, the GeForce GTX Titan X, with a die that is half the size, while requiring 70 watts less power.
What Nvidia has been able to achieve with Pascal is impressive, and our testing showed the GeForce GTX 1080 is hands-down the most powerful single-chip consumer graphics card ever made, and not by a small margin either.
The GeForce GTX 1080’s impressive performance is why so many people are so excited about the GeForce GTX 1070, as it’s a Pascal GPU that many more people will be able to afford, so long as they’re comfortable spending around $400 (or more) on a GPU.
We say “or more” because like the GeForce GTX 1080, the GeForce GTX 1070 will be offered in two broad versions. Nvidia will offer a GeForce GTX 1070 “Founders Edition” version of the card that will be available first and cost $449. After that, versions produced by Nvidia’s usual card partners, with custom coolers and designs, should roll out, and the least expensive of these are expected to start at $379.
Also, just as with the GeForce GTX 1080, we don’t have a clear idea what the actual differences will be between the different versions of the cards, so it’s impossible to say at this point whether the Founders Edition price premium will be worth it. It was also unclear when we wrote this, before the launch, whether partner boards will be available when the GeForce GTX 1070 goes on sale June 10, or if just the Founders Edition will be available then. We’ll just have to wait and see, but we intend to ask around at the Computex 2016 trade show, which Computer Shopper will be attending and where we expect to see in person the first non-Founders Edition GTX 1070 and 1080 cards.
The Basics: The GTX 1070
The GeForce GTX 1070 that we tested and are looking at here is indeed a Founders Edition card, which bears the Nvidia branding and is the equivalent of what the graphics-card world used to call a “reference card.” Nvidia, with its Pascal-based gaming cards, has decided to start charging a premium for the cards it has designed and intends to sell these versions throughout the life of the product.
Since Nvidia’s cooler is typically the only version of these cards with a “blower” design that exhausts air out the back of the chassis, and the company uses premium materials along with an illuminated nameplate, Nvidia felt justified in charging extra for its in-house-designed cards this time around. Another way to look at it is that Nvidia effectively priced itself out of the value-seeking part of the market to avoid “competing” with its card partners. Whether this strategy will be successful is anyone’s guess. But we have to admit, it does seem a tad weird to charge a higher price for what used to be the entry-level design, despite Nvidia’s claim of “premium parts.”
We’ll look at the actual Founders Edition card a bit more closely in the upcoming sections of this review, but it’s the same size, shape, and design as its big brother, the GeForce GTX 1080. The GeForce GTX 1070 supports all the same technologies and has all the same advancements. It’s just not quite as powerful due to slower (though still speedy) memory, and generally lower specs.
As far as its competition goes, there really isn’t much, since AMD has yet to release its next-generation cards, and the Pascal architecture is so far ahead of previous-generation parts that it’s tough to argue for an older chip, especially if you care about power efficiency. So even though we’ll be comparing the GeForce GTX 1070 to the Nvidia Maxwell-based GeForce GTX 970, the price difference between the two is so slim (at the time of this writing, at least), and the performance delta so vast, that the GTX 970 is hardly a competing card. That, of course, we’d expect to see change in short order.
We’re not even sure why Nvidia is launching the GTX 1070 just as the GTX 1080 is going on sale, seeing as the cheaper card may steal some of its big brother’s thunder from a value perspective. But Nvidia may simply want to beat AMD to market and gain the maximum advantage, while raising the bar for any future announcements.
Nvidia is leaving the midrange of the market exposed for now; it has yet to announce a $200-or-so variant of its Pascal architecture. But we have to imagine it’s certainly in the pipeline. And the company has the high end extremely well-covered, putting AMD in a tough spot. Team Crimson will have to deliver specs and performance equal to or better than what Nvidia’s rolling out here if it hopes to garner serious interest and gain market share.
It’s also not clear if AMD is planning on competing in the high-end card market with new parts anytime soon. But we’ll find out soon enough, as the company may discuss its new “Polaris” architecture at Computex 2016, which is happening the week the GeForce GTX 1070 is being announced. AMD’s new architecture is ostensibly based on a 14nm FinFET process, which is similar to what Nvidia is using. So hopes are high that AMD will be able to achieve the same kinds of massive gains in performance and efficiency that Nvidia has already shown with Pascal. As always, we wait with great expectations for what AMD has to announce. But Nvidia has clearly set the bar high.
With the basics out of the way, let’s take a closer look at the GeForce GTX 1070 and compare it to not only the GeForce GTX 1080 but also its predecessor, the GeForce GTX 970.
Specs, Design, and New Technologies
The x70 variant of Nvidia’s cards is usually a cut-down version of the x80 chip, but the question in cases like this is always, “How much has been cut, and from where?”
In the case of the Maxwell-based GeForce GTX 970, the surgery was quite limited, and with a little bit of overclocking the GeForce GTX 970 came within striking distance of the GTX 980 for about $200 less, making it a fantastic bang-for-the-buck GPU.
In the case of the GTX 1070, Nvidia has been a bit more aggressive with its silicon scissors this time around. It has snipped out quite a bit from the GPU, or at least more than most people expected, judging by online reactions when the specs were posted.
Here are the raw specs, so you can get a quick look at the major differences among the GeForce GTX 1070, the GTX 1080, and the competition…
Compared to the GeForce GTX 1080, the biggest difference is that the two GPUs use different types of memory, though both cards sport 8GB of VRAM.
The GeForce GTX 1070 uses more traditional GDDR5 memory clocked at an effective 8GHz, whereas the GeForce GTX 1080 uses a new type of memory built by Micron called “GDDR5X,” which runs at 10GHz. The difference in memory bandwidth is notable: The GeForce GTX 1080 is capable of 320GB per second of memory throughput, while the GeForce GTX 1070 can manage only 256GB per second. Both cards utilize a 256-bit memory bus, however. The primary difference between the memory allotment on the GeForce GTX 1070 and its predecessor is that the GTX 1070 has double the amount, 8GB compared to 4GB, and it uses slightly faster memory, too. The difference grants the GTX 1070 32GB per second more memory bandwidth than the GTX 970 (which isn’t all that much in the graphics-card realm, really).
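The bandwidth figures above fall straight out of the effective memory clock and the bus width. Here’s a rough sketch of the math (the function name is ours; the GTX 970’s 7GHz effective GDDR5 clock is its published spec, which is where the 32GB-per-second delta comes from):

```python
# Memory bandwidth (GB/s) = effective clock (gigatransfers/s) * bus width (bits) / 8 bits per byte
def memory_bandwidth_gbps(effective_clock_ghz: float, bus_width_bits: int) -> float:
    """Peak memory throughput in GB/s from effective clock and bus width."""
    return effective_clock_ghz * bus_width_bits / 8

print(memory_bandwidth_gbps(10.0, 256))  # GTX 1080 (GDDR5X): 320.0 GB/s
print(memory_bandwidth_gbps(8.0, 256))   # GTX 1070 (GDDR5):  256.0 GB/s
print(memory_bandwidth_gbps(7.0, 256))   # GTX 970  (GDDR5):  224.0 GB/s
```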
In terms of the all-important CUDA cores, the GeForce GTX 1080 has 2,560 of them, the GTX 1070 has 1,920, and the GTX 970 has just 1,664. (For more on many of these terms, see our explainer Buying a Video Card: 20 Terms You Need to Know.)
The GeForce GTX 1070 is clocked lower than the GTX 1080, but much, much higher than the GeForce GTX 970. (The GeForce GTX 1080 runs at a boost clock of 1,733MHz, while the GTX 1070 can boost up to “only” 1,683MHz.) The GTX 1070’s boost rating far exceeds the paltry 1,178MHz boost clock of the GTX 970, which now seems quaint by comparison.
In sum: Compared to the GTX 1080, then, the main differences are that the GTX 1070 has slower memory, fewer CUDA cores, and slightly lower clock speeds overall with less overclockability (at least with our test card; more on that later). Compared to the GeForce GTX 970, though, the GTX 1070 has double the memory, way higher clock speeds, and a lot more CUDA cores.
As far as the card itself goes, it is the same in physical appearance as the GeForce GTX 1080 (save for the switch from a “1080” to a “1070” on the cooler), has the same 10.5-inch length, and the same eight-pin PCI Express auxiliary power connector.
The power draw is slightly lower for the GTX 1070, too, in keeping with its reduced horsepower. The 1070 has a 150-watt thermal-design-power rating (TDP, a measurement of heat-dissipation requirements), compared to the GeForce GTX 1080’s 180 watts. Both cards come with three DisplayPort 1.4 connections, which allow for displaying screens at 4K resolutions at 120Hz or 8K at 60Hz, along with an HDMI 2.0b port that can power 4K output at 60Hz. There’s also a dual-link DVI port if you want to run at 2,560×1,440 or a lower resolution using this older-style port.
It should come as no surprise that the GeForce GTX 1070 supports all the advancements and technologies that were introduced with Pascal, including Ansel for screencaps, Simultaneous Multi-Projection for virtual reality (VR) and triple-monitor setups, and support for next-generation technologies such as HDR and better audio in VR.
In case you haven’t read about these new features yet, here’s a recap before we jump into the deep nitty-gritty of our benchmark tests.
Ansel’s aim? To allow for more creative control when taking screenshots in-game. If you’re like us, that probably wasn’t on your radar as something that needed improving. But it’s a big thing for some users.
Serious PC gamers have been taking very creative screenshots for some time now, but they are, of course, limited by where the camera can go, as well as by the resolution of the images. Ansel addresses both problems by allowing for a free-ranging camera in any game that supports it, and by letting you capture immense high-res screenshots.
In a demo Nvidia ran for the press before the launch of the GeForce GTX 1080, it captured a scene at 20x resolution, and the resulting file was a whopping 3GB in size and 46,000 pixels across, or “46K” to use the popular nomenclature. That allows for cropping and zooming in on extreme detail. The Ansel software, named after famed photographer Ansel Adams, also lets you apply filters to your photos, in semi-Instagram fashion. You can also rotate the horizon and make other changes.
A super resolution screenshot taken with Ansel.
Ansel will be a feature on games that decide to include it in their feature set, so it won’t be something that is available in every game by default. You activate it with a key combination, which pauses the game and causes an overlay to appear that allows you to move the camera, make adjustments, and ultimately capture the frame. Nvidia says this feature is supported on both Pascal and Maxwell cards.
Simultaneous Multi-Projection (SMP)
People who use three monitors, or who are into VR, will be excited about SMP, as it could mark a leap forward for both of these usage cases. What SMP does is allow the GPU to project into 16 “viewports” simultaneously and in stereo. What this means for the VR world is a massive increase in rendering speed, as the previous generation of cards had to render each eye in sequence.
The GTX 1080 and GTX 1070 can do both displays in one pass, however, which is why we heard things in our preview materials from Nvidia about how the former card is “twice as fast as a Titan X…” and then off to the side of the PowerPoint presentation it said “…in VR.”
This tech will also let you game on several monitors at once, and if you run three monitors with the side monitors angled toward you, SMP should be able to reduce the distortion that occurs on objects and more accurately project one image across all three monitors. You can watch the demo here…
…and as it states, the original projection is correct if all three monitors are side-by-side. It’s when you pull the side ones toward you that things get wonky. SMP fixes this issue, and it looks great too.
Fast Sync
We all know what familiar old V-Sync is: It syncs the frame-rate output of the GPU with the refresh rate of the monitor (typically 60Hz/60 frames per second) or an even divisor of it. This produces gaming that is free from tearing, but it locks the top frame rate at 60 frames per second (fps), which is not ideal for a lot of e-sports competitors (apparently, we’re not them).
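That “even divisor” behavior is worth spelling out: With classic double-buffered V-Sync, a frame that misses a refresh deadline waits for the next one, so the displayed rate falls to the refresh rate divided by a whole number. A minimal sketch of the idea (our own illustrative function, not anything from Nvidia’s drivers):

```python
# With traditional double-buffered V-Sync, a frame that misses the refresh
# deadline waits for the next refresh, so the displayed rate drops to
# refresh_hz / n for the smallest integer n the GPU can keep up with.
def vsync_display_rate(refresh_hz: float, render_fps: float) -> float:
    """Displayed frame rate under classic V-Sync (simplified model)."""
    n = 1
    while refresh_hz / n > render_fps:
        n += 1
    return refresh_hz / n

print(vsync_display_rate(60, 100))  # 60.0 -- capped at the refresh rate
print(vsync_display_rate(60, 45))   # 30.0 -- can't hold 60, falls to 60/2
print(vsync_display_rate(60, 25))   # 20.0 -- falls to 60/3
```

This is also why dips just below 60fps feel so abrupt with V-Sync on: The display rate doesn’t ease down to 59fps, it snaps to 30.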
The solution, then, is to turn off V-Sync, which lets the GPU run at full speed. But when you are running a really high frame rate, you can experience latency, which is also bad for e-sports. To fix both of these problems, Nvidia has developed a new syncing mode named “Fast Sync,” and it’s only advisable to use it in scenarios with extremely high frame rates.