Maxwell in afterburner: review of the Gigabyte GeForce GTX 750 Ti video card

On February 18, NVIDIA introduced the new energy-efficient Maxwell architecture, the GM107 graphics processor and the first video cards based on it: the GeForce GTX 750 Ti and GTX 750. Reviewers were impressed not so much by the new card's low power consumption (only 60 watts) or its performance per watt as by the excellent overclocking potential of the core. It is not surprising that, simultaneously with the announcement of the GeForce GTX 750 Ti, all the well-known video card manufacturers presented original models with increased frequencies (eight of them were announced at the time of the new product's release).

Moreover, while in the case of the Radeon R9 290(X) we can speak only of a token frequency increase of 30, 50 or at best 80 MHz, with the GeForce GTX 750 Ti the companies were clearly not shy. From the nominal 1020-1085/5400 MHz, EVGA overclocked its FTW version of the GeForce GTX 750 Ti to 1189-1268/5400 MHz, Palit released the StormX Dual at 1202-1281/6008 MHz, ASUS set 1072-1150/5400 MHz, MSI - 1085-1163/5400 MHz, Gainward - 1202-1281/6008 MHz, and so on. The highest factory GPU overclock, however, is found in the Gigabyte GeForce GTX 750 Ti 2 GB video card (GV-N75TWF2OC-2GI), which we will study and test today.

⇡#Technical characteristics and recommended cost

The technical characteristics and recommended price of the Gigabyte GeForce GTX 750 Ti video card are presented in comparison with the reference versions NVIDIA GeForce GTX 750 Ti and GTX 750, as well as the AMD Radeon R7 260X.

Characteristic | Gigabyte GeForce GTX 750 Ti | NVIDIA GeForce GTX 750 Ti | NVIDIA GeForce GTX 750 | AMD Radeon R7 260X
GPU (manufacturer) | GM107 "Maxwell" (TSMC) | GM107 "Maxwell" (TSMC) | GM107 "Maxwell" (TSMC) | "Bonaire" (TSMC)
Process technology, nm | 28 (low-k) | 28 (low-k) | 28 (low-k) | 28 (low-k)
Die area, mm² | 148 | 148 | 148 | 160
Number of transistors, million | 1870 | 1870 | 1870 | 2080
GPU frequency (3D), MHz | 1215 (boost 1294) | 1020 (boost 1085) | 1020 (boost 1085) | 1100
GPU frequency (2D), MHz | 324 | 135 | 135 | 302
Unified shader processors | 640 | 640 | 512 | 896
Texture units (TMUs) | 40 | 40 | 32 | 56
Raster operations units (ROPs) | 16 | 16 | 16 | 16
Theoretical peak fill rate, Gpix/s | 19.4 | 16.3 | 16.3 | 17.6
Theoretical peak texture rate, Gtex/s | 48.6 | 40.8 | 32.6 | 61.6
Pixel/vertex shader version | 5.0 / 5.0 | 5.0 / 5.0 | 5.0 / 5.0 | 5.0 / 5.0
Memory type | GDDR5 | GDDR5 | GDDR5 | GDDR5
Memory bus width, bits | 128 | 128 | 128 | 128
Effective memory frequency (3D), MHz | 5400 | 5400 | 5000 | 6500
Effective memory frequency (2D), MHz | 648 | 810 | 810 | 600
Memory size, MB | 2048 | 1024 / 2048 | 1024 | 1024 / 2048
Memory bandwidth, GB/s | 86.4 | 86.4 | 80.0 | 104.0
Peak power consumption (3D), W | 60 | 60 | 55 | 115
Peak power consumption (2D), W | n/a | n/a | n/a | 3 / 20
Recommended PSU, W | 400 | 300 | 300 | 400
Dimensions, mm (L×W×H) | 190×133×38 | 145×98×34 | 145×98×34 | 172×98×34
Interface | PCI-Express x16 (v3.0) | PCI-Express x16 (v3.0) | PCI-Express x16 (v3.0) | PCI-Express x16 (v3.0)
Outputs | DVI-I + DVI-D (Dual-Link), 2 × HDMI 1.4a | DVI-I + DVI-D (Dual-Link), mini-HDMI 1.4a | DVI-I + DVI-D (Dual-Link), mini-HDMI 1.4a | DVI-I + DVI-D (Dual-Link), HDMI 1.4a, DisplayPort 1.2
Recommended price, USD | 169 | n/a / 149 | 119 | 119 / 139
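The theoretical figures in the table follow directly from the unit counts and clocks. A minimal sketch of the arithmetic (the helper names are ours, for illustration only):

```python
def fill_rate_gpix_s(rops: int, base_clock_mhz: int) -> float:
    """Theoretical pixel fill rate: ROPs x base core clock, in Gpix/s."""
    return rops * base_clock_mhz / 1000.0

def texture_rate_gtex_s(tmus: int, base_clock_mhz: int) -> float:
    """Theoretical texturing rate: TMUs x base core clock, in Gtex/s."""
    return tmus * base_clock_mhz / 1000.0

def memory_bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: int) -> float:
    """Peak memory bandwidth: bus width in bytes x effective GDDR5 clock, in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz / 1000.0

# Gigabyte GeForce GTX 750 Ti: 16 ROPs, 40 TMUs, 1215 MHz base, 128-bit bus, 5400 MHz effective
print(fill_rate_gpix_s(16, 1215))        # 19.44 Gpix/s
print(texture_rate_gtex_s(40, 1215))     # 48.6 Gtex/s
print(memory_bandwidth_gb_s(128, 5400))  # 86.4 GB/s
```

Plugging in the reference card's 1020 MHz base clock instead of 1215 MHz reproduces its 16.3 Gpix/s and 40.8 Gtex/s figures as well.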

Specifications

Core | GM107
Unified shader processors (SPU) | 640
Boost frequency | 1085 MHz
Core frequency | 1020 MHz
Effective memory frequency | 5400 MHz
Memory type | GDDR5
Memory size | 1024/2048 MB
Memory bus | 128-bit
Texture units (TMU) | 40
Raster operations units (ROP) | 16

Additional characteristics

Process technology | 28 nm
Die area | 148 mm²
Memory bandwidth | 86.4 GB/s
Power consumption (max) | 60 W
Interface | PCI-E x16 3.0
DirectX version | 11.1

⇡#Packaging and accessories

The front side of the compact box, made of thick cardboard, depicts the head of an owl with a very menacing look, as if to emphasize the seriousness of the product's intentions. Next to it, in addition to the video card's model name, is information about the core overclock and support for Ultra HD resolutions (4096 × 2160 at 60 Hz).

Traditionally, the more informative reverse side is reserved for a description of the features of the cooling system, printed circuit board elements and system requirements.

The package contents are minimalistic: an adapter cable for additional power, a CD with drivers and an overclocking utility, and brief installation instructions. That's all.

The video card is manufactured in China and should come with a three-year warranty. The recommended price is 169 US dollars, the actual retail price at the time of writing this article is from 5,600 rubles.

Originality of the model

What distinguishes the reference model is the absence of additional power connectors and of a conventional cooler, whose place is taken by a relatively small cylindrical radiator equipped with a fan.

Next, we will look at the performance characteristics of the new product from four different video card manufacturers: Gigabyte, MSI, Zotac and NVIDIA.

⇡#PCB design and features

The Gigabyte GeForce GTX 750 Ti turned out to be a compact, neat and at the same time quite attractive video card. Its dimensions are 190x133x38 mm. The sleek contours of the thin plastic fan frames and the fans themselves with translucent blades give the product a finished and sophisticated look.

The heat pipe extending upward somewhat spoils the overall picture, but it can be forgiven. The turquoise color of the PCB, often used in Gigabyte products, is what I personally like most about its printed circuit boards; but that is a matter of taste. All this is great, of course, but you will rarely see the video card itself, so let's move on to its hardware.

Unlike the reference NVIDIA GeForce GTX 750 Ti, which has DVI-I, DVI-D and one mini-HDMI output, the Gigabyte card offers DVI-I and DVI-D (both Dual-Link) as well as two full-size HDMI 1.4a connectors.

Thanks to the presence of the latter two outputs, the Gigabyte GeForce GTX 750 Ti can output images with Ultra HD resolution (4096x2160 pixels) at 60 Hz.

As noted above, the GeForce GTX 750 Ti draws its 60 W entirely through the PCI-Express slot, so the reference versions of this video card have no connector for additional power. Not so the Gigabyte GeForce GTX 750 Ti: its engineers did not skimp on power and installed one six-pin connector on the card.

That said, the standard printed circuit board used here already provides a contact pad for this connector, so the modification turned out to be minimal.

The power subsystem has not been reinforced (it is unchanged since the GeForce GTX 650 Ti): two phases feed the graphics processor and one, with its power circuitry, feeds the memory.

The GPU die, measuring 12×13 mm, sits on the substrate without a protective frame. The reference GeForce GTX 750 Ti did not have one either, so it is strange that manufacturers of retail video cards did not bother to protect 1.87 billion state-of-the-art transistors. Our GM107 is an A2 revision and was produced in Taiwan in week 8 of 2014 (mid-February).

The base GPU frequency in 3D mode has been raised by 195 MHz, or 19.1%. In boost mode the frequency can reach 1294 MHz, and according to monitoring data it actually rose to - attention - 1346 MHz! Impressive, isn't it? To date, no commercially produced video card operates at such a frequency in 3D mode. When switching to 2D mode, the frequency drops to 135 MHz. The ASIC quality of our Gigabyte GeForce GTX 750 Ti's processor is rated at 70.4%.
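The 19.1% figure is simply the relative clock increase over the reference card. A one-line check (the helper name is illustrative, not from the article):

```python
def overclock_percent(reference_mhz: int, factory_mhz: int) -> float:
    """Relative factory overclock over the reference clock, in percent."""
    return (factory_mhz - reference_mhz) / reference_mhz * 100

print(round(overclock_percent(1020, 1215), 1))  # 19.1 (base clock, +195 MHz)
print(round(overclock_percent(1085, 1294), 1))  # 19.3 (boost clock)
```

The same helper applied to the memory clocks in the overclocking section (5400 → 6560 MHz effective) gives the 21.5% quoted there.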

As for video memory, everything is standard here: 2 GB, GDDR5, FCBGA packaging, SK Hynix chips (labeled H5GC4H24MFR-T2C), effective frequency - 5400 MHz, theoretical bandwidth - 86.4 GB/s.

Let us add that when the video card switches to 2D mode, the memory frequency is reduced to 810 MHz.

GPU-Z will tell us all the characteristics of the Gigabyte GeForce GTX 750 Ti.

We supplement the review of the video card with a link to its BIOS - and straight to the cooling system.

First implementation

The GM107 consists of one GPC, five SMMs and two memory controllers. At 128 CUDA cores per SMM, that makes 640 cores - 66% more than the 384 of the Kepler-based GK107 used in the GTX 650. To be fair, the GeForce GTX 650 Ti has 768 CUDA cores, but at almost double the TDP. The base frequency of 1020 MHz and the 1085 MHz boost are in fact very conservative: even with modest overclocking it is easy to reach 1300 MHz!

Peak theoretical compute performance is 1.3 TFLOPS (60% more than the GK107), although memory bandwidth remains essentially the same. This is why the introduction of the 2 MB L2 cache is critical to making the Maxwell architecture effective.
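The 1.3 TFLOPS figure is just two floating-point operations (one fused multiply-add) per CUDA core per clock. A quick sanity check, with illustrative helper names:

```python
def peak_gflops(cuda_cores: int, clock_mhz: int, flops_per_clock: int = 2) -> float:
    """Peak single-precision throughput: cores x clock x FLOPs per clock (FMA = 2)."""
    return cuda_cores * clock_mhz * flops_per_clock / 1000.0

# GM107 at its 1020 MHz base clock
print(peak_gflops(640, 1020))  # 1305.6 GFLOPS, i.e. ~1.3 TFLOPS
```

Running the same formula for the GK107's 384 cores (at the GTX 650's roughly 1058 MHz clock, our assumption) lands around 0.8 TFLOPS, which is where the ~60% gap comes from.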

The GM107 is still built on TSMC's 28-nm process, yet its die is only 25% larger than the GK107's while packing 43% more transistors. Given Maxwell's 60% computational advantage over Kepler, that modest growth in area reflects NVIDIA's focus on density and efficiency. Add the doubling of performance per watt on the same 28-nm process, and the new architecture is impressive.

⇡#Cooling system - efficiency and noise level

The cooling system matches the card itself: the simple, compact and effective WindForce 2X.

An eight-millimeter heat pipe passes through the aluminum radiator three times; pressed into its center, it simultaneously serves as the base and makes direct contact with the GPU die.

However, judging by the imprint of the GPU die on the base, the pipe does not cover its entire area. It would probably have been better to use two six-millimeter pipes, soldered together at the base and extending in different directions. But what's done is done; let's hope this is enough.

The entire structure is cooled by two Power Logic fans (model PLD08010S12H) with eleven translucent blades.

The actual size of the impeller is 74 mm. The fan speed is automatically adjusted in the range from 1500 to 3650 rpm by changing the voltage.

To check the temperature conditions of the Gigabyte GeForce GTX 750 Ti video card, we used five test cycles of the very resource-intensive game Aliens vs. Predator (2010) with maximum graphics quality in a resolution of 2560x1440 pixels with anisotropic filtering at 16x level, but without activating MSAA anti-aliasing:

To monitor temperatures and all other parameters, MSI Afterburner version 3.0.0 beta 19 and GPU-Z utility version 0.7.7 were used. All tests were carried out at an average room temperature of about 25 degrees Celsius in a closed system unit case, the configuration of which you can see in the next section of the article.

Considering the extremely low heat dissipation of the new GM107, we had no doubt that WindForce 2X would cope with cooling this video card at any frequency. And so it happened.

Auto mode

Only 54 degrees Celsius on the GPU at peak load at 1346 MHz, with the fan peaking at 1915 rpm. Raising the speed to the maximum 3650 rpm lowers the temperature by another 6 degrees.

Maximum speed

But is it necessary? After all, at maximum speed the cooler begins to make noise, and in the automatic adjustment mode it is simply inaudible even against the background of a very quiet system unit. So we give the Gigabyte WindForce 2X an excellent rating for its noise level.

GTX 750 review

This video accelerator is an excellent option for building an inexpensive computer. The main things to decide on when assembling such a PC are the processor and the power supply.

Intel processors from the Core i3 up are a suitable match: their power is enough to fully load the graphics card and run games. Paired with an i3, the average frame rate is 35-40 FPS at medium settings.

The GTX 750 consumes 55 W, so a 300 W power supply is the sensible choice; even with a powerful processor, that capacity will be enough.

⇡#Overclocking potential

Knowing that the Gigabyte GeForce GTX 750 Ti was already seriously overclocked on the core at the factory, we did not expect much further headroom, yet we still squeezed an additional 60 MHz out of the GPU. The video memory was far more accommodating, taking an extra 1160 MHz (effective) without loss of stability.

The final frequencies after overclocking the video card were 1275-1354/6560 MHz.

At higher frequencies under the same testing conditions, the Gigabyte GeForce GTX 750 Ti began to heat up slightly more: the GPU temperature increased by 2 degrees Celsius, and the fan speed increased by a few tens of revolutions per minute.

Note that at some points the frequency of the overclocked GPU increased to 1406 MHz. On air. Quiet air...

Tests in games

Tests were carried out at maximum resolution in several popular games and the following results were obtained:

Game | GTX 750 | GTX 750 Ti
The Witcher 3: Wild Hunt | 14 fps | 20 fps
GTA 5 | 16 fps | 20 fps
Battlefield 4 | 34 fps | 36 fps
Metro: Last Light | 15 fps | 17 fps

⇡#Test configuration, tools and testing methodology

Video card performance testing was carried out on the following system configuration:

  • Motherboard: Intel Siler DX79SR (Intel X79 Express, LGA2011, BIOS 0594 dated 08/06/2013);
  • Central processor: Intel Core i7-3970X Extreme Edition 3.5/4.0 GHz (Sandy Bridge-E, C2, 1.1 V, 6x256 KB L2, 15 MB L3);
  • CPU cooling system: Phanteks PH-TC14PE (2x900 rpm);
  • Thermal interface: ARCTIC MX-4;
  • Video cards: HIS Radeon R9 270 iPower IceQ X2 Boost Clock 2 GB 952/5600 MHz;
  • Gigabyte GeForce GTX 750 Ti 2 GB 1215-1294/5400 MHz and 1275-1354/6560 MHz;
  • NVIDIA GeForce GTX 750 Ti 2 GB 1020-1085/5400 MHz;
  • MSI GeForce GTX 650 Ti BOOST Twin Frozr III 2 GB 1033-1098/6008 MHz;
  • AMD Radeon R7 260X 2 GB 1100/6500 MHz;
  • RAM: DDR3 4x8 GB G.SKILL TridentX F3-2133C9Q-32GTX (XMP 2133 MHz, 9-11-11-31, 1.6 V);
  • System disk: SSD 256 GB Crucial m4 (SATA-III, CT256M4SSD2, BIOS v0009);
  • Disk for programs and games: Western Digital VelociRaptor (SATA-II, 300 GB, 10000 rpm, 16 MB, NCQ) in a Scythe Quiet Drive 3.5″ box;
  • Archive disk: Samsung Ecogreen F4 HD204UI (SATA-II, 2 TB, 5400 rpm, 32 MB, NCQ);
  • Sound card: Auzen X-Fi HomeTheater HD;
  • Case: Antec Twelve Hundred (front wall - three Noiseblocker NB-Multiframe S-Series MF12-S2 at 1020 rpm; rear - two Noiseblocker NB-BlackSilentPRO PL-1 at 1020 rpm; top - standard 200 mm fan at 400 rpm);
  • Control and monitoring panel: Zalman ZM-MFC3;
  • Power supply: Corsair AX1200i (1200 W), 120 mm fan;
  • Monitor: 27-inch Samsung S27A850D (DVI-I, 2560x1440, 60 Hz).
    With a retail price of about 5,600 rubles, the Gigabyte GeForce GTX 750 Ti will compete with original versions of the GeForce GTX 650 Ti BOOST, which are currently available in a fairly wide range. That class is represented today by the MSI GeForce GTX 650 Ti BOOST Twin Frozr III. Costing a little more is the HIS Radeon R9 270 iPower IceQ X2 Boost Clock, which we included in the tests as a reference point - the next step up in performance.

    Reference versions of NVIDIA GeForce GTX 750 Ti with AMD Radeon R7 260X have also been added to testing.

    This will allow you to evaluate the advantage of the Gigabyte GeForce GTX 750 Ti over them. To be fair, we note that the Radeon R7 260X is cheaper than the GeForce GTX 750 Ti.

    To reduce the dependence of video card performance on platform speed, the 32-nm six-core processor was overclocked to 4.8 GHz (multiplier 48, base clock 100 MHz, Load-Line Calibration enabled), with the voltage raised in the motherboard BIOS to 1.38 V:

    Hyper-Threading technology is activated. At the same time, 32 GB of RAM operated at a frequency of 2.133 GHz with timings 9-11-11-20_CR1 at a voltage of 1.6125 V.

    Testing, which began on April 4, 2014, was carried out under the Microsoft Windows 7 Ultimate x64 SP1 operating system with all critical updates as of the specified date and with the installation of the following drivers:

    • Motherboard chipset Intel Chipset Drivers - 9.4.4.1006 WHQL from 09/21/2013;
    • DirectX End-User Runtimes libraries, released November 30, 2010;
    • Video card drivers for AMD GPUs - AMD Catalyst 14.3 Beta v1.0 from 03/17/2014;
    • Video card drivers for NVIDIA GPUs - GeForce 335.23 WHQL from 03/10/2014.

    Taking into account the relatively low performance of video cards tested today, they were tested only at a resolution of 1920x1080 pixels. For the tests, two graphics quality modes were used: “Quality + AF16x” - the default texture quality in the drivers with 16x level anisotropic filtering enabled, and “Quality + AF16x + MSAA 4x” with 16x level anisotropic filtering enabled and 4x level full-screen anti-aliasing. In some games, due to the specifics of their game engines, other anti-aliasing algorithms were used, which will be indicated further in the methodology and in the diagrams. Anisotropic filtering and full-screen anti-aliasing were enabled directly in the game settings. If these settings were not available in games, then the parameters were changed in the control panel of the Catalyst or GeForce drivers. Vertical synchronization was also forcibly disabled there. Apart from the above, no additional changes were made to the driver settings.

    The video cards were tested in two graphics tests and thirteen games, updated to the latest versions as of the start date of preparation of the material:

    • 3DMark (2013) (DirectX 9/11) - version 1.2.250.0, tested in the Cloud Gate, Fire Strike and Fire Strike Extreme scenes;
    • Unigine Valley Bench (DirectX 11) - version 1.0, maximum quality settings, AF16x and (or) MSAA 4x, resolution 1920x1080;
    • Total War: SHOGUN 2 - Fall of the Samurai (DirectX 11) - version 1.1.0, built-in test (battle of Sekigahara) at maximum graphics quality settings and when used in one of the MSAA 8x modes;
    • Sniper Elite V2 Benchmark (DirectX 11) - version 1.05, used Adrenaline Sniper Elite V2 Benchmark Tool v1.0.0.2 BETA, maximum graphics quality settings (Ultra), Advanced Shadows: HIGH, Ambient Occlusion: ON, Stereo 3D: OFF, Supersampling: OFF, double sequential test run;
    • Sleeping Dogs (DirectX 11) - version 1.5, Adrenaline Action Benchmark Tool v1.0.2.1 was used, maximum graphics quality settings for all points, Hi-Res Textures pack installed, FPS Limiter and V-Sync disabled, double sequential test run with total anti-aliasing at Normal and High levels;
    • Hitman: Absolution (DirectX 11) - version 1.0.447.0, built-in test with graphics quality settings at Ultra, tessellation, FXAA and global illumination enabled.
    • Crysis 3 (DirectX 11) - version 1.2.0.1000, all graphics quality settings to maximum, blur level - medium, glare on, modes with FXAA and MSAA4x anti-aliasing, double sequential pass of a scripted scene from the beginning of the Swamp mission lasting 110 seconds;
    • Tomb Raider (2013) (DirectX 11) - version 1.1.748.0, Adrenaline Action Benchmark Tool used, quality settings at Ultra level, V-Sync disabled, modes with FXAA and 2xSSAA anti-aliasing, TressFX technology activated, double sequential pass built into the game test;
    • BioShock Infinite (DirectX 11) - version 1.1.25.5165, used Adrenaline Action Benchmark Tool with High and Ultra quality settings, double run of the test built into the game;
    • Metro: Last Light (DirectX 11) - version 1.0.0.15, used the built-in test, graphics quality and tessellation settings at High, Advanced PhysX technology turned off, tests with and without SSAA anti-aliasing, double sequential pass of the D6 scene.
    • GRID 2 (DirectX 11) - version 1.0.85.8679, used the test built into the game, graphics quality settings to the maximum level in all positions, tests with and without MSAA4x anti-aliasing, eight cars on the Chicago track;
    • Company of Heroes 2 (DirectX 11) - version 3.0.0.13106, double sequential run of the test built into the game with maximum graphics quality and physical effects settings;
    • Batman: Arkham Origins (DirectX 11) - version 1.0 (update 8), quality settings at Ultra, V-Sync disabled, all effects activated, all DX11 Enhanced functions enabled, Hardware Accelerated PhysX = Normal, double sequential pass of the built-in game test ;
    • Battlefield 4 (DirectX 11) - version 111433, all graphics quality settings on Ultra, double sequential playthrough of a scripted scene from the beginning of the TASHGAR mission lasting 110 seconds;
    • Thief (DirectX 11) - version 1.4 build 4133.3, graphics quality settings at maximum, Parallax Occlusion Mapping and Tessellation activated, double sequential pass of the benchmark built into the game.

    As you can see, in some games the maximum graphics quality settings were deliberately not used in order to keep the frame rate at least minimally playable.

    If a game could record the minimum number of frames per second, this was also reflected in the diagrams. Each test was run twice; the better of the two results was taken as final, but only if the difference between them did not exceed 1%. If the deviation exceeded 1%, testing was repeated at least once more to obtain a reliable result.

    ⇡#Performance test results and their analysis

    • 3DMark (2013)

    At nominal frequencies, the original Gigabyte GeForce GTX 750 Ti is 6-12.7% faster in 3DMark (2013) than the reference version of the same model and slightly faster than the MSI GeForce GTX 650 Ti BOOST. The AMD Radeon R7 260X is no competitor to it here at all, though no one expected otherwise. When overclocked, the newcomer gains considerably, but reaches the level of the HIS Radeon R9 270 iPower IceQ X2 Boost Clock only in the least resource-intensive test mode.

    • Unigine Valley Bench

    In the Unigine Valley test the situation is different.

    Thanks to the significantly increased GPU frequency, the Gigabyte GeForce GTX 750 Ti is 12.3% faster than the regular GeForce GTX 750 Ti, but at the same time does not exceed the MSI GeForce GTX 650 Ti BOOST. Only additional overclocking of the core and memory of the video card brings it to first place, albeit with a very slight lead over the original HIS and MSI video cards.

    • Total War: SHOGUN 2 - Fall of the Samurai

    Total War: SHOGUN 2 - Fall of the Samurai runs faster on video cards with AMD GPUs, so even the cheapest video card in today's testing does not look like a “whipping boy” in this game.

    In turn, the Gigabyte GeForce GTX 750 Ti is about 12% faster than the NVIDIA GeForce GTX 750 Ti and is not inferior to the MSI GeForce GTX 650 Ti BOOST. Overclocking the Gigabyte video card increases its performance by another 6-8% and brings it very close to the results of the more expensive HIS Radeon R9 270 iPower IceQ X2 Boost Clock.

    • Sniper Elite V2 Benchmark

    The results in Sniper Elite V2 look interesting.

    The MSI GeForce GTX 650 Ti BOOST inexplicably takes the lead in this game, and even the very high frequencies of the Gigabyte GeForce GTX 750 Ti do not help. The 192-bit bus of the GTX 650 Ti BOOST versus the 128-bit bus of the GTX 750 Ti probably plays a role here. The Gigabyte card handles the other video cards without difficulty, the higher-class HIS Radeon R9 270 iPower IceQ X2 Boost Clock aside.

    • Sleeping Dogs

    In Sleeping Dogs, the new Gigabyte GeForce GTX 750 Ti is on par with the MSI GeForce GTX 650 Ti BOOST and is 9-12% faster than the regular GeForce GTX 750 Ti.

    At the same time, the performance level of the HIS Radeon R9 270 iPower IceQ X2 Boost Clock was not achieved by the Gigabyte video card even at frequencies of 1275-1354/6560 MHz.

    • Hitman: Absolution

    The situation is similar in the game Hitman: Absolution.

    • Crysis 3

    A special feature of the Crysis 3 test is the confident performance of the MSI GeForce GTX 650 Ti BOOST, which outperformed the overclocked Gigabyte GeForce GTX 750 Ti.

    The star of today's review is 1-2 FPS faster than the reference version of the same model and faster than the AMD Radeon R7 260X by about the same amount. Note that at maximum quality settings in Crysis 3, all of today's video cards are depressingly weak.

    • Tomb Raider (2013)

    The situation is slightly better in the game Tomb Raider (2013).

    The balance of power between video cards does not change.

    • BioShock Infinite

    In this game, in average frames per second, the Gigabyte GeForce GTX 750 Ti at factory frequencies is 9-11% ahead of the NVIDIA GeForce GTX 750 Ti and slightly faster than the MSI GeForce GTX 650 Ti BOOST. Moreover, when overclocked, the Gigabyte GeForce GTX 750 Ti falls only slightly short of the HIS Radeon R9 270 iPower IceQ X2 Boost Clock.

    • Metro: Last Light

    As in Crysis 3 or Tomb Raider, in Metro: Last Light, even without activating Advanced PhysX technology and with simplified graphics quality settings, the performance of the video cards tested today leaves much to be desired.

    But the overclocked Gigabyte GeForce GTX 750 Ti manages to reach the level of the HIS Radeon R9 270 iPower IceQ X2 Boost Clock, and at its nominal frequencies is 10-11% ahead of the regular GeForce GTX 750 Ti.

    • GRID 2

    The Gigabyte GeForce GTX 750 Ti is also fast in the GRID 2 game, but it is still far from the performance of the HIS Radeon R9 270 iPower IceQ X2 Boost Clock even when overclocked. In general, the balance of power has not changed.

    • Company of Heroes 2

    In Company of Heroes 2, video cards based on AMD GPUs are in the lead.

    The maximum that the Gigabyte GeForce GTX 750 Ti is capable of is the level of the nominal Radeon R7 260X. The advantage of the new product over the reference GeForce GTX 750 Ti is 14-16%.

    • Batman: Arkham Origins

    But fans of video cards on NVIDIA GPUs can rejoice at the imminent revenge in the game Batman: Arkham Origins.

    In addition to the overall victory of the “greens,” we note that the Gigabyte GeForce GTX 750 Ti lost to the MSI GeForce GTX 650 Ti BOOST. It is worth mentioning the relatively low minimum FPS on the NVIDIA GeForce GTX 750 Ti.

    • Battlefield 4

    The arrangement of video cards in the Battlefield 4 game generally repeats the general picture of the entire test.

    • Thief

    In Thief, some video cards were noted to have a low minimum FPS, but the overall picture also did not change.

    At the end of the main testing section, we present a final table with test results:

    Next we have pivot charts.

    Maxwell Architecture

    By designing the Kepler architecture and then implementing it in Tegra K1, Nvidia's engineering team gained experience and insight into how to improve the performance and efficiency of the underlying computing circuitry. Kepler represented a huge leap forward from Fermi, and Maxwell promises to be just as revolutionary. The manufacturing company wanted to reduce the power consumption of the video card, as well as find ways to increase performance while maintaining the same power level.

    The GPU design logic remained similar to Kepler. There is a GPC graphics processing cluster, which includes SMM multiprocessors created from a large number of CUDA cores (stream processors). Changes in the organization of various Maxwell blocks concern an increase in the number of partitions and groups, but a decrease in the number of CUDA cores per block. This redesign comes as part of Nvidia's efforts to improve the performance and power efficiency of the new graphics card.

    The biggest changes occurred inside the multiprocessor cores themselves, with per-core processing performance up by 35%. Nvidia replaced the SMM resource allocation mechanism with a more intelligent one, significantly reducing latency. The new multiprocessors were designed to improve performance both per unit of power and per unit of area - a goal that every CPU and GPU designer has in mind.

    Nvidia was able to achieve this by changing partition management logic, load balancing, clock management, compiler-level scheduling, instructions per clock, and much more. Instead of one block of 192 shaders, the SMM is split into four, each with a separate instruction buffer, scheduler, and 32 dedicated CUDA cores. According to Nvidia, this design simplifies the design and scheduling logic needed to save die area and power.

    These blocks are grouped in pairs and share four texture filtering and texture cache units. The local memory is likewise divided among the four blocks of each SMM. With these changes, a multiprocessor delivers 90% of the performance of its predecessor on a smaller die area. The GM107, the first Maxwell chip, contains 5 SMMs (640 CUDA cores), whereas the previous GK107 had 2 SMXs (384 CUDA cores), giving a 2.3x increase in overall throughput.

    Another significant change in the multiprocessor structure is the increase of the L2 cache to two megabytes. Considering that Kepler's implementation was limited to 256 KB, this eight-fold increase in capacity should reduce the load on the GM107's integrated memory controller, making it far less likely that the 128-bit interface becomes a bottleneck for the GTX 750 Ti.

    In the new architecture, Nvidia has also improved video playback capabilities by increasing encoding speed by 2 times and decoding speed by 10 times.

    A new GC5 power state has been created to reduce GPU power consumption during light workloads such as video playback. API support remains the same. This means that not all features of DirectX 11.2 are implemented.

    ⇡#Pivot charts

    First of all, let's evaluate the advantage of the original Gigabyte GeForce GTX 750 Ti 2 GB video card at frequencies of 1215-1294/5400 MHz over the reference version of the NVIDIA GeForce GTX 750 Ti 2 GB at its standard frequencies of 1020-1085/5400 MHz.

    The GPU frequency of the Gigabyte card exceeds the nominal by 195 MHz, or 19.1%, and its performance advantage averaged across all gaming tests came to 11% in modes without anti-aliasing and 9.6% with it activated.

    Next is a summary chart comparing the Gigabyte GeForce GTX 750 Ti 2 GB and the AMD Radeon R7 260X 2 GB. In price these video cards are not comparable, but the GTX 750 Ti can be considered the next step up in performance. Let's see how much faster it turned out to be.

    Only in Company of Heroes 2 was the AMD Radeon R7 260X able to snatch victory from the GeForce GTX 750 Ti. In the other games, the newcomer was expectedly faster: on average its advantage is 16.3% without anti-aliasing and 14.8% with it enabled.

    No less interesting, in our opinion, is the comparison diagram between the Gigabyte GeForce GTX 750 Ti and the MSI GeForce GTX 650 Ti BOOST Twin Frozr III. Both video cards are made according to the original design, have increased frequencies and highly efficient cooling systems. Only the GTX 750 Ti belongs to the new generation Maxwell, and the GTX 650 Ti BOOST belongs to the old Kepler.

    Thanks to its 192-bit memory bus and higher bandwidth, the older MSI GeForce GTX 650 Ti BOOST Twin Frozr III outperformed the newcomer in Sniper Elite V2, Crysis 3 and Batman: Arkham Origins. In turn, the Gigabyte GeForce GTX 750 Ti is faster in Company of Heroes 2, GRID 2, Battlefield 4, Thief, BioShock Infinite and Tomb Raider (2013). In the remaining games, the two cards perform almost identically. In other words, from a performance standpoint it is still not worth trading a GeForce GTX 650 Ti BOOST for a GeForce GTX 750 Ti, even when the latter is one of the fastest original GeForce GTX 750 Ti cards.

    Now let's evaluate the performance increase of the Gigabyte GeForce GTX 750 Ti when overclocked from frequencies 1215-1294/5400 MHz to 1275-1354/6560 MHz.

    During overclocking we raised the core frequency by only 4.9%, but the video memory frequency by 21.5%. This allowed the Gigabyte card to gain, on average across all games, 7.1% without anti-aliasing and 8.5% with AA enabled.
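    The overclocking percentages can be verified the same way, directly from the frequencies quoted above (a quick arithmetic check, nothing more):

```python
# Verify the overclocking percentages quoted in the text
core_gain = (1275 - 1215) / 1215 * 100   # base clock, MHz -> ~4.9 %
mem_gain = (6560 - 5400) / 5400 * 100    # effective memory, MHz -> ~21.5 %
print(f"core: +{core_gain:.1f}%, memory: +{mem_gain:.1f}%")
```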

    Despite the good overclocking, the Gigabyte GeForce GTX 750 Ti still does not catch up with the HIS Radeon R9 270 iPower IceQ X2 Boost Clock in the vast majority of tests.

    Only in Batman: Arkham Origins without anti-aliasing did the GeForce GTX 750 Ti beat its rival, and in Metro: Last Light it came very close. The rest of the games went to the Radeon R9 270; in this case there really is something worth paying extra for.

    ⇡#Video card memory power controllers

    The memory controller resides on the GPU die and generates heat that must be taken into account when calculating TDP; however, neither AMD nor NVIDIA include it in the stated total consumption. On NVIDIA video cards, the memory and the memory controller are powered through the MVDD phases. On AMD Radeon VII cards, memory power is supplied through the VDDRC HBM and VDDCI rails, and on Vega through the MVDD and VDDCI rails. The bulk of the current in the memory power circuits flows through the MVDD (VDDRC) rail, while the VDDCI voltage feeds the I/O bus between the GPU core and the memory chips.

    ⇡#Energy consumption

    The energy consumption of the system with various video cards was measured using the Zalman ZM-MFC3 multifunction panel, which shows the consumption of the system "from the outlet" as a whole (excluding the monitor). Measurements were taken in 2D mode, during normal work in Microsoft Word or web surfing, as well as in 3D mode. In the latter case, the load was created by four consecutive runs of the introductory scene of the Swamp level from Crysis 3 at 2560x1440 with maximum graphics quality settings, but without MSAA anti-aliasing.

    Let's compare the power consumption of systems with video cards tested today:

    If in a confrontation between the Gigabyte GeForce GTX 750 Ti and the MSI GeForce GTX 650 Ti BOOST Twin Frozr III in terms of performance it is difficult to give preference to any of these video cards, then in terms of energy efficiency the new product puts the opponent to shame. The difference in power consumption of systems with these video cards at peak load in 3D mode reaches 56 watts in favor of the new GTX 750 Ti. Interestingly, when overclocked, the consumption of a system with a Gigabyte GeForce GTX 750 Ti video card increases by only 8 watts, and it still turns out to be more economical than all other current configurations, with the exception of the reference NVIDIA GeForce GTX 750 Ti. So, from an energy efficiency point of view, NVIDIA managed to create an incredibly attractive graphics processor and the first video card model based on it.
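    To put the measured 56-watt gap in everyday terms, here is a back-of-the-envelope estimate; the 3 hours of gaming per day is our assumption, not something the article measured:

```python
# Back-of-the-envelope: what a 56 W difference under 3D load adds up to
# over a year, assuming 3 hours of gaming per day (our assumption).
delta_w = 56                     # measured system-consumption gap, watts
hours_per_day = 3
kwh_per_year = delta_w * hours_per_day * 365 / 1000
print(f"{kwh_per_year:.1f} kWh per year")   # ~61.3 kWh
```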

    ⇡#DC voltage loss

    The efficiency of a video card's power circuitry is noticeably lower than that of a switching power supply. This stems from the size constraints on board components, as well as from losses incurred in coordinating the operation of many phases; the filtering and voltage-smoothing stages are among the main contributors. When choosing a video card, pay attention to the number of GPU power phases: the more there are, the lower the output voltage ripple.

    To improve the quality of the output voltage, smoothing capacitors can be installed in parallel on each phase. Doubling the capacitance of the smoothing capacitor roughly halves the ripple amplitude at the converter output, which benefits GPU stability and overclocking headroom.
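    The inverse relationship between capacitance and ripple follows from the standard buck-converter approximation ΔV ≈ ΔI / (8·f_sw·C); the component values below are illustrative, not taken from this card:

```python
# Output-voltage ripple of one buck phase, using the standard
# approximation dV = dI / (8 * f_sw * C). Values are illustrative.

def ripple_v(delta_i, f_sw, c_out):
    """Peak-to-peak output ripple for inductor ripple current delta_i."""
    return delta_i / (8 * f_sw * c_out)

v1 = ripple_v(delta_i=5.0, f_sw=300e3, c_out=470e-6)  # one 470 uF cap
v2 = ripple_v(delta_i=5.0, f_sw=300e3, c_out=940e-6)  # capacitance doubled
print(f"{v1*1000:.2f} mV -> {v2*1000:.2f} mV")        # ripple halves
```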

    Miniaturization of components worsens their cooling, which in turn hurts the overall efficiency of the power circuits. Makers of NVIDIA-based cards have abandoned virtual power phases in their designs.

    To improve phase balancing on NVIDIA cards, smart controllers with DCR (Direct Current Resistance) current sensing are used: in real time they adjust each phase's operation according to its temperature and current.

    Phase balancing on NVIDIA cards is achieved by continuously monitoring and adjusting the gate drive of each phase's FETs. Current is measured not with a shunt or at the output of the smoothing filter, but with DCR sensing circuits.
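    For reference, DCR sensing places an RC network across the inductor and matches its time constant to L/DCR, so the sense-capacitor voltage mirrors the inductor current without a lossy shunt. The component values below are illustrative, not taken from any specific card:

```python
# Lossless DCR current sensing: with R * C = L / DCR, the voltage on the
# sense capacitor equals I_L * DCR. Values below are illustrative only.
L_h = 220e-9      # output inductor, 220 nH
dcr = 0.5e-3      # parasitic winding resistance, 0.5 mOhm
c_sense = 0.1e-6  # chosen sense capacitor, 100 nF

r_sense = L_h / (dcr * c_sense)           # matched sense resistor, ohms
print(f"required R: {r_sense:.0f} ohm")   # 4400 ohm
```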

    Most AMD video cards in the budget and mid-range segments balance phases using circuits connected to the LC-filter inductors; these suffer from large measurement errors. Expensive AMD cards use more advanced methods of monitoring and balancing phase operation.
