I avoid talking about every technical burp and tweak we review from the major technology firms, including Intel, AMD, Nvidia, and the like. (Same goes for every ebb and flow of performance ratings and the twists and turns of patent disputes.)
However, the recent release of Nvidia's Optimus technology is another small reason we prefer Nvidia to companies like Intel and AMD in the computing space.
There are several pages of technical description of Optimus available (this is a good one), but we'll net it out: it is an elegant way to offer a computer with the power of a discrete GPU that is used only when needed. That saves a lot of power and eases the big tradeoff between speed and power consumption that bedevils mobile computing.
The most familiar analogy is all-wheel drive in cars. In the old days nearly all cars were 2WD, with a very few 4WD exceptions like the Jeep. The drivetrain was fixed; there was no switching. Then cars came with the ability to lock and unlock 4WD, but only when the car was parked and you had the time and willingness to change the hub settings on the wheels. This got better with a simple lever, but in most cases you still had to stop the car.
Now 2WD/4WD systems are automatic: unless you do something to prevent it, the car senses conditions and uses whatever drive parameters maximize performance. The same is true of hybrids that shift between battery and internal combustion power.
You get the idea.
Similarly, it has been possible to manually turn off the GPU inside a laptop, but it's complicated and requires a reboot, so nobody ever does it.
Nvidia's Optimus technology turns the GPU on only when an application that can leverage the extra processing power is invoked, and turns it off again when it's not needed. So when you are reading email the GPU is off and power consumption is minimized; when you fire up a video or a graphically intensive game, the GPU kicks in and delivers the power needed for a great experience.
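To make the switching idea concrete, here is a toy sketch of the policy described above. This is purely illustrative: the workload labels and function names are invented for this example, and the real routing happens transparently inside Nvidia's driver, not in application code.

```python
# Toy sketch of the Optimus idea: route work to the discrete GPU only
# when the workload justifies it; otherwise leave the GPU powered down.
# The labels and names below are illustrative assumptions, not real APIs.

GPU_INTENSIVE = {"video_playback", "3d_game", "cad"}  # assumed heavy workloads

def pick_processor(workload: str) -> str:
    """Return which processor a toy Optimus-like policy would use."""
    return "discrete_gpu" if workload in GPU_INTENSIVE else "integrated_gpu"

print(pick_processor("email"))    # light work: discrete GPU stays off
print(pick_processor("3d_game"))  # heavy work: discrete GPU powers on
```

The point of the design is that the decision is automatic, like the drivetrain analogy above: the user never flips a switch or reboots.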
This sort of "hybrid computing" has been around for some time, but what matters here is the implementation: incorporating the design requires no extra effort or cost from the device maker, so it should become a standard feature right away. After all, any laptop with a GPU but without Optimus is at a major disadvantage in expected battery life, which is a big factor in use.
I don't want to make too much of it, but it's another good data point on Nvidia maintaining its leadership in driving the GPU into the fabric of general-purpose computing (which, BTW, is a certainty in our research view; see our other research notes on it).
Also refer to an earlier post (Nvidia Turns the Crank) for more links.
[Disclosure: The R2 model portfolio has a long position in NVDA.]