Sunday, April 26, 2015

What makes a good VRM

I have some good news and some bad news.
Good news: when I tried to power on the GTX 590 it didn't catch fire, make magic smoke, or explode.
Bad news: I didn't get any video, so I still need to fix the card. It's probably the PCI-e slot, or I need to trip the PWR_GOOD pin on the controllers. I also tried to power the GPU using 2 separate PSUs, so that might have something to do with it too.

However, since the GTX 590 is the only entertainment article I have prepped for today, you're gonna get an education on VRMs instead.

First of all you need to understand how a VRM that converts 12V DC to a lower voltage works. Since this is rather complicated and better explained elsewhere, you can just go read this. That will explain the basics of a low-power single-phase VRM.

So now that you've read that, let's expand on it and apply it to computer VRMs. First of all, the flywheel circuit in that article uses a diode. This is really inefficient and massively limits the maximum current throughput, so in computer VRMs you will find, instead of the diode, what is called a low-side MOSFET. This MOSFET is only on when the high-side MOSFET (the component labeled switching transistor) is off, or else you would get a short circuit. The low-side MOSFET handles the bulk of the current that flows through your load (CPU/GPU core, RAM chip...), so these MOSFETs are the most important when building a powerful VRM. Low-side and high-side MOSFETs typically have current handling capabilities between 20 and 60A at 125°C case temperature.
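To see why the low side does most of the work, note that in an ideal buck converter the output voltage is just the input voltage scaled by the PWM duty cycle (the fraction of each period the high side is on). A quick Python sketch with illustrative numbers, not taken from any real board:

```python
# Ideal (lossless) buck converter: Vout = Vin * duty_cycle, where duty_cycle
# is the fraction of each switching period the high-side MOSFET is on.
def buck_output_voltage(v_in: float, duty_cycle: float) -> float:
    return v_in * duty_cycle

# Getting ~1.2V for a core from a 12V rail means the high side is on only
# ~10% of the time; the low-side MOSFET conducts for the other ~90%, which
# is why it carries the bulk of the load current.
print(round(buck_output_voltage(12.0, 0.10), 3))  # 1.2
```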

The article I linked shows the PWM signal being fed directly to the high-side MOSFET. In computer VRMs the MOSFETs used have a rather large gate capacitance, meaning that if you want to switch them on you need to let their gates charge up. If you want them to charge quickly you will need to supply a current greater than what PWM controllers can provide. So to supply that current a driver MOSFET is used. This MOSFET is typically capable of handling currents smaller than 10A and can be switched on and off directly by the PWM signal coming out of the PWM controller. The driver MOSFET is not very important to a VRM's current handling capability, but it is a key component of computer VRMs so you should know about it.

So far I have explained everything as far as a single phase is concerned. As you probably know, computer VRMs for the CPU and GPU typically have 3 or more phases. So how does that work?

Well, each phase handles a chunk of the total current that your load requires. To do this the PWM controller generates as many PWM signals as there are phases. These signals are offset so that only one phase has its high-side MOSFET on at any given time. All the other phases have their low-side MOSFET in the on state with the high-side MOSFET off. So if you have a 4-phase VRM, at any moment you have 3 phases running in flywheel mode and 1 phase charging. You can more or less gauge the current handling capability of a multi-phase VRM by taking the current capability of the low-side MOSFETs and multiplying it by the number of phases.
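That rough sizing rule can be written down directly (the 30A rating below is an illustrative number, not from any specific datasheet):

```python
# Rough VRM current capability: low-side MOSFET rating times phase count.
def vrm_current_capability(low_side_rating_a: float, phases: int) -> float:
    return low_side_rating_a * phases

# A 4-phase VRM built from 30A low-side MOSFETs vs an 8-phase from the same parts:
print(vrm_current_capability(30, 4))  # 120
print(vrm_current_capability(30, 8))  # 240
```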

Now VRMs also include capacitors, and many better VRMs will include more capacitors than cheaper VRMs. This is because you need capacitors to smooth out the voltage being produced by the VRM: the more capacitors you have, the more capacitance the VRM has and the less your voltage drops while your high-side MOSFET is off. If you had very small capacitors and a very high current draw, the capacitors could end up completely draining before the high side comes on, resulting in the voltage your load is being provided reaching 0V. As we all know that is bad, which is why high-end VRMs have huge capacitor banks. Now capacitors also cause an efficiency loss and take up a ton of space, so just slapping 1F of capacitance on a VRM is not the best idea. However, if you have a VRM that has high ripple, adding more caps can help. The other issue with capacitors is that some capacitors (electrolytics) have a maximum current that can be pulled from them; if you exceed this current the cap will fail.
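To see why the size of the capacitor bank matters at high current, here's an idealized droop estimate, assuming the capacitors alone hold the rail up between high-side pulses (all numbers made up for illustration):

```python
# dV = I * t / C: voltage lost while a capacitor bank sources a constant
# current with no help from the phases.
def voltage_droop(i_load_a: float, c_farad: float, hold_time_s: float) -> float:
    return i_load_a * hold_time_s / c_farad

# A 100A load bridging a 1 microsecond gap before the next high-side turn-on:
print(voltage_droop(100, 1000e-6, 1e-6))  # ~0.1 V droop with 1000 uF
print(voltage_droop(100, 100e-6, 1e-6))   # ~1.0 V droop with only 100 uF
```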

The other way to lower voltage ripple is to increase how often you turn on the high side. This is dictated by the PWM controller's switching frequency. When you turn on the high-side MOSFET your VRM output voltage starts to rise until the PWM signal turns it off again and your voltage starts to drop. The longer the wait between the on and the off, the longer the voltage rises and drops, widening the gap between the minimum and maximum voltage that your VRM outputs when trying to hit a set voltage. This is what ripple is. So if you cut down the amount of time your voltage spends dropping and rising by increasing the frequency of the PWM signal, you decrease the ripple. This is why many overclocking-centric boards have a VRM switching frequency option in the BIOS. The downside is that you need to charge and discharge your MOSFET gates more often, and that lowers the VRM's efficiency. Which is why overclocking-centric GPUs like the Lightning are so damn power hungry.
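The textbook buck-converter ripple formulas show the frequency dependence directly. The component values below are illustrative guesses, not measurements from any real board:

```python
# Standard idealized buck ripple estimates: inductor ripple current, and the
# resulting output voltage ripple across the capacitor bank.
def inductor_ripple_current(v_out: float, v_in: float, l_h: float, f_sw: float) -> float:
    duty = v_out / v_in
    return v_out * (1 - duty) / (l_h * f_sw)

def output_voltage_ripple(delta_i: float, c_f: float, f_sw: float) -> float:
    return delta_i / (8 * c_f * f_sw)

# 12V in, 1.2V out, 0.5 uH inductor, 1 mF of output capacitance:
for f_sw in (300e3, 600e3):
    di = inductor_ripple_current(1.2, 12.0, 0.5e-6, f_sw)
    print(f_sw, output_voltage_ripple(di, 1e-3, f_sw))
# Doubling the switching frequency cuts this idealized ripple to a quarter,
# since both the current ripple and the voltage ripple scale with 1/f.
```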

The final way to lower voltage ripple is to add more phases, because then you basically increase the effective switching frequency: instead of cycling through X MOSFETs turning on and off in time Y, you cycle through Z > X MOSFETs in that same time Y. So you get more switching in time Y, accompanied by the same efficiency loss as before.
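Put in numbers, interleaving multiplies the ripple frequency seen at the output (the per-phase frequency here is an assumed example value):

```python
# With N interleaved phases, the output sees one high-side turn-on every
# 1/(N * f_sw) seconds, i.e. an effective ripple frequency of N * f_sw.
def effective_ripple_frequency(f_sw_per_phase: float, phases: int) -> float:
    return f_sw_per_phase * phases

print(effective_ripple_frequency(300e3, 4))  # 1200000.0, i.e. 1.2 MHz
print(effective_ripple_frequency(300e3, 8))  # 2400000.0, i.e. 2.4 MHz
```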

So how many phases does your motherboard/GPU have? No more than 8.
8 is the largest number of phases that any PWM controller currently used in computers can produce. So how do we have VRMs with 10, 12, 14, 16, 20, 24 and 32 phases? Doublers.
Doublers are special ICs that take one PWM signal and split it into 2. In the process they cut the switching frequency in half, but they do give you more phases, so you do get the extra current capability and lower operating temps, but you don't gain anything in terms of voltage ripple suppression. Another trick motherboard manufacturers use that I hate is putting stuff in parallel. There is a good way to do it, where they put extra MOSFETs in one phase, which basically creates a "super phase" if they are using high-end MOSFETs, but more often than not they just double the number of inductors. This means that inexperienced buyers who count inductors to get phase counts can easily be fooled into thinking that a board has 8 phases when in reality it only has 4, with each phase having 2 inductors. Having 2 inductors on 1 phase is completely pointless. It does nothing that a single inductor couldn't do other than making the board look more complex than it is.
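The doubler trade-off falls straight out of the same arithmetic: half the per-phase frequency times twice the phases leaves the effective ripple frequency unchanged (example numbers assumed):

```python
# Doublers: one PWM signal becomes two at half the frequency, so the
# effective output ripple frequency is unchanged -- extra current capability
# and cooler MOSFETs, but no extra ripple suppression.
f_controller = 300e3   # per-phase frequency out of the PWM controller
native_phases = 4

without_doublers = f_controller * native_phases
with_doublers = (f_controller / 2) * (native_phases * 2)
print(without_doublers == with_doublers)  # True
```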

A perfect example of all of these is the MPOWER MAX motherboard I bought. Here are photos detailing its VRM design: it looks like a 20-phase (20 low- and high-side MOSFETs and 20 inductors), is driven like a 10-phase (10 driver MOSFETs), and is fed by only 5 PWM signals before the doublers (5 doubler ICs, with the PWM controller being an 8-phase IR running in 5-phase mode).

What about VRMs that are listed as having X+Y phases?
That just means there are 2 different VRMs: one with X phases making voltage A and one with Y phases making voltage B. Many PWM controllers offer this type of configuration natively, but often you will see more than 1 PWM controller being used. It all depends on the manufacturer.

So what makes a good VRM?

First of all, the VRM has to handle the load. This is very important when overclocking, because if the overclocked current draw of your CPU/GPU exceeds what the low-side MOSFETs can handle, the MOSFETs will burn up. The same happens if you exceed the ripple current of the capacitor bank: you end up with a burnt capacitor. The first is common with cheap AMD and X79 motherboards and reference-PCB Nvidia GPUs when pushing the voltage. I've only heard of the second once, and that was on the EVGA E-power board when heavily overvolting (1.7V) the GTX TITAN X. You can calculate current capability by multiplying the low-side MOSFET current rating by the number of low-side MOSFETs, but with the capacitors you just gotta trust the manufacturer (this is almost never an issue). For a 20% overvolt and 20% overclock you will want a VRM with at least 44% more current capability than the stock current draw (~TDP / stock voltage). So for an FX 8350 (stock 125W, 4GHz, 1.35V) at 5GHz and 1.525V you would want a VRM that can handle at least 131A. That's 10A more than the typical cheap 4-phase VRM and 31A more than the super cheap 4-phase VRM. Also, running VRMs close to spec is bad for them, so you'd want 10% headroom, or 144A.
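The FX 8350 numbers above can be reproduced like this; the formula is just the scaling rule from the text (stock current scaled by the voltage and clock increases), nothing more rigorous:

```python
# Stock current (~TDP / stock voltage) scaled by the voltage and clock
# increases, plus the 10% headroom recommended above.
def required_vrm_current(tdp_w, v_stock, v_oc, f_stock, f_oc, headroom=1.10):
    i_stock = tdp_w / v_stock
    i_oc = i_stock * (v_oc / v_stock) * (f_oc / f_stock)
    return i_oc, i_oc * headroom

# FX 8350: 125W, 1.35V, 4GHz stock, pushed to 5GHz at 1.525V:
i_oc, i_safe = required_vrm_current(125, 1.35, 1.525, 4.0, 5.0)
print(round(i_oc), round(i_safe))  # 131 144
```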

Now that your VRM doesn't explode when you overclock, you need a VRM with low voltage ripple. Voltage ripple basically means your CPU/GPU degrades at the rate of the voltage you set, while its maximum clock is tied to the minimum voltage that the ripple creates. So if you set 1.525V and have 25mV of ripple, you can only achieve an overclock as high as if you had a flat 1.5V, while the chip degrades at the rate that 1.525V causes. To get as little ripple as possible you want the highest number of PWM drive signals coming from the controller at the highest switching frequency possible. So ideally you want an 8-phase controller running in 8-phase mode with a 1MHz switching frequency. The difference this makes is usually minimal, but if you're overclocking something with a really high power draw it helps. I also suspect that the stock VRM of the R9 290X has really bad voltage ripple, but until I get more equipment I can't test that.
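With the example numbers from the text, the asymmetry looks like this: the chip ages at the voltage you set, but clocks only as high as the ripple minimum allows.

```python
v_set = 1.525   # the voltage you dial in (this drives degradation)
ripple = 0.025  # 25 mV of ripple dipping below the set point

v_effective = v_set - ripple
print(round(v_effective, 3))  # 1.5 -- the voltage that actually bounds the overclock
```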

Haswell and Haswell-E use the FIVR, so only the current-handling part applies, and you have to do the calculation differently. As of right now there is no motherboard that will fail from too much current if you're overclocking with air/water cooling. If you're on LN2 you know what you need.

Also I'd like to thank Silicon Lottery for sponsoring me and this blog. They bin Intel i7 CPUs, so if you want to buy a CPU that is guaranteed to not suck at overclocking, go check them out.


  1. Hi, thank you very much for sharing this.
    What about this example:

    GTX 970 current draw overclocked.
    at stock:
    150W/1.212V = 123.762A

    If you divide the voltage by the current you get resistance, so:
    1.212V/123.762A = 0.00979ohm
    1.263V/0.00979ohm = 129.009A

    Assuming that current draw scales linearly with frequency:
    1500/1050 = 1.43

    129.009A*1.43 = 184.48A

    And what are the conditions I should look for in the rating of the MOSFET?
    ~140°C on the "Maximum drain current vs case temperature" graph?

    1. Look at the graph (Fig. 10, Maximum Drain Current vs Case Temperature) and take the current that is available at 125°C, because you shouldn't run the VRM on your GPU above 110°C. If you keep the VRM nice and cool in the 70-90°C area then you can use the 100A rating on that graph.

  2. Thank you for the info.
    Why would they use double the mosfets on 2 phases out of the 4 on the GTX980:

    1. Many reference Nvidia cards have whack VRM designs that I can't analyze from just a photo. I'd need the part numbers and a couple of measurements to know for sure how that VRM is set up. However, from the photo I'm going with it being a 3-phase doubled into a 6 using an inductor-sharing scheme, to allow 4 inductors to be used by 6 phases.

  3. Hey, my R9 270 has 12 4925N MOSFETs from ON Semi, how worried should I be with this?

  4. What if you have 2 phases for the core, each one using doubling (22A low-side MOSFETs)? Shouldn't the max current output be like 44A when another phase is at high side? Still, the current pulled reported by HWiNFO is about 70A, and that seems like the max on my system, as I can't go above 1350MHz on the core with a higher frequency having an effect in 3D load (different story with oclmembench; there, going as high as the driver/BIOS allows, which is 1560MHz, actually makes a difference, likely because such a light benchmark doesn't hit the current limit). Is that a limit of the VRM/MOSFETs?
    It's an RX560 GPU.