HashKah | LaPointe wrote:
Quote:
Jonny-Guru-Gerow (Corsair Head of R&D)
Also a legendary PSU reviewer back in the 2000s and 2010s.
Link to Reddit Account here
Some relevant comments:
It's a misunderstanding on MODDIY's end. Clearly they're not a member of the PCI-SIG and haven't read through the spec, because the spec clearly states that the changes that differentiate 12VHPWR from 12V-2x6 are made only on the connectors on the GPU and the PSU (if applicable).
My best guess of this melted cable comes down to one of several QC issues. Bad crimp. Terminal not fully seated. That kind of thing. der8auer already pointed out the issue with using mixed metals, but I didn't see any galvanic corrosion on the terminal. Doesn't mean it's not there. There's really zero tolerance with this connector, so even a little bit of GC could potentially cause enough resistance to cause failure. Who knows? I don't have the cable in my hands.
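To get a feel for why even "a little bit" of extra contact resistance matters at these currents, here is a rough back-of-the-envelope sketch in Python. All values are illustrative assumptions (a ~8.3 A nominal per-pin current for a 600 W load and made-up contact resistances), not measurements of the cable in question:

```python
# Rough I^2*R estimate of the heat dissipated at a single degraded terminal.
# All numbers are illustrative assumptions, not measurements.

PIN_CURRENT_A = 8.3           # ~600 W / 12 V spread over six 12V pins
CLEAN_CONTACT_OHM = 0.005     # assumed healthy crimp/contact (5 mOhm)
CORRODED_CONTACT_OHM = 0.050  # assumed degraded contact (50 mOhm)

for label, r in [("clean", CLEAN_CONTACT_OHM), ("corroded", CORRODED_CONTACT_OHM)]:
    p = PIN_CURRENT_A ** 2 * r  # P = I^2 * R
    print(f"{label:>8}: {p:.2f} W dissipated at the contact point")

# clean:    ~0.34 W -- easily shed by the terminal
# corroded: ~3.44 W -- ten times the healthy case, concentrated in a few
# square millimetres of metal and plastic at the contact point
```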
------
The MODDIY was not a thicker gauge than the Nvidia. They're both 16g. The MODDIY cable just had thicker insulation.
------
That's wrong. Then again, that video is full of wrong, sadly. Not trying to be like Steve and beat up on people, but if the wire was moving 22A and sitting at 130°C, it would have melted instantly.
16g is the spec and the 12VHPWR connector only supports 16g wire. In fact, the reason why some mod shops sell 17g wire is because some people have problems putting paracord sleeve over a 16g wire and getting a good crimp. That extra fraction of a millimeter of clearance going from 16g to 17g is enough to let the sleeve fit better. But that's not spec. Paracord sleeves aren't spec. The spec is 16g wire. PERIOD.
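For anyone wondering what the 16g-vs-17g difference actually amounts to, here is a small sketch using the standard AWG diameter formula and the textbook copper resistivity. The gauge math is standard; nothing here is specific to any vendor's cable:

```python
import math

RHO_CU = 1.724e-8  # copper resistivity in ohm*m (textbook value at 20 C)

def awg_diameter_mm(n: int) -> float:
    """Standard AWG formula: d = 0.127 mm * 92^((36 - n) / 39)."""
    return 0.127 * 92 ** ((36 - n) / 39)

def resistance_mohm_per_m(n: int) -> float:
    d_m = awg_diameter_mm(n) / 1000.0
    area = math.pi * (d_m / 2) ** 2
    return RHO_CU / area * 1000.0  # milliohms per metre

for n in (16, 17):
    print(f"AWG {n}: {awg_diameter_mm(n):.2f} mm dia, "
          f"{resistance_mohm_per_m(n):.1f} mOhm/m")

# AWG 16: ~1.29 mm dia, ~13.2 mOhm/m
# AWG 17: ~1.15 mm dia, ~16.6 mOhm/m
# The ~0.14 mm of extra clearance is what makes sleeving easier; the cost
# is roughly 26% more resistance per metre, which is why 17g is out of spec.
```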
------
If it was that hot, he wouldn't be able to hold it in his hand. I don't know what his IR camera was measuring, but as Aris pointed out.... that wire would've melted. I've melted wires with a lot less current than that.
Also, the fact that the temperature at the PSU is hotter than the GPU is completely backwards from everything I've ever tested. And I've tested a lot. Right now I have a 5090 running FurMark 2 for an hour so far and I have 46.5°C at the PSU and 64.2°C at the GPU in a 30°C room. The card is using 575.7W on average.
der8auer is smart. He'll figure things out sooner rather than later. I just think his video was too quick and dirty. Proper testing would be to move those connectors around the PSU interface. Unplug and replug and try again. Try another cable. At the very least, take all measurements at least twice. He's got everyone in an uproar and it's really all for nothing. Not saying there is no problem. I personally don't *like* the connector, but we don't have enough information right now and shouldn't be basing assumptions on some third party cable from some Hong Kong outfit.
------
ABSOLUTELY. There is no argument that there is going to be different resistance across different pins. But no wire/terminal should get hotter than 105°C. We're CLEARLY seeing a problem where terminals are either not properly crimped, not fully inserted, corroded, or what have you, and the power is going to the path of less resistance. But this is a design problem. I can't fix this. :-( (Well... I can, maybe, but it requires overcomplicating the cable and breaking the spec.)
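A minimal sketch of that path-of-less-resistance effect, assuming six parallel 12V pins and made-up path resistances; the ~9.5 A per-pin rating used as a threshold is the figure commonly cited for this connector family:

```python
# Current division across six parallel 12V pins when one path degrades.
# All resistances are illustrative assumptions, not measured values.

TOTAL_CURRENT_A = 50.0   # ~600 W / 12 V
GOOD_PATH_OHM = 0.010    # assumed healthy pin + wire path (10 mOhm)
BAD_PATH_OHM = 0.100     # assumed degraded path (100 mOhm)

paths = [GOOD_PATH_OHM] * 5 + [BAD_PATH_OHM]
total_conductance = sum(1 / r for r in paths)
v_drop = TOTAL_CURRENT_A / total_conductance  # same drop across parallel paths

for i, r in enumerate(paths):
    amps = v_drop / r
    flag = "  <-- above the ~9.5 A per-pin figure" if amps > 9.5 else ""
    print(f"pin {i + 1}: {amps:.2f} A{flag}")

# The five healthy pins end up near 9.8 A each while the degraded pin
# carries ~1 A: the load quietly shifts onto the remaining pins.
```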
------
They provide this if your PSU is not capable of more than 150W per 8-pin. If used with a PSU that CAN provide more than 150W per 8-pin, it just splits the load up across the four connections.
There is no "6+2-pin to 12VHPWR". The cable is a 2x4-pin Type 4 or 5 to 12V-2x6. There is no disadvantage to using this, as the 12VHPWR has 6 12V conductors, 6 grounds, and two sense pins that need to be grounded. A 2x Type 4 connection gives you up to 8x 12V and 8x ground. So this is a non-issue.
12VHPWR to 12VHPWR is fine too. Just like the 2x Type 4 8-pin or 2x Type 5 8-pin, you have a one-to-one connection between the PSU and the GPU. That's why I don't like calling these cables "adapters". If it's one-to-one, it's not an adapter. It's just a "cable".
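To put numbers on the two points above, a quick sketch of the nominal per-conductor currents, assuming a perfectly even split (which, as discussed above, real connectors only approximate):

```python
# Nominal per-conductor current for a 600 W load, assuming a perfectly
# even split across pins (real connectors only approximate this).

POWER_W, VOLTS = 600.0, 12.0
total_amps = POWER_W / VOLTS  # 50 A

# 12V-2x6 / 12VHPWR side: 6x 12V conductors
print(f"12V-2x6 side : {total_amps / 6:.2f} A per 12V conductor")  # ~8.33 A

# 2x Type 4/5 8-pin side: up to 8x 12V conductors per the comment above
print(f"2x 8-pin side: {total_amps / 8:.2f} A per 12V conductor")  # ~6.25 A

# The 4x 8-pin adapter case: the same 600 W split across four connectors
# lands exactly on the classic 150 W-per-8-pin budget.
print(f"4x 8-pin     : {POWER_W / 4:.0f} W per connector")
```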
|
Source : https://www.reddit.com/r/nvidia/com [...] egathread/
Darn, real shame I missed today's drop.
|
In any case, only time will give us answers to these questions. In three or four months, when 5090s are widely available and some absolute madmen have crammed them into mini-ITX cases or paired them with crappy undersized power supplies, we'll see whether the melted cables and burned-out PSUs start piling up.
As it stands, what do we have? Ten thousand 5090s at most in end users' builds worldwide (and I'm being generous), so of course a first problem case raises legitimate questions about the 600W oven Nvidia has cooked up for us.
Being personally of a cautious nature, I switch my PC off if I have to leave the house, I don't run it under load overnight or while I'm away, and I leave it idling when I'm home but not playing.
Now, if some people want to mine crypto 24/7 on their 5090 FE, I'd say they like taking risks, but I'd say the same about a 4090 given the power-draw levels these cards reach.
Even though I don't think we should panic, keep in mind that industrial blunders on mainstream products number in the dozens; the Samsung Galaxy Note case was mentioned above, a device that was banned from flights and recalled en masse, i.e. a mass-market product that could cause serious harm.
If a product like that, meant to sell in the hundreds of thousands of units, could ship with such flaws, what should we expect of a GPU sold on a much smaller scale, with far lower stakes in case of a recall? I'm repeating myself, but if the watchword had been "safety", the 5090 would carry two or even three connectors, with extra safeguards like on the Astral model, and it wouldn't be a 2-slot card but a 3.5-slot one to guarantee adequate heat dissipation.
As it stands it's an impressive feat of engineering, but clearly this is what you call flirting with the limits and playing with fire. Time will tell whether Nvidia made a huge blunder or took a calculated, measured risk...