Author  Subject:

[Topic Unique] GT300 // nvidia's new 40nm battle beast

n°7068266
White Sh4dow
GHz-O-Meter !
Posted on 10-08-2009 at 16:03:14
 

Continued from the previous message:
A figure of speech :o
You see where I'm going with this... (no?)

Message quoted 1 time
Message edited by White Sh4dow on 10-08-2009 at 16:03:51

n°7068274
mikestewart
Air Roll Spinnin'
Posted on 10-08-2009 at 16:09:06
 

For the $230, marllt2 was talking about the GPU alone.
 

n°7068277
White Sh4dow
GHz-O-Meter !
Posted on 10-08-2009 at 16:11:16
 

And on top of that come the PCB and its layout: the capacitors, the power stage, the heatsink, the connectors, etc., and none of that is free :whistle:

n°7068295
mikestewart
Air Roll Spinnin'
Posted on 10-08-2009 at 16:25:52
 

Neither is the memory. ;)

n°7068302
White Sh4dow
GHz-O-Meter !
Posted on 10-08-2009 at 16:31:55
 

Not to mention the GDDR5 :sweat: but I think GDDR5 prices have dropped a lot since its debut on the 4870

n°7068423
Activation
21:9 kill Surround Gaming
Posted on 10-08-2009 at 18:15:00
 

White Sh4dow wrote:

Not to mention the GDDR5 :sweat: but I think GDDR5 prices have dropped a lot since its debut on the 4870


 
 
meh, Qimonda isn't on the scene for this one any more
 
and given that the two rivals, nvidia and ati, are now both going to want GDDR5 across their whole model range,
it reeks of an upcoming GDDR5 shortage; in my opinion they're already stockpiling GDDR5 chips before they can even put them on the cards' PCBs

n°7068494
Gein
Posted on 10-08-2009 at 18:52:42
 
n°7068534
dragonlore
Posted on 10-08-2009 at 19:19:00
 

White Sh4dow wrote:

A figure of speech :o
You see where I'm going with this... (no?)


yes, but I know nvidia will still make good margins on it, no need to worry, and without necessarily charging more than for the gt280

n°7068549
0b1
There's good in him
Posted on 10-08-2009 at 19:30:36
 
n°7068564
Activation
21:9 kill Surround Gaming
Posted on 10-08-2009 at 19:44:27
 


 
 
yeah, second half of 2010
 
they'll be ready for the HD6870 and GTX480 :o


n°7068754
kaiser52
Posted on 10-08-2009 at 21:47:43
 

http://www.clubic.com/actualite-29 [...] re-i3.html
 

Quote:

NVIDIA announces in a press release that its multi-GPU rendering technology, SLI, is now being licensed for upcoming Intel platforms based on the P55 chipset. Remember that for the Intel X58 chipset, NVIDIA had to open up its SLI technology in order for it to be offered on the first Core i7 machines...


 
Cool, and here I was wanting to test SLI on GT300 XD


---------------
Benchmarks du peuple - Crysis War - Vide grenier ! - nVIDIA Tegra
n°7068771
0b1
There's good in him
Posted on 10-08-2009 at 21:55:21
 

kaiser52 wrote:

http://www.clubic.com/actualite-29 [...] re-i3.html
 

Quote:

NVIDIA announces in a press release that its multi-GPU rendering technology, SLI, is now being licensed for upcoming Intel platforms based on the P55 chipset. Remember that for the Intel X58 chipset, NVIDIA had to open up its SLI technology in order for it to be offered on the first Core i7 machines...


 
Cool, and here I was wanting to test Tri-SLI on GT300 XD


:o

n°7068797
kaiser52
Posted on 10-08-2009 at 22:11:03
 

My bad!
Me, I couldn't care less about the planet, they're all a pain anyway! :o


---------------
Benchmarks du peuple - Crysis War - Vide grenier ! - nVIDIA Tegra
n°7068884
Gigathlon
Quad-neurones natif
Posted on 10-08-2009 at 23:20:48
 

dragonlore wrote:

Usually the manufacturers make huge margins on the high-end cards


If you mean the R9700 era, or even the GF4, sure, except that since then it's the mainstream parts that cost about as much to build, and they already sell for far less...
 
A 4770 costs roughly as much to manufacture as a 9700pro did in its day, except it sells for a third of the price.
 
Since the GT200, nVidia has been putting its partners in an intolerable position, with ridiculous margins.

n°7068989
White Sh4dow
GHz-O-Meter !
Posted on 11-08-2009 at 00:29:41
 

In case anyone's interested, I found a new article on a Turkish site: it's here
 
Personally I didn't understand a thing (Google-grade translation...) but maybe you're better placed to decipher it than I am :jap:
 
EDIT:
 

marllt2 wrote:


AMD paragraph: blah blah, and they repeat the Radeon 7 series name.
 
Intel paragraph: blah blah, and not available to the public before mid-2010.
 
nVidia paragraph: the GPU is finished, and nVidia is working on the A0. This GPU will be the biggest architectural change since NV40 and G80, with MIMD units instead of SIMD. Launch, according to them, in November, despite cost problems and very limited quantities. With "volume" not before 2010.
 
 
As for the blah blah, it's understandable, but they're only repeating what has already been said elsewhere.



Message edited by White Sh4dow on 11-08-2009 at 00:59:30
n°7069072
marllt2
Posted on 11-08-2009 at 02:13:54
 
n°7069148
olioops
Si jeune et déjà mabuse.
Posted on 11-08-2009 at 08:04:17
 

Great :D 
 
is the weather nice over there? [:reddie]


Message edited by olioops on 11-08-2009 at 08:06:35

---------------
Ne pas confondre gisement épuisé et mine de rien !
n°7069422
bjone
Insert booze to continue
Posted on 11-08-2009 at 13:16:24
 

Activation wrote:

your crt is all rotten :O phosphor is no good... don't eat it


Very elegant way out of the debate :D

n°7070473
marllt2
Posted on 12-08-2009 at 03:36:29
 

Dally interview: http://www.pcgameshardware.com/aid [...] abee/News/
 
Raytracing:
 

Quote:

PCGH: Intel made a lot of fuss about ray tracing in the last 18 months or so. Do you think that's going to be a major part of computer and especially gaming graphics in the foreseeable future, until 2015 maybe?
 
Bill Dally: It's interesting that they've made a big fuss about it while we've had a demonstration of real-time ray tracing at Siggraph last year. It's one thing making a fuss, it's another thing demonstrating it running real-time on GPUs.  
 
But to answer that, what I see as most likely for game graphics going forward is hybrid graphics. Where you start out by rasterizing the scene and then you make a decision at each fragment, whether that fragment can be rendered just with a shader calculating using local information, or if it's a specular surface, or if it's a transparent surface, or if there is a silhouette edge and soft shadows are important. Then you may need to cast rays to compute a very accurate and photo-realistic color for that point. So I think it's gonna be a hybrid version where some pixels are rendered conventionally and some pixels involve ray tracing, and that gives us the most efficient use of our computational resources - using ray tracing where it does the most good.
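
To make the per-fragment decision above concrete, here is a minimal sketch in Python, with invented function names and material flags (this is not an NVIDIA API, just an illustration of the hybrid idea Dally describes): rasterize first, then cast rays only for the fragments that really need them.

Code :

# Minimal sketch of the hybrid pipeline described above: rasterize the scene,
# then decide per fragment whether plain local shading is enough or rays are needed.
# All function names and flags are invented for illustration, not an NVIDIA API.

def shade_hybrid(fragments, cast_ray, shade_local):
    """fragments: dicts with material flags produced by the rasterizer."""
    image = []
    for frag in fragments:
        needs_rays = (frag.get("specular") or frag.get("transparent")
                      or (frag.get("silhouette") and frag.get("soft_shadows")))
        if needs_rays:
            # Expensive path: trace secondary rays for an accurate colour.
            image.append(cast_ray(frag))
        else:
            # Cheap path: conventional shading from local information only.
            image.append(shade_local(frag))
    return image

# Toy usage with stand-in shading functions.
frags = [{"specular": False}, {"specular": True}, {"transparent": True}]
print(shade_hybrid(frags,
                   cast_ray=lambda f: "ray-traced",
                   shade_local=lambda f: "rasterized"))
# ['rasterized', 'ray-traced', 'ray-traced']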


 
Larrabee:
 

Quote:

PCGH: While we're at it. Intel also made a big fuss about Larrabee.
Bill Dally: M-hm.
 
PCGH: They are aiming for a mostly programmable architecture there. They state that they have only 10 percent dedicated to graphics of the whole die, the rest being completely programmable according to Intel. And still they want to compete in the high-end with your GPUs. Do you think that's feasible right now?
 
Bill Dally: First of all, right now, Larrabee is a bunch of view-graphs. So, until they actually have a product, it's difficult to say how good it is or what it does. You have to be careful not to read too much into view-graphs - it's easy to be perfect when all you have to do is be a view-graph. It's much harder when you have to deliver a product that actually works.  
 
But to the question of the degree of fixed function hardware: I think it puts them at a very serious disadvantage. Our understanding of Larrabee, which is based on their paper at Siggraph last summer and the two presentations at the Game Developers Conference in April, is that they have fixed function hardware for texture filtering, but they do not have any fixed function hardware either for rasterization or compositing and I think that that puts them at a very serious disadvantage. Because for those parts of the graphics pipeline they're gonna have to pay 20 times or more energy than we will for those computations. And so, while we also have the option of doing rasterization in software if we want - we can write a kernel for that running on our Streaming Multiprocessors - we also have the option of using our rasterizer to do it and do it far more efficiently. So I think it puts them at a very big disadvantage power-wise to not have fixed function hardware for these critical functions. Because everybody in a particular envelope is dominated by their power consumption. It means that at a given power value they're going to deliver much lower performance graphics.  
 
I think also that the fact that they've adopted an x86-instruction set puts them at a disadvantage. It's a complex instruction set, it's got instruction prefixes, it only has eight registers and while they claim that this gives them code compatibility, it gives them code compatibility only if they want to run one core without the SIMD extension. To use the 32 cores or use the 16-wide SIMD extension , they have to write a parallel program, so they have to start over again anyway. And they might as well have started over with a clean instruction set and not carry the area and power cost of interpreting a very complicated instruction set - that puts them at a disadvantage as well.  
 
So while we're very concerned about Larrabee, Intel is a very capable company, and you always worry, when a very capable company starts eating your lunch, we're not too worried about Larrabee at least based on what they disclosed so far.
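
As an aside, "doing rasterization in software" in the answer above just means computing triangle coverage with ordinary arithmetic instead of fixed-function hardware. A toy edge-function rasterizer in Python, purely illustrative and nothing to do with NVIDIA's actual kernels:

Code :

# Toy software rasterizer: per-pixel edge-function coverage test for one triangle.
# Real GPU kernels are tiled, parallel and fixed-point; this is the naive scalar version.

def edge(ax, ay, bx, by, px, py):
    # Signed area of (a, b, p): the sign tells which side of edge ab the point p is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at the pixel centre
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # inside if all edge functions agree in sign (either winding)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

print(len(rasterize([(1, 1), (14, 2), (7, 12)], 16, 16)), "pixels covered")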


Quote:

PCGH: Will it be a major contributor or limiting factor: the driver team? Intel's integrated graphics do not have a very good reputation for their drivers, and it seems that both AMD and Nvidia are putting real loads of effort into their drivers.
 
Bill Dally: I think that you have to deliver a total solution. But if any part of the solution is not competitive, then the whole solution is not competitive. And our view, based on what's been disclosed [until] today, is that the hardware itself is not going to be competitive, and if they have a poor driver as well, that only makes it worse. But even a good driver won't save the hardware.


 
 
 
Dally and the GPUs:
 

Quote:

PCGH: Recently, you have introduced new mobile Geforce parts which support DX10.1 compliance. Did your work already have an influence on those parts or were they completed already when you joined Nvidia?
 
Bill Dally: That was completely before my time. Those were already in the pipe. I'm tending to look a bit further out, so...


Quote:

PCGH: When do you think we're going to see products on the shelves, that were influenced by your work?
 
Bill Dally: I've had small influences on some of the products that are going to be coming out towards the end of this year but those products were largely defined and it was just little tweaks toward the end. It's really gonna be the products in about the 2011 time frame that I will be involved in from the earlier stages.


GT300 ? [:brainbugs]
 
 
DX11:
 

Quote:

PCGH: With Microsoft's Windows 7, and thus DirectX 11, expected on the shelves from October 22nd, do you expect a large impact on graphics card sales from that?
 
Bill Dally: I actually don't know what drives the sales that much, but I would hope that people appreciate the GPU a lot more with Windows 7 because of DirectX Compute and the fact that the operating system both makes use of the GPU itself and also exposes it in its APIs for applications to use.

Quote:

PCGH: Independently whether it's a DX11 or a DX10 / 10.1 GPU?
 
Bill Dally: If it supports DirectX Compute, then it doesn't need to be DX 11.

Quote:

PCGH: No, but there's a different level of DX Compute, if I'm not mistaken. It's the DX11 Compute, then the downlevel shader called DX Compute 4.0 and 4.1?
 
Bill Dally: No, you're exceeding my knowledge a bit right now.


 
 :lol:  
 
There we have the marketing slogan that nVidia is going to plaster everywhere when AMD's DX11 GPUs launch.
 
 
 
AMD's DX11 GPUs?:
 

Quote:

PCGH: Speaking of DirectX 11: Were you personally surprised when AMD was showing DX11 hardware at Computex?
 
Bill Dally: No, not particularly. We had had some advanced word of that.


 :??:

Message quoted 2 times
Message edited by marllt2 on 12-08-2009 at 04:45:27
n°7070475
marllt2
Posted on 12-08-2009 at 04:05:13
 

Perf/W and perf/mm²:
 

Quote:

PCGH: Ok, going back to the new mobile parts: They have a very much improved GFLOPS/watt ratio, almost double over the previous generation. Is this the way to go for the future? To squeeze out the maximum number of FLOPS per watt?
 
Bill Dally: We consider power-efficiency a first-class problem. And it's driven starting from our mobile offerings, but it's actually important across the product line. I mean at every power point - even at the 225-watt top-of-the-line GPUs, we absolutely have to deliver as much performance as we can in that power envelope. So a lot of the techniques that we use in our mobile devices, things like very aggressive clock-gating and power-gating, are being used across the product line.

Quote:

PCGH: Do you think that DX Compute is going to facilitate a more rapid increase of GFLOPS per square millimeter than was seen in the past?  
 
Bill Dally: We try to deliver as many GFLOPS per square millimeter as we can, regardless of how people program it. So I think DirectX Compute will enable more applications, both within Microsoft's software and in third-party Windows software, to use the power of the GPU. We're gonna deliver the absolute best performance per square millimeter regardless of how people program it. That's something that we constantly, in our engineering effort, are striving to improve.


 
When it comes to GFlops/mm², isn't Dally forgetting AMD's GPUs?  
 
 
Bandwidth according to Dally:
 

Quote:

PCGH: How fast is the ALU/FLOP-ratio evolving? Is the move towards more FLOPS accelerating in the future?
 
Bill Dally: The texturing and FLOPS actually tend to hold a pretty constant ratio, and that's driven by what the shaders we consider important are using. We're constantly benchmarking against different developers' shaders and see what our performance bottlenecks are. If we're gonna be texture limited on our next generation, we pop another texture unit down. Our architecture is very modular and that makes it easy to re-balance.  
 
The ratio of FLOPS to bandwidth, off-chip bandwidth is increasing. This is, I think, driven by two things. One is fortunately the shaders are becoming more complex. That's what they want anyway. The other is, it's just much less expensive to provide FLOPS than it is [to provide] bandwidth. So you tend to provide more of the thing which is less expensive and then try to completely saturate the critical expensive resource which is the memory bandwidth.

Quote:

PCGH: Do you think a large leap in available bandwidth would be necessary for next-generation hardware - like for DirectX 11 with its focus on random R/W (scatter, gather operations etc.), which should benefit greatly from more, or at least more granular, memory access?
 
Bill Dally: Almost everything would benefit from more bandwidth and being able to do it at a finer grain. But I don't think that there's gonna be any large jumps. I think we're gonna evolve our memory bandwidth as the GDDR memory components evolve and track that increase.


 
Then again, short of moving to a 640-bit bus...
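
For what it's worth, the FLOPS-to-bandwidth ratio he is talking about is just peak arithmetic divided by peak off-chip bandwidth. The figures below are round, made-up examples (not real GT300 or Cypress specs), only to show how the ratio drifts upward when FLOPS grow faster than memory speed:

Code :

# FLOPs available per byte fetched from DRAM: peak GFLOPS / peak GB/s.
# The two GPUs below are hypothetical round numbers, not real parts.

def flops_per_byte(gflops, bus_bits, data_rate_gtps):
    bandwidth_gbps = bus_bits / 8 * data_rate_gtps     # GB/s
    return gflops / bandwidth_gbps

# "older" hypothetical GPU: 500 GFLOPS, 256-bit bus, 2 GT/s memory -> 64 GB/s
print(round(flops_per_byte(500, 256, 2.0), 1))    # 7.8 FLOPs per byte
# "newer" hypothetical GPU: 2000 GFLOPS, 384-bit bus, 4 GT/s memory -> 192 GB/s
print(round(flops_per_byte(2000, 384, 4.0), 1))   # 10.4 FLOPs per byte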
 
 
The mammoth strategy, and 500 mm² GPUs:
 

Quote:

PCGH: Another topic: In contrast to your competitor, Nvidia's GPUs, at least the high-end ones, have in the last couple of years always been very large, physically. AMD is going the route of having a medium-sized die and scaling it with X2 configurations for high-end needs; Nvidia is producing very large GPUs. Is that a trend which could change in the future, or don't you think you have reached the limits of integration in single chips, the "Big Blocks of Graphics"?
 
Bill Dally: We're trying to always deliver the best performance and value to our customers and we're gonna continue doing that. And for any given generation there's an economic decision that has to be made about how large to make the die. Our architecture is very scalable, so the larger we make the die, the more performance we deliver. We also deliver duplex configurations, as in the GTX 295, and so if we build a very large die and then also put two of them together we can deliver even more performance. And so for each generation we're gonna do the calculation to decide what is the most economic way of delivering the best performance for our customers.


 
Dally has apparently forgotten that nV left AMD alone on the ultra-high-end segment for 6 months. [:yamusha]  
 
And he doesn't seem to question the GT200(b)'s perf/mm², since the "most economic way of delivering the best performance for our customers" was over on the red team's side.
 
As for "Our architecture is very scalable, so the larger we make the die, the more performance we deliver."  [:brainbugs]
 
 
After the GT300:
 

Quote:

PCGH: Starting with G80, we've seen the integration of shared memory, scratch pads and the like, for data sharing between individual SIMDs. This was obviously done specifically for GPGPU and stuff. Are we going to see more of that non-logic area in future GPUs?
 
Bill Dally: First of all, we like to call it GPU computing, not GPGPU. GPGPU refers to using shaders to do computing, while GPU computing uses the native compute capability of the hardware. To answer that particular question, the answer is yes, although it's gonna be over many generations of future hardware. We see improving the on-chip memory system as a critical technology [in order] to enable more performance where the off-chip bandwidth is scaling at a rate that's slower than the amount of floating point that we can put on the die. So we need to do more of things like the shared memory that's exposed to the multiple threads within a cooperative thread array.
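
The "shared memory exposed to the multiple threads within a cooperative thread array" point boils down to staging a tile of data on chip and reusing it many times before going back to DRAM. A rough Python sketch that only counts off-chip reads for a tiled matrix multiply (the size and tile width are arbitrary illustration, not anything Dally specified):

Code :

# Counts off-chip element reads for C = A*B (n x n) when each tile is loaded once
# into an "on-chip" buffer and reused, versus the naive one-element-per-multiply case.
# Illustrates why bigger/better on-chip memories save DRAM bandwidth.

def offchip_reads(n, tile):
    naive = 2 * n ** 3                      # every multiply-add refetches one A and one B element
    tiles_per_dim = n // tile
    # each of the tiles_per_dim^2 output tiles streams tiles_per_dim tiles of A
    # and of B through the on-chip buffer, tile*tile elements each
    tiled = tiles_per_dim ** 2 * tiles_per_dim * 2 * tile * tile
    return naive, tiled

naive, tiled = offchip_reads(n=1024, tile=32)
print(naive // tiled, "x less off-chip traffic with 32-wide tiles")   # 32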

Quote:

PCGH: What do you think: what is the area where current GPUs, throughput processors or however you may call them, are lacking most? Which is the area which should be improved first?
 
Bill Dally: Well, they're actually pretty good, so it's hard to find fault with them. But there's always room for improvement. I think it's not about what's lacking, but about opportunities to make them even better. The areas where there are opportunities to make them even better are mostly in the memory system. I think that we're increasingly becoming limited by memory bandwidth on both the graphics and the compute side. And I think there's an opportunity, from the hundreds of processors we're at today to the thousands of cores we're gonna be at in the near future, to build more robust memory hierarchies on chip to make better use of the off-chip bandwidth.


 

Quote:

PCGH: Do you envision future GPUs more like the current approach, where you have one large scheduling block and then the work gets distributed to each cluster, or is it going to be a more independent collaboration of work units where each rendering cluster, each SIMD unit, gets more and more independent?
 
Bill Dally: In the immediate future, I think things are gonna wind up very much the way they are today. I'm viewing it as a flat set of resources where the work gets spread uniformly across them. I think ultimately, going forward, we are going to have a need to build a more hierarchical structure into our GPUs, both in the memory system and in how work gets scheduled.

Quote:

PCGH: So that's a more distributed approach then for computing in graphics or computing in whatever the operating system uses the GPU for.  
 
Bill Dally: Yeah.


Message edited by marllt2 on 12-08-2009 at 04:50:17
n°7070723
ilo34
Posted on 12-08-2009 at 11:53:19
 

sorry, but could someone do a translation of the broad strokes, because English and me, we just don't mix.....

n°7070739
mikestewart
Air Roll Spinnin'
Posted on 12-08-2009 at 12:05:42
 

marllt2 wrote:

Dally and the GPUs:
 

Quote:

PCGH: Recently, you have introduced new mobile Geforce parts which support DX10.1 compliance. Did your work already have an influence on those parts or were they completed already when you joined Nvidia?
 
Bill Dally: That was completely before my time. Those were already in the pipe. I'm tending to look a bit further out, so...


Quote:

PCGH: When do you think we're going to see products on the shelves, that were influenced by your work?
 
Bill Dally: I've had small influences on some of the products that are going to be coming out towards the end of this year but those products were largely defined and it was just little tweaks toward the end. It's really gonna be the products in about the 2011 time frame that I will be involved in from the earlier stages.


GT300 ? [:brainbugs]


 
So either:
 
1) Billy Dally is talking nonsense. About what happened at Nvidia before he arrived, fine, but here...
2) We've missed an episode. We know about the 40nm DX10.1 chips (GT215/216/218, the same ones as in the laptops) and Dally didn't work on them. We also know about the GT300, but apart from all of those, nothing. Could a "Dally-esque" chip be about to show the tip of its nose? :whistle:  
3) He really is talking about the GT300.

n°7073300
dragonlore
Posted on 14-08-2009 at 12:55:44
 

a bit of off-topic:
a 192-bit bus for the upcoming RV870. If this keeps up we'll be entitled to 32-bit buses and 10 GHz GDDR7 in 5 years.
http://www.pcinpact.com/actu/news_multi/52514.htm
 
progress on one side, regression on the other. Or else it's that making a wider bus (and 256-bit buses have existed since the 9700pro, which dates from 2002) costs more than buying very high frequency memory.

n°7073302
Profil supprimé
Posted on 14-08-2009 at 12:57:19
 

Latency drops a lot when you raise the frequency and narrow the bus.
Anyway, it's not a regression, it's an adaptation to budget/target constraints.

n°7073365
dragonlore
Posted on 14-08-2009 at 13:43:20
 

 
budget/target constraints and regression aren't mutually exclusive.

n°7073370
radeon4ever
Chasseur de specs
Posted on 14-08-2009 at 13:46:40
 

dragonlore wrote:

a bit of off-topic:
a 192-bit bus for the upcoming RV870. If this keeps up we'll be entitled to 32-bit buses and 10 GHz GDDR7 in 5 years.
http://www.pcinpact.com/actu/news_multi/52514.htm
 
progress on one side, regression on the other. Or else it's that making a wider bus (and 256-bit buses have existed since the 9700pro, which dates from 2002) costs more than buying very high frequency memory.


it's not a regression, it's simply that ATI doesn't want to produce 2-billion-transistor GPUs like NV does, because of the complexity of the architecture. The bandwidth will be the same with a 256-bit bus at 4GHz as with a 512-bit bus at 2GHz, but without the complexity of the 512-bit width
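
Quick check of that claim with the usual formula, peak bandwidth = (bus width / 8) × effective data rate, taking the "4 GHz" and "2 GHz" figures from the post as effective transfer rates:

Code :

# Peak memory bandwidth in GB/s from bus width (bits) and effective data rate (GT/s).
def bandwidth_gbps(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbps(256, 4.0))   # 128.0 GB/s  (256-bit "at 4 GHz", e.g. GDDR5)
print(bandwidth_gbps(512, 2.0))   # 128.0 GB/s  (512-bit "at 2 GHz", e.g. GDDR3)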


---------------
(VDS) XPS 17 9730 - i9 - RTX 4080 - 64Go - 1To - 17' 4K
n°7073381
kaiser52
Posted on 14-08-2009 at 13:51:33
 

radeon4ever wrote:

it's not a regression, it's simply that ATI doesn't want to produce 2-billion-transistor GPUs like NV does, because of the complexity of the architecture. The bandwidth will be the same with a 256-bit bus at 4GHz as with a 512-bit bus at 2GHz, but without the complexity of the 512-bit width


 
Isn't there some memory wasted to hide the latency?


---------------
Benchmarks du peuple - Crysis War - Vide grenier ! - nVIDIA Tegra
n°7073389
radeon4ever
Chasseur de specs
Posted on 14-08-2009 at 13:53:42
 

that too ;) but hey, if ATI finds it worthwhile, why not ;) we'll have to see what it gives in practice


---------------
(VDS) XPS 17 9730 - i9 - RTX 4080 - 64Go - 1To - 17' 4K
n°7073390
Fssabbagh
Satsui no Hado
Posted on 14-08-2009 at 13:54:01
 

there's already waste at ati in the stock cooler then :whistle:


---------------
Messatsu !
n°7073400
kaiser52
Posted on 14-08-2009 at 13:57:24
 

radeon4ever wrote:

that too ;) but hey, if ATI finds it worthwhile, why not ;) we'll have to see what it gives in practice


 
Actually it was to highlight the fact that 4GHz on 256 bits is better, since less memory is wasted given there's less latency ^^


---------------
Benchmarks du peuple - Crysis War - Vide grenier ! - nVIDIA Tegra
n°7073402
radeon4ever
Chasseur de specs
Posted on 14-08-2009 at 13:58:16
 

kaiser52 wrote:

 
Actually it was to highlight the fact that 4GHz on 256 bits is better, since less memory is wasted given there's less latency ^^


exactly, but the more the memory evolves, the more the latency goes up, so in the end we wind up at pretty much the same point :o


---------------
(VDS) XPS 17 9730 - i9 - RTX 4080 - 64Go - 1To - 17' 4K
n°7073403
radeon4ever
Chasseur de specs
Posted on 14-08-2009 at 13:58:36
 

Fssabbagh wrote:

there's already waste at ati in the stock cooler then :whistle:


not on my ex PowerColor HD 4870s :o


---------------
(VDS) XPS 17 9730 - i9 - RTX 4080 - 64Go - 1To - 17' 4K
n°7073533
White Sh4dow
GHz-O-Meter !
Posted on 14-08-2009 at 15:17:21
 

http://www.semiaccurate.com/2009/0 [...] nvio-chip/
 
Feel free to comment :)

n°7073539
Gein
Posted on 14-08-2009 at 15:21:48
 

a 530mm² die :ouch:  
And the video decoding block is no longer in the gpu but on a separate chip :pt1cable:  

n°7073559
sharshar
Slava Ukraini
Posted on 14-08-2009 at 15:32:10
 

if I've understood correctly, because of yields the gt 300 costs twice as much to produce as cypress, and the nvio is the thing for encoding that's going to make it cost even more :??:  :sweat:

n°7073567
radeon4ever
Chasseur de specs
Posted on 14-08-2009 at 15:34:52
 

well that's good news :d it's going to be good for AMD again, since they produce less complex and therefore cheaper gpus.  
 
those chameleons still haven't understood that this kind of gpu is pointless :d


---------------
(VDS) XPS 17 9730 - i9 - RTX 4080 - 64Go - 1To - 17' 4K
n°7073628
Corleone_68
Posted on 14-08-2009 at 15:59:25
 

Could a good translator do a recap please? I didn't get the details..


---------------
Phanteks Enthoo Primo / Seasonic P-860 / Asus Strix B550 E/ Ryzen 5600X WC / 2*16 F4 3600C16 Gskill Ripjaws @3733 / 6900XT Red Devil / Crucial C300 128 Go / Sam 850 Evo 500 Go / Velociraptor 300 Go / Caviar Red 4 To / Caviar Black 1 To
n°7073634
mikestewart
Air Roll Spinnin'
Posted on 14-08-2009 at 16:04:01
 

sharshar wrote:

if I've understood correctly, because of yields the gt 300 costs twice as much to produce as cypress, and the nvio is the thing for encoding that's going to make it cost even more :??:  :sweat:

 

No, it's a chip that offloads some transistors outside the GPU. It was introduced with the G80, and the GT200 had one too.

 

So, according to Charlie: GT300 ($52) + the Nvio ($10) = $64, versus $34 for Cypress (+88%).
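
Small aside: taken exactly as quoted, those estimates don't quite line up ($52 + $10 is $62, not $64). A quick check on the figures from the article:

Code :

# Sanity check on the quoted per-die estimates, taken at face value.
gt300, nvio, cypress = 52, 10, 34
total = gt300 + nvio
print(total, round((total / cypress - 1) * 100))   # 62 82  -> +82%
print(round((64 / cypress - 1) * 100))             # 88     -> +88% would need ~$64 total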

Message quoted 1 time
Message edited by mikestewart on 14-08-2009 at 16:08:51
n°7073645
Gigathlon
Quad-neurones natif
Posted on 14-08-2009 at 16:15:01
 

kaiser52 wrote:

Isn't there some memory wasted to hide the latency?


Wasted transistors, rather, since an ever bigger cache has to be added.
 
In practice the memory is accessed almost only in bursts because of the horrible latency; to do that, the read addresses and the data to be written are cached, and as far as possible reads/writes only happen once all the blocks of a burst access are there (8 with gddr5, 4 with gddr3). That's also why gddr5 suffers only very little from its half-speed command bus.
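
A rough sketch of the write-combining idea described above: buffer writes per burst-aligned block and only issue the block to DRAM once all beats of the burst are present (burst length 8 for gddr5, 4 for gddr3 in the example). Everything here is simplified for illustration:

Code :

# Toy write-combine buffer: accumulate writes per burst-aligned block and flush a block
# to "DRAM" only once every beat of the burst is present.

class WriteCombiner:
    def __init__(self, burst_len):
        self.burst_len = burst_len
        self.pending = {}              # block base address -> {offset: data}
        self.dram_bursts = 0

    def write(self, addr, data):
        base = addr - addr % self.burst_len
        block = self.pending.setdefault(base, {})
        block[addr % self.burst_len] = data
        if len(block) == self.burst_len:   # full burst gathered -> one DRAM transaction
            self.dram_bursts += 1
            del self.pending[base]

wc = WriteCombiner(burst_len=8)        # 8 beats per burst, as with gddr5
for a in range(32):                    # 32 sequential writes
    wc.write(a, a)
print(wc.dram_bursts, "burst transactions instead of 32 single accesses")   # 4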


Message edited by Gigathlon on 14-08-2009 at 16:15:44
n°7073646
kaiser52
Posted on 14-08-2009 at 16:15:31
 

there are a lot of ATI logos on that page!
We'll see what it gives, but it doesn't surprise me!


---------------
Benchmarks du peuple - Crysis War - Vide grenier ! - nVIDIA Tegra
n°7073728
Activation
21:9 kill Surround Gaming
Posted on 14-08-2009 at 17:39:02
 

mikestewart wrote:

 
No, it's a chip that offloads some transistors outside the GPU. It was introduced with the G80, and the GT200 had one too.
 
So, according to Charlie: GT300 ($52) + the Nvio ($10) = $64, versus $34 for Cypress (+88%).


 
an nv i/o for the gt300 would border on the ridiculous
 
in the gtx295 the whole point of an nv I/O was thrown away because there are 2 of them (the point would be to have only 1 per card, even if the card had, say, 6 GPUs to exaggerate)
 
because if it's only so they can say afterwards "the benefit is that on the tesla solutions we drop the nv i/o"
 
at the price of tesla cards, who cares whether it's integrated in the gpu but disabled
