It is currently Thu Mar 28, 2024 4:40 am

All times are UTC - 8 hours [ DST ]




Post new topic Reply to topic  [ 135 posts ]  Go to page Previous  1, 2, 3, 4, 5 ... 7  Next
Author Message
 Post subject: Re: Nvidia RTX 30xx
Post #21 Posted: Sat Sep 05, 2020 4:51 pm 
Lives in sente

Posts: 757
Liked others: 114
Was liked: 916
Rank: maybe 2d
RobertJasiek wrote:
lightvector, many thanks, very helpful!

The following questions are about using nets - not about training them.

RAM: So 8GB VRAM is more than enough. How much RAM of the mainboard do you recommend? More than 8GB, I suppose, but would already 16GB be enough for 8-12GB VRAM or 32GB enough for 22-24GB VRAM? Would more RAM of the mainboard be only useful for training?


Correct, 8GB VRAM should already be more than you'd ever need with any current Go program, and more will not be useful. As for ordinary RAM on your computer (not on the GPU), it mostly only matters if you intend to do large numbers of playouts. There is some fixed overhead for various internal things, as well as some limited overhead per thread, plus the cost of storing the neural net weights themselves; maybe we can handwave the total of these as around a GB. Beyond that, the only part of KataGo's memory usage that scales indefinitely is the cost of storing neural net evaluation results.

KataGo uses about 1.5kb per neural net evaluation in the MCTS tree (3kb if you have ownership prediction turned on). It also stores a cache of evaluation results that by default is sized to about a million entries (specifically, 2^20), which you can change by editing your gtp.cfg ("nnCacheSizePowerOfTwo"), so using about 1.5GB. This cache is used to speed up repeated calculations, such as when you have just analyzed some moves in a game and then interactively scroll backward to re-analyze some earlier moves. It also speeds up a single search somewhat if there are many upcoming transpositions, but this is not usually a big effect, perhaps a 20% speedup on average.

So, if you wanted to search 3 million playouts on a single move, you would need a minimum of 4.5 GB of RAM (9GB if ownership is on). If you want to spend tens to hundreds of thousands of playouts per move on each consecutive move in a game, and then return to earlier moves in an analysis program and benefit from a somewhat faster search the second time due to caching, you'll need a cache big enough to hold the sum of the playouts of all the moves in between (and ideally, a reasonable constant factor larger than that).

Beyond that, there's no use for extra RAM. You'll of course want enough for your operating system, your browser, and anything else on your computer running at the same time as Go-related stuff. But for Go, having extra RAM beyond what you'll need for the fixed overheads plus the playouts you want plus your cache will not help, and hopefully based on the above you can estimate how much that will be.
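To make the arithmetic above concrete, here is a small calculator (my own sketch, not KataGo code; the function name and the ~1 GB fixed overhead are assumptions based only on this post, and the per-evaluation sizes are the 1.5/3 "k" figures read as kilobytes, as clarified later in the thread):

```python
def katago_ram_gb(playouts, cache_power_of_two=20, ownership=False):
    """Rough KataGo RAM estimate in GB, from the figures in this post.

    Assumes ~1 GB fixed overhead, 1.5 kB per stored neural net
    evaluation (3 kB with ownership prediction), and a cache of
    2**cache_power_of_two entries. A sketch, not KataGo's actual code.
    """
    per_eval_kb = 3.0 if ownership else 1.5
    fixed_gb = 1.0
    tree_gb = playouts * per_eval_kb / 1e6          # kB -> GB
    cache_gb = (2 ** cache_power_of_two) * per_eval_kb / 1e6
    return fixed_gb + tree_gb + cache_gb

# Example: 3 million playouts with the default 2^20-entry cache
print(round(katago_ram_gb(3_000_000), 2))   # ~7.07 GB
```

The 4.5 GB minimum quoted above is the tree term alone; this sketch adds the fixed overhead and default cache on top.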

RobertJasiek wrote:
"SLI has no value": Does this also mean that having two graphics cards without SLI is useless?


No, of course not, you just use both GPUs. As I understand it, SLI is some magic where, for certain specific tasks, the GPUs work together on a *single* task to do it faster. Probably not 2x faster, but somewhat faster. But whether you have SLI or not, obviously if you have multiple tasks instead of just one, you just give both GPUs different tasks in parallel and get 2x throughput. In an MCTS search, you always have multiple tasks. Different GPUs can evaluate different nodes in the tree. So SLI is unnecessary and useless.

RobertJasiek wrote:
How many real CPU cores do you recommend together with RTX 3080 or 3090?

Not sure. The only sure way would be to experiment. You might get some idea by running on a weaker GPU after tuning thread counts and other settings for optimal performance, and monitoring the CPU usage while an MCTS search is ongoing. The GPU will probably be the bottleneck, but you can see how much CPU load it takes to keep the GPU saturated. Then extrapolate based on how much faster the new GPU is expected to be, from benchmarks or from the reported experience of other users who bought it before you.

 Post subject: Re: Nvidia RTX 30xx
Post #22 Posted: Sat Sep 05, 2020 6:27 pm 
Lives in sente

Posts: 1037
Liked others: 0
Was liked: 180
It must be because I am old.

Sorry, but the amount of core storage is also a matter of time. I'm old enough to remember "paging" (only a portion of the entire memory space used by a program is in core at one time, paged in and out of external storage as needed).

BTW, that applies to the infinite memory requirement of a Turing Machine (or a Wang Machine, where it is two unbounded stacks). If you were emulating either of these, your computer would not need much core. Neither of these imaginary machines jumps around in memory, so simply page in or out from external storage as either end of what is in core is approached.

 Post subject: Re: Nvidia RTX 30xx
Post #23 Posted: Sat Sep 05, 2020 10:53 pm 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
lightvector, thank you again!

It is crucial whether by 'b' you mean bit or Byte! Which?

(Strict use should be b = bit, B = Byte, k = 1000, K = 1024, but for M = mega it is ambiguous because m = milli.)

Presumably, I would sometimes want to do, say, 20.000.000 playouts. With up to 3 kb (3000 bits) per neural net evaluation, i.e. 375 B (Bytes), this gives almost 8 GB. Plus 1.5 GB cache by default. Plus 6.5 GB for the operating system / other programs. So 16GB would barely do.

"if you wanted to search 3 million playouts on a single move, you will need a minimum of 4.5 GB of RAM": IIUYC, of which 1.5 GB is cache, so 3 GB for the playouts themselves, i.e. 1000 B (Bytes) per playout. Now I rather think that you meant 3 kB (3000 Bytes) when writing 3 kb.

If you meant Bytes, then for circa 20.000.000 playouts I would need 60 GB plus 1.5 GB plus, cheating, 2.5 GB. So 64 GB will do. That's also what 'goame' has suggested.
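For the record, the two readings of "3 k?" differ by exactly a factor of 8, as a quick check shows (my own sketch, using the 20,000,000 playouts from the post above):

```python
playouts = 20_000_000

# Reading "3 kb" as 3000 bits = 375 bytes per evaluation:
gb_if_bits = playouts * 375 / 1e9
# Reading it as 3 kB = 3000 bytes per evaluation:
gb_if_bytes = playouts * 3000 / 1e9

print(gb_if_bits, gb_if_bytes)   # 7.5 vs 60.0 GB
```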

***

The journal c't gave a graph of CPU cores for deep learning: 4 were too few (above-linear increment), 6 were considered the minimum (roughly linear increment), 8 better, and 10 to 16 still slightly better but slowly approaching a constant for >>16 cores (the curve flattening clearly below linear increment). Of course, it depends on GPU speed and the type of programs used for deep learning. So I presume that the sweet range is 6 to 16 real cores, where the significant factors are money and cooling, e.g.:
Code:
3700X /  8 / €260 /  65W
3900X / 12 / €400 / 105W
3950X / 16 / €700 / 105W
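For comparison shopping, the table above can be reduced to price and power per core (a quick sketch using only the figures listed; the dictionary layout is my own):

```python
# (cores, price in EUR, TDP in W), from the table above.
cpus = {
    "3700X": (8, 260, 65),
    "3900X": (12, 400, 105),
    "3950X": (16, 700, 105),
}

for name, (cores, eur, watts) in cpus.items():
    # Per-core price and per-core TDP for each candidate CPU.
    print(f"{name}: {eur / cores:.1f} EUR/core, {watts / cores:.2f} W/core")
```

The 3900X comes out cheapest per core, while the 3950X is the most power-efficient per core.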

 Post subject: Re: Nvidia RTX 30xx
Post #24 Posted: Sun Sep 06, 2020 3:58 am 
Lives with ko

Posts: 128
Liked others: 148
Was liked: 29
Rank: British 3 kyu
KGS: thirdfogie
Robert,

Thanks for starting this thread. The information will be useful when I come to replace my current PC, possibly a year from now.

It seems likely that lightvector meant bytes not bits. Hopefully, he will clear that up.

My question is: why do you want a superhuman Go-playing set-up? Of course, there's nothing wrong with that ambition, but I'm wondering if you plan to check all the statements in your published books. I used Leela Zero in that way to validate my Catalogue of Calamities, but on much less text and at 9 stones weaker in human strength.

 Post subject: Re: Nvidia RTX 30xx
Post #25 Posted: Sun Sep 06, 2020 4:48 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
thirdfogie wrote:
why do you want a superhuman Go-playing set-up?


Teaching by example from professional players has taught me very little, partly because they have differing or even contradicting opinions. Programs of superhuman strength should make more useful suggestions. Although I much prefer to learn from theory, most of the theory needed for my learning does not exist (or I and others still need to invent it), so for improving beyond 5d, learning by examples cannot be avoided.

I expect superhuman programs to help me with the opening, middle game fighting, advanced use of influence / potential, and identifying more of my blunders than I can detect by myself. Furthermore, programs can be used for backtracking to deepen understanding of the sources of mistakes. Programs should punish, and thereby implicitly reveal, knowledge gaps or insufficient skills.

I do not expect much for life+death reading (because book problems will do, reading requires practical effort, and programs do not reveal their reading and decision-making well) or for the endgame (because programs do not teach values and may intentionally play suboptimally).

While my positional judgement is reasonable, programs will implicitly convey their own alternative, also reasonable positional judgement.

Programs are no panacea but will offer me new insights.

Mid-professional level programs would hardly help me, so I will make sure to build hardware for superhuman level.

It is also possible to analyse the programs' play to derive new go theory, especially since I am very strong at generalising from something good that occurs consistently.

Quote:
check all the statements in your published books.


Since I have not yet written specifically about the opening, since many middle game examples are from pro games, since I check everything as well as I can, and since quite a lot even relies on proven or otherwise well-developed theory, I expect only occasional significant mistakes. However, I am curious about a few particularly sophisticated examples, such as the triple ladder position, on which I expect the programs to fail.

 Post subject: Re: Nvidia RTX 30xx
Post #26 Posted: Mon Sep 07, 2020 7:22 am 
Lives in sente

Posts: 757
Liked others: 114
Was liked: 916
Rank: maybe 2d
Yes, I meant bytes, not bits, in my posts above. But otherwise, yep, it sounds like you now understand roughly how to determine RAM usage.

Just one further detail - if a neural net evaluation is both in the cache and in the current MCTS search tree, it uses only 1 position's worth of RAM (1.5 kB or 3 kB depending on whether you're tracking predicted ownership), not two positions' worth. The cache and MCTS tree both just point to the same evaluation result in memory, rather than copying it. So generally, you can make the cache big enough to fill most of the rough amount of RAM you plan to use, without worrying that it will significantly "compete" with the MCTS search tree for space. They will just share pointers to all the same however-many-millions of results.
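The pointer sharing described here can be illustrated with a toy sketch (my own illustration, not KataGo's actual data structures; all names are invented): both the cache and the tree nodes hold references to one evaluation object, so it is only counted once.

```python
# Toy illustration: cache and tree nodes share one evaluation object.
class Eval:
    def __init__(self, policy, value):
        self.policy, self.value = policy, value

cache = {}

def evaluate(pos_hash):
    # Return the cached evaluation if present; otherwise create one.
    if pos_hash not in cache:
        cache[pos_hash] = Eval(policy=[0.5, 0.5], value=0.1)
    return cache[pos_hash]

class Node:
    def __init__(self, pos_hash):
        self.eval = evaluate(pos_hash)   # a reference, not a copy

a = Node(pos_hash=42)
b = Node(pos_hash=42)                    # transposition to the same position
print(a.eval is b.eval is cache[42])     # True: one object, one allocation
```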

 Post subject: Re: Nvidia RTX 30xx
Post #27 Posted: Mon Sep 07, 2020 8:08 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
If a CPU SoC has integrated graphics and there is also a dedicated (Nvidia) graphics card, will the NN programs automatically use the latter? Is this the task of the operating system, or must we set the programs' configuration files correctly?

 Post subject: Re: Nvidia RTX 30xx
Post #28 Posted: Mon Sep 07, 2020 12:19 pm 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
This page has a lot of background information on deep learning:
https://timdettmers.com/2020/09/07/whic ... -learning/

In particular, it states relative performance for convolutional neural nets:
Code:
2080 Ti   (baseline)
3070    ~ +12%
3080    ~ +40%
3090      +57%


Compared to the 3080, the 3090 is thus only ~ +12% faster (at +114% of the price).
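The +12% figure follows directly from the table (a quick check; the decimal factors are the table's percentages over the 2080 Ti baseline):

```python
# Relative throughput vs a 2080 Ti (baseline = 1.00), from the table above.
speed = {"2080 Ti": 1.00, "3070": 1.12, "3080": 1.40, "3090": 1.57}

# 3090 relative to 3080:
rel = speed["3090"] / speed["3080"] - 1
print(f"3090 vs 3080: +{rel:.1%}")   # +12.1%
```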

 Post subject: Re: Nvidia RTX 30xx
Post #29 Posted: Mon Sep 07, 2020 9:56 pm 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
In case you tried my link and met its server being down yesterday, try again now! That webpage is really worth reading!


This post by RobertJasiek was liked by: ez4u
 Post subject: Re: Nvidia RTX 30xx
Post #30 Posted: Tue Sep 08, 2020 1:58 pm 
Lives in sente
User avatar

Posts: 842
Liked others: 180
Was liked: 151
Rank: 3d
GD Posts: 422
KGS: komi
RobertJasiek wrote:
In case you tried my link and met its server being down yesterday, try again now! That webpage is really worth reading!


+1

 Post subject: Re: Nvidia RTX 30xx
Post #31 Posted: Fri Sep 11, 2020 9:24 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
lightvector, what to choose for an SSD: high sequential speeds, high 4K random access speeds or a reasonable combination of both?

 Post subject: Re: Nvidia RTX 30xx
Post #32 Posted: Fri Sep 11, 2020 10:38 am 
Lives in gote

Posts: 476
Location: Netherlands
Liked others: 270
Was liked: 147
Rank: EGF 3d
Universal go server handle: gennan
I think the disk is not used much after loading the network into memory (when starting KataGo up), so it shouldn't matter much.

 Post subject: Re: Nvidia RTX 30xx
Post #33 Posted: Fri Sep 11, 2020 11:04 pm 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
A few months ago, igorslab tested the Quadro RTX 6000, RTX 2080 Ti and slower cards in 3D games. PCIe 3.0 x8 is a bit too slow, with 0% - 6% speed loss (especially at lower display resolutions, when the CPU has to do more because the GPU does less), while PCIe 3.0 x16, PCIe 4.0 x8 and PCIe 4.0 x16 are fast enough.

For dual card use and PCIe 4.0 x8, things are unclear for the RTX 3080 and 3090: we need to await tests as soon as the NDA is lifted in a few days.

I think that deep learning is closer to high than to low resolution: the GPU has to do more than the CPU, so speed losses should be nearer the lower end. Nevertheless, with the circa 1.5x acceleration of the new Nvidia cards, PCIe 4.0 x8 versus PCIe 4.0 x16 might play a role.

If the difference turns out to be up to ca. 3%, we can neglect the problem and, for dual use, some motherboard with two PCIe 4.0 x8 slots in dual use would be good enough. If, however, the difference is larger than 6%, we would prefer some motherboard with two PCIe 4.0 x16 slots in dual use, provided the prices of such boards and their fitting CPUs are not astronomical.

SSDs hit PCIe 4.0 limits sooner than GPUs do. Despite PCIe 4.0 x16 offering up to 100% more bandwidth than PCIe 4.0 x8, I currently see no danger of large speed losses for the fastest GPUs; small percentages appear to be the order of magnitude of the speed losses.
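For reference, the bandwidth relations discussed in this thread can be checked from the per-lane throughput of each PCIe generation (approximate usable figures after encoding overhead; a sketch, not an official calculator):

```python
# Approximate usable PCIe bandwidth in GB/s per lane (after 128b/130b
# encoding overhead): 8 GT/s for gen 3, 16 GT/s for gen 4.
per_lane = {"3.0": 0.985, "4.0": 1.969}

def bandwidth(gen, lanes):
    # Total one-direction bandwidth for a slot of the given width.
    return per_lane[gen] * lanes

print(bandwidth("3.0", 16))   # ~15.8 GB/s
print(bandwidth("4.0", 8))    # ~15.8 GB/s -- same speed as PCIe 3.0 x16
print(bandwidth("4.0", 16))   # ~31.5 GB/s -- twice PCIe 4.0 x8
```

This is why "PCIe 4.0 x8 + PCIe 3.0 x16" appears as "same speed as before" in the earlier post.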

 Post subject: Re: Nvidia RTX 30xx
Post #34 Posted: Sun Sep 13, 2020 12:06 pm 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
As I am learning hardware aspects, I can share some:

Since the RTX 3080 / 3090 cards are heavy, it is wise to support them with a pillar.

A 300W graphics card needs a +100W extra safety margin in the power supply. These cards draw 320 / 350W, so each needs more than +100W. This is in addition to the basic extra margin for the mainboard.

If you want to use, or consider the option of later using, 2 such graphics cards, you must choose the mainboard carefully, reading specifications and handbooks. Having two PCIe x16 slots is insufficient. Mainboard manufacturers like to advertise with x16 and hide the fine print, namely the PCIe version combined with the number of lanes available for the slots when used together. The following operation modes work for two "PCIe x16" slots occupied by two fast graphics cards in "dual" mode, presuming a CPU with suitable PCIe functionality:

PCIe 4.0 x16 + PCIe 4.0 x16 (best; currently unavailable; maybe by Zen3)

PCIe 4.0 x8 + PCIe 4.0 x8

PCIe 4.0 x8 + PCIe 3.0 x16 (same speed as before)

PCIe 3.0 x16 + PCIe 3.0 x16 (currently the best of Intel)

Beware of the following trap, which occurs for many mainboards (such as with B550 chipset or moderately priced X570): PCIe 4.0 x16 + PCIe 4.0 x4. Your second graphics card would be too slow!

Typical extra expenses for 2 instead of 1 fast graphics cards are +€70 power supply, +€150 mainboard, +€130 CPU, +€50 larger case, maybe +€50 coolers. (Water-cooled graphics cards would send the total cost into space and create technical risks.)

 Post subject: Re: Nvidia RTX 30xx
Post #35 Posted: Sat Sep 19, 2020 6:17 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
First 3D gaming benchmarks comparing the RTX 3090, 3080 and 2080 Ti say that the relative differences between these cards are similar and that the 3090 is ca. 19% faster than the 3080 on average, with values of 15~21%.

Under non-overclocking load, a 3080 FE consumes 320 ~ 450W; OEM 3080s consume 340 ~ 470W. Add the other PC components (among them ca. +35% on top of the CPU's TDP) and a 200W safety margin for the whole PC. Say we have a 105W CPU drawing up to 142W, plus 58W for the remaining PC components. Our PSU needs 470 + 200 + 200 = 870W. On the web, recommendations are 750, 850 or 1000W, depending on how safely we plan.
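The 870W figure follows this rule of thumb, which can be written out explicitly (my own sketch; the function name and the parameter split are assumptions, using only the numbers in this post):

```python
def psu_watts(gpu_peak, cpu_max, rest=58, margin=200):
    """PSU sizing per the rule of thumb in this post:
    GPU peak draw + CPU max draw + other components + fixed safety margin."""
    return gpu_peak + cpu_max + rest + margin

# Worst-case OEM 3080 (470 W) with a 105 W-TDP CPU drawing up to 142 W:
print(psu_watts(gpu_peak=470, cpu_max=142))        # 870 W

# Two such cards in one PC:
print(psu_watts(gpu_peak=2 * 470, cpu_max=142))    # 1340 W
```

The dual-card result lands near the 1300W "seems to be enough" estimate below.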

For 2x RTX 3080 in one PC, a 1200W PSU is the bare minimum and risky for occasional peaks. 1300W seems to be enough (a high-TDP CPU suggests 1400W).

However, these are plain consumption calculations. The other PSU considerations are noise and price. Up to ca. 1200W, prices increase reasonably; above that, PSUs become expensive, say €370 for 1600W. A silent 1600W PSU costs €480.

Each PSU behaves differently. Read noise-test diagrams against wattage percentages! Some PSUs become loud at 50% load, others at 85%. A good PSU is reasonably silent around 50%.

For a PC with 2x 3080, we expect ca. 1000 ~ 1200W load (and Go AI will usually create load). So for a still-silent PSU, we must choose 1600W 80+ Titanium (such as Corsair) and pay the bill. For a 1x 3080 PC, the expense is significantly lower.

2 graphics cards must have at least one free slot (2.032cm distance) in between. Easy with the 3080 FE; difficult with OEM cards 2.7 or 2.9 slots thick, when mainboards with dual PCIe 4.0 x16 (at x8 speed) set them only 3 slots apart. One can use PCIe riser cables (not cheap either) and install the cards on the bottom of a case if it is broad enough (19cm exists and might be sufficient). However, the next problem is fixing the cards and their PCIe cable slots onto the case bottom, while case manufacturers have not foreseen such demand...!

If you install in the mainboard slots, getting a stand for 2 cards (and maybe also the heavy CPU cooler) is difficult, and it must fit into the case given your case coolers. If necessary, build your own stand...

As you can see, building a PC with 2x 3080 is not just more expensive than expected but also difficult.

The RTX 3080 FE is €699 - if only it were available anywhere in the world. Scalpers, plus demand apparently under-estimated by Nvidia and retailers by a factor of 5. The FE is the choice if you want to go cheap and simple, but it is somewhat loud. If you want silent, get the Asus 3080 TUF (the lowest temperatures by far; choose the quiet BIOS via the hardware switch) or the MSI 3080 Gaming X Trio (even a bit more silent but 10K warmer because of slightly lower fan RPM). You can hardly get quieter even with the usual very expensive and difficult water-cooling, whose radiators still need fans. These two cards (apparently eventually each in the ca. €740 ~ 760 range) are quiet enough, unless you overclock them. Search Youtube for a card's name plus "noise" so you can listen.

Case cooling: use slightly positive airflow straight from the front, also at GPU level, through to the rear! Get a case appropriate for that: not too small, with a mesh front, and not one meant for a silent PC (so that CPU or GPU do not throttle). Read tests from Gamers Nexus. E.g., I consider the Phanteks P500A, the Fractal Meshify S2 (probably needs other case fans - less silent, but with enough RPM and air volume) or the Cooler Master H500 (yes, without additional letter). Be Quiet is tempting, but we must refrain from cooking our chips!

A 1-card PC build amounts to €1500 ~ 1900; a 2-card build to €2500 ~ 3000.

Have I forgotten anything?

 Post subject: Re: Nvidia RTX 30xx
Post #36 Posted: Sat Sep 19, 2020 11:02 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
Here are technical RTX 3080 details comparing the Asus TUF to the MSI Gaming X Trio:
https://m.youtube.com/watch?v=3yrTlsgq9s0
Optimum Tech / Youtube; video position 6:29 to listen to the noise.
i9-10900K, bench table, 3D load, ambient 21°C, microphone at 25cm distance:

Code:
Model                °C   RPM    dB    W/system

Asus TUF Gaming OC   60   1650   41.6  480
MSI Gaming X Trio    72   1240   38.2  487

 Post subject: Re: Nvidia RTX 30xx
Post #37 Posted: Wed Sep 23, 2020 1:29 pm 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
I have continued to explore the possibility of 2x RTX 3080.


2x 3080 air cooled need at least 1 slot free above and below each card.

All well air-cooled, dust-protected cases are only suitable for 2x 3080 (except maybe the Founders Edition) if the cards are put directly into the PCIe slots. Riser cables to manually built mounts for the cards, whose fixation is unclear, conflict with the case walls.

2x Nvidia RTX 3080 Founders Edition are 2 slots wide each. Therefore, they fit mainboards having at least 2 PCIe "x16" slots, each running at 4.0 x8 (or 3.0 x16, which is typical for Intel-CPU mainboards) speed in dual use, that are 3 slots apart. Such X570, LGA1200 or LGA2066 mainboards are available - after the most meticulously careful study of specifications and handbooks (compare my earlier warning) - for ca. €270+. (For comparison, a simple B550 mainboard is available for ca. €100 and works for 1 card.) If you can accept the Founders Edition, such a solution works. I consider this card somewhat too loud.

I guess that 2x 3080 water-cooled costs at least ca. +€500 ~ 600 (of that, >€300 for the cards' cooling in case of self-assembly). If you consider 1x RTX 3090 good enough (although it is only 15 - 20% faster than the 3080), you hardly save money, because that card costs €1500.


How about 2x 3080, each almost 3 slots wide? We need a mainboard having at least 2 PCIe "x16" slots, each running at 4.0 x8 (or 3.0 x16) speed in dual use, that are --- 4 --- slots apart. These currently suitable mainboards exist:

Gigabyte X299X AORUS MASTER (rev. 1.0), €380, socket 2066, 48(?) lanes (*1).

MSI Creator-X299, €450, socket 2066, 48 lanes.

MSI MEG X570 Godlike, €580, chipset X570.

Some Supermicro mainboards (*2).

(*1) Beware the X299X spelling! (X299 is a different mainboard.) It is unclear whether 2 cards each run at x16 speed without activating SLI. The manuals have been written by copy&paste. One should ask the manufacturer whether 2 non-SLI cards run at that speed.

(*2) C9Z490-PG, C9Z490-PGW (chipset Z490); X11SAT (chipset C236). The manuals are hopelessly ambiguous concerning the PCIe slots into which to put 1, 2 or 3 x16 cards. Therefore, I cannot confirm whether a distance of at least 4 slots can be achieved. Ask the manufacturer.


The Intel i9-10900X is a typical LGA2066 / X299 CPU at €530. For AMD AM4 / X570, we can assume the comparable Ryzen 9 3900X (2 more cores, slower clock) at €400.

Hence, let me estimate the excess expense in € for 2 instead of 1 RTX 3080, relative to a cheap AMD B550 mainboard build:
Code:
--------------------------------------------------------------------------
AMD     Intel   Item             Details
--------------------------------------------------------------------------
   >+700        2nd RTX 3080
<+480   +410    mainboard/CPU    AMD +0 CPU. Intel +280 mainboard +130 CPU.
    +140        PSU              1500W (Be Quiet) instead of 1200W (Corsair).
--------------------------------------------------------------------------
+1320  +1250                     sum
--------------------------------------------------------------------------
  47%    44%                     fraction of expense for non-GPU items
--------------------------------------------------------------------------
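The sums and fractions in the table can be double-checked (a quick sketch using the table's round figures; the AMD entries ">+700" and "<+480" are taken as 700 and 480):

```python
# Extra cost (EUR) of a second RTX 3080 build, from the table above.
items_amd   = {"2nd RTX 3080": 700, "mainboard/CPU": 480, "PSU": 140}
items_intel = {"2nd RTX 3080": 700, "mainboard/CPU": 410, "PSU": 140}

for name, items in [("AMD", items_amd), ("Intel", items_intel)]:
    total = sum(items.values())
    non_gpu = total - items["2nd RTX 3080"]
    print(name, total, f"{non_gpu / total:.0%} non-GPU")
```

This reproduces the sums (1320 / 1250) and non-GPU fractions (47% / 44%) from the table.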


IMO, the extra cost for the mainboard and (in the case of Intel) the CPU is too large to justify building a PC with 2x 3080 in a 3-slot design. Financially, the Founders Edition is the only viable option for 2x 3080, and on an AMD mainboard maybe even a quality 1200W PSU might be enough. (I can't guarantee this.) For an extra expense of +€700 or €840, one might build such a PC.

I won't, because I prefer a quieter and cooler RTX 3080. Saving some money now gives me an easier option to upgrade when Nvidia moves to 5nm, 3nm or 2nm in a few years.


Is my analysis reasonable, or do you have alternative suggestions for building a 2x 3080 3-slot-design PC without spending too much on mainboard and CPU? Note that I have already ruled out an open or bench-case build for myself.


EDIT: If you want to build 4x 3080 with water cooling, this mainboard (with 2-slot distance between x16-speed slots) is available: Gigabyte X299-WU8 (rev. 1.0).

EDIT 2: The problem with current Intel CPUs and chipsets is, besides price, that without PCIe 4.0 they are not future-proof. If one wants to build high-end Intel, it is better to wait for new Intel CPUs and mainboards with this feature. Although it can be ignored at the moment, a new gaming PC costing thousands should last for several years, and this is uncertain with PCIe 3.0. Only for an industry / business PC need one not care about much else than stability.

 Post subject: Re: Nvidia RTX 30xx
Post #38 Posted: Thu Sep 24, 2020 2:34 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
According to thegamrone / Youtube, the RTX 3080 is 99% ~ 108% as fast as 2x GTX 1080 Ti in SLI for 1080p/2160p 3D gaming at Ultra settings with an i9-10900K.

 Post subject: Re: Nvidia RTX 30xx
Post #39 Posted: Thu Sep 24, 2020 7:34 am 
Judan

Posts: 6087
Liked others: 0
Was liked: 786
Correction: for the AMD Ryzen 9 3900X at €400, the most comparable Intel LGA2066 CPU seems to be the Core i9-10940X at €775. IOW, when choosing LGA2066, one can either feed Intel or select the slower i9-10900X.

With the 10th generation, Intel drastically dropped X-CPU prices, but it is still far more expensive per clock in the high end and very much more so per watt. If Intel wants to compete with AMD, it cannot bring the 11th-generation successors of the 10900K and 10900X soon enough, together with a further drastic price decrement.

 Post subject: Re: Nvidia RTX 30xx
Post #40 Posted: Thu Sep 24, 2020 8:28 am 
Lives in sente
User avatar

Posts: 866
Liked others: 318
Was liked: 345
RobertJasiek wrote:
IMO, the extra cost for the mainboard and (in the case of Intel) the CPU is too large to justify building a PC with 2x 3080 in a 3-slot design. Financially, the Founders Edition is the only viable option for 2x 3080, and on an AMD mainboard maybe even a quality 1200W PSU might be enough. (I can't guarantee this.) For an extra expense of +€700 or €840, one might build such a PC.

I won't, because I prefer a quieter and cooler RTX 3080. Saving some money now gives me an easier option to upgrade when Nvidia moves to 5nm, 3nm or 2nm in a few years.

So Robert, have you talked yourself out of a new rig? There will always be better hardware coming.

Have you considered building a rig that is good enough to beat top pros consistently, but is merely good enough, not the best? I suspect 500 playouts per second will more than meet your needs, and such a machine would save you tons compared to your dream rig.

_________________
- Brady
Want to see videos of low-dan mistakes and what to learn from them? Brady's Blunders

Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group