AMD Ryzen discussion


For big data centers, how does the initial hardware cost compare with power efficiency in terms of long-term cost?

For my own computers, the power efficiency isn’t really a big deal, but I thought it was a major driving force of data center operational costs.
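
A back-of-envelope sketch of how that tradeoff plays out over a few years. Every number here (electricity price, PUE, wattages, server prices) is made up purely for illustration:

```shell
# Compare 3-year cost of a cheap/hot server vs a pricier/efficient one.
# All figures are illustrative assumptions, not real pricing.
awk 'BEGIN {
  kwh = 0.12         # assumed electricity cost, $/kWh
  pue = 1.5          # assumed data-center power usage effectiveness
  hours = 3*365*24   # 3-year horizon
  # Server A: cheaper up front, higher average draw (450 W)
  a = 2000 + 450/1000 * hours * kwh * pue
  # Server B: pricier up front, more efficient (250 W)
  b = 2600 + 250/1000 * hours * kwh * pue
  printf "A: $%.0f  B: $%.0f\n", a, b
}'
# prints: A: $4129  B: $3783
```

With these (made-up) numbers the efficient box wins before year three, which is why power draw gets so much attention in data-center purchasing even when the sticker price is higher.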


Yes, and as explained in the previous posts, the Zen-based processors appear to have great power efficiency.


What percentage of data centers do you think are buying the Threadripper 1950X over the EPYC 7401P at that price point?


Epyc has enterprisey amounts of cache.

(I don’t know if it’s been covered in this thread, but I read Russian, and although Epyc is nonsense in Russian as far as I know, it makes a lot more sense to my brain reading it as Cyrillic for some reason.)


I dunno, I would need to know exact pricing and specs. Let’s see…
EPYC 7401P: 24 cores / 48 threads, 2.0 GHz base, 3.0 GHz turbo
Threadripper 1950X: 16 cores / 32 threads, 3.4 GHz base, 4.0 GHz turbo

The clock speeds are dramatically lower on the EPYC across the board, which sucks for our Ruby workload, which, like JavaScript, depends heavily on high clock speed and IPC. It might be OK for the database servers, I guess, but honestly I’d rather have the higher per-thread perf. If there were an EPYC with the clock speeds of the Threadripper, with fewer cores to add thermal headroom, I’d jump on it in a heartbeat.


A sale came through for a Ryzen 1600X at $199, so I finally pulled the trigger. Now I play the waiting game. Got all the other components that should allow me to do GPU passthrough: use Linux as the main OS and still game via a Windows VM.


Neat! Let us know how it goes.

I’ve been playing with Gnome-shell recently. While I despise its core workflow, you can customize it with a bunch of plugins to work exactly as you like. It’s pretty good now.


I hate upgrading a CPU; it’s really irritating how much you have to upgrade with it. Got all my parts, went to put my memory in, and realized I have DDR3 while AM4 boards, I guess, want DDR4 :<

This machine better scream for how much I’m putting into this upgrade…


I am looking forward to this too. I don’t use Linux much, but would like to keep my work computer separate from my home computer without actually having two physical computers. I would like a third VM for graphics work (Photoshop), and a fourth VM for gaming, but the performance just isn’t there yet.


What a cluster****.

Put the system together, and Windows booted just fine (apparently the old HD I threw in there had an old Windows install; I probably should have wiped that before pulling the drive). It recognized my R9 Fury just fine but didn’t recognize the 2nd GPU (the RX 550), though I didn’t think much of it since I hadn’t installed any drivers yet (I’m surprised a two-year-old install of Windows even survived a complete motherboard replacement).

However, every Linux distribution failed to boot with massive random errors and kernel panics. And I mean every one, from Ubuntu to Manjaro to Arch to Fedora. Six hours of massive frustration later, I FINALLY isolated it: everything fails when IOMMU is on (the virtualization feature required to pass through I/O devices) and a graphics card is in the 2nd PCIe slot.
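
For anyone trying to reproduce this, a sketch of how to check whether the IOMMU actually came up and how devices are grouped. Kernel parameters and group layout vary by board and BIOS, so treat this as an assumption-laden sketch, not something verified on that Gigabyte board:

```shell
# First enable the IOMMU via kernel parameters in the bootloader, e.g. on AMD:
#   amd_iommu=on iommu=pt
# After rebooting, list the IOMMU groups. Everything in a group must be
# passed through together, so how the GPUs group is what makes or breaks VFIO.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue          # no groups => IOMMU is off or unsupported
    echo "Group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"        # show each PCI device in the group
    done
done
```

If the loop prints nothing, the IOMMU never initialized, which matches the kind of boot failure described above.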

Even with one graphics card in the 2nd PCIe slot and none in the primary, everything shits the bed. At this point I’m assuming the motherboard is bad (I’m now finding lots of reports of problems with Gigabyte Ryzen motherboards and Linux; conveniently, none of these turned up before I bought), so I’m going to try another motherboard. Unfortunately, that means waiting for a new Amazon delivery on Monday :(.


Yeah, the IOMMU stuff is really tricky, particularly on Ryzen. This is not a trivial task you’ve set before yourself.


Might it be easier to run Linux as the VM in Windows, and just live in there when not gaming? I’d imagine (an assumption on my part, granted) you have lower hardware demands for your normal desktop OS routines?


Yeah, if I can’t get IOMMU working with this, then I’m just going to give up and reframe my mind to be happy with a Windows host and a Linux VM. The truth I keep reminding myself of is that VirtualBox ran Linux VMs like crap on my 3570K (4c/4t) even overclocked to 4 GHz, for whatever reason (I assume lack of virtualization support). Plus I really didn’t like not having a TPM for easy hard drive encryption, so that’s another plus I’m getting.
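
That virtualization-support guess is easy to check from Linux, for what it’s worth. A quick diagnostic sketch (the CPU flag names are standard; the kernel-log strings vary by kernel version):

```shell
# "vmx" = Intel VT-x, "svm" = AMD-V: the hardware support that KVM and
# VirtualBox need for fast guests. Empty output means no (or disabled) support.
grep -Ewo 'vmx|svm' /proc/cpuinfo | sort -u

# The IOMMU (Intel VT-d / AMD-Vi) is a *separate* feature needed only for
# device passthrough; check the kernel log for it:
dmesg 2>/dev/null | grep -iE 'iommu|amd-vi|dmar' | head
```

Note the two checks are independent: a CPU can have VT-x/AMD-V (enough for ordinary VMs) while the platform still lacks the IOMMU support that passthrough requires.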

So at a minimum once I get past this disappointment of not being able to do what I really wanted to do I still have a really good computer upgrade (including RAM now) that will at least show gains.

Just annoying that my $350 plan has turned into a $600 implementation, and I have no idea if I’ll still be able to do what I originally set out to do. Found a motherboard with a better chipset and good VFIO reviews on Amazon that should be here by the end of the day, so I’m guessing it’ll be another 3am night of debugging.


The new motherboard fixed all my problems. Got Linux installed painlessly, got Windows set up painlessly, and now I’m just trying to find the right tweaks to get Linux to relinquish my gaming GPU properly, but that’s more Linux-fu than anything else; I’ll figure it out tomorrow.

This X370 MSI Pro Carbon motherboard is great too; I should have gone with it in the first place, and it’s worth the extra $60. Seven USB ports in the back plus USB-C (no Thunderbolt though), 7.1 audio jacks, and 6 SATA ports. It also has both PCIe slots at 8x (16x if only one is in use), which means I could put my Linux host GPU (the RX 550) in slot 1 and my gaming GPU in slot 2 (so the host always starts off using the first GPU as the primary).

It’s weird that the motherboard wanted me to stagger the two sticks of RAM: with slots 1, 2, 3, 4 going left to right, it had me put them in 2 and 4. Not sure if that’s Ryzen-specific or not.

Anyways, in a much better mood now :D


Nice, congrats!

Not that weird. I’ve had multiple machines that staggered their RAM channels like that.


Glad it’s working out. Just curious: why use a VM instead of dual-booting?

I had a PC that dual-booted Windows and Red Hat years ago when I was working on my dissertation. That was actually my first Windows machine, because I wasn’t an experienced Linux user and knew I could get support for Red Hat at school if needed.


If you can play windows games at full speed inside Linux, there’s no reason to dual boot. That’s the promise of IOMMU and GPU passthrough.


I swap a lot between work programming, free-time programming, and gaming when I’m at my computer (and I need Linux more and more for the former two these days). So while SSDs have made dual-booting less of a big deal, it still requires closing all your application sessions (like web browsers) and mentally committing to one environment or the other. This lets me run both side by side with barely any performance penalty (in theory at least; I’m 99% of the way to being able to fully vet that).


Cool. Makes sense.

When I was doing the Linux thing, it was to run large numbers of simulations for hours at a time, so the other option was fine.


It’s Alive!

That’s my ultrawide with its inputs split: the left side is the gaming VM running 3DMark benchmarks, and the right side is my Linux host. I was getting 30 fps in the Time Spy benchmark, which sounds familiar, but like a loser I didn’t think to benchmark natively before reinstalling my operating systems (though now that I think about it, I can probably still boot the Windows drive natively).

To be honest, this wasn’t too terrible to set up. Once I swapped out my motherboard, I immediately got 95% of the way there. The remaining 5% was getting the AMD driver to stop latching onto my 2nd GPU so I could pass it through; apparently I’m the only one on the internet who has struggled with that, since everyone’s response was “huh, that shouldn’t be happening.” Once I found the solution, I was immediately able to finish and get things working perfectly. If anyone is trying this, I very much suggest using different brands for your host and gaming GPUs, as that will make your life easier by a long shot.
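
For reference, one common way to make the host driver relinquish a GPU is to bind it to vfio-pci at boot. A sketch, assuming an amdgpu-driven card; the PCI IDs here are illustrative examples, so substitute your own from `lspci -nn`:

```shell
# Config sketch (run as root). The IDs below stand in for the passthrough
# GPU and its HDMI audio function -- replace with your card's actual IDs.
echo 'options vfio-pci ids=1002:7300,1002:aac8' >  /etc/modprobe.d/vfio.conf

# Make sure vfio-pci claims those IDs before the host GPU driver loads,
# so amdgpu never latches onto the passthrough card:
echo 'softdep amdgpu pre: vfio-pci' >> /etc/modprobe.d/vfio.conf

# Rebuild the initramfs so this applies at boot (command is distro-specific):
mkinitcpio -P    # Arch; Ubuntu uses update-initramfs, Fedora uses dracut
```

The softdep line is the part that addresses the “driver latching onto the 2nd GPU” problem: it forces a load order rather than racing the host driver. With two same-brand cards the ID-based binding only works if the cards have different PCI IDs, which is part of why mixed brands make life easier.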

I still have some tweaking to do (like CPU pinning), but so far I’m ecstatic that it works, even with the sub-optimal VM setup (Ryzen has bugs that make the QEMU/KVM hypervisor deliver either bad CPU or bad GPU performance depending on a toggle; Xen supposedly doesn’t have this issue).
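
The CPU-pinning step can be sketched roughly like this with libvirt’s virsh. The VM name `win10` and the specific core numbers are assumptions; check your own SMT topology first, since logical-CPU numbering differs between CPUs and kernels:

```shell
# See which logical CPUs share a physical core (SMT siblings):
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

# Pin each vCPU of the guest to a dedicated host CPU, leaving some cores
# free for the host. "win10" is a placeholder domain name; --config makes
# the pinning persistent in the domain XML rather than runtime-only.
virsh vcpupin win10 0 2 --config
virsh vcpupin win10 1 3 --config
virsh vcpupin win10 2 4 --config
virsh vcpupin win10 3 5 --config
```

Keeping a vCPU on one physical core (and ideally keeping sibling threads paired the way the guest expects) avoids the scheduler bouncing the VM across CCXs, which is a known latency sink on Zen.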