AMD Ryzen discussion

If it’s cores you’re looking for, wouldn’t you go with something a bit more server-y? Two-socket Epyc gives you 64, albeit with slower clocks.

As has already been said, this isn’t the AMD server chip. The server version will no doubt clock lower and draw less power.

I’m talking about huge data centers. Obviously you don’t want to just use low-power CPUs… but you absolutely do want to use EFFICIENT CPUs. Meaning, you want to maximize how much computational power you can get per watt.
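To make that concrete, here’s a minimal sketch of the metric in question; the throughput and power figures below are made-up placeholders, not real benchmark numbers:

```python
# Hypothetical perf-per-watt comparison. The numbers are illustrative
# placeholders, not measured results for any real CPU.
cpus = {
    "chip_a": {"throughput": 1200.0, "power_w": 180.0},  # ops/sec (made up)
    "chip_b": {"throughput": 800.0, "power_w": 95.0},    # ops/sec (made up)
}

for name, c in cpus.items():
    efficiency = c["throughput"] / c["power_w"]  # work done per watt
    print(f"{name}: {efficiency:.2f} ops/sec per watt")
```

The lower-power chip only wins here if its throughput doesn’t fall off faster than its power draw does, which is the whole question.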

I think that this is generally summarized as a PUE rating?
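(For what it’s worth, PUE measures facility-level overhead, total facility power divided by IT equipment power, rather than per-chip efficiency. A minimal sketch with made-up numbers:)

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean zero overhead for cooling, lighting, power delivery.
# The figures below are illustrative, not from any real facility.
total_facility_kw = 1500.0  # everything the building draws (hypothetical)
it_equipment_kw = 1000.0    # servers, storage, network gear (hypothetical)

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")  # 1.50: half a watt of overhead per watt of compute
```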

It seems like Threadripper would potentially be a problem, as its high heat indicates wasted power. What’s worse, that heat then requires you to spend even more power on cooling.

Obviously you want lots of cores, but they don’t necessarily need to be on the same chip, depending on the setup.

But honestly, I just use the systems, I don’t build them, so I only understand this stuff on a superficial level.

Exactly, and how successful Epyc will be at chipping away at Intel in the data center world will come down to how it performs under various loads, both in performance and in power usage. I am not aware of anyone having released real numbers there yet. There’s some early preview stuff, and it looks like they will run somewhat more power-hungry than the Intel equivalents, but so far I haven’t seen anything that says what the actual differences will be for different usage types.

What you appear to be ignoring despite it being said multiple times is that Threadripper is not a server CPU. This is a CPU for the enthusiast market, which is why they talk about water cooling. It’s as relevant to huge data centers as the Core i9-7900X is. Which is to say, totally irrelevant.

WTF? A high TDP does not indicate wasted power any more than a low TDP indicates power well spent.

What, exactly, do you think is gained in the DC use by having more sockets with fewer cores each?

It’s much more relevant, since Threadripper supports ECC, unlike any “consumer” Intel CPU. You could easily deploy a 2U with Threadripper to the datacenter with zero qualms.

This way you avoid the arbitrary server enterprisey markup, but get the server perf… and don’t give up reliability on ECC.

We’ve been through this at some length in the Android topics. A bunch of people thought a shitty slow Qualcomm CPU must be more power efficient on a work-done-per-watt basis because it is so very slow, natch, and uh… nope. Here’s the relevant chart:

I didn’t say that it did, only that heat is wasted power. I’m just asking if anyone knows what kind of energy efficiency AMD’s stuff has, because Wumpus said:

I’m not making a statement about AMD’s tech, I’m asking a question about it.

Wumpus mentioned Qualcomm in an unrelated thread. Y’all know the rules, everybody has to take a drink.

Who’d have thought that the first drink I have in what is technically football season would have nothing to do with football?

That’s the same thing, and it’s equally silly. The power is converted to heat no matter what. Whether that power is wasted or not depends on how much computation is achieved as a side effect. You’re literally making your claims based on the absolute TDP alone: “It seems like Threadripper would potentially be a problem, as its high heat indicates wasted power”

Yes, it’s a high TDP. It’s also a grotesquely high amount of compute: 16 cores at 3.4GHz (base) with Broadwell-level IPC. It’s like two of these:

http://ark.intel.com/products/92985/Intel-Xeon-Processor-E5-1660-v4-20M-Cache-3_20-GHz

Those are 140W each, for a total of 280W. Would you say that Xeons are unsuitable for datacenter deployments, since such a high TDP indicates a lot of wasted power?
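Here’s that back-of-the-envelope comparison as a sketch. It uses a naive cores × clock proxy for throughput (ignoring turbo, memory, and real workloads) and assumes a 180W TDP for the 16-core Threadripper part, a widely reported figure but not one confirmed in this thread:

```python
# Naive throughput proxy: cores * base clock (GHz). Both chips are taken
# to be roughly Broadwell-class IPC, per the discussion above, so the
# proxy cancels IPC out. The 180W Threadripper TDP is an assumption.
threadripper = {"cores": 16, "ghz": 3.4, "tdp_w": 180.0}  # assumed TDP
two_xeons    = {"cores": 16, "ghz": 3.2, "tdp_w": 280.0}  # 2x E5-1660 v4 @ 140W each

for name, c in (("Threadripper", threadripper), ("2x E5-1660 v4", two_xeons)):
    proxy = c["cores"] * c["ghz"]  # GHz-cores of aggregate compute
    print(f"{name}: {proxy:.1f} GHz-cores at {c['tdp_w']:.0f}W "
          f"-> {proxy / c['tdp_w']:.3f} GHz-cores/W")
```

On that crude proxy the higher-TDP chip actually comes out ahead, which is the point: the absolute TDP number tells you nothing about efficiency on its own.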

But that’s exactly the question I was asking. I was asking what kind of efficiency you would be getting.

Again, I didn’t mean to say that it wasn’t more efficient than other competing architectures; I seriously have no idea. Perhaps my use of the word “wasted” gave that impression.

Like I said, I only buy time on these machines, I don’t build or maintain them. My understanding of all of these issues is superficial at best. I was asking a question about the issue, because I didn’t know the answer.

I feel like there are about three conversations at play here. I will try to be clear about what I am saying.

It’s obviously a moot point since Threadripper is not targeted at the datacenter, but I think the fact that AMD seems to be recommending liquid cooling will keep it out of that space in any significant volume anyway. I think jsnell agrees it isn’t intended for that space, so it’s academic. There are similarly spec’d Epyc options around the same price point, so I don’t see why someone wouldn’t go with those in the general case. That doesn’t mean no one will do it; people build similar Intel servers, but they are niche.

As far as Epyc’s success in taking a big chunk of the datacenter space, I think it is fair to say that thermal/power characteristics will play a role in purchasing decisions, and so far, unless I have missed them, there doesn’t seem to be a ton of data available. It’s not just the TDP number itself but the power used under various loads. It sounds like Epyc might have a general disadvantage there but an advantage in heavily floating-point workloads, but I don’t think anyone knows yet. They might have a total winner on their hands, but from my perspective there’s insufficient data to know.

There’s no way Threadripper is an enthusiast part. It’s a workstation part. You see all the YouTubers gushing over it because they render 4K video all day long and it fits their use case.

Yes, Threadripper is for development and production workstations primarily. Of course you’ll still see the prosumers buying them like the super-expensive DSLRs they don’t really need, but that’s not the target market.

Both AMD and Intel seem to think that the appropriate comparison is to the enthusiast X-series Cores, not to workstation Xeons. And if you look at AMD’s marketing website, it’s all about consumers rather than business users. Hell, even the juvenile “Threadripper” name suggests what the target market is :-/

It is a pretty curious segmentation choice. The classic way this has worked is that you group the workstation parts under the same brand as the server parts. But none of the Epycs that have been announced so far are suitable for workstations.

Both AMD and Intel are more or less introducing a new product segment for use by people who have previously been repurposing server class processors.

Well, they’re trying to do that, yeah. I don’t think they’ll succeed.

The problem is that it’s frickin’ hard to make CPUs faster. Adding cores is easy-peasy; it just increases production costs.

This has been covered at extreme length: Ryzen offers Broadwell-level IPC, which is “pretty good” and certainly competitive once you factor in 50% more cores for the same price.

Pretty simple: most Xeons are almost identical to their Intel consumer CPU equivalents, sometimes with a bit more on-die cache, and with more memory channels and PCIe lanes on the higher-end Xeons.

Notice that Threadripper has a shit-ton of PCIe lanes, plus ECC, plus quad-channel RAM, which crosses off THREE very big “now this is magically sprinkled with enterprisey pixie dust and we charge 2x as much” criteria Intel has built into their products in the past.

AMD offers quad-channel memory, eight ECC DIMM slots, and 64 PCIe lanes even on the cheapest CPU for the platform.

That is a big fucking deal.