Stardock Consumer Report for 2012

Thanks to the Steve Jackson annual report thread, I realized that we didn’t have a thread to discuss the annual Stardock Consumer Report. I know how much some of you love to jump into any thread with Wardell’s name in it and go completely nuts, but I hope we can keep the discussion civil here, okay? Let’s try to actually talk about the contents of the report.

First, Brad announced the creation of a fund for promising game developers.

When Stardock divested itself of Impulse, it presented the company with some unprecedented opportunities. We suddenly had enough capital to do essentially anything we wanted. We took 2011 to consider how we should use this capital. Ultimately, it was decided that we would keep Stardock itself relatively small and create an investment fund (this will be discussed more later this year). The new fund we’re creating will follow four principles:

  1. We should use this capital to help the next generation of software and game developers.
  2. We should use this capital to help found new game studios and new software ventures.
  3. We will strive to have a minority share in these entities in the long run.
  4. These entities should produce things that will help future start-ups in the technology industry.

After Brad discusses the ways in which Stardock’s productivity software has answered some Windows 8 users’ needs, he moves on to Sins of a Solar Empire: Rebellion and Elemental.

He then includes a section titled “Beating the 64-bit drum.”

For strategy gamers, the last few years have been a mixed blessing. There have been some great titles released but the innovation in strategy games has been diminishing. This is not the result of a lack of game design or inventive thinking. The problem stems from a catastrophic decision made at Microsoft: not giving DirectX 10 to Windows XP users.

As a corollary, Microsoft continuing to sell 32-bit versions of Windows well after the hardware stopped being natively 32-bit has held back PC game development immensely.

Game developers have been stuck with DirectX 9 and 2GB of memory for the past decade. While this hasn’t harmed first-person shooters (they only have to manage a handful of objects at once), it has been poisonous to other genres. Next time you’re playing an RPG in first person with no party, you can point to DirectX 9 and 2GB of memory as a big reason for that.

With DirectX 11 we can go to town with shader anti-aliasing and lower the bar for building a multi-core based simulation (right now, nearly all of a game’s simulation occurs on one thread on one core). And with 64-bit, we can fit a lot more stuff into memory.

There are whole classes of games waiting to be made that require these kinds of advances. Luckily, after a decade-long wait, we are nearing critical mass. The days of games supporting 32-bit OSes are, thankfully, coming to an end. DirectX 10 as a minimum requirement has also arrived.
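Just to make the multi-core half of that concrete, here’s a rough sketch of my own (C++, nothing from Stardock; the Entity type and updateEntity function are invented) showing the difference between ticking a simulation on one core and fanning the same tick out across every core the machine has:

```cpp
// Illustrative sketch only, not Stardock's code. Shows the kind of change the
// report is describing: spreading one simulation tick across hardware threads
// instead of running it all on a single core.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Entity { float x = 0.0f, y = 0.0f, vx = 1.0f, vy = 1.0f; };

void updateEntity(Entity& e, float dt) {  // stand-in for real game logic
    e.x += e.vx * dt;
    e.y += e.vy * dt;
}

// Classic single-threaded tick: everything on one core.
void tickSingleThreaded(std::vector<Entity>& entities, float dt) {
    for (auto& e : entities) updateEntity(e, dt);
}

// Multi-core tick: carve the entity list into one contiguous slice per thread.
void tickMultiThreaded(std::vector<Entity>& entities, float dt) {
    const unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (entities.size() + threads - 1) / threads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(entities.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&entities, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i) updateEntity(entities[i], dt);
        });
    }
    for (auto& w : workers) w.join();
}
```

The catch, of course, is that this only stays simple when entities don’t read or write each other’s state, which is exactly the part that makes real simulations hard to thread.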

He finishes with some advice for other game developers wishing to strike out on their own.

Another key thing to emphasize is taxes. I mentioned it earlier, but when I see people celebrate big Kickstarter projects, I am almost certain they’re not aware that a third of that (or more) is going to go away in taxes. I think Kickstarter is a game changer for our industry (in a good way), but there is a definite downside to getting a big check in the mail. If you’re going to use Kickstarter, use it as it was intended: seed money. Don’t use it to fund a significant portion of your effort if you can avoid it.

Financial management is an absolutely critical part of running your up-and-coming software business, so make sure you set up cash flow projections for a year out. Create financial models. If you’re doing Kickstarter, take the first two days of revenue; you should be able to generate 3 to 4 times that total, with media coverage being the wild card.
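To put hypothetical numbers on that rule of thumb: a campaign that takes in $50,000 over its first two days would project to roughly $150,000 to $200,000 total, and then a third or more of that can disappear to taxes before a line of code is written.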

Dunno about the 64-bit stuff. I mean, sure, you can fit so much stuff into the 16 gigs of memory many people have now … but on the other hand, it’s the obsession with graphics and wasteful coding practices, combined with a lack of ambition (some of which might actually be a wise choice, who knows), that prevents “whole classes of games” from being made.
Ultima 7 showed how it’s possible to make a living, breathing world and stuff it into 640k of RAM (actually, I think they required 2 megs or something, but you get the point) more than TWENTY years ago. It’s telling that only a few games have even attempted to match it in scope, and fewer still enjoyed at least partial success. In two friggin’ decades.
The nineties saw me sitting in front of these games and drooling over future possibilities … most of which never manifested. A lot of games released back then I still hold dear, and many are denied their well-deserved retirement by imbecile offspring.
XCOM kinda proves my point - let’s not start a discussion in this thread about whether it’s great or not, but there was clearly a serious amount of feature backpedaling compared to its almost-20-year-old predecessor, which just makes me a little sad.
So, as long as the popular train of thought is that this kind of streamlining away of possibilities is actually a good thing, what do we need more RAM or processor power for?


rezaf

Larger-scale games (number of units, map size, whatever), better AI (if the devs desire to put the time and effort into it, which certainly isn’t a given!), etc. While it has nothing to do with 64-bit, Hearts of Iron 3 is an example of new technology helping. The game started to beat the hell out of my CPU once WW2 really got underway, which resulted in the game/UI becoming unresponsive. In one of the expansions they added multithreading support to take advantage of the multi-core CPUs people have today. The result was a massive increase in performance for me, especially since I was on a Q6600 quad-core, which had relatively slow clock speeds per core.
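For the curious, the general shape of that kind of change looks something like this (a C++ sketch of the idea, not Paradox’s actual code; CountryAI and think are invented names): each country’s AI plans against a read-only snapshot of the world, so the updates can be dispatched in parallel and joined before orders resolve.

```cpp
// Hypothetical sketch, not Paradox code: independent per-country AI updates
// dispatched in parallel so a slow-clocked quad core like a Q6600 can work
// on several countries at once.
#include <future>
#include <vector>

struct CountryAI {
    int id = 0;
    void think() { /* expensive planning work for one country */ }
};

void runAiPhase(std::vector<CountryAI>& countries) {
    std::vector<std::future<void>> pending;
    pending.reserve(countries.size());
    for (auto& c : countries) {
        // Assumed: each AI reads a snapshot of the world and only writes its
        // own plan, so the tasks don't need to lock against one another.
        pending.push_back(std::async(std::launch::async, [&c] { c.think(); }));
    }
    for (auto& f : pending) {
        f.get();  // barrier: every country finishes planning before orders resolve
    }
}
```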

I share your frustrations about the lack of progress on some fronts, but I don’t think the move to 64-bit will be meaningless, especially when cross-platform titles start taking advantage of it. We had Microsoft selling 32-bit Windows (as Brad mentions) as well as consoles requiring that cross-platform games run on the lowest common denominator. I think once everything moves to 64-bit we’ll see some evolutionary steps being taken. Will it come with increased waste? Probably, but if it means that game developers can spend some additional time coding game features as opposed to dealing with an archaic 2GB memory limit, I’m all for it.

You still have to support 32 bit? Really? In 2013?

You lose a large chunk of the market if you are 64-bit exclusive. 32-bit runs for everyone; not so with 64-bit. I wish MS had dropped the 32-bit platform with Vista, but they didn’t. The sooner 32-bit dies, the better. It can’t be that much longer until we start to see 128-bit machines. Hell, when we do, at this rate we STILL might be running 32-bit stuff as the rule rather than the exception.

Will consoles/gaming PCs ever have a need for 128-bit memory addressing? I suspect not, but then I’m not the most forward-thinking person on the planet.

A very famous person made an analogous statement about 640k of memory too. However, there is a lot more to 128-bit than how much RAM you can address. Speed is the main reason you want to increase how many bits a CPU runs at, not memory addressing: how fast data can move and how fast computations can be done.
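For what it’s worth, the address-space side of this is trivial to see from code. This little C++ snippet (my own illustration, nothing from the report) just prints the pointer width the build gives you, which is what actually caps a process at the 2 GB / 4 GB limits being discussed, independently of how wide the CPU’s data paths or a GPU’s memory bus happen to be:

```cpp
#include <cstdio>

int main() {
    // 32 on a 32-bit build, 64 on a 64-bit build.
    const unsigned bits = static_cast<unsigned>(sizeof(void*)) * 8;
    std::printf("pointer width: %u bits\n", bits);
    // 2^32 bytes is a 4 GB theoretical address space (and a 32-bit Windows
    // process only gets 2 GB of that by default); 2^64 is effectively
    // unlimited for a game. Data-path width (SIMD registers, GPU memory
    // buses) is a separate number and is already well past 64 bits.
    return 0;
}
```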

That’s why I very specifically mentioned memory addressing. I believe we already have computing devices that use very wide architectures for certain subsystems. Aren’t modern GPUs often touted to be 512-bit or something?

I was just randomly wondering if we’d ever have need for 128-bit memory-addressing. :)

Sounds good. They should have thrown out 32-bit with Vista. Running 32-bit in 2013 is probably the closest thing you get to being Amish and still owning a computer.

Jeebus, a bunch of hardware nerds have turned this into the most boring thread ever! :)

-Tom

That’s just 32-bit commentary in a 64-bit world, Tom. ;)

The small city size in SimCity…

If any of you have been disappointed with that game, then you should thank Brad for beating the hardware drum. SimCity’s small city size was the direct result of 32-bit programming, the 2 GB memory address limit and almost no multi-core use (the music runs on the second core, which means the second core is only under a “true” load of about 15%). Though the game checks for dual core and won’t boot without it, if you turned music off you could technically run the game on a single-core processor.

One of the programmers mentioned last week that they didn’t want to make city sizes bigger for free because it’d require a massive investment to rewrite most of the code. Why did they build it that way when it was already obvious where computer hardware would be when the game came out? Money to the exclusion of anything else… When Ocean Quigley said on the Twitter feed that they’d look at larger city sizes in the future? That was marketing to sell the game on an empty promise… a week before release.

Ironically, splitting the agent processing (buses, police cars, fire trucks, etc.) across two cores would have given them some independent routing. They could have halved the duplication of routing events we see (despite the 2.0 fixes, there are still buses and police cars that follow each other). Four cores? Quartered, with little more than a scripting change (a random roll). But leaving that 1% of the market unable to play would have been too much money for Maxis to leave untouched. Moving beyond 2 km increases the calculations drastically, something the extra cores in your computer would have been really good at handling. But instead… we’re stuck, and it’s not going to change for a long, long time… if ever.
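Purely to illustrate the kind of split I mean (a hypothetical C++ sketch, obviously not Maxis code; Agent and routeAgent are invented names), giving each agent category its own worker thread so bus, police and fire routing land on different cores would look roughly like this:

```cpp
// Hypothetical sketch: one worker thread per agent category, so routing for
// buses, police cars and fire trucks can proceed on separate cores at once.
#include <thread>
#include <vector>

enum class AgentType { Bus, Police, Fire };

struct Agent {
    AgentType type;
    int node = 0;  // current position on the road graph
};

void routeAgent(Agent&) { /* pathfinding for a single vehicle */ }

void routeAll(std::vector<Agent>& agents) {
    const AgentType kinds[] = {AgentType::Bus, AgentType::Police, AgentType::Fire};
    std::vector<std::thread> workers;
    for (AgentType t : kinds) {
        workers.emplace_back([&agents, t] {
            for (auto& a : agents) {
                if (a.type == t) routeAgent(a);  // each core handles one category
            }
        });
    }
    for (auto& w : workers) w.join();
}
```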

So Brad - thanks for this. I appreciate what you’re doing for new development and your current team. It’d be nice if the big publishers would listen… but I know they won’t, and the shovelware will continue until they pull a 3DO.

I think the new console gen having 8-gig machines will allow for the death of 32-bit.

Fingers crossed.

Memory isn’t the only issue. Unless you’re overclocking your CPU, clock speeds have stalled, which is why the PS4 & neXtBOX use 8 cores. It’s critical to use those cores, yet PC developers have been neglecting that badly.

Yeah, well, Microsoft needs to have a Windows upgrade path from 32-bit to 64-bit.

A game designer complaining about hardware limitations is like a painter complaining that his canvas is too small.

Memory is probably the main bottleneck right now. Ultimately Moore’s Law gets trumped by the laws of economics. If people can’t afford the new power, there’s no point in making consumer applications for that power level.

Some painters like to use huge canvases; in fact, some of the most famous paintings are really big.

You’re missing the point, which is that you CAN create a great painting on a small canvas.

Anyhow, jpinard, I think your depiction of SC5 above is all wrong. The reason for the limitations is a mixture of conscious decision-making and sheer laziness on Maxis’ part. Making an application use multiple cores has nothing to do with leaving single-core machines incompatible.
BUT it’s a very complicated and often extremely error-prone thing to do, which is why developers generally shy away from it. The only thing that’s fairly easy to split into a separate thread without much risk is the sound - which is why you can often read that it’s the only thing utilizing a different core.
And all that has little to do with 32 vs 64 bit.
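To illustrate what that easy case looks like (a minimal C++ sketch, assuming the usual producer/consumer setup; SoundEvent and playSound are invented names): the game thread just posts fire-and-forget events into a queue and a dedicated audio thread drains them, with no shared simulation state to fight over.

```cpp
// Minimal sketch of a dedicated audio thread: the game posts events, the
// worker drains them. Nothing here touches the simulation, which is why this
// is the low-risk thing to move off the main core.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct SoundEvent { std::string clip; };
void playSound(const SoundEvent&) { /* hand off to the audio API */ }

class AudioThread {
public:
    AudioThread() : worker_([this] { run(); }) {}
    ~AudioThread() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    void post(SoundEvent e) {  // called from the game thread, fire and forget
        { std::lock_guard<std::mutex> lock(m_); events_.push(std::move(e)); }
        cv_.notify_one();
    }
private:
    void run() {
        std::unique_lock<std::mutex> lock(m_);
        while (!done_ || !events_.empty()) {
            cv_.wait(lock, [this] { return done_ || !events_.empty(); });
            while (!events_.empty()) {
                SoundEvent e = std::move(events_.front());
                events_.pop();
                lock.unlock();
                playSound(e);  // mixing happens off the game thread
                lock.lock();
            }
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<SoundEvent> events_;
    bool done_ = false;
    std::thread worker_;
};
```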

Yeah, some more RAM would be nice for many a game, but it will not magically make developers WANT to make more complicated worlds.
Like I wrote above, there’s a tendency to do the opposite these days…

Even Dwarf Fortress, with its almost notorious calculating of EVERYTHING, runs fairly well on a single-core CPU with 32-bit memory.
If a dude and his brother in his basement can pull that off, Maxis is telling us they are unable to?
Sure thing…


rezaf