The Qt3 SSD Death Poll

Multiple image and model editors, presumably with lots of complex data loaded, are hardly typical for most users and might well be called unreasonable! Most people don’t work at Epic. :)

You’re unreasonable!! :)

The failure rate of SSDs ranges from 0.5% for Intel drives to 2% for OCZ. I don’t really remember the rest, but they were between those numbers.

Well, I’m an idiot, so that’s usually where my assumptions start. Heh.

Over what period of time or other unit? Where are those numbers from?

A few things.

  1. I like the word coined earlier in this thread, “anecdata”. The poll is worthless.
  2. Warren, swap files not going on the SSD was theorycrafting back when people were new to SSDs. These days the idea is that you put any frequently accessed or performance-critical data on them. So that means Windows, your swap file and your applications.
  3. Warren, SSDs make your entire computer experience faster and smoother. Getting an SSD made parts I didn’t even realise were slow faster, such as opening an Explorer window or copying 20 MB of data. I heartily recommend you see first-hand what an SSD will do for your system’s performance.
  4. What is with people saying the swap file won’t be used with 6 GB of RAM? Aunt Betty with two-tab IE won’t have problems with that, sure, but you’re on a forum of gamers and geeks. With 8 GB I have swap issues.

With 6 gigs I always have some free RAM even while running 1-2 instances of Visual Studio, plus various tools, a web browser and stuff… so I’m not sure what you are doing with your poor computer! Are you doing lots of graphics editing like Warren?

Doesn’t your Windows just swap stuff out preemptively, to use RAM as disk cache? Mine does.

No, Windows never does that, not unless it’s already low on RAM. Have you actually observed that? If so, how?

edit: As usual it’s terribly difficult to find anything on the Microsoft website, and I don’t have a current copy of Windows Internals on hand, but here are two old Larry Osterman posts about Windows swap file usage:

http://blogs.msdn.com/b/larryosterman/archive/2004/03/18/92010.aspx (overview)
http://blogs.msdn.com/b/larryosterman/archive/2004/05/05/126532.aspx (corrections)

Windows pre-emptively reserves swap file space for new processes but does not actually use that space unless you run out of memory. Also, when static data (e.g. code) is paged out it’s not actually written to the swap file but simply re-read from the executable – only changed data ever gets written to the swap file, if at all.
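To make the commit-versus-write distinction concrete, here’s a toy model (not Windows code, just an illustration of the bookkeeping the posts above describe): committing memory reserves page file space up front, but nothing is actually written unless a modified page has to be paged out.

```python
class ToyPageFile:
    """Toy model of Windows-style page file accounting:
    commit reserves space; bytes are written only when a
    dirty (modified) page is evicted under memory pressure."""

    def __init__(self):
        self.committed = 0      # space reserved for processes
        self.bytes_written = 0  # actual page file I/O

    def commit(self, nbytes):
        self.committed += nbytes  # reservation only, no I/O

    def evict(self, nbytes, dirty):
        # Clean pages (e.g. code) are simply dropped and later
        # re-read from the executable; only dirty data is written.
        if dirty:
            self.bytes_written += nbytes

pf = ToyPageFile()
pf.commit(512 * 1024 * 1024)             # process commits 512 MB
pf.evict(64 * 1024 * 1024, dirty=False)  # code paged out: no write
pf.evict(4 * 1024 * 1024, dirty=True)    # changed data: written
print(pf.committed, pf.bytes_written)    # 536870912 4194304
```

So a large “Page File” number in the old Task Manager can coexist with near-zero actual page file traffic.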

edit2: I can’t find any solid information on exactly when Windows might preemptively write changed data to the swap file, so maybe there is a possibility of large data write-outs ahead of running low on memory. Looks like I’ll have to order Windows Internals 5 and see if they have details on this process.

I usually run with the swap file completely turned off (and have been doing so since my RAM hit 2 GB). The only problems I’ve ever encountered are programs that use huge amounts of virtual memory (e.g. video editing software with memory-mapped files, etc.).

Fewer failures than I was expecting on the poll. Nice. With my sample size of 1, my observed 100% failure rate within a year isn’t too helpful.
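The “anecdata” point can be made precise: with one drive and one failure, the exact (Clopper-Pearson) 95% confidence interval on the true failure rate is enormous. A minimal sketch, using the special case where every observed drive failed (the bounds have a closed form there):

```python
def ci_all_failed(n, alpha=0.05):
    """Clopper-Pearson confidence interval for the failure rate
    when all n observed drives failed (x == n). In this special
    case the lower bound is (alpha/2)**(1/n) and the upper bound
    is 1; the general case needs the beta distribution."""
    lower = (alpha / 2) ** (1.0 / n)
    return lower, 1.0

# One drive, one failure: the true rate could be anywhere
# from about 2.5% to 100%.
low, high = ci_all_failed(1)
print(f"{low:.3f} .. {high:.3f}")  # 0.025 .. 1.000
```

Even ten failures out of ten drives would only bound the true rate above roughly 69%, which is why a forum poll can’t distinguish a 0.5% drive from a 2% one.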

Over time, I have observed a bunch of games require a swap file. The size is irrelevant, they just need one. Titan Quest is one such game. There are others.

Don’t you guys think that by now someone smart like Mark Russinovich or Steve Gibson would have made a monitoring program, specifically designed for the swap file? If for no other reason than to dispel some of the myths. How about a Swpmon program Mark?

I have a 16 MB swapfile on my SSD. The system has 12 GB of DDR3.

I believe those figures were from the manufacturers themselves, and probably for last year. I’ve also seen figures from what I believe was the French market that suggested something like 0.4% for Intel and 3% for OCZ. I’m personally rather confident in saying that there is a difference between the two manufacturers in terms of reliability, but the difference is obviously not huge. The differences between the few remaining hard drive manufacturers are smaller in comparison; while you see people say they’ve had bad luck with brand X, that’s not supported by the statistics I’ve seen, at least.

I think that was probably more what you were thinking.
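Assuming the quoted 0.5% and 2% figures are annual rates and stay constant (a big assumption; real drives don’t fail at a constant rate), the gap between the two vendors compounds only slowly over a machine’s life. A quick sketch:

```python
def survival(annual_fail_rate, years):
    """Probability a drive survives `years`, assuming a constant
    annual failure rate (an assumption for illustration, not a
    manufacturer claim)."""
    return (1 - annual_fail_rate) ** years

for name, rate in [("Intel (0.5%/yr)", 0.005), ("OCZ (2%/yr)", 0.02)]:
    print(name, f"{survival(rate, 3):.1%} survive 3 years")
```

Under those assumptions roughly 98.5% of the Intel drives versus 94.1% of the OCZ drives are still alive after three years, consistent with “a difference, but not a huge one”.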

There’s the risk of data loss with a certain series of Kingston SSDs.

Well, today I got Mark Russinovich’s latest Windows Internals book (5th ed.) and read up on the brief section about the paging file (= swap file). Here’s the deal as I understand it:

  1. Page file space is committed up front when a process requests memory, but this does not imply anything actually being written to the page file. In particular, the “Page File” number on the Task Manager’s Performance tab in pre-Windows 7 actually refers to committed memory, not physical page file usage. (p. 783; Windows 7 corrected this entry to read “Commit”.)

  2. As of Windows Vista & Server 2008 (the versions Windows Internals 5th ed. is based on) there is no way to determine how much of a process’s committed memory is resident and how much is paged out! That’s a bummer. (p. 782)

  3. The memory manager wakes up periodically and writes out modified pages to the page file “when available memory runs low” (p. 906). So apparently some secret internal heuristic value determines if and when Windows starts to preemptively write out modified pages. Adding more RAM would presumably push back that point.

  4. Process Monitor should display page file accesses if you enable Advanced Mode (otherwise they’re automatically hidden, p. 908). You should also redirect PM’s log because it uses the page file by default. When I do that and monitor the system for a while, I don’t see a single page file access on my system. The PM log writes do show up when I let PM use the page file for its log, and page file accesses (for the directory entry) also appear when I navigate to its directory in Speed Commander, so I assume the monitoring is working.

So if anyone suspects frequent page file activity, I suggest they use Process Monitor as described and filter for C:\pagefile.sys accesses (or wherever you put your page file). Annoyingly, that seems the best (and only) available strategy.
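Process Monitor can also export its trace as CSV (File > Save), which lets you do the pagefile filtering offline. A sketch, assuming the export has a `Path` column (that matches the Procmon exports I’ve seen, but check your version’s column headers):

```python
import csv
import io

def pagefile_accesses(csv_text, pagefile=r"C:\pagefile.sys"):
    """Count rows in a Process Monitor CSV export whose Path
    column points at the page file. The 'Path' column name is
    an assumption about the export format."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for row in reader
               if row.get("Path", "").lower() == pagefile.lower())

# Tiny made-up trace for illustration:
trace = """Process Name,Operation,Path
System,WriteFile,C:\\pagefile.sys
explorer.exe,ReadFile,C:\\Windows\\explorer.exe
System,WriteFile,C:\\pagefile.sys
"""
print(pagefile_accesses(trace))  # 2
```

The same filtering can of course be done live with a Procmon Path filter; the offline route is just handy for long captures.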

What thread are people referring to? Did I miss some drama?

I’d have to change my vote if I could. My SSD has just failed on me. It was a Kingston SSDNow V 64GB (one of the early mainstream SSDs).

I’ve checked the purchase date, and it failed within a year of buying it, so my vote here was wrong. It feels like I’ve had this PC for years, given all the better gear that’s now available.

Hopefully Kingston will replace this one, although I’d be a little wary of using it as the primary OS drive again, despite how nice it is to use (and how awful the HDD I have installed now is). I’ve never had an OS drive fail on me before, and this has taken up an awful lot of my time.

My Crucial C300 is still going strong, and the latest firmware update finally fixed those odd access pauses for good.