Is it possible to give a JVM (Java) too much memory?

Before I left work for my T-day vacation, there was a build problem where some build was taking like 20 hours (when it normally took about 3). Someone suggested that the JVM doing the build had been given too much memory and was getting stuck in some kind of garbage collection loop.

Anyway, at home I have been playing Minecraft, which I've given 9 GB of RAM. After a while it gets into these stutter loops where I'm sure the GC is running to clean up a bunch of memory.

This seems to support the idea that you can give a JVM too much RAM. If this is true, can someone explain it or point me to a good article on the subject? How do you know what is enough RAM, but not too much?

Are you giving the JVM more memory than you physically have? I assume you'd go into disk I/O as you hammer your swap space, and that would make your process incredibly slow. There is a flag (`-XX:+UseGCOverheadLimit`, on by default in HotSpot) that makes the JVM throw an OutOfMemoryError instead of thrashing in GC when it's about to run out, but you don't OOM from giving the JVM too much memory for your task…

You could start here and check how much GC you're doing.

Then decide if you need to tune it
https://confluence.atlassian.com/enterprise/garbage-collection-gc-tuning-guide-461504616.html
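If you'd rather check from inside the process than from GC logs, the standard `java.lang.management` API exposes per-collector counts and accumulated time. A minimal sketch (the class name is my own invention):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    // Prints each collector's stats and returns the total time spent in GC so far.
    public static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // milliseconds, or -1 if unavailable
            if (t > 0) total += t;
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), Math.max(t, 0));
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("Total GC time so far: " + totalGcMillis() + " ms");
    }
}
```

If that total is a large fraction of the process's wall-clock time, the GC really is the problem; if it's tiny, look elsewhere.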

No. I have much more physical memory than that. This is really about the JVM and how it behaves with a lot of RAM, rather than troubleshooting Minecraft.

It just doesn’t make sense to me that a program would perform badly with MORE memory available than it needs; however, that doesn’t mean it isn’t true.

I haven’t seen that problem in my ~17 years of Java coding experience. Doesn’t mean it isn’t true, but I’d rather be profiling the system than randomly guessing which thing to optimize.

One more link
https://publib.boulder.ibm.com/httpserv/cookbook/Java.html
“If the occupancy of the Java heap is too high, garbage collection occurs frequently. If the occupancy is low, garbage collection is infrequent but lasts longer… Try to keep the memory occupancy of the Java heap between 40% and 70% of the Java heap size… The highest point of occupancy of the Java heap is preferably not above 70% of the maximum heap size, and the average occupancy is between 40% and 70% occupancy. If the occupancy goes over 70%, resize the Java heap.”
“A correctly sized Java heap should always have a memory occupancy of between 40% and 70% of the maximum Java heap size. To ensure that the occupancy does not exceed 70%, set the maximum Java heap size to at least 43% larger than the Maximum occupancy value provided by GCMV. This setting then makes the Maximum value 70% of the Java heap and the average to be above 40% of the Java heap size.”
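The 43% figure in that quote is just arithmetic: if the peak should land at 70% of the heap, the heap must be about 1.43× the measured peak occupancy. A throwaway sketch, with the class name and the 2 GiB peak invented for illustration:

```java
// Sketch of the sizing rule quoted above: pick -Xmx about 43% larger than the
// measured peak heap occupancy, so the peak lands at roughly 70% of the heap.
public class HeapSizing {
    public static long recommendedMaxHeap(long peakOccupancyBytes) {
        // exact integer math for peak * 1.43
        return peakOccupancyBytes * 143 / 100;
    }

    public static void main(String[] args) {
        long peak = 2L << 30; // pretend GCMV reported a 2 GiB peak occupancy
        long max = recommendedMaxHeap(peak);
        System.out.printf("peak %d MiB -> suggested -Xmx of about %d MiB (peak is %.0f%% of heap)%n",
                peak >> 20, max >> 20, 100.0 * peak / max);
    }
}
```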

Generally, the garbage collector will only thrash when you allocate too LITTLE memory, since garbage collection gets more aggressive as the used memory approaches the cap.

While it’s possible that there is some issue where you allocate too much, I’ve never seen such a thing, and I run JVMs allocated upwards of 20 gigs.

If the garbage collector is thrashing, you should see a CPU spike, because the garbage collector will use all the processing time trying to free memory. Is that what you are seeing?

How much CPU usage are you seeing, what did you set the Max memory to, and what are you seeing it actually allocating?
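To answer the “what is it actually allocating” part programmatically, `Runtime` exposes the cap, the committed size, and the current usage. A quick sketch (the class name is mine):

```java
public class HeapSnapshot {
    // Prints the heap cap vs. what the JVM has actually committed; returns used bytes.
    public static long report() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();         // the -Xmx cap (or the platform default)
        long committed = rt.totalMemory(); // memory the JVM has actually reserved so far
        long used = committed - rt.freeMemory();
        System.out.printf("max=%d MiB, committed=%d MiB, used=%d MiB%n",
                max >> 20, committed >> 20, used >> 20);
        return used;
    }

    public static void main(String[] args) {
        report();
    }
}
```

A big gap between `max` and `committed` just means the JVM hasn't needed the extra headroom yet; it isn't evidence of a problem on its own.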

I’ve never heard of a JVM being allocated too much memory and that causing performance problems. Occam’s razor says the real explanation for why the build takes so long is something simpler… like a really slow test suite, or… let’s see, hardware failures maybe? If there are RAM or hard drive issues, that could end up generating multiple retries. Or maybe some repository or some such is full or almost full, making writes take a long time? Or some sort of failover issue with a network or database controller? The build logs have gotta shed some light on the matter.

I wonder how the JVM deals with memory fragmentation? If it has a naïve implementation, any application with a large number of mixed small/large allocations/deallocations with different lifetimes could potentially cause problems after some time running, especially if the “compact” pass of the GC is naïve enough to examine ALL available memory when it runs.

That said, I doubt the official JVM would ship with such a problem, so it’s probably nothing of the sort.

I doubted that too. I thought the idea was ridiculous. I first heard it in some guy’s explanation of why the build server was so slow. I thought it was bullshit, but hey, I hardly know everything; I could be wrong. Then today, while researching how to improve Minecraft’s performance, I found a post from someone saying the exact same thing about the JVM and the garbage-collection problems you supposedly get if you give it too much memory. In the space of a week I’ve heard this from two different sources, so I came here to find out if there is something to it after all.

While I do know it’s possible to write something that will hose itself with too many resources, I seriously doubt that Oracle would release the JVM in such a state. Then again, the Hibernate framework does have some WTF?! bugs, so I guess anything is possible.

I’m not as much of an old hat as some people here, but I do a lot of Java development and have never heard of anything like that.

In my experience, garbage-collector performance problems in applications with really big heaps tend to manifest as longer, infrequent pauses. (The rule of thumb is that doubling the heap size halves the frequency of garbage collections but makes each one take twice as long.) I had that in an earlier, more memory-hungry version of my hnefatafl project; I worked around it by using the concurrent mark-and-sweep garbage collector, which does most of its work without pausing application threads.
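For what it’s worth, on an older HotSpot JVM switching to CMS was a single flag; the invocation looked something like this (the jar name is made up, and note that CMS was deprecated in Java 9 and removed in Java 14):

```shell
# Fixed 4 GiB heap, CMS collector, and basic GC logging (pre-Java-9 HotSpot flags)
java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -verbose:gc -jar myapp.jar
```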