Why is this (i.e. terrain turning invisible to prevent unsightly camera blockage) not a thing in all third-person action games?

“This” being that any terrain/bushes/walls/whatever that happen to be between the camera and the player character become temporarily transparent so they don’t keep you from seeing your character.

The game Oni did this way back in 2001 or so and it struck me as brilliant then. It solves so many problems. How many reviews have we heard/read where the reviewer complains about the camera? Is it computationally expensive to do? I know we have some devs and other people connected to the games industry here, so why is this “not a [more common, if not universal] thing”? Do people think it breaks immersion or something?

I feel like most 3rd person action games actually do do this? But then perhaps I just don’t pay enough attention…

Well, maybe what some do is temporarily erase any evidence of something being in the way at all, which isn’t how Oni did it; in that game you saw sort of a ghostly see-through image of the wall or whatever.

I guess that’s fine if they do that, because at least you can always see your character.

ArcRunner does this.

The “ghostly image” version or the other one?

Goes into something of a first-person mode.

I feel lots of games do this.

Yours and other comments say “lots of/most games do this” but I’d like to know which ones. I’ve played tons of 3rd person action games (most of the Assassin’s Creed series, the entire Batman Arkham series, Shadow of Mordor/War, GTA series, Ghost of Tsushima, Spider-Man, etc.) and have never seen the ghostly interposed wall/bush/tree/whatever, apart from in Oni. I like that because it lets the player know there’s something there.

Maybe in most games the object that would otherwise block the view is momentarily made entirely invisible, granted, and those that are doing that are the ones with a “good camera.” Any devs care to confirm that that’s the case? I do know that there’s occasionally a game where I’m frustrated by something being in the way of the action.

I’m more of a first-person… uh, person, but being the camera yourself avoids most of these problems :)

Given how punishing they are, I’m kind of amazed more folk don’t complain about how bad the camera is in the Souls games, not helped by a wildly unpredictable targeting system. I remember it being a problem 14 years ago on Demon’s Souls on the PS3 and it’s still an issue in Elden Ring. I thought (naively) FromSoft would have fixed all that by now.

I imagine making a game where camera clipping looks natural without revealing the ugly realities behind environmental geometry is where the real headache is in all this, but I’ve no idea. All I know is that with third-person action games, controlling the camera is often one more thing to worry about. I can’t think of many that do what Oni did, although I feel like I have seen it before, probably in bird’s-eye-view games where buildings and trees and stuff get in the way. You would think a quality-of-life feature like that would have been rolled out to every type of game over the years.

I agree with you 100%. If I can add another thing, the OPTION to go from 1st person to 3rd person camera helps a ton.

To your point, Grounded does something similar to Oni; here’s an example of a blade of grass disappearing because the over-the-shoulder camera is looking through it (one blade nearly transparent and another starting to become transparent):

But they also mix that with MOVING the camera closer in some cases, like this tight space between an oak root and a soda can:

Or you can go all in with first person, which in my case is a little too restrictive:

And speaking of camera blockage, it’s 2023, can we not get a field-of-view (FOV) increase option in every game these days?

Believe it or not, changes in rendering techniques over the years have made this a little more difficult. Deferred renderers (including Unreal 4/5) handle transparency very differently: transparent objects are forward-rendered in a separate pass, which means you have to swap the materials out for transparent variants. That means you’re probably not getting the same lights or lighting models applied; for example, if it’s a character or a plant, you probably have to disable subsurface scattering. So as soon as someone decreases alpha to 0.99, there’s an obvious pop in the appearance of the character.
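A toy sketch of that pop (the shading functions and numbers here are made up for illustration, not any engine’s actual code): because the opaque deferred path and the forward transparent path use different lighting models, the shaded color jumps discontinuously the instant alpha drops below 1.0.

```python
# Toy illustration of the "alpha 0.99 pop" in a deferred renderer:
# opaque and transparent objects take different shading paths with
# different lighting models, so the color is discontinuous at alpha = 1.0.
# All functions and constants are invented for this sketch.

def shade_opaque(base, light):
    # Deferred (G-buffer) path: full lighting model, including a fake
    # subsurface-scattering term that only this path supports.
    diffuse = base * light
    subsurface = 0.15 * base
    return diffuse + subsurface

def shade_transparent(base, light, alpha):
    # Forward path for transparents: simpler model, no subsurface term.
    return base * light * alpha

def shade(base, light, alpha):
    # The renderer picks a path based on whether the material is opaque.
    if alpha >= 1.0:
        return shade_opaque(base, light)
    return shade_transparent(base, light, alpha)

opaque = shade(0.8, 0.9, 1.0)
nearly_opaque = shade(0.8, 0.9, 0.99)
print(opaque, nearly_opaque)  # the gap between these is the visible pop
```

Nudging alpha from 1.0 to 0.99 should be nearly invisible, but here the result jumps because the subsurface term vanishes along with the path switch.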

Not to mention that if the object/character is not trivial, the sorting of the transparent layers is going to be all screwed up unless you have some form of order-independent transparency.

These days, workarounds include drawing to an offscreen buffer, hiding the real object, and then compositing the offscreen object back into the screen with transparency. Though you’ve flattened depth at that point.
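A rough sketch of that workaround, with buffers reduced to flat lists of single-channel pixels: the occluder is rendered alone to its own buffer, hidden in the main pass, then composited back over the frame at reduced opacity. Because the composite is a flat 2D “over” operation, any depth interleaving between the occluder and the rest of the scene is lost.

```python
# Sketch of the offscreen-buffer workaround. "Buffers" here are just
# flat lists of single-channel pixel values, invented for illustration.

def composite(frame, layer, alpha):
    # Per-pixel "over" blend of the flattened offscreen layer onto the frame.
    # Note there is no per-pixel depth test: the layer is either uniformly
    # in front or not drawn, which is the "flattened depth" problem.
    return [l * alpha + f * (1.0 - alpha) for f, l in zip(frame, layer)]

main_pass = [0.1, 0.4, 0.9]    # the frame rendered with the wall hidden
wall_buffer = [0.6, 0.6, 0.6]  # the wall rendered alone, offscreen
result = composite(main_pass, wall_buffer, 0.3)  # ghost the wall at 30%
print(result)
```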

The most recent examples I’ve seen use a noise pattern to just discard various pixels depending on the desired transparency. It doesn’t look great, but it’s cheaper and easier than the other options, and various forms of temporal anti-aliasing help the look. Guardians of the Galaxy and Mario Odyssey are two examples of this that come to mind.
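The noise-discard approach is often called screen-door or dithered transparency. A minimal sketch, assuming the common variant that uses a repeating 4x4 Bayer matrix: each pixel compares the desired opacity against its tile’s threshold and is either fully drawn or fully discarded, so no blending or sorting is needed, and TAA then smears the speckle into something resembling real transparency.

```python
# Screen-door (dithered) transparency sketch: per-pixel keep/discard
# against a tiled 4x4 Bayer matrix. No blending, no sort order needed.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def keep_pixel(x, y, opacity):
    # Thresholds are (n + 0.5) / 16 for n in 0..15, tiled across the screen.
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return opacity > threshold  # True = draw the pixel, False = discard it

# At 50% opacity, exactly half the pixels in each 4x4 tile survive,
# evenly scattered thanks to the Bayer ordering.
kept = sum(keep_pixel(x, y, 0.5) for y in range(4) for x in range(4))
print(kept)  # 8 of 16 pixels kept
```

Since every surviving pixel is fully opaque, it writes depth and sorts correctly against everything else in the scene, which is what makes this so much cheaper than true alpha blending.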

So there are solutions, but they tend to be game-specific and bug-prone. It’s not as easy as just ramping the vertex alpha, as might have been done in 2001.

SO much this. With advancing age (62 now) I’m much more susceptible to queasiness with a narrow FOV than I was even when I first played Sleeping Dogs at 51 or whenever.

And thanks for giving an example where they’re still doing the transparency thing. I also agree that the option to go 1st-person is handy, as in RDR2 for instance.

Thanks for the explanation, although I only vaguely understand that it’s a lot more complicated to do than it used to be. :-)