Depicting the 3D Environment

A discussion about skyboxes on the Discord made me realize that there’s no dedicated thread to discuss the depiction of the 3D environment. So I decided to start this thread to write down my thoughts.

The first time the player sees the 3D environment will always be on the smallest scale which we depict in 3D, and it will always be underwater. So it’s probably best to think about how we want to depict these circumstances first.
Before anything else, we have to acknowledge that we aren’t after depicting this scale “as it actually looks”. How something looks depends on who looks at it and which sensory organs and technical devices they use to view it. Thus, we have to evoke the visuals humans associate with things at this scale, and humans generally view things at this scale through a microscope.

BLUR, OPACITY AND BRIGHTNESS

The most important visual cue that things are really small is a shallow depth of field. Things that are out of focus (in our context, far away from the player’s organism) need to be blurred.
Other than blur, I can really only think of two other effects which obscure faraway objects underwater: the opacity of the water and darkness.
Opacity, like blur, depends on scale, but the correlation is reversed: opaque water is created by an abundance of tiny suspended particles. At a microscopic scale those particles are themselves discrete, visible objects, so the water between them can never really be opaque.
Let’s think about the third aspect: light and darkness. The most important light effects we associate with underwater environments are the godray-like light columns produced by light refracting through a wavy surface, as well as the caustic patterns these produce when hitting an object. In my understanding, both of these effects only really happen in a pretty limited zone just below the surface, and only at a macroscopic scale, because the waves which produce them are macroscopic.
On a microscopic scale an object would either be directly hit by light or not. So this effect might be portrayed by periodically flooding the whole loaded scene with light. (And by periodically I mean at the speed at which a wave assembles and disappears.)
For simplicity’s sake, we could treat all underwater moods on all scales as replicable with these three sliders: blur, opacity and darkness.
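
To make that concrete, here is a rough sketch of what such a three-slider model could look like. All names and curves here are hypothetical placeholders to illustrate the idea, not a proposed implementation:

```python
from dataclasses import dataclass

@dataclass
class UnderwaterMood:
    blur: float      # 0 = perfectly sharp, 1 = maximum microscope-style blur
    opacity: float   # 0 = clear water, 1 = fully opaque haze
    darkness: float  # 0 = sunlit shallows, 1 = lightless depths

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def mood_for(scale: float, depth: float) -> UnderwaterMood:
    """scale: 0 = microscopic, 1 = macroscopic; depth: 0 = surface, 1 = abyss.

    Blur falls off as the scale grows, opacity only appears at macro scales
    (haze needs particles far smaller than the viewer), and darkness simply
    tracks depth. The exact curves would need tuning.
    """
    return UnderwaterMood(
        blur=lerp(1.0, 0.0, scale),
        opacity=lerp(0.0, 1.0, scale) * depth,
        darkness=depth,
    )
```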

The transition from maximum blur to no blur should ideally be gradual. Instead of creating multiple environments at different scales with differing amounts of blur, we should create one non-blurred environment and then blur it according to the scale we are trying to portray.
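
One cheap way to realize this, assuming the background stays a panorama image as it is now, would be to pre-bake one blurred copy per scale in an offline step, for example with Pillow. The radii and file names below are made up for illustration:

```python
from PIL import Image, ImageFilter

def bake_blurred_panoramas(source_path: str, blur_radii: dict[str, float]) -> None:
    """Pre-bake one blurred copy of the skybox panorama per gameplay scale,
    so no depth-of-field shader has to run on the background at runtime."""
    panorama = Image.open(source_path)
    for scale_name, radius in blur_radii.items():
        blurred = panorama.filter(ImageFilter.GaussianBlur(radius=radius))
        blurred.save(f"panorama_{scale_name}.png")

# The microscopic stage would get the strongest blur.
bake_blurred_panoramas("panorama.png", {"microbe": 24.0, "multicell": 8.0, "aware": 0.0})
```
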
An open ocean environment consists of very few “objects”: a surface above, an ocean floor below, a sky (obscured by the water above the camera), a sun and sometimes moons. These things are so big that it doesn’t really matter how big the player’s organism (and therefore the camera) is. The surface above you always extends seemingly infinitely in all directions, as does the ocean floor.
While the depiction of the open ocean at different scales should be comparatively straightforward, the depiction of surfaces like rock will probably be much more difficult. Most games have to deal with LODs, levels of detail which depend on the distance of an object; we may have to deal with levels of scale on top of these.


The effect you’re describing with the blur is called depth of field (DoF), and it’s pretty expensive, with no good way to optimize it. That’s why, if you play Skyrim, it’s a bit of a flex to run the game with DoF enabled in the graphics settings / through ENB.

I am extremely against making this effect a core part of the visual experience and art style. It should just be a bit of optional extra stuff.

I’m 99% sure the idea is to make the background visual images already blurred so that there is no runtime cost, similar to how they are done currently.

I think a depth of field effect will be an important part of conveying the scale the player is playing at.

They mention things like rocks and stuff which aren’t always going to be part of the skybox. When they are far away, how do we handle that? I think we should not incorporate depth of field at all at first (for those things).

I mean, the things that the player will actually be able to interact with will be at the same scale as the player, right? So those objects won’t need expensive post-processing effects, right? And depth of field is in basically any AAA game in the year 2022, so once again only the people who want to play Thrive on potatoes that can’t run any modern high-fidelity game would be affected (I’m not saying we can ignore them entirely, but we shouldn’t give up on all good ideas to improve the game’s visuals for current hardware).

What happens if you are right up against the floor surface, though (which I think is what OP meant by rock surfaces)? You would need DoF functions to depict areas both close to and far from the camera.

The vast majority of our players seem to play on terrible computers, and GPUs cost an internal organ nowadays, so I’m just trying to look out for people.

I’m not going to worry about that right now as we can just make a surface ocean environment first and start with that.

Even now the GLES2 rendering mode has graphical glitches, and there’s been a long-standing issue about auto-disabling the chromatic aberration shader, which causes errors in that mode. So it doesn’t seem like anyone is really prioritizing making sure the game renders correctly on old hardware.

OK, good, so you agree we don’t need DoF (at first); there is nothing big enough to use it on.

We may not always bother fixing what is there, but that isn’t an excuse to give people more problems.

As I said, someone can just blur the background panorama image used initially.

Obviously if we introduce some really expensive graphical effects, there needs to be an option to turn them off without the game becoming unplayable. I’m 99% sure that adding a DoF effect will be fine if it can be disabled; then the people who need to disable it can play the game with it looking worse.

I’m open to suggestions of how to make the scale really be felt visually without such an effect, but I think it’s such a common effect that we’d be pretty crazy to not have it.

Yes, so we don’t need depth of field. DoF is a shader effect you compute based on the camera’s depth texture, not just the general concept of a Gaussian blur.

I’m not against it being optional, but I don’t think it can be integral to the game’s look.

Please suggest an alternative, then, that we can use when the player is playing as an ant, wandering around blades of grass and seeing massive creatures in the distance.

Atmospheric haze and the distortion of speed (that effect where big things look like they move slowly).

It’s cheaper to apply a tint by depth than to convolve an image with a special blurring function.

Why do I get the impression that that also is a shader effect…

It is, but it’s simple linear algebra vs. convolution (yes, that convolution).

To explain this a bit more: you’ve probably seen old as belgium games that color stuff far away (Silent Hill on the original PlayStation), but DoF is less common unless you’re talking about something like Crysis.
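
To put some rough numbers behind that comparison: depth tinting is one lerp per pixel, while a blur touches a whole neighborhood per pixel. A sketch of both in plain numpy (CPU code standing in for shader code, with a box filter standing in for a proper Gaussian):

```python
import numpy as np

def depth_tint(color: np.ndarray, depth: np.ndarray,
               fog_color: np.ndarray, max_depth: float) -> np.ndarray:
    """Classic distance fog: one multiply-add per pixel, O(pixels)."""
    t = np.clip(depth / max_depth, 0.0, 1.0)[..., None]
    return color * (1.0 - t) + fog_color * t

def box_blur(color: np.ndarray, k: int) -> np.ndarray:
    """Naive blur by convolution: O(pixels * (2k+1)^2). Real DoF is worse
    still, because the blur radius varies per pixel with depth."""
    h, w, _ = color.shape
    padded = np.pad(color, ((k, k), (k, k), (0, 0)), mode="edge")
    out = np.zeros_like(color)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2
```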

Literally RuneScape has depth of field these days (a game many might remember as one of the first big 3D games you could play in your browser). I don’t buy the argument that depth of field isn’t a common thing that any self-respecting 3D game releasing after 2022 should have (unless it’s going for an art style that would conflict with it).

They have it, but the game doesn’t look stupid without it either; it’s not a core part of the visuals.

Most games don’t really let their post-processing carry their art. It’s more about the art design as a whole. Like Dark Souls isn’t technically impressive but it has some serious style that carries it. Same with Nintendo games.

Oh boy, a lot of posting has been going on in here during the last three hours. I’ll try to clear up what I meant and carry the conversation further.

What I’m essentially talking about is creating a system which parametrically creates underwater environments based on things like depth, sun position, distance to the ocean floor and so on. At a microscopic scale, things like the distance to the surface and the ocean floor would be so huge (relative to the player) that they wouldn’t have to be simulated constantly and dynamically during gameplay. It can be assumed that no matter how much the player moves, they will never move enough to substantially change this setup. Rather, this scene could be generated once upon entering gameplay; things like blur wouldn’t have to be computed constantly.
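
A minimal sketch of what “generated once upon entering gameplay” could mean in practice. Every name and constant here is made up; the point is only that nothing in it runs per frame:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class SceneConditions:
    """Inputs sampled once when gameplay starts."""
    depth: float              # metres below the surface
    seafloor_distance: float  # metres down to the ocean floor
    sun_elevation: float      # degrees above the horizon

@dataclass(frozen=True)
class BakedScene:
    """Everything the renderer needs, computed once and then left alone."""
    skybox: str
    blur_radius: float
    light_color: tuple

def bake_scene(c: SceneConditions) -> BakedScene:
    # At microbe scale the player can never swim far enough to change
    # these inputs meaningfully, so a one-off bake is safe.
    sun = max(0.0, math.sin(math.radians(c.sun_elevation)))
    brightness = max(0.05, sun * (1.0 - c.depth / 200.0))  # toy falloff
    # Red is absorbed fastest under water, blue least.
    light = (brightness * 0.8, brightness * 0.9, brightness)
    # A floor far below the player reads as open ocean: blur everything.
    blur = 24.0 if c.seafloor_distance > 1.0 else 8.0
    return BakedScene("panorama_microbe.png", blur, light)
```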

I agree that for most games, style is infinitely more important than technical fidelity. But while this applies to Thrive in some regards, in other respects Thrive operates unlike most games. In Dark Souls, the soul of the game lies in the deliberateness and particularity of how places and things are carefully placed in relation to one another; every playthrough looks the same. In Thrive we have to generate environments which look different from playthrough to playthrough.

I will present some schemes of how exactly I envision this underwater stage setup once I’ve developed it further. But for now, here are some random, unordered thoughts:

  1. The plane below which simulates the ocean floor should be very big and probably somewhat roundish (but with few polygons; it doesn’t need to be smooth, as it will be blurred anyway).
  2. To determine the color of the light which hits the objects in the player’s immediate environment, the color of the star should be run through the atmosphere and water simulation once before generating the scene (see the sketch after this list). Once we have a stationary light setup, we can start thinking about dynamically changing the time of day and the light with it, but this probably shouldn’t be a priority.
  3. When we reach a scale at which it would become unrealistic for the environment to never change as the player swims up and down, we could also generate a few additional environments showing the world from a higher or lower point in space. These could then be dynamically faded in and out as the player moves up and down.
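
Regarding point 2, here is a minimal sketch of “running the star’s color through the water once”, using Beer-Lambert attenuation. The per-channel absorption coefficients are rough illustrative values, not measured data:

```python
import math

# Per-channel absorption coefficients in 1/m; red is absorbed fastest,
# which is why deep water looks blue-green. Illustrative values only.
ABSORPTION = (0.45, 0.07, 0.02)  # (R, G, B)

def light_color_at_depth(star_color, depth_m):
    """Beer-Lambert: I(d) = I0 * exp(-k * d), applied per channel, once
    when the scene is generated rather than every frame."""
    return tuple(c * math.exp(-k * depth_m) for c, k in zip(star_color, ABSORPTION))

# Example: a white star's light 20 m down comes out as a dim blue-green.
print(light_color_at_depth((1.0, 1.0, 1.0), 20.0))
```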

I know that a system which generates parametric environments is hard to program. For that reason, and for the sake of performance, I’m asking myself what the simplest possible parametric depiction of an ocean environment would be. This whole thread is basically a thought experiment about that.


Generating blurred skybox elements, essentially LODs but for blur, all at once is certainly doable. It will increase load times; by how much will depend heavily on hardware.

It doesn’t solve the problem for when you’re completely up against a surface, though, and you’ve made DoF effectively required if we use these blurred skyboxes, so we’d have to raise our minimum requirements by quite a bit (the user will AT LEAST need a discrete and sufficiently strong GPU).

I’m still against screwing over people too poor to buy a GPU, when our game is free, by giving them a messed-up / ugly-looking game.

So, I looked more into this:

Unreal Engine 4/5 solved this problem by implementing a really fast DoF algorithm, allowing high-end tablets and the like to use DoF.

I have no idea if this exact effect could be implemented in Godot, since it’s an engine feature in Unreal, which suggests it is fairly deeply integrated.

Godot has near/far blur for DoF in the Environment resource (where you put skyboxes). My computer is absurdly strong, so I’m useless for determining the effectiveness of their algorithm.
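
For reference, what those near/far parameters control, conceptually, is a blur strength that is zero inside the in-focus range and ramps up across a transition band on either side, roughly like this (a plain-Python model of the idea, not Godot’s actual math):

```python
def dof_blur_amount(depth: float,
                    near_distance: float, near_transition: float,
                    far_distance: float, far_transition: float,
                    max_amount: float) -> float:
    """0 inside [near_distance, far_distance], ramping to max_amount
    across each transition band."""
    if depth < near_distance:
        t = (near_distance - depth) / max(near_transition, 1e-6)
    elif depth > far_distance:
        t = (depth - far_distance) / max(far_transition, 1e-6)
    else:
        return 0.0
    return max_amount * min(t, 1.0)
```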