Light and Vision

Wherein I will start a discussion on sight, light, rendering, and so on. This post will be updated with more substantial details as I get the time to gather stuff from other places. Expect math and some pseudocode.

Interesting things to consider include: pigmentation, photoreceptors, Rayleigh scattering, sunlight, chlorophyll and other photosynthetic pigments (and how they'll be colored), and radiative heating (i.e. warm rocks in sunlight).

The most important question to me right now is: how should we model spectra? There's a balance to strike between speed of computation and usefulness of the data; where is it?
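
As a strawman, here is a minimal Python sketch of the simplest option, a fixed-length sampled spectrum, together with the two operations we would actually need, filtering and sensing. Every name and the 30-sample resolution are placeholders, not decisions; the sample count is exactly the speed-versus-usefulness knob in question.

```python
# Strawman only: a spectrum as N evenly spaced samples across the visible band.
# N is the knob that trades usefulness of the data against speed of computation.
N_SAMPLES = 30
WAVELENGTH_MIN_NM = 380.0
WAVELENGTH_MAX_NM = 750.0

def filter_spectrum(light, filt):
    """Light passing through a filter (atmosphere, pigment, water): pointwise product."""
    return [l * f for l, f in zip(light, filt)]

def sense(light, receptor_response):
    """How strongly a photoreceptor responds to incoming light: an inner product."""
    return sum(l * r for l, r in zip(light, receptor_response))
```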

Okay, so if you were following this discussion back on the old forum, you know that one of the biggest issues we faced was first- vs. third-person vision. The goal of Thrive is to have a seamless transition between stages; however, forcing the player to view fuzzy images in black and white in the multicellular stage, after playing in third person for the whole microbe stage, would probably be frustrating and would create a rigid and noticeable transition. I think I know how we could solve this problem. My suggestion is as follows:

Cells have a very good sense of their surroundings given their size. By sensing concentration gradients and leaking compounds through protein receptors, cells can form a pretty decent image of the environment. As such, I suggest we leave the microbe stage as is, allowing the player to see everything going on around them in detail. Now, the problem with concentration gradients is that they decrease rapidly in strength the farther you get from the source. So while you know that there is a glucose compound cloud 4 hexes away from you (I'm using hexes as a measurement since most people know how a hex compares to the size of a microbe) or that species A is 20 hexes away, 200 hexes away the world gets very "dark" and hard to see, and things happening 600 hexes away are completely invisible.

Because cells are fairly small (around 5 hexes in length at the moment, and around 20-50 in the future), the fact that you cannot see 600 hexes away from you doesn't matter much. Why would you want to look that far out? But as you become multicellular and gain more and more cells as part of your organism, this vision limit creeps up on you. If you play as a colony 20 cells wide, each of which is 50 hexes wide, you suddenly take up 1000 hexes of space (on one axis), and you can only see a region around you roughly half as wide as your organism (the 600-hex vision limit starts from the border of your organism, not the center). And if you diversify even more and take up 6000 hexes of space, you can only see 10% of your width around you, and the rest is darkness. Eventually, when you have millions of cells as part of you, this "buffer" zone of vision no longer shows you anything.

I think this method works pretty well: as a cell you don't really worry about seeing around you, you just get automatic vision, but as you grow in size the limit becomes more and more of a nuisance, and eventually you have to develop a way to sense the environment; otherwise you're just crawling around in the dark, at a huge disadvantage compared to species with senses. In late multicellular you will see the world completely as your species sees it, which would lead to the quick development of eyes (or other sensory organs). This is, in fact, similar to real evolution, where eyes appeared almost completely developed in an extremely short (evolutionarily speaking) period of time.

What do you think?

I think that’s a really good solution. However, how would we represent the area the player cannot see? “Fog of war”?

Nice idea. What’s the gameplay like if you can only see a small ring around your organism?

Good idea. Should there also be a prompt explaining what exactly is going on so that the player doesn’t assume their copy of the game is broken?

I imagine it’d be a tad bit confusing if the game started progressively getting more blurry until eventually nothing. Especially for new players who aren’t sure how the game is supposed to go.

@nickthenick, I was thinking that the first 500 hexes would be clear, hexes 500-600 would get progressively darker, and everything past 600 hexes would be completely black.

@tjwhale, I guess it’s just moving in the dark hoping you run into food, or if you’re a plant you just do whatever plant gameplay will be like.

@Atrox, I don't think that will be a problem, since the player will see himself approaching the limit of visibility, but we'll see when we get there.
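
To make that concrete, here is a minimal sketch of the falloff I have in mind; the function name is a placeholder and the 500/600-hex radii (measured from the organism's border) are just the numbers above:

```python
def visibility(distance_from_border_hexes, clear_radius=500, fade_radius=600):
    """Return 1.0 for fully visible, 0.0 for fully dark, with a linear fade between.
    Sketch only: the radii and the shape of the fade curve are open design questions."""
    if distance_from_border_hexes <= clear_radius:
        return 1.0
    if distance_from_border_hexes >= fade_radius:
        return 0.0
    return 1.0 - (distance_from_border_hexes - clear_radius) / (fade_radius - clear_radius)
```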

Hmm, it’s gonna need more than that.

I was saying a while ago that a good horror mode for the deep ocean would be to give someone a bioluminescent organelle, but which was incredibly expensive to use and burned a lot of energy. Then you use sound to let them hear when other microbes are close (and what those other microbes are) and so you live in this creepy darkness, listening to things scuttle around you, only lighting your lantern when you really need to see / hunt something. There might be some mileage in something like that.

But what about the part before you evolve hearing?

There’s no reason why the senses need to mapped 1-1 between your creatures senses and human senses. Like how do you represent echolocation? Maybe by seeing it on the screen? You are then using one human sense (sight) to represent a totally different one (echolocation). There’s no reason this is a problem.

What I meant was the part before the organism evolves echolocation. The organism needs to evolve a sense of hearing before it can use echolocation.

What I’m saying is what about the part before the organism evolves that. Yeah it would be cool to use echolocation, but the gameplay before that would have to be swimming around in the dark hoping to find food as @TheCreator said

I read your post incorrectly. I understand now.

Yes, but that is not good gameplay, and remember that the goal of Thrive is to make the most scientifically accurate thing we can which is still fun. Fun has to come first. Really, in the microbe stage you shouldn't be able to see anything; you should just have a compounds list and be able to make more or less of certain substances. But that, although it might be an amazing post-modern game of colour, is not what most people think of as fun.

So yeah, if you don't have enough senses to make fun gameplay, then we're going to have to give you some whether or not it is realistic, as has been done in the microbe stage.

Bangs gavel. Aight, so I think we've settled the question of third person in multicellular. You get free sight up to a certain real distance away from your surface; as you get bigger and the world ensmallens, that free sight becomes less and less useful. Maybe you'd grow a nose, or use electroreception, or lateral lines or something. I don't care, make another thread; sight is a big enough topic on its own. Okay, I do care, and I would be very interested in a discussion on visualizing other senses, but I made this thread in particular not to cause a brainstorming session but to hopefully seal up some loose threads regarding vision specifically.

So: some loose threads, in no order, as I remember them:

  • Photoreceptors, which I think is the biggest one: how do we handle eyes with photoreceptors that do not behave like our own?
  • Pigmentation: how do we handle pigmentation under, say, a sun that is not our sun's color, seen with eyes that are not like ours?
  • Field of view: how do we map a horse's sight to the screen? A dragonfly's? A spider's?

I remember coming up with a nice, scientific model for sight that would model everything almost perfectly realistically, up to and including having afterimages from bright lights, but plenty of that is not really feasible with commodity computers.

And now I’ll post because I got ninja’d

Edit: Note, I do want to try and focus this discussion on the actual math/technical details required to make it all work. So if you're afraid of that, feel free to make a parallel brainstorming discussion.

Let’s talk just about light for now, not about vision and perception.

Thoughts on this article?

Great article, especially for us.

We haven’t talked about how to model stars yet but I think it’s totally possible to get an approximate spectrum for each one (maybe approximated by 30 points, as you have been suggesting). Then we could filter this through the atmospheric gas composition (which we can keep track of in the CPA system, which actually allows the player to deliberately alter the chemistry of their planet as a microbe if they so choose, which would be a very nice feature). This fits in quite nicely with the temperature model for the climate data we discussed on the old forums, we could even have a two layer model for the greenhouse effect. Then the colours of photosynthetic organisms could be derived from this as in the article.

That way, even if you saw each planet through human eyes, you would see a scene lit by an approximation of the true light (with appropriate RGB derived from the spectrum coming through the atmosphere), and you would see unusually coloured foliage, which would be very striking.

Are you wanting to go further and try to keep track of the light as it collides with the objects in the scene? I know you have some interest in this but have said it might be too computationally intensive.

Yes, and yes. I think one option might be to simply build a ‘pigment table’, where we generate a bunch of different pigments with different spectra, perhaps using the planet’s surface-sunlight spectrum to help decide what sort of pigment spectral profiles are most likely to be useful. Then, each pigment can be converted to an RGB color, so when a particular pigment is pulled out of the pool and used by a species, we can simply use that color for rendering what the human eye would see.

Come to think of it, we don't even need to build the table beforehand. Every time a species evolves a new pigment, we would simply calculate the RGB coloration of the pigment under ideal light conditions (where 'ideal' is as yet undefined); then, when rendering, we simply paint the surface using that color as part of the pattern, and light it up with a point light corresponding to the sun and an ambient light (or something like that) for the color of the sky.

As for what that 'ideal' lighting condition would be, well, renderers are designed with the human eye and regular sunlight in mind, so when we calculate the 'baseline' color of a particular pigment, we'd probably do it using the spectra of sunlight and the human photoreceptors.

So essentially, we'd be converting the 30-dimensional colour data (spectra, assuming we use 30-point vectors) into 3-dimensional colour data as a first step, before rendering, rather than as the last step, in the retina. I think it's a suitable approximation if we decide to only have human-like color vision.

Note, we could still pretend to have more kinds of color vision simply by distorting the RGB output, and we can still add overlays with other data for special kinds of vision, so we aren't losing too much by frontloading the pigment-rendering calculations. So I think this could be a good solution.

Edit: Oh yeah, and the important part of modeling pigments, i.e. the part where coloration has an effect on chances of survival, can still be simulated using the raw pigment data instead of RGB. So we don't lose out on that either.
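
To make that per-pigment step concrete, here is a minimal sketch assuming 30-point spectra, with a reference illuminant standing in for the 'ideal' sunlight and three human cone response curves; all names are placeholders and the normalization is one arbitrary choice among several:

```python
def baseline_rgb(pigment_reflectance, reference_sunlight, cone_responses):
    """Cache-once conversion of a newly evolved pigment to a display color.

    pigment_reflectance: 30-point reflectance spectrum in [0, 1]
    reference_sunlight:  30-point 'ideal' illuminant (e.g. Earth-like sunlight)
    cone_responses:      three 30-point human cone response curves
    Sketch only: the choice of illuminant and the normalization are both open.
    """
    reflected = [p * s for p, s in zip(pigment_reflectance, reference_sunlight)]
    raw = [sum(l * c for l, c in zip(reflected, cone)) for cone in cone_responses]
    peak = max(raw) or 1.0
    return tuple(channel / peak for channel in raw)  # (r, g, b) in [0, 1]
```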

Ok so let me check if I understand. Is this diagram about correct?

http://imgur.com/b7gCCRg

So you take the raw light from the sun and run it through the filter of the atmosphere. Then you take this light and run it through the pigment (which is a 30D spectrum) and this gives you an approximation for how that pigment would look on the planet.

Then you convert that spectrum to RGB (how are you thinking this will work?) + paint any objects with that RGB colour and then light the scene with white light.

If a chameleon is trying to hide on a leaf, then the leaf and chameleon would both be painted with the same RGB colour and so they would, hopefully, be hard to see.

Have I got it about right?

Rotate those pluses by 45° :wink:

Edit: Yes, you’ve got it about right.

Conversion to RGB is done by taking the product of the input spectrum with 3 spectra that each represent a particular color photoreceptor of the human eye. This is done once when a pigment is generated.

Camouflage is a tricky question, cuz it depends on what you're hiding from and how they see. Your camo pigments have to be able to hide you from their sight. This is likely one reason why supposedly camouflaged organisms often look strikingly colored to us: we have an extra photoreceptor for identifying fruit, and that also incidentally makes it easier for us to see certain camouflaged organisms that their predators cannot.

So what that means is that for camouflage, we would likely work by comparing the color spectra of the prey against the color spectra of the background, through the predator's eyes. Do you think this would be too expensive? It could certainly give some realistic but surprising results.
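
As a sketch of what a single comparison might look like (not a settled design; the names and the distance metric are placeholders), computed entirely through the predator's receptors:

```python
def perceived(light, reflectance, receptors):
    """Receptor responses to light bouncing off a surface (all 30-point spectra)."""
    bounced = [l * r for l, r in zip(light, reflectance)]
    return [sum(b * s for b, s in zip(bounced, receptor)) for receptor in receptors]

def camo_score(ambient_light, prey_pigment, background_pigment, predator_receptors):
    """Smaller = better camouflage, from this particular predator's point of view.
    Sketch only: the distance metric is arbitrary, and how this feeds into
    predation rates in the CPA system is a separate question."""
    prey = perceived(ambient_light, prey_pigment, predator_receptors)
    background = perceived(ambient_light, background_pigment, predator_receptors)
    return sum((p - b) ** 2 for p, b in zip(prey, background)) ** 0.5
```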

Hmm, I really like the idea, though it is a bit problematic because you would want a general result for the CPA system, namely how much the camo reduces predation. Essentially the predator's eyes are one variable, the prey's pigment is another, and the background is a third, and they can all vary a lot (especially as the predators will also want to be camouflaged for hunting).

Though maybe it isn't as bad as it might be. Say you want to compare 1000 species against each other with 100 backgrounds each, using 30D spectral data. That works out to 100 million spectral comparisons. Each comparison requires light × pigment × receptor products twice (once for the background and once for the prey), which with 30-point spectra and three receptors is a couple of hundred multiplications, so on the order of a few tens of billions of real-number multiplications in total. This data has to be computed once on setup, but then only mildly reworked each time a species changes its eyes or pigment.

Maybe it’s a bit insane to compute 100 megabytes of data per patch. However I suppose what I am saying here is that brute force is workable. If we cut it down (less backgrounds, some species flagged as not-camouflaged, some tagged as not having vision) then I think it could be doable (also 1000 species per patch is definitely an upper limit as with even 100 patches that’s 100k creatures on your planet which is probably overkill).

My internet went down for a bit today (it was my Vietnam), but it did make me do a couple of Thrive prototypes.

Here is a very naive spectral composer (I just made that term up) at work. The top panels are the light and the bottom panels are the filters. Top left (1.top) is the light coming out of the sun (randomly generated; I haven't looked into stellar spectroscopy, the internet was down). The next panel is what comes through the atmosphere (2.top), and the atmospheric filter is below that (2.bottom, again no science, just random). Then there is the pigment (3.bottom) and the light that bounces off the pigment (3.top). Finally there is the spectrum to which the receptor responds (4.bottom). An inner product is taken between the light from the pigment (3.top) and the receptor (4.bottom), and the result is displayed in the top right. "Seen" means the result is >1.7 (completely arbitrary) and "Not Seen" is anything below this.

What is quite exciting is the time it took to do all the calculations (not the plotting): it is very fast, about 2 ten-thousandths of a second per cycle. Which is great, because I did everything in the dumbest way (single thread, going through the numbers in sequence and multiplying them, no numpy), so with things done better (multithreaded C++) I think we could get this really, really fast. So I think it is feasible to do pigment calculations this way, which is really nice :smile:
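
For reference, here is a plain-Python sketch of that composer pipeline, with random stand-in spectra as in the prototype and the same arbitrary 1.7 threshold; the panel labels in the comments refer to the layout described above:

```python
import random

N = 30                 # spectral samples, as above
SEEN_THRESHOLD = 1.7   # completely arbitrary, copied from the prototype

def random_spectrum():
    """Random stand-in for a spectrum; no science, just like the prototype."""
    return [random.random() for _ in range(N)]

def compose(sun, atmosphere, pigment, receptor):
    """Sun (1.top) -> atmosphere (2.bottom) -> pigment (3.bottom) -> receptor (4.bottom)."""
    at_surface = [s * a for s, a in zip(sun, atmosphere)]         # 2.top
    off_pigment = [l * p for l, p in zip(at_surface, pigment)]    # 3.top
    response = sum(l * r for l, r in zip(off_pigment, receptor))  # inner product
    return response, "Seen" if response > SEEN_THRESHOLD else "Not Seen"

print(compose(random_spectrum(), random_spectrum(), random_spectrum(), random_spectrum()))
```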

Fantastic job on this! Your prototype got me thinking. The player gets to choose the wavelengths of light required to stimulate his receptors, right? If we have sub-editors for each organ, we could allow the player to manipulate the graph you see at the bottom right of your video and have the player assign a specific human color to each receptor. For example, I could create a receptor that responds best to green (in human terms) and another for UV light, so that my creature could see plant food and know when to run away to avoid getting its DNA fried, and assign the color blue to the green receptor and the color red to the UV receptor. He would then see everything in terms of his specified colors. The problem would be if he had more than 3 receptors. Would we allow the player to choose orange for the 4th one and risk him confusing a red-and-yellow object for a gamma-colored object, or would we do something crazy and allow him to choose a texture that could be overlaid? Something along the lines of: this receptor is red, but this one has black stripes and this one has tiny pink dots?
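
As a minimal sketch of that mapping, assuming the player assigns each receptor a display color in the organ sub-editor (the names and the response-weighted blend are placeholders, and the more-than-three-receptors overlay question is left open):

```python
def display_color(receptor_responses, assigned_colors):
    """Blend the player's chosen display colors, weighted by receptor response.

    receptor_responses: one value per receptor (e.g. the inner products above)
    assigned_colors:    one (r, g, b) per receptor, picked in the organ sub-editor
    Sketch only: with more than three receptors some information is inevitably lost,
    which is where the overlay/texture idea would come in.
    """
    total = sum(receptor_responses) or 1.0
    mixed = [0.0, 0.0, 0.0]
    for response, color in zip(receptor_responses, assigned_colors):
        for i in range(3):
            mixed[i] += (response / total) * color[i]
    return tuple(mixed)

# e.g. a green receptor displayed as blue and a UV receptor displayed as red:
print(display_color([0.8, 0.3], [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]))
```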