Light and Vision

Thanks.

So I think how it works is that your brain interpolates between the receptor signals (roughly RGB) to create the colours you see. So if you receive photons and your red receptors fire at about 2/10 and your green at about 7/10, then your brain says this is yellow and “paints the object that colour.” The problem is: how do you interpolate a texture? Maybe by the size of the dots and stripes?
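
To make that concrete, here’s a tiny, purely illustrative sketch (the 2/10 and 7/10 are just the example numbers above, and real colour perception is obviously messier than this):

```python
# Toy sketch: map receptor firing fractions straight onto display RGB.
# The fractions are the example above (red ~2/10, green ~7/10), not real data.
firings = {"red": 0.2, "green": 0.7, "blue": 0.0}

rgb = tuple(int(round(255 * firings[c])) for c in ("red", "green", "blue"))
print(rgb)  # (51, 178, 0) - a green leaning towards yellow
```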

I wonder if we could partition the RGB channels. Say receptor 1 is [0,128] of the red channel and receptor 2 is [129,255] of the red channel. I think you would probably be able to tell the difference between these, and that would give you up to 6 receptors.
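
One possible reading of that, as a rough sketch (the ranges are the ones above; letting the stronger receptor “win” the pixel is my own assumption):

```python
def pack_two_receptors(a, b):
    """Encode two receptor firings (each 0.0-1.0) in one 8-bit red channel.

    Receptor 1 uses the value range [0, 128] and receptor 2 uses [129, 255].
    Only the stronger of the two can be shown per pixel, which already hints
    at the brightness problem.
    """
    if a >= b:
        return int(round(a * 128))        # lands in [0, 128]
    return 129 + int(round(b * 126))      # lands in [129, 255]


print(pack_two_receptors(0.5, 0.1))   # 64  -> "receptor 1 at ~50%"
print(pack_two_receptors(0.1, 0.5))   # 192 -> "receptor 2 at ~50%"
```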

The problem with partitioning the RGB channels is that we can’t display brightness in a reasonable way any more. Since the renderer is designed to work with light for human eyes, and works in terms of brightness, we’d have to write our own rendering pipeline, and that’s just way more work than I think we should be giving ourselves.

And using something like texture overlays for a certain color would also not work out too well, because color doesn’t depend only on the parameters of the lit surface, but on the light too. That’s why we’re putting together this pipeline to convert every texture and light to RGB: then the Ogre/GL rendering pipeline can do the lighting work and greatly simplify our lives.

So we’re effectively limited to rendering only 3 photoreceptors into a single picture; you could still have more sets of photoreceptors that you simply swap views between.

Thing is, with every new photoreceptor combination, we have to recalculate all textures. If we can develop a method for quickly re-generating all textures from the associated pigment data (just a couple times faster than it otherwise would have to be), and a system to easily texture-swap when we switch between vision modes, then we can support having multiple vision modes, and thus, more than 3 photoreceptors.

Maybe even echolocation, using a heavily-modified texture-generator, and a light source shining straight from the player organism (or from every organism which makes a sound?).

I think swapping views could work well, though it’s not perfect.

There’s this which does a similar thing.

Coincidentally I pledged some money towards its Kickstarter, so I’ll download it and see if it’s any good.

Ok so this is, I think, awesome.

I put in all the science so that the stellar spectra (1.top) actually look like what a star would look like. Then I put in all the absorption bands of different elements/compounds in the atmosphere (2.bottom). Then I put in 3 colour receptors (4.bottom) and displayed the resulting colour in 4.top. (There is no pigment included, but it could be added with ease.)
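
For the curious, the calculation is roughly of this shape (a minimal sketch, not the proto-repo code; the temperature and absorption bands here are placeholders):

```python
import numpy as np

# Coarse wavelength grid in metres, roughly UV through near infra-red.
wavelengths = np.linspace(300e-9, 1000e-9, 30)

def blackbody(wl, T):
    """Planck's law: spectral radiance of a black body at temperature T (kelvin)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k * T))

# 1. The star's spectrum, e.g. a sun-like star at ~5800 K.
star = blackbody(wavelengths, 5800.0)

# 2. Atmospheric absorption: (band_low, band_high, fraction_transmitted).
#    Placeholder bands, not real data.
bands = [(300e-9, 350e-9, 0.2), (700e-9, 760e-9, 0.6)]
transmission = np.ones_like(wavelengths)
for lo, hi, frac in bands:
    transmission[(wavelengths >= lo) & (wavelengths <= hi)] *= frac

# 3. What reaches the surface, ready to be fed to the colour receptors.
surface_spectrum = star * transmission
```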

You can see in 1.bottom the temperature of the star (I didn’t realise the sun was such a classic black body) and the numbers next to the compounds are the percentage of the light that they let through. Also the thick red bar on the bottom axis represents the spectrum that is visible to us puny humans.

So without further ado, this is the colour the sky would look like if you had those colour receptors and lived on those crazy planets!!!

[Five Imgur images: the simulated sky colours for the different stars, atmospheres, and colour receptors]

The top one is the most similar to Earth, compound- and stellar-temperature-wise, but you can see the sky is green because the green colour receptor is in a reasonable place and the others are much too high.

You see that grey one with the really cold star? Yeah that’s the UK. Oh and also here is a reference image, I think this is pretty damn close!

Very nice.

Have you tried finding photoreceptor response spectra? If you can find some, and, say, compute the spectrum of light that is Rayleigh-scattered by our atmosphere, you could try generating spectra for what the bright daytime sky and sun look like from the Earth’s surface.

You might have to incorporate some form of normalization of the final RGB data, in case a bright light causes saturation.

I’ll have a think about Rayleigh scattering (it’s complicated because it depends on the atmospheric gases).

The coloured square is already the photoreceptor response. An inner product is taken between the spectrum that comes through the atmosphere and each of the three colour receptors (4.bottom), and then their firings are mapped to RGB. As you quite rightly say, I had to renormalise the result: I set it so the largest value was 255 and the others were scaled appropriately. (Though I think this is not strictly correct; I was reading about HSL cylinders and I think [100,100,100] may not be the same colour as [200,200,200], but I have treated them as the same colour with the second twice as bright.)
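
As a minimal sketch of that step (with made-up Gaussian receptor curves standing in for the proto-repo ones):

```python
import numpy as np

def spectrum_to_rgb(spectrum, receptors):
    """Map a spectrum to RGB via inner products with three receptor curves.

    `spectrum` and each row of `receptors` are sampled on the same wavelength
    grid. The firings are renormalised so the largest channel is 255, treating
    [100,100,100] and [200,200,200] as the same colour at different brightness.
    """
    firings = receptors @ spectrum            # one inner product per receptor
    peak = firings.max()
    if peak <= 0:
        return (0, 0, 0)
    return tuple(int(round(255 * f / peak)) for f in firings)

# Toy example: 30 samples, three arbitrary receptor curves.
grid = np.linspace(300, 1000, 30)
receptors = np.vstack([
    np.exp(-((grid - 600) / 60) ** 2),   # "red"
    np.exp(-((grid - 540) / 60) ** 2),   # "green"
    np.exp(-((grid - 450) / 60) ** 2),   # "blue"
])
spectrum = np.ones_like(grid)            # flat spectrum as a placeholder
print(spectrum_to_rgb(spectrum, receptors))
```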

Well, my suggestion was that you could do a test, another proof-of-concept: look at the scattering spectrum, etc.

About brightness: it really depends on how exactly the color space is characterized, and of that I’m not certain; I think it varies with the use. I’m pretty sure the RGB color space is meant to be linear, though.

The complications come in when you want to try simulating a larger color depth, or brightness range, so you can accurately render superwhites like the brightness of the sun, etc. Stuck as we are with rendering to RGB monitors, we’d have to use tricks, like simulating a drop in exposure as your irises adjust to lighting changes, or use a fragment shader that simulates the way photoreceptors saturate (which is what causes afterimages after looking at colors that are bright in either R, G, or B).
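
As a rough illustration of the kind of trick I mean (a toy exposure-plus-soft-saturation curve, not a claim about what we’d actually implement):

```python
import numpy as np

def expose_and_saturate(linear_rgb, exposure=1.0):
    """Map unbounded linear RGB to [0, 1] with a soft roll-off per channel.

    Uses 1 - exp(-x), so "superwhite" inputs compress toward 1.0 instead of
    clipping hard. `exposure` stands in for the iris adjusting to the overall
    light level.
    """
    x = np.asarray(linear_rgb, dtype=float) * exposure
    return 1.0 - np.exp(-x)

print(expose_and_saturate([0.5, 2.0, 10.0]))                 # bright channels compress
print(expose_and_saturate([0.5, 2.0, 10.0], exposure=0.25))  # "iris" closes down
```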

I realize that my post isn’t really contributing to the discussion, but this stuff is too smart for me. I just wanted to tell you that this is literally one of the coolest things I have ever seen. Would we derive the pigmentation of the object from an image texture? If so, have you thought about how that would work?

Anyway, keep up the fantastic work!

Actually, I was thinking we’d go the other way.

From pigments, we derive coloration, using procedural texture generation. It’s obviously a lot more complicated than simply cel-shading (though that would be good for starters); we need to use all sorts of properties of the exterior surface to develop the texture, and I’m certain this will be something we’ll have a lot of fun putting a massive amount of work into.

As for how we do it, I haven’t thought about it at all beyond that.

But how would we store the pigment data in the first place? If we actually want all of this to be useful, we will need to store around 30 numbers for each pixel, which would require a lot of memory.

Oh nono, we wouldn’t store the pigments per-pixel. We’d have a pigment table, and when generating the texture, one of the intermediate steps would probably produce a texture that indexes into the pigment table, with a second step that converts that into the actual colors. We combine that with a few other steps that produce specularity/bump/etc. to get the final texture, which would just be a normal texture.
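
A minimal sketch of that lookup idea (table contents and sizes made up):

```python
import numpy as np

# Hypothetical pigment table: index -> RGB colour, already produced by the
# pigment/receptor pipeline elsewhere.
pigment_table = np.array([
    [200,  60,  40],   # pigment 0
    [ 40, 180,  70],   # pigment 1
    [ 30,  40, 200],   # pigment 2
], dtype=np.uint8)

# Intermediate step: a texture whose texels index into the pigment table.
index_texture = np.array([
    [0, 0, 1],
    [2, 1, 1],
], dtype=np.uint8)

# Second step: convert indices into actual colours (shape: height x width x 3).
colour_texture = pigment_table[index_texture]
```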

So I had a go at a Rayleigh spectrum. This is the most Earthlike I could make everything (the star like the sun, the atmosphere like ours, the photoreceptors in the right place, etc) so that I would know if I got it right. It’s not gone great.

http://imgur.com/WS5IQmF

I think one problem I am having is that the 30 sample points are equally spaced, so there’s just as much detail deep in the infra-red as there is in the visible spectrum. It would be good to have more points where the incoming light is high.

What I’m doing is taking a curve like C + 1/(wavelength**4) and then reducing C so it cuts through the graph. Anything above this line is then taken to form a new spectrum, which is run through the colour receptors to generate an RGB and then displayed. Then some percentage of this (like 20%) is removed from the remaining spectrum. The problem is, basically, it’s not red enough. Not sure how to fix that.
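
For anyone following along, my reading of that procedure in code (a sketch, not the proto-repo version; C and the 20% have to be tuned, and the wavelength units are whatever the spectrum uses):

```python
import numpy as np

def scatter_step(spectrum, wavelengths, C, removal_fraction=0.2):
    """One pass of the scattering scheme described above.

    Draw the line C + 1/wavelength**4, take whatever part of the spectrum sits
    above it as the scattered spectrum (which then goes through the colour
    receptors), and remove a fraction of that from what remains.
    """
    line = C + 1.0 / wavelengths**4
    scattered = np.clip(spectrum - line, 0.0, None)
    remaining = spectrum - removal_fraction * scattered
    return scattered, remaining
```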

Are you sure your receptors are right? Shouldn’t they be more like this:

The red cone is slightly triggered by the color blue, and it is shifted more to the left in real life, I think.

Also, I just thought of a possible optimization. What if you looked at the receptors the organism has at the very beginning and then used that as a mask, so any light with a wavelength greater than the greatest wavelength the receptors can notice is just discounted?
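
Something like this, as a rough sketch (assuming the receptor curves and the spectrum share a wavelength grid):

```python
import numpy as np

def mask_unseen_wavelengths(wavelengths, spectrum, receptors):
    """Zero the spectrum beyond the longest wavelength any receptor responds to.

    `receptors` is an (n_receptors, n_samples) array of response curves on the
    same wavelength grid as `spectrum`. Assumes at least one receptor responds
    somewhere on the grid.
    """
    responds = receptors.sum(axis=0) > 0
    longest_visible = wavelengths[responds].max()
    return np.where(wavelengths <= longest_visible, spectrum, 0.0)
```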

Because I think for the CPA system we are going to want to do this for all the species in your patch. Possibly even for all the species in the game. So if you are a pit viper and have infra-red vision you need some resolution higher up the spectrum.

Edit: Also not sure about the colour receptors. It’s hard because there is a sample point every 100 nm, so for example the blue is 0 at 400 and 0 at 500 but non-zero in between. Anyway, the code is in the proto-repo; if you want to play around with them, the function to change is generate_receptor_spectrum. It’s all if statements, so I think it’s pretty straightforward.
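
Roughly, the shape I mean is something like this (a simplified sketch rather than the actual function, assuming linear ramps with the peak in the middle):

```python
def receptor_response(wavelength, start=400.0, end=500.0):
    """Sketch of a receptor that is 0 at `start` and `end` and peaks halfway.

    E.g. the blue receptor: 0 at 400 nm, 0 at 500 nm, non-zero in between.
    The linear ramps and the midpoint peak are simplifications.
    """
    if wavelength <= start or wavelength >= end:
        return 0.0
    peak = (start + end) / 2.0
    if wavelength <= peak:
        return (wavelength - start) / (peak - start)
    return (end - wavelength) / (end - peak)
```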

Oh, I somehow forgot about that.

I don’t really know Python and, sadly, I don’t have the time to learn it right now. But what I was saying was that if you increased the resolution and had
Blue - 400 to 450 to 500
Green - 500 to 550 to 600
Red - 500 to 600 to 700
where the first and last numbers are 0 for that receptor and the middle is its peak, you would have more red in your sample like you wanted, and it would be more realistic.

Edit: Eh, what the heck, I’ll give it a try. What is the row on the bottom supposed to look like?

Python is really very easy but I imagine it will take you a while to get used to it if you haven’t used it before.

Ideally it would look like the palette from which an artist might set out to paint a sunset. You want the deep blues for 90 degrees from the sun and you want oranges, pinks and yellows for highlighting the clouds with light that came directly from the sun.

Now of course my scattering algorithm may not be great but I think it’s ok.