Do you want to go further and try to keep track of the light as it
collides with the objects in the scene? I know you have some interest in
this, but you've said it might be too computationally intensive.
Yes, and yes. I think one option might be to simply build a ‘pigment table’: generate a bunch of different pigments with different spectra, perhaps using the planet’s surface-sunlight spectrum to help decide which pigment spectral profiles are most likely to be useful. Each pigment can then be converted to an RGB colour, so when a particular pigment is pulled out of the pool and used by a species, we can simply use that colour to render what the human eye would see.
Come to think of it, we don’t even need to build the table beforehand. Every time a species evolves a new pigment, we would simply calculate the RGB colouration of the pigment under ideal light conditions (where ‘ideal’ is as yet undefined). Then, when rendering, we paint the surface using that colour as part of the pattern, and light it up with a point light corresponding to the sun and an ambient light (or something like that) for the colour of the sky.
As for what that ‘ideal’ lighting condition would be, well, renderers are designed with the human eye and regular sunlight in mind, so when we calculate the ‘baseline’ colour of a particular pigment, we’d probably do it using the spectra of sunlight and the human photoreceptors.
So essentially, we’d be converting the 30-dimensional colour data (spectra, assuming we use 30-point vectors) into 3-dimensional colour data as a first step, before rendering, rather than as the last step, in the retina. I think it’s a suitable approximation if we decide to only have human-like colour vision.
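To make the spectrum-to-RGB step concrete, here's a rough sketch of how a 30-point reflectance spectrum could be collapsed to a baseline RGB colour. Everything here is an assumption for illustration: the Gaussian stand-ins for the CIE colour-matching functions are crude approximations (real implementations would use the tabulated CIE 1931 data), and the flat 'sunlight' illuminant is a placeholder for the planet's actual surface-sunlight spectrum.

```python
import numpy as np

WAVELENGTHS = np.linspace(400, 690, 30)  # nm; the 30-point spectral grid

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Crude Gaussian stand-ins for the CIE 1931 x-bar, y-bar, z-bar functions.
XBAR = 1.06 * gaussian(WAVELENGTHS, 600, 38) + 0.36 * gaussian(WAVELENGTHS, 446, 20)
YBAR = gaussian(WAVELENGTHS, 556, 47)
ZBAR = 1.78 * gaussian(WAVELENGTHS, 449, 23)

# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def pigment_to_rgb(reflectance, illuminant):
    """Baseline RGB of a pigment: reflectance * illuminant -> XYZ -> sRGB."""
    stimulus = reflectance * illuminant
    xyz = np.array([
        np.sum(stimulus * XBAR),
        np.sum(stimulus * YBAR),
        np.sum(stimulus * ZBAR),
    ])
    # Normalise so a perfect white reflector under this illuminant has Y = 1.
    xyz /= np.sum(illuminant * YBAR)
    return np.clip(XYZ_TO_RGB @ xyz, 0.0, 1.0)

# Example: flat 'sunlight' and a long-pass pigment that reflects long wavelengths.
sun = np.ones_like(WAVELENGTHS)
reddish = 1.0 / (1.0 + np.exp(-(WAVELENGTHS - 600) / 15))
rgb = pigment_to_rgb(reddish, sun)  # comes out strongly red
```

This would run once per newly-evolved pigment; the renderer only ever sees the resulting 3-vector.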
Note, we could still pretend to have other kinds of colour vision simply by distorting the RGB values, and we can still add overlays with other data for special kinds of vision, so we aren’t losing too much by frontloading the pigment-rendering calculations. So I think this could be a good solution.
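One way the 'distorting' trick could work (purely a sketch; the matrix and function names are made up for illustration): give each species a 3x3 'vision matrix' applied to the baseline RGB. A crude dichromat, for instance, could merge the red and green channels so they become indistinguishable.

```python
import numpy as np

# Hypothetical dichromat: red and green collapse to their average, blue passes.
DICHROMAT = np.array([
    [0.5, 0.5, 0.0],   # red output  = average of R and G
    [0.5, 0.5, 0.0],   # green output = same average
    [0.0, 0.0, 1.0],   # blue passes through unchanged
])

def distort_vision(rgb, matrix):
    """Apply a per-species 'vision matrix' to a baseline RGB colour."""
    return np.clip(matrix @ np.asarray(rgb, dtype=float), 0.0, 1.0)

# A pure red and a pure green pigment look identical to this viewer.
red_seen = distort_vision([1.0, 0.0, 0.0], DICHROMAT)
green_seen = distort_vision([0.0, 1.0, 0.0], DICHROMAT)
```

It's only a fake, of course: the real spectral information is already gone by this point, which is the trade-off of frontloading.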
Edit: Oh yeah, and the important part of modeling pigments, i.e. the part where coloration has an effect on chances of survival, can still be simulated using the raw pigment data instead of RGB. So we don’t lose out on that either.