The problem with partitioning the RGB spectrum is that we can no longer display brightness in a reasonable way. The renderer is designed around light for human eyes and works in terms of brightness, so we'd have to write our own rendering pipeline, which is far more work than I think we should take on.
Using something like texture overlays for a given color wouldn't work out well either, because color depends not only on the properties of the lit surface but also on the light itself. That's why we're putting together this pipeline to convert every texture and light to RGB: the Ogre/GL rendering pipeline can then do the lighting work and greatly simplify our lives.
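As a rough illustration of what that conversion involves (this is a minimal sketch with hypothetical names like `Spectrum` and `projectToRGB`, not actual project or Ogre code), the idea is to project a spectrum -- either a pigment's reflectance or a light's emission -- onto up to three photoreceptor response curves. Once both textures and lights are reduced to RGB this way, the standard per-channel lighting in Ogre/GL approximates the full spectral product.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Discretised spectrum: intensity samples at evenly spaced wavelengths.
using Spectrum = std::vector<float>;

// Each display channel (R, G, B) is driven by one photoreceptor's
// sensitivity curve, sampled at the same wavelengths as the spectra.
using ReceptorSet = std::array<Spectrum, 3>;

std::array<float, 3> projectToRGB(const Spectrum& spectrum,
                                  const ReceptorSet& receptors)
{
    std::array<float, 3> rgb{};
    for (std::size_t c = 0; c < 3; ++c) {
        float total = 0.0f;
        float norm = 0.0f;
        for (std::size_t i = 0; i < spectrum.size(); ++i) {
            // Weight this wavelength sample by how strongly
            // the channel's photoreceptor responds to it.
            total += spectrum[i] * receptors[c][i];
            norm += receptors[c][i];
        }
        // Normalise so a flat spectrum maps to equal channel values.
        rgb[c] = norm > 0.0f ? total / norm : 0.0f;
    }
    return rgb;
}
```

The same projection runs over every texel of a texture and over every light's emission spectrum, so the rest of the rendering stays ordinary RGB lighting.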
So we're effectively limited to rendering three photoreceptors into a single picture -- though you could still have more sets of photoreceptors that you simply swap views between.
The catch is that every new photoreceptor combination requires recalculating all textures. If we can develop a method for quickly regenerating all textures from the associated pigment data (it only needs to be a couple of times faster than it otherwise would be), plus a system for easily swapping textures when we switch between vision modes, then we can support multiple vision modes, and thus more than three photoreceptors.
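One way the texture-swap side could look (again a hypothetical sketch, with made-up types and no real Ogre calls): textures are baked per vision mode from the same pigment data and cached, so switching modes only changes which baked set gets bound, and a mode's textures are regenerated lazily the first time it's used.

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using PigmentId = std::uint32_t;
using VisionModeId = std::uint32_t;
struct RGBTexture { std::vector<float> pixels; };  // placeholder texture data

class VisionTextureCache {
public:
    // Return the texture for this pigment as seen in this vision mode,
    // baking it from the pigment's spectral data on first use.
    const RGBTexture& get(PigmentId pigment, VisionModeId mode)
    {
        const auto key = std::make_pair(mode, pigment);
        auto it = cache.find(key);
        if (it == cache.end())
            it = cache.emplace(key, bake(pigment, mode)).first;
        return it->second;
    }

    // Switching vision modes then amounts to re-binding materials
    // to the cached textures for the new mode.
    void setActiveMode(VisionModeId mode) { activeMode = mode; }

private:
    RGBTexture bake(PigmentId pigment, VisionModeId mode)
    {
        // In the real pipeline this would run the spectrum-to-RGB
        // projection over every texel using the mode's three
        // photoreceptor curves; here it's left as a stub.
        (void)pigment; (void)mode;
        return RGBTexture{};
    }

    VisionModeId activeMode = 0;
    std::map<std::pair<VisionModeId, PigmentId>, RGBTexture> cache;
};
```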
We might even manage echolocation, using a heavily modified texture generator and a light source shining straight from the player organism (or from every organism that makes a sound?).