The tutorial on shadow mapping is almost done: everything is written (35 pages) and the code has been commented and corrected. I just have to proofread everything, which could still take some time as my English is sometimes clumsy.
I had some problems on other levels: everything was pitch black… After trying to fix it for a few hours I finally decided to clean everything up… and the problem disappeared. I guess something bad was happening in my code 🙂
Here’s a video of the start of the second level of Alien Blitz
Combining old & new
And now for some new tests: I wanted to try combining the old render and the new one. Basically I first render exactly as before, then add shadows on top (skipping the new light calculations and using only the shadows). So wall shadows will be rendered twice (old + new).
Old / old + new :
Just re-rendering second level :
So basically it’s the same as before except there are more shadows and they are more accurate. The downside is that everything is a bit darker, but not as much as I was expecting.
In previous videos/screenshots I was using a 1024px depth map, so there was not much aliasing, but for performance reasons I think the quality should be selectable. At lower resolutions, aliasing appears.
256 / 512 / 1024 px depth maps :
(Aliasing is not very visible on the thumbnails)
This test case is probably the worst that could happen: the light is just a little higher than the ground, a few meters away. I tried various PCF techniques (adapted to cube maps) but nothing really worked. One possible way to solve this could be to use a bigger viewport for the shadow frame buffer.
512px depth map, x1 viewport / x2 viewport :
Using a x2 viewport makes small artifacts blend together, which is indeed nicer. I couldn’t see a difference with x4, so it’s not included.
I also tried mixing PCF in the different processing steps, but got nothing really good.
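For reference, the kind of cube-map PCF I experimented with looks roughly like this (a sketch: the uniform/varying names, the alpha-channel depth storage and the offset pattern are my assumptions, and the 3D jitter replaces the usual 2D texel offsets):

```glsl
uniform samplerCube u_depthMap; // light depth, stored in the alpha channel
uniform float u_lightFar;       // far plane used to normalize distances
varying vec3 v_fromLight;       // fragment position minus light position

float shadowPCF(float bias, float radius) {
    float current = length(v_fromLight) / u_lightFar;
    float lit = 0.0;
    // four jittered lookups around the original direction; with a cube map
    // there is no 2D texel grid, so the jitter is applied in 3D
    lit += step(current - bias, textureCube(u_depthMap, v_fromLight + vec3( radius, 0.0, 0.0)).a);
    lit += step(current - bias, textureCube(u_depthMap, v_fromLight + vec3(-radius, 0.0, 0.0)).a);
    lit += step(current - bias, textureCube(u_depthMap, v_fromLight + vec3(0.0, 0.0,  radius)).a);
    lit += step(current - bias, textureCube(u_depthMap, v_fromLight + vec3(0.0, 0.0, -radius)).a);
    return lit / 4.0; // 0 = fully shadowed, 1 = fully lit
}
```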
So I think I will stick to something like :
No shadows : same as current
Low quality : 256px depth map, x1 viewport
Medium quality (default) : 512px depth map, x2 viewport
High quality : 1024px depth map, x2 viewport
And I might change some levels to avoid problems at low quality (for example, adding a small wall around the path might help in the case above).
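These presets could be encoded as a simple enum; this is a hypothetical sketch, the names are mine and not from the actual game code:

```java
class ShadowConfig {
    enum Quality {
        NONE(0, 0),     // same as the current renderer, no shadow maps
        LOW(256, 1),    // 256px depth map, x1 viewport
        MEDIUM(512, 2), // 512px depth map, x2 viewport (default)
        HIGH(1024, 2);  // 1024px depth map, x2 viewport

        final int mapSize;       // depth-map resolution in pixels
        final int viewportScale; // shadow frame-buffer viewport multiplier

        Quality(int mapSize, int viewportScale) {
            this.mapSize = mapSize;
            this.viewportScale = viewportScale;
        }

        /** Frame-buffer size actually used for the shadow pass. */
        int bufferSize() { return mapSize * viewportScale; }
    }

    public static void main(String[] args) {
        for (Quality q : Quality.values()) {
            System.out.println(q + ": map=" + q.mapSize + "px, buffer=" + q.bufferSize() + "px");
        }
    }
}
```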
Updated video, with new render & medium quality shadows
I had some problems on Android: textures were not read properly and nothing was working. The problem was quite stupid: I just wasn’t binding my textures correctly. I guess it worked on desktop because MAX_TEXTURES might be higher there.
Now shadow mapping seems to work exactly the same on my Nexus 7 as on PC, but still nothing on my Nexus 10 (black screen, empty depth map, empty shadow buffer); it still needs some debugging.
Anyway it’s way too slow to be usable (even with caching), but I just want to make sure I can get it to work in case I write a tutorial later.
The problem I had previously with far values was caused by the pack/unpack functions I was using to spread depth data across all the RGBA channels. Now I will just use an alpha texture for the lights’ depth map. I will lose precision, but in a third-person view it doesn’t really matter.
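For context, here is a common variant of that pack/unpack pair (not necessarily the exact code I used), followed by the simpler single-channel approach; the varying and uniform names are assumptions:

```glsl
// Packing depth across RGBA: each channel stores 8 more bits of precision.
// Rounding at the far plane is a known pitfall of this trick.
vec4 packDepth(float depth) {
    vec4 bits = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    bits -= bits.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return bits;
}

float unpackDepth(vec4 bits) {
    return dot(bits, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}

// The simpler replacement: one channel, lower precision, no far-plane issue.
varying vec3 v_worldPos;
uniform vec3 u_lightPos;
uniform float u_lightFar;

void main() {
    float depth = length(v_worldPos - u_lightPos) / u_lightFar; // 0..1
    gl_FragColor = vec4(vec3(0.0), depth); // alpha-only depth map
}
```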
All far values work now
Correcting multiple lights
My problem with multiple lights was just a small bug in my multi-pass rendering; no idea why it was working on my Nvidia card and not on my Intel card.
The shadow map :
The render :
Of course, as there are lots of lights, there aren’t many shadows.
Adding basic colors is pretty straightforward from here. The shadows & render :
Comparing to the current render :
There’s still some work to do to get a similar result, mainly because in the current solution I do a lot of smoothing. But the advantage of shadow mapping is that even sprites have a shadow…
After just a few modifications, and trying to smooth a bit (2nd image)
It doesn’t change that much 🙂
Tests on PC
I ran 2 tests on different configurations and checked the FPS, using the sample level (7 lights)
Results (first number is test 1, second is test 2) :
Recent laptop, Integrated Intel card : 26/60
Recent laptop, Nvidia card : 53/60
Old computer (6 years old), Nvidia card : 60/60
I’m surprised my old computer gets such a good result, but it even outperforms my laptop in some recent games (Borderlands: The Pre-Sequel, for example), even though it can’t render all the effects in those games (old graphics card).
In the first test the depth map is computed each frame; I didn’t try to optimize anything (my shaders are certainly full of bottlenecks). In a final version the depth map should be cached and re-rendered only when something moves in its range. That would save A LOT of time; with proper optimizations and caching I think it would always render at 60fps on these configurations.
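The caching idea can be sketched with a simple dirty flag per light; all names here are hypothetical and there are no engine specifics:

```java
// A light only needs its depth map re-rendered when something moves
// inside its range.
class LightCache {
    final float x, y, z, range;
    boolean depthMapDirty = true; // first frame always renders

    LightCache(float x, float y, float z, float range) {
        this.x = x; this.y = y; this.z = z; this.range = range;
    }

    /** Call when an object at (ox, oy, oz) moved this frame. */
    void onObjectMoved(float ox, float oy, float oz) {
        float dx = ox - x, dy = oy - y, dz = oz - z;
        if (dx * dx + dy * dy + dz * dz <= range * range) {
            depthMapDirty = true; // the shadow map must be re-rendered
        }
    }

    /** Returns true if the shadow pass should run for this light. */
    boolean consumeDirty() {
        boolean dirty = depthMapDirty;
        depthMapDirty = false;
        return dirty;
    }

    public static void main(String[] args) {
        LightCache light = new LightCache(0, 0, 0, 10);
        light.consumeDirty();                     // initial render
        light.onObjectMoved(3, 4, 0);             // distance 5, inside range
        System.out.println(light.consumeDirty()); // true
        light.onObjectMoved(20, 0, 0);            // outside range
        System.out.println(light.consumeDirty()); // false
    }
}
```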
Sadly my netbook is broken, it needs a new memory kit, so I can’t test on it.
I prefer the old rendering, maybe I could do a mix between old and new, to get dynamic shadows but keep the light colors.
I don’t know yet if it will be included on PC version, I will continue working on it for now, and we will see.
I think I’ll try to do a clean tutorial with all I learned, basically it will cover
Point lights and directional lights
Cubemap frame buffers
Multi pass rendering (forward rendering)
Everything using Libgdx, and Android/iOS/PC compatible
It will not cover caching of lights; everything will be rendered each frame, because that is very specific work that depends on your game engine.
The problem I have is I’m pretty sure there are better ways to do it, but at least it works 😉
My fears in the previous post were justified: in order to use deferred rendering I need OpenGL Multiple Render Targets (MRT), and this is not available in Libgdx as it uses OpenGL 2.0
I tried enabling OpenGL 3, but I think that is just preliminary work for now, as it just crashes the game; something to do with shaders, apparently.
Basically from what I understand :
MRT is used on a frame buffer to render to multiple textures instead of just one (in the fragment shader you write to the gl_FragData array instead of a single gl_FragColor)
A few textures are saved containing data : colors, positions, normals,…
Then the next passes use these textures to render in screen space with what was previously calculated, each pass blending in a new light.
This saves a lot of calculations: no unnecessary lighting is computed, only what is visible on screen.
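A sketch of what the geometry pass of such a deferred renderer would look like in the fragment shader (this needs GL3 or a draw-buffers extension, which is exactly what stock Libgdx/GL 2.0 doesn’t expose; all names are assumptions):

```glsl
varying vec3 v_worldPos;
varying vec3 v_normal;
varying vec2 v_uv;
uniform sampler2D u_diffuse;

void main() {
    // one output per attached texture, instead of a single gl_FragColor
    gl_FragData[0] = texture2D(u_diffuse, v_uv);     // color buffer
    gl_FragData[1] = vec4(v_worldPos, 1.0);          // position buffer
    gl_FragData[2] = vec4(normalize(v_normal), 0.0); // normal buffer
}
```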
Multi pass rendering
So I tried something else: rendering the terrain multiple times
First time just rendering the terrain without lighting
Render one more time per light, each time blending the shadows from the light
The bad thing is that the fragment shader is called multiple times, so it’s not very efficient…
I first tried with a simple approach, just adding a soft shadow on each light
I have lots of problems when lights have a limited range, and I don’t know why, so I have set a very high range for the moment
Of course this is not at all a correct result, but at least I can now see that all lights and frame buffers are taken into account.
Now to correct the process :
Render all lights (viewed from the usual camera) to a frame buffer; each new render adds its data on top of the previous one
Render the terrain at full light, and multiply each fragment by the color from the frame buffer
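In shader terms the second pass might look like this (a sketch; names are mine, and the accumulation pass would render each light with additive blending, e.g. glBlendFunc(GL_ONE, GL_ONE)):

```glsl
// Final pass: sample the accumulated light buffer in screen space and
// modulate the fully lit terrain color with it.
varying vec2 v_uv;
uniform sampler2D u_diffuse;     // terrain rendered at full light
uniform sampler2D u_lightBuffer; // sum of all light contributions
uniform vec2 u_screenSize;

void main() {
    vec2 screenUV = gl_FragCoord.xy / u_screenSize;
    gl_FragColor = texture2D(u_diffuse, v_uv) * texture2D(u_lightBuffer, screenUV);
}
```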
Basically the frame buffer contains:
And the render becomes :
This seems a bit better but I still run into trouble when adding more lights…
In the above example there are 3 lights activated, if I activate the one in the center of the big room the render becomes :
It seems the new light takes the frame buffer of the right light… which doesn’t make sense… it needs more investigation.
*edit* it works better with my Nvidia card than with the integrated Intel one, so I must be hitting a limit somewhere
In my previous post I didn’t fully understand the use of cube maps, and I was just generating the render as if the point light was 5 lights… But the code can be a lot simpler, as OpenGL natively supports cube maps, and the GLSL code to check whether a point is in shadow is actually much easier.
Another mistake was not considering the top depth for lights: in most cases it is useless, except when a light is near a wall, in which case the light will “see” the wall when looking up.
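That simpler GLSL test might look like this (a sketch: the alpha-channel storage and the names are assumptions, and the small bias avoids acne near the light):

```glsl
uniform samplerCube u_depthMap; // distances as seen from the light
uniform vec3 u_lightPos;
uniform float u_lightFar;
varying vec3 v_worldPos;

float shadowFactor() {
    vec3 toFrag = v_worldPos - u_lightPos;
    float current = length(toFrag) / u_lightFar;      // normalized distance
    float stored = textureCube(u_depthMap, toFrag).a; // what the light "sees"
    return current - 0.005 > stored ? 0.0 : 1.0;      // 0 = in shadow
}
```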
Sadly it does not seem Libgdx supports cube map frame buffers, so I copied the FrameBuffer class and modified it to handle cube maps.
Here is what happens if you don’t change the up property of the camera :
The up property for negative/positive Y is not the same as in the examples I saw on the Internet, I don’t know why :
After correction and adding the bias back
This seems to be a lot quicker than the previous method.
Now, trying to render multiple lights, my first approach was to
Have one framebuffer per light
Generate shadow map for each light on its own framebuffer
Generate the final scene, using arrays of samplerCube (for each light) in the fragment shader
The problem is that it seems only the last shadow texture is used, and I have lots of trouble getting an array of samplerCube to work in the shader.
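For reference, this is the kind of shader that causes the trouble; GLSL ES 2.0 only guarantees sampler arrays indexed by constant expressions, so a loop like this compiles on some drivers and misbehaves on others (all names are assumptions):

```glsl
#define MAX_LIGHTS 4
uniform samplerCube u_depthMaps[MAX_LIGHTS]; // one shadow cube map per light
uniform vec3 u_lightPos[MAX_LIGHTS];
uniform float u_lightFar;
varying vec3 v_worldPos;

void main() {
    float lit = 0.0;
    for (int i = 0; i < MAX_LIGHTS; i++) {
        vec3 dir = v_worldPos - u_lightPos[i];
        // indexing a sampler array with a loop variable is the fragile part
        float stored = textureCube(u_depthMaps[i], dir).a;
        if (length(dir) / u_lightFar - 0.005 <= stored) lit += 1.0 / float(MAX_LIGHTS);
    }
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```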
As of now I can’t seem to get anything to work, so I guess I’m not using the right technique. I’ve read about deferred shading, but I don’t know yet how it works (nor if it can be done with Libgdx / GL 2.0)
Removing artifacts near the light is simply done by adding a bias to the depth test; it is good enough in our case
I’ve also added back unrealistic shadows on walls: walls facing right/left or down/top get an automatic different shadow, which makes gaps easier to see (you can now see the small holes on the top-right of the image; they were previously invisible).
It could be done using global illumination, but this method is a lot quicker (just testing the normal of a surface) and works nicely in Alien Blitz.
I will now make some assumptions, in order to get a good ratio of visual quality to rendering speed. They are very specific to Alien Blitz:
Most lights have a small range, except sun lights, which will probably get specific code
Bullets should not have a realistic shadow; the old one is better as it allows the player to easily guess the height of a missile (I’ve tested both, and the unrealistic shadow is really better for gameplay)
The view is quite far away, so there is no need for high-quality shadows (an FPS view would require them)
There can be lots of lights on screen, so they should be computed quickly
These points explain some of the choices I will make from now on; if at some point I want to develop a first-person view game I will have to change a lot of code.
Testing different shadow map sizes
64px is obviously too small, 256px should be enough for most lights, and 1024px for high range lights (sun)
There are some artifacts above walls, I will have to correct them with PCF but maybe just as an option (3 quality options for shadows : low without PCF & 256px maps, medium with PCF & 512px maps, high with PCF & 1024px maps, or something like that)
Real level test
I’ve changed my code a bit so that the light used for shadow mapping is the closest one to the player, which allows me to quickly run some tests on actual levels.
It really feels good. Currently the shadows are very big, but when all lights use shadow mapping it should feel better I think; otherwise I will have to make some changes to the lights (placing them higher and looking at the ground)
More complex test case
For the next modifications I will need a more complex test case, with more lights, colors and such. Here it is (it also let me check that the old code was still working)
I should have used this kind of effect (different light colors in the same room) in the game, it looks nice 🙂
Point lights / Omni-directional lights
Omni-directional lights require a cubemap: basically the scene is rendered 6 times, once per face (left/right/front/back/bottom/top)
Of course in Alien Blitz the top rendering is useless
The idea is just to render the scene 6 times, changing camera looking direction and viewport position/size each time.
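That loop can be sketched with plain data; the 3×2 placement inside a single frame buffer is an assumption based on the resulting map, and the names are mine:

```java
class CubeMapAtlas {
    /** Look direction for each face: +X, -X, +Y, -Y, +Z, -Z. */
    static final float[][] FACE_DIR = {
        { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 }, { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 }
    };

    /**
     * Viewport {x, y, width, height} for face i when the six views are packed
     * as a 3x2 grid inside one frame buffer. The per-face camera "up" vector
     * also matters (getting it wrong distorts the map) and is omitted here.
     */
    static int[] faceViewport(int face, int cellSize) {
        return new int[] {
            (face % 3) * cellSize, // column in the grid
            (face / 3) * cellSize, // row in the grid
            cellSize, cellSize
        };
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            int[] v = faceViewport(i, 256);
            System.out.println("face " + i + " -> x=" + v[0] + " y=" + v[1]);
        }
    }
}
```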
Here is the resulting map
I will assume a light does not cast a shadow on itself, which makes things easier (hence the fully black image in the bottom-right; the top-right is supposed to store the top view map, but it is unused here)
Time to render this new map now:
There are some artifacts on the edges of the 5 generated maps (visible on the left/down diagonal); other than that it seems quite fine.
I am now considering managing multiple lights, but I’m afraid I will run into some difficulties:
Depending on the video card there can be some harsh limits (221 max uniform vectors and 16 textures on one of my video cards)
I still get 60fps, but I think it is already getting heavy on the GPU (my laptop’s fan is speeding up, which is a good indicator)
Most examples on the Internet are very simple (directional light, one light only,…), so I will have to find better sources on how to manage multiple lights and run lots of tests.
In Alien Blitz, lights are computed using a mix of a 2D light texture and some custom code to get smooth lights and avoid artifacts on walls.
It’s a good solution: it requires some time when loading a level but is quite quick afterwards.
I’ve decided to try to implement shadow mapping, I don’t know if I will apply it to Alien Blitz, maybe just the PC version. We will see… but it is still a good exercise and I’d love to see what it could look like.
I’ve not completed the task at all, but I’ve decided to post some “work in progress” screenshots.
I’ve made a custom level, with just some basic walls and one light, here is the current result
Light point of view
In order to generate the shadow map we need to set a camera on the light itself (what the light can “see”). For testing purposes it is easier to actually use the game camera first.
For the first tests I’ve decided to use a directional light; point lights need to generate 6 textures (a surrounding cube), so I will do that later. This light “looks at” and follows the player
And now the shaders need to be changed to only get a depth map
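A minimal depth-only fragment shader for this step might look like the following (a sketch; the varying name is an assumption):

```glsl
// Write the fragment's depth as a grayscale color.
// v_position is the clip-space position forwarded by the vertex shader.
varying vec4 v_position;

void main() {
    float depth = v_position.z / v_position.w * 0.5 + 0.5; // map [-1,1] to [0,1]
    gl_FragColor = vec4(vec3(depth), 1.0);
}
```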
(the UI elements are added after the 3d rendering, so they will not be saved when using a framebuffer later)
The black & white are inverted in this screenshot, otherwise the wall in the back would be almost invisible/pitch black.
Using a frame buffer
The next step is to restore the view to its original state and use a frame buffer to render the light’s camera.
Black and white are now correctly rendered. I’ve changed the FOV of the camera to be able to view the pillar with the 1:1 ratio (1024×1024)
Something is happening: it doesn’t crash, and lines move when the player moves. We’re getting there
After some correction
Hmmm it looks like letters and signs… Oops, I forgot to bind the texture, so it’s using another one (the one used by the font, apparently)
Something is obviously off
Hey! it looks like there’s a small mecha there…
Ok, there’s obviously a mecha there, and it moves accordingly. Good, but there are way too many artifacts…
After a bunch of tests and corrections, the pillars’ shadows seem correctly placed, but the light does not reach far enough (it should illuminate the walls), there doesn’t seem to be any shadow for the mecha, and there are still artifacts…
Some more tests, still a lot of artifacts but I’m beginning to like what is displayed…
Haa, a lot better. Let’s remove the right part
Nice, I also have the “expected” artifacts near the center of the light. They are normal artifacts due to distortion and are to be expected at this point (to be corrected afterwards, of course).
A small video
What’s next ?
This was a simple example with lots of limitations
Shadows are computed each frame
Only one light
No light color
Badly written code
So there’s lots of code left to write… but it’s a start and I’m pleased I’ve been able to get some shadow mapping working so fast.