It’s been too long! Here’s an effect I was taking a look at a few months ago.
So there’s this cool technique that has gained significant popularity in the demoscene called “signed distance fields”. There’s a truly excellent presentation by iq of rgba (Iñigo Quilez), posted on his website http://www.iquilezles.org, which he gave at NVScene back in 2008, called “Rendering Worlds with Two Triangles”. I wanted to play around with some GLSL and thought this would be a really interesting algorithm to take a look at. You can see some of the power of these kinds of functions in a presentation that smash of fairlight (Matt Swaboda) gave at GDC earlier this year: http://directtovideo.wordpress.com.
So here’s an extremely basic GLSL shader I made to learn exactly how distance fields work. A frequent question when working with signed distance fields is: if you already know the distance from your viewpoint to the nearest surface, why march through the scene at all? The catch is that the distance function only tells you how far away the nearest surface is, not where along your ray it lies. Computing the exact intersection of a ray with one of many implicitly defined surfaces analytically would involve far too much math for a real-time application. A signed distance field instead describes the geometry of the entire scene by giving, for any point in 3D space, the distance to the nearest surface. Combined with ray marching, we can start at the camera and safely step that distance along the ray’s direction, since no surface can be closer than that. If the ray points directly at the nearest surface, that step lands on the intersection point in 3D space. Otherwise, we evaluate the distance function again at the new point along the ray to find the new distance to the nearest surface. March again, test for intersection, return if we intersected, otherwise continue marching.
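The loop described above is often called sphere tracing. Here’s a minimal fragment-shader sketch of it (this is my illustration, not the post’s original shader; the `uResolution` uniform, the hard-coded sphere scene, and the step-count shading are all assumptions):

```glsl
// Sketch of sphere tracing a signed distance field.
uniform vec2 uResolution;   // window dimensions (assumed uniform name)

// Signed distance from point p to a sphere of radius r at the origin:
// negative inside, zero on the surface, positive outside.
float sdSphere(vec3 p, float r) { return length(p) - r; }

// Scene distance function: distance from p to the nearest surface.
// A single sphere here; real scenes combine primitives with min().
float sceneSDF(vec3 p) { return sdSphere(p, 1.0); }

void main() {
    // Map the pixel to roughly [-1, 1] and build a simple pinhole camera.
    vec2 uv = (gl_FragCoord.xy / uResolution) * 2.0 - 1.0;
    vec3 ro = vec3(0.0, 0.0, 3.0);        // ray origin (camera position)
    vec3 rd = normalize(vec3(uv, -1.5));  // ray direction through this pixel

    float t = 0.0;              // distance travelled along the ray
    vec3 color = vec3(0.0);     // background if we never hit anything
    for (int i = 0; i < 64; ++i) {
        vec3 p = ro + rd * t;           // current point on the ray
        float d = sceneSDF(p);          // nearest surface is d away,
                                        // so stepping d is always safe
        if (d < 0.001) {                // close enough: call it a hit
            color = vec3(1.0 - float(i) / 64.0); // shade by step count
            break;
        }
        t += d;                         // march forward by d
        if (t > 20.0) break;            // ray escaped the scene
    }
    gl_FragColor = vec4(color, 1.0);
}
```

Note that a step only lands exactly on a surface when the ray points straight at the nearest object; in general the loop converges toward the surface, which is why the hit test uses a small epsilon rather than an exact zero.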
To really solidify the algorithm in my brain, I wrote up this little shader. The only input I use is the window dimensions, which are used only for coloring. I hope to add shadow computations soon to give it a true 3D look.
© 2019 Halogenica