I’ve recently had a need to render using both DirectX and OpenGL simultaneously. There are a number of reasons why this is useful, particularly in tools where you may wish to compare multiple rendering engines side by side. With this technique it is also possible to efficiently perform some rendering operations in one API to a render target, then switch to the other API to continue rendering to that same target. It can also be used to perform all rendering in one API while presenting the final render target with another. Providing direct access to textures and render targets in graphics memory, regardless of API, has the potential to efficiently pipeline surfaces through multiple discrete renderers.
I’ve been terribly bad at keeping information on my games posted on my blog, so I’ve created a “Games” section where you can see what I’ve been working on! My latest game was an entry to Ludum Dare 28 called “Aggrogate”. It did extremely well, getting 8th place in innovation! I will do my best to keep the game page up to date with the latest builds of my games, the latest videos, and links to the original Ludum Dare competitions.
Also, GDC is next week! I will work on making a nice informational post about one of the many research projects I did this year, as I’ve been meaning to post more technical information on this blog.
And in another hardware-related update, I also posted my recent 120Hz monitor overclock conversion.
Today I showed off Zero Zen, a game I made in 48 hours for Ludum Dare, at the Portland DrupalCon. It went great! For those interested, you can download the original Ludum Dare entries here:
The original Ludum Dare page with extra information can be found here:
There will be a proper postmortem post about the game, but for the time being, take a look at this nice gameplay video!
I’ve been interested in making some nice electronic music and sound effects for my games. I tend to find the UIs of different VSTs frustrating, so I was interested in getting a MIDI controller that could help speed things up. I’m also interested in the possibility of rigging the MIDI knobs to variables in the code, so I could tweak different parameters in real time. Like any good (over-)engineer, I researched what it might take to put one together myself. For whatever reason, I find this sort of controller hardware fascinating.
It’s been too long! Here’s an effect I was taking a look at a few months ago.
So there is this cool technique that has gained significant popularity in the demoscene called “Signed Distance Fields”. There’s a truly excellent presentation by iq of rgba (Iñigo Quilez), posted on his website http://www.iquilezles.org, which he presented at NVScene back in 2008, called “Rendering Worlds with Two Triangles”. I wanted to play around with some GLSL and thought this would be a really interesting algorithm to take a look at. You can see some of the power of these types of functions in a presentation that smash of fairlight (Matt Swaboda) gave at GDC earlier this year: http://directtovideo.wordpress.com.
So here’s an extremely basic GLSL shader I made to learn exactly how the distance fields work. A frequent question when working with signed distance fields is: if you already know the exact distance from your viewpoint to the surface of an object, why march through the scene at all? Well, consider the path of a ray fired into a scene: determining the exact intersection point of that ray with the implicitly defined surface of one object out of many would involve a great deal of math; too much for a real-time application. A signed distance field instead describes geometry by providing, for any point in 3D space, the distance to the nearest surface in the entire scene. Combined with ray marching, we can start at the camera and safely step at least that distance along the direction of the ray. If the ray pointed directly at that nearest surface, the step lands exactly on the intersection point in 3D space. Otherwise, we evaluate the field again at the new point along the ray to find the distance to the nearest surface from there. March again, test for intersection, return if we intersected, otherwise continue marching.
To really solidify the algorithm in my brain, I wrote up this little shader. The only input I use is the window dimensions, which are only used for coloring. I hope to soon add shadow computations to provide a true 3D look.
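The marching loop described above can be sketched on the CPU as well. Here’s a minimal Python version against a single sphere; the scene, names, and constants are illustrative, not taken from the actual shader:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def ray_march(origin, direction, sdf, max_steps=128, epsilon=1e-4, max_dist=100.0):
    # Step along the ray by the distance to the nearest surface until we hit one.
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sdf(p)
        if d < epsilon:
            return t  # hit: distance along the ray to the surface
        t += d        # safe step: nothing can be closer than d
        if t > max_dist:
            break
    return None  # miss

# A ray fired straight at the sphere hits its near side (z = 4).
hit = ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
miss = ray_march((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), sphere_sdf)
```

In a fragment shader, this loop runs once per pixel with the ray direction derived from the pixel’s position; the structure is the same.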
A couple weeks ago was another splendid game jam, this time held at the Art Institute of Portland and organized by the excellent PIGSquad (Portland Indie Game Squad). I was in the mood to learn a bunch of new technologies and to experiment with audio-visual synchronization, which I intend to use heavily in a lot of my upcoming games. The audio program I’m using is called Renoise, and the graphics framework I’m using is called Cinder.
This past weekend, I made a game in 48 hours! It was part of Ludum Dare, a competition to create a game as a one-person team in 48 hours, from 6 PM Friday to 6 PM Sunday (PST).
So, let’s take a look at the game. It’s called “Serendipity with Cubes”, and it’s available to download (and rate, if you’re kind enough) from the Ludum Dare page.
I just recently got picking working using the Bullet Physics Engine. Picking is a way to select (“pick”) an object, down to an individual primitive (triangle), using the cursor from the camera’s perspective. Hovering your mouse cursor over an object in a window and clicking on it is a very intuitive way to interact with a scene. However, it’s not as intuitive to program, because the selected location is in 2D screen coordinates, not 3D world coordinates. The difficulty in picking really lies in determining the 3D coordinates of the object to select. First, let’s see what I’m talking about.
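The first half of the problem, turning the 2D cursor position into a 3D ray, can be sketched like this. This is a common approach rather than the exact code from the post, and the camera convention (looking down −z, symmetric vertical FOV) and all names are my own assumptions:

```python
import math

def screen_to_ray(mouse_x, mouse_y, width, height, fov_y_deg=60.0):
    # Convert window coordinates to normalized device coordinates in [-1, 1].
    ndc_x = (2.0 * mouse_x / width) - 1.0
    ndc_y = 1.0 - (2.0 * mouse_y / height)  # window y grows downward
    aspect = width / height
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    # Ray direction in camera space; this camera looks down -z.
    dx = ndc_x * aspect * tan_half
    dy = ndc_y * tan_half
    dz = -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

# The center of an 800x600 window should yield the camera's forward direction.
ray = screen_to_ray(400, 300, 800, 600)
```

Once the ray is transformed into world space (by the inverse view matrix), Bullet can do the rest: cast it into the physics world with a ray test and take the closest hit as the picked object.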
Well, here is every blogger’s obligatory post about being too busy to blog. I’ve been working hard on the game engine for quite a while, but haven’t made many posts, as there haven’t been too many visually interesting things going on. I’ve been testing out a lot of technology in the game engine and working on robustness. Here’s a short list of the things I’ve been up to since the last post:
I’ve tried to add whatever is necessary to allow prototyping of a game in a short amount of time, without compromising the structure of the engine. I’m quickly realizing that I’d like another layer of abstraction between the engine and the scripting system that contains just gameplay logic. For example, the engine might handle rendering assets and simulating physics, but the gameplay layer is responsible for describing the notion of a “player” or an “enemy”. This is further abstracted into a scripting system that allows rapid level creation.
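The layering described above can be sketched very roughly. In this illustrative Python sketch (none of these names come from the actual engine), the engine layer only knows about generic entities and simulation, while the gameplay layer builds concepts like a “player” on top of it:

```python
class Engine:
    """Engine layer: owns generic entities and runs simulation steps.

    It has no notion of 'player' or 'enemy'; it only sees positions
    and velocities to integrate each frame.
    """
    def __init__(self):
        self.entities = []

    def spawn(self, entity):
        self.entities.append(entity)
        return entity

    def update(self, dt):
        for e in self.entities:
            # Stand-in for physics simulation: integrate velocity.
            e["pos"] = e["pos"] + e["vel"] * dt

def make_player(engine, x=0.0):
    """Gameplay layer: a 'player' is just an engine entity plus game rules
    (here, a health value the engine itself never looks at)."""
    return engine.spawn({"pos": x, "vel": 1.0, "health": 100})

engine = Engine()
player = make_player(engine)
engine.update(0.5)  # engine moves the entity; gameplay state is untouched
```

A scripting system would then sit one level higher still, composing gameplay-layer constructors like `make_player` into levels without touching engine internals.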
I interpreted the snake eating itself as a prompt about recycling, revolution, and reincarnation. So I started working on a “Jenga”-style game, using my particle system to place blocks in the shape of a tower. The objective is to pull blocks out of the tower and place them on top to make the tower taller, without toppling it. The tallest tower yields the highest score. I worked as a one-person team and did all of my work from home. The game isn’t finished, but I’m fairly happy with the result after just a couple days’ work.
© 2014 Halogenica