Today I showed off Zero Zen, a game I made in 48 hours for Ludum Dare, at the Portland DrupalCon. It went great! For those interested, you can download the original Ludum Dare entries here:
The original Ludum Dare page with extra information can be found here:
There will be a proper postmortem post later, but for the time being take a look at this nice gameplay video!
I’ve been interested in making some nice electronic music and sound effects for my games. I tend to find the UI of different VSTs frustrating, so I wanted a MIDI controller that could help speed things up. I’m also intrigued by the possibility of rigging the MIDI knobs to variables in my code so I can tweak different parameters in real time. Like any good (over-)engineer, I researched what it might take to put one together myself. For whatever reason, I find this sort of controller hardware fascinating.
It’s been too long! Here’s an effect I was taking a look at a few months ago.
So there is this cool technique that has gained significant popularity in the demoscene called “signed distance fields”. There’s a truly excellent presentation by iq of rgba (Iñigo Quilez), posted on his website http://www.iquilezles.org, which he gave at NVScene back in 2008, called “Rendering Worlds with Two Triangles”. I wanted to play around with some GLSL and thought this would be a really interesting algorithm to explore. You can see some of the power of these functions in a presentation that smash of fairlight (Matt Swaboda) gave at GDC earlier this year: http://directtovideo.wordpress.com.
So here’s an extremely basic GLSL shader I made to learn exactly how distance fields work. A frequent question when working with signed distance fields is: if you already know the distance from your viewpoint to the nearest surface, why march through the scene at all? The catch is that the field only tells you how far away the nearest surface is, not where it lies along your ray. Computing the exact intersection of a ray with the implicitly defined surfaces of many objects would involve a great deal of math; too much for a real-time application. A signed distance field instead describes the whole scene as a single function: given any point in 3D space, it returns the distance to the nearest surface. Combined with ray marching, we can start at the camera and safely step that distance along the ray, since no surface can be closer. If the ray points directly at that nearest surface, the step lands on the intersection point in 3D space. Otherwise, we evaluate the field again at the new point along the ray. March again, test for intersection, return if we intersected, otherwise continue marching.
To really solidify the algorithm in my brain, I wrote up this little shader. The only input I use is the window dimensions, which serve purely for coloring. I hope to soon add shadow computations to provide a true 3D look.
A couple weeks ago was another splendid game jam, this time held at the Art Institute of Portland and organized by the excellent PIGSquad (Portland Indie Game Squad). I was in the mood to learn a bunch of new technologies and to experiment with audio-visual synchronization, which I intend to use heavily in a lot of my upcoming games. The audio program I’m using is called Renoise, and the graphics framework I’m using is called Cinder.
This past weekend, I made a game in 48 hours! It was part of Ludum Dare, a competition to create a game as a one-person team in 48 hours, from 6 PM (PST) Friday to 6 PM Sunday.
So, let’s take a look at the game. It’s called “Serendipity with Cubes”, and it’s available to download (and rate, if you’re kind enough) from the Ludum Dare page.
I just recently got picking working using the Bullet Physics Engine. Picking is a way to “pick” an object, down to an individual primitive (triangle), using a cursor from the camera’s perspective. Hovering your mouse cursor over an object in a window and clicking on it, for example, is a very intuitive way to interact with a scene. However, it’s not as intuitive to program, because the location selected is in 2D screen coordinates, not 3D world coordinates. The difficulty in picking really lies in determining the 3D coordinates of the object to select. First, let’s see what I’m talking about.
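The core of that 2D-to-3D conversion is turning the mouse position into a world-space ray, which the physics engine (Bullet’s ray test, in my case) can then cast against the scene. Here’s a hedged sketch of just that conversion, assuming a simplified pinhole camera at the origin looking down -z; a real engine would instead run the point through the inverse view-projection matrix:

```cpp
#include <cassert>
#include <cmath>

// A picking ray: origin plus normalized direction, both in camera space
// for this simplified example.
struct Ray { float ox, oy, oz; float dx, dy, dz; };

// Convert a mouse position (pixels, origin at top-left) into a ray.
// fovY is the vertical field of view in radians. All names here are
// illustrative, not my engine's actual API.
Ray screenToRay(float mouseX, float mouseY,
                float screenW, float screenH, float fovY) {
    // Screen coordinates -> normalized device coordinates in [-1, 1].
    float ndcX = 2.0f * mouseX / screenW - 1.0f;
    float ndcY = 1.0f - 2.0f * mouseY / screenH;  // y is flipped on screen

    // Scale by the size of the view frustum at unit depth.
    float aspect     = screenW / screenH;
    float tanHalfFov = std::tan(fovY * 0.5f);
    float dx = ndcX * tanHalfFov * aspect;
    float dy = ndcY * tanHalfFov;
    float dz = -1.0f;  // camera looks down -z

    float len = std::sqrt(dx*dx + dy*dy + dz*dz);
    return {0, 0, 0, dx/len, dy/len, dz/len};
}
```

With the ray in hand, picking reduces to a physics query: cast from the ray origin to a far point along its direction and take the closest hit.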
Well, here is every blogger’s obligatory post about being too busy to blog. I’ve been working hard on the game engine for quite a while, but haven’t made many posts, as there isn’t much visually interesting going on. I’ve been testing out a lot of technology in the game engine and working on robustness. Here’s a short list of the things I’ve been up to since the last post:
I’ve tried to add whatever is necessary to allow prototyping of a game in a short amount of time, without compromising the structure of the engine. I’m quickly realizing that I’d like another layer of abstraction between the engine and the scripting system that contains just gameplay logic. For example, the engine might handle rendering assets and simulating physics, but the gameplay layer is responsible for describing the notion of a “player” or an “enemy”. This is further abstracted into a scripting system that allows rapid level creation.
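The split described above can be sketched roughly like this (every name here is hypothetical, just to illustrate the layering): the engine layer only knows about generic entities it can render and simulate, while the gameplay layer builds the notion of a “player” out of those primitives without the engine ever knowing about it.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// --- Engine layer: rendering assets and simulating physics, nothing more.
struct Entity {
    std::string mesh;          // asset the renderer would draw
    float x = 0, y = 0;        // state the physics step integrates
    float vx = 0, vy = 0;
};

struct Engine {
    std::vector<Entity> entities;
    void step(float dt) {      // the engine simulates; it has no game concepts
        for (auto& e : entities) { e.x += e.vx * dt; e.y += e.vy * dt; }
    }
};

// --- Gameplay layer: game concepts expressed via engine primitives.
struct Player {
    std::size_t entity;        // handle into the engine's entity list
    int health = 100;          // gameplay-only state the engine never sees
};

Player spawnPlayer(Engine& engine) {
    engine.entities.push_back({"player.mesh", 0, 0, 1, 0});
    return {engine.entities.size() - 1};
}
```

A scripting system would then sit one level higher still, creating and arranging these gameplay objects to build levels rapidly.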
I interpreted the snake eating itself as a prompt for recycling, revolution, reincarnation. So I started working on a kind of “Jenga” game, using my particle system to place blocks in the shape of a tower. The objective is to pull blocks out of the tower and place them on top to make the tower taller, without toppling it. The tallest tower yields the highest score. I worked as a one-person team and did all of my work from home. The game isn’t finished, but I’m fairly happy with the result after just a couple days’ work.
In the last few months, I’ve been spending a considerable amount of time fleshing out some tedious but necessary parts of my game. I realized that since I’m a one-man army, I need the ability to very quickly get all of my ideas into a playable form without a lot of process and layers of tools. Unfortunately, the only way to achieve a truly seamless workflow is to specialize your tools, which means rolling my own level editor and game formats. These things are nice to have anyway, and I believe the time I invest in them will pay off even in the very first game I end up writing with them. I decided I needed a quick and easy way to import models and other game assets; a scripting language (in my case, Lua) for data definition and, eventually, scripted events and possibly game rules; and a level editor that allows rapid building and playtesting of open 3D worlds.
Just a short and sweet post on the progress of my game. This demo shows off basic character and camera controls, as well as interaction with the world. You can toggle gravity using ‘g’, as well as the forces that cause the particles to flock to various shapes using ‘c’. The player moves using the standard WASD keys, with ‘q’ and ‘e’ moving the character forward and backward. Holding the left mouse button will fire small “bullets”, while clicking the right mouse button will fire larger “bullets”. The player moves within a bounded plane. The statistics display has also been updated to show relevant graphics and physics information.
You can download the executable here: tech_demo_1.zip
Note that you will need to install the June 2010 DirectX and the Visual C++ 2010 runtimes if this is your first time running the software. These should not need to be installed again to run any of my future demos, unless otherwise noted.
I’ve been conspicuously quiet the last couple of weeks, and this is why: I have been evolving my rough particle system into a (very targeted) game engine. From a design standpoint, the particle system was a learning project where I tried to leverage as much as I could of the natural mechanisms of C++ to develop an object model for what I planned to turn into a more general-purpose game engine. I decided to try to model my game objects using an inheritance structure. I felt that in the real world, nearly everything falls into some sort of classification, often with distinct parent-child relationships. However, as I began adding game-specific objects to my engine, I realized not only that modeling the real world (the way I felt it should be modeled) is prohibitively difficult due to its sheer volume and complexity, but also that game objects are merely approximations of real-world phenomena, and as such they tend to “cheat” to achieve a certain effect. Objects in a game can choose to be invisible, or defy the laws of physics. This basically breaks whatever elegant classification structure I had planned. Luckily, this can be addressed by converting to a “has-a” object model, where objects contain pointers to optional collections of data and functionality.
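A minimal sketch of that “has-a” model might look like this (the class names are illustrative, not my engine’s actual types): instead of inheriting behavior, a game object owns optional bundles of data, and the absence of a bundle is itself meaningful.

```cpp
#include <cassert>
#include <memory>

// Optional collections of data and functionality ("components").
struct RenderComponent  { const char* mesh; };
struct PhysicsComponent { float x, y, vx, vy; };

// A game object HAS components rather than IS-A deep class hierarchy.
// A null render component means the object is invisible; a null physics
// component means it simply defies the laws of physics.
struct GameObject {
    std::unique_ptr<RenderComponent>  render;
    std::unique_ptr<PhysicsComponent> physics;

    void update(float dt) {
        if (physics) {  // only simulate objects that opted in
            physics->x += physics->vx * dt;
            physics->y += physics->vy * dt;
        }
    }
};
```

An invisible trigger volume, for example, is just a GameObject with a physics component and no render component; no awkward `InvisiblePhysicalObject` class needs to exist anywhere in a hierarchy.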
© 2014 Halogenica