The Blog of Ryan Foss

It's a start!

Fireman Run Dynamic Water Spraying Geometry



I built some custom editor extensions in Unity to help me make geometry for use in level creation.

The spraying object’s mesh is generated on the fly with a number of settings. It automatically generates the geometry, assigns UVs and adds additional colliders for game interactions.

Automatic Road Geometry Generation in Unity

I’ve had an awesome project at work developing a simulation in Unity3D that builds a test track, a road essentially, right before your eyes. Technically, we’re using some pretty hefty data, including OpenCRG and Power Spectral Density data as inputs, but I also added a lighter-weight random algorithm that can make roads more appropriate for gameplay.

In the image above you can see a long length of track that I generated with a few inputs.  Additionally, it creates road rails to help keep the vehicle on the road.  It also creates a simple ditch on each side of the road from a simple profile. Additional options include material selection, which changes the road appearance.
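I can’t show the actual code, but the ditch-from-a-profile idea boils down to sweeping a cross-section along the road centerline. Here’s a hypothetical Python sketch of just that part (the real implementation is in Unity and handles heading, banking and UVs, none of which appears here):

def sweep_profile(centerline, profile):
    # centerline: list of (x, y, z) points down the middle of the road.
    # profile: list of (offset, height) pairs describing the road surface
    #          and ditch shape from left to right.
    # Returns one row of vertex positions per centerline point; a real
    # version would also orient each row along the path direction.
    rows = []
    for cx, cy, cz in centerline:
        row = [(cx + offset, cy + height, cz) for offset, height in profile]
        rows.append(row)
    return rows

# A flat road 4 units wide with a shallow ditch on each side.
profile = [(-4.0, -0.5), (-2.0, 0.0), (2.0, 0.0), (4.0, -0.5)]
centerline = [(0.0, 0.0, float(z)) for z in range(0, 50, 5)]
vertex_rows = sweep_profile(centerline, profile)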

It’s not often that I get to share stuff from work, but this is only a taste of what it can do. The project will eventually be released to the public. Hopefully some day I can share a video of everything it does.

The vehicles are from the Car Tutorial project provided by Unity.

Unity Skyboxes Help

I’m building a simple weather system in a Unity simulation at work and I’m using the supplied Skybox shader along with some of the supplied skybox materials.  The supplied standard asset skyboxes do a good job for part of what I need, but I needed a few more variations.  I decided to use one of their texture sets and make a few quick atmospheric modifications.  As I expected, doing this in Photoshop can come with some problems.  Texture alignment at the edges and corners required special attention, along with the box distortion.

I made this helper image as I worked to remind me of how things should work.

Position from Depth

I’ve been doing some stuff at work using our visualization, shaders and some python scripting.  I normally don’t post stuff about work for many reasons, but this project has been a lot of fun and is worth blogging about (and I have permission).  I also want to document what I did as well as address some of the issues I encountered.

Essentially, long story short, we’re doing some human safety systems work where we need to detect where a human is in an environment. I’m not directly involved with that part of the effort, but the team that is uses depth cameras (like a Kinect, in a way) to evaluate the safety systems. Our role, and mine specifically, is to provide visualization elements to meld reality and simulation, and our first step is to generate some depth data for analysis.

We started with our Ogre3D visualization, and a co-worker got a depth-of-field sample shader working. This first image shows a typical view in our Ogre visualization. The scene has some basic elements in world space (the floor, frame and man) and others in local space (the floating boxes) we can test against.

A sample scene, showing the typical camera view.  The upper-right cut-in is a depth preview.

The next image shows the modifications I made to the depth shader. Instead of using a typical black and white depth image, I decided to use two channels, the red and green channels. The blue channel is reserved for any geometry beyond the sensor’s vision. Black means no depth; essentially no geometry exists there.

Two color channel depth output image.

I decided to use two color channels for depth to improve the accuracy. That’s why you see color banding: the value hops around both channels. If I only used one channel, at 8 bit, that would be 256 values. A depth of 10 meters would mean that the accuracy would only be about 4 cm (10.0 m / 256). By using two color channels I’m effectively using 16 bit, for a total of 65536 values (256 * 256), which increases our accuracy to about 0.15 mm (10.0 m / 65536). In retrospect, perhaps I could have used a 16 bit image format instead.

Doing this sort of math is surprisingly easy. Essentially you take the depth value right from the shader as a range of 0 to 1, with 1 being the max depth. Since we are using two channels, we want the range to be between 0 and 65536, so just take the depth and multiply by 65536. Determining the 256 values for each channel is pretty easy too using the modulus. (A quick explanation of modulus: it’s how numbers wrap around, like clock time where 13:00 is 1 pm. So the modulus of 13 by 12 is 1, for example, as is 25 by 12. You could also consider it the remainder after division.) So the red channel is determined by the modulus of the depth by 256. The green channel is done similarly, but in this case is determined by the modulus of depth/256 by 256.

red channel = modulus(depth, 256)
green channel = modulus(depth/256, 256)

Here’s an example. Let’s say the depth is 0.9. That would result in a color value of 58982.4 (0.9 * 65536). The red channel color would be the modulus of 58982.4 by 256, which equals 102. The green channel would be the modulus of 58982.4/256 by 256, which is 230.
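Here’s a minimal Python sketch of that encoding (the real version lives in a shader, so the function name and structure here are just illustrative):

def encode_depth(depth):
    # Encode a normalized depth (0..1) into two 8-bit style channel values.
    # A sketch of the two-channel scheme described above, not the shader code.
    scaled = depth * 65536          # 0.9 -> 58982.4
    red = scaled % 256              # wraps around every 256, ~102
    green = (scaled / 256) % 256    # how many times we wrapped, ~230
    return red, green

print(encode_depth(0.9))  # roughly (102.4, 230.4), matching the example above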

With that done, I save out the image representing the depth with two channels as I illustrate above.

Next I calculate the position from the image and depth information. This particular aspect caused me lots of headaches because I was over-complicating my calculations with unnecessary trigonometry. It also requires that you know some basic information about the image. First off, it has to be a symmetric view frustum. Next, you need to know your fields of view, both horizontal and vertical, or at least one and the aspect ratio. From there it’s pretty easy, so long as you realize the depth is flat (not curved like a real camera). Many of the samples out there that cover this sort of thing assume the far clip is the cut off, but in my case I designed the depth to be a function of a specified maximum depth.

I know the depth by taking the color of a pixel in the image and reversing the process I outlined above. To find the x and y positions in the scene I take a pixel’s image position as a percentage (like a UV coordinate, for instance), then determine that position relative to the center of the image. This is really quite easy, though it may sound confusing. For example, take pixel 700, 120 in a 1000 x 1000 pixel image. The position is 0.70, 0.12. The position relative to center is 0.40, -0.76. That means the pixel is 40% right of center and 76% off center vertically. The easiest way to calculate it is to double the value, then subtract 1.

pixelx = pixelx * 2 - 1
pixely = pixely * 2 - 1

To find the x and y positions, in coordinates local to the view, it’s some easy math.

x = tan(horizontal FOV / 2) * pixelx * depth
y = tan(vertical FOV / 2) * pixely * depth
z = depth

This assumes that positive X values are on the right, and positive Y values are down (or up, depending on which corner 0,0 is in your image).  Positive Z values are projected out from the view.
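Putting the decode and the projection together, here’s a rough Python sketch. It assumes a symmetric frustum, a max depth of 10 meters, and 8-bit red/green values; all the names are my own for illustration, not the actual script:

import math

def decode_depth(red, green, max_depth=10.0):
    # Reverse the two-channel encoding back into meters.
    scaled = green * 256 + red              # 0..65535
    return (scaled / 65536.0) * max_depth

def position_from_pixel(px, py, red, green, width, height,
                        h_fov_deg, v_fov_deg, max_depth=10.0):
    # Recover a view-local (x, y, z) from a pixel and its depth color.
    # Assumes flat depth (distance along the view axis, not radial).
    depth = decode_depth(red, green, max_depth)

    # Pixel position as a fraction of the image, remapped so the center
    # is (0, 0) and the edges are -1 / +1.
    u = (px / width) * 2.0 - 1.0
    v = (py / height) * 2.0 - 1.0

    x = math.tan(math.radians(h_fov_deg) / 2.0) * u * depth
    y = math.tan(math.radians(v_fov_deg) / 2.0) * v * depth
    z = depth
    return x, y, z

# Example: pixel (700, 120) in a 1000 x 1000 image with 60 degree FOVs.
print(position_from_pixel(700, 120, 102, 230, 1000, 1000, 60, 60))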

To confirm that all my math was correct I took a sample depth image (the one above) and calculated the xyz for each pixel then projected those positions back into the environment.  The following images are the result.

Resulting depth data to position, from the capture angle.
The position data from depth from a different angle.
The position data from depth from a different angle.

The results speak for themselves.  It was a learning process, but I loved it.  You may notice the frame rate drop in the later images.  That’s because I represent the pixel data with a lot of planes, over 900,000.  It isn’t an efficient way to display the data, but all I wanted was confirmation that the real scene and the calculated positions correspond.

Low Polygon Characters



Here are a few shots of the characters I’ve been working on lately. They are a combo level boss. The knight is the first half. Once you beat him, he takes on his spider form.

He should be appearing as the level two boss in the upcoming Android game.

A Nearest Color Algorithm

I’ve been developing this software at work to take an image from a video stream (a webcam in this case) and detect if a green laser dot is present. After spending most of my day getting a project to compile and a program to recognize my webcam as a stream, and being distracted by numerous other projects and co-workers, I just wasn’t at the top of my algorithm development game to detect the green dot. Combine that with office lighting and you know my dilemma (fluorescent lighting is often green-tinted and throws off the white balance).

I start by stepping through each pixel in the source and checking its color. If you take the pixel’s color from each channel and find the average, you’ve essentially got a grayscale value. This doesn’t work so well since some shiny metal might throw off the detection (like a ring). If I wanted to find the brightest spot, this is a good approach, but I needed to emphasize the green. At first I added a weight to the green channel by simply doubling it. So now the average is calculated as (R+G+G+B)/4, counting the green channel twice. This was better, but also prone to problems.
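In rough Python terms (the actual detection code is a work project, so this is just a sketch of the idea), the green-weighted average looked something like this:

def green_weighted_brightness(r, g, b):
    # Average the channels, counting green twice to bias toward green.
    return (r + g + g + b) / 4.0

# The problem: a bright gray highlight can still outscore a strong green.
print(green_weighted_brightness(220, 220, 220))  # bright gray: 220.0
print(green_weighted_brightness(60, 255, 60))    # green dot: 157.5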

At the end of the day I had a basic project setup and working with some hit or miss detection. I had invested over an hour in multiple approaches, and although my algorithms for color matching were working somewhat, I knew there had to be a better approach.

What dawned on me later was something I should have noted much earlier. I know that RGB color is essentially three dimensional, but since it is color I’m used to thinking of it two dimensionally, like you see in an image editing program. Once I visualized it as a 3D space, a big cube made of blocks so to speak, I knew my answer.

You know the shortest route from A to B is a straight line, right? This is easy to calculate in 2D using the Pythagorean Theorem, as you probably recognize:

a² + b² = c²

This also applies in 3D space, which is exactly what RGB color is. An RGB value is a point in a 3D space. The Theorem applies like this:

a² + b² + c² = d²

The obvious solution was in front of me but I didn’t see it. Euclidean distance would give me exactly what I needed. I needed to treat the color difference as a distance.

r² + g² + b² = d²

For instance, say I wanted to find the closest pixel in the supplied source to a target color RGB 128, 255, 128. I check pixels against my target color by finding their channel distance. So imagine I have a black pixel of 0, 0, 0. My distance is calculated as:

r = pixel_color - target_color = 0 - 128 = -128
g = pixel_color - target_color = 0 - 255 = -255
b = pixel_color - target_color = 0 - 128 = -128

d = sqrt(r² + g² + b²) = 312.7

What if I have a pixel with color 0, 0, 255, or a pixel with color 0, 255, 0? Which one is closer to my target? If I calculate the average, they are the same. But by distance, the green pixel is closer to my desired color.
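Here’s a small Python sketch of the distance check (the real detector runs on a video stream; the pixel-loop structure and names here are just for illustration):

import math

def color_distance(c1, c2):
    # Euclidean distance between two RGB colors treated as points in 3D.
    r = c1[0] - c2[0]
    g = c1[1] - c2[1]
    b = c1[2] - c2[2]
    return math.sqrt(r * r + g * g + b * b)

def nearest_pixel(pixels, target):
    # Return the (x, y) of the pixel whose color is closest to the target.
    # 'pixels' is assumed to be a 2D list of (r, g, b) tuples.
    best_xy, best_dist = None, float("inf")
    for y, row in enumerate(pixels):
        for x, color in enumerate(row):
            d = color_distance(color, target)
            if d < best_dist:
                best_xy, best_dist = (x, y), d
    return best_xy, best_dist

target = (128, 255, 128)
print(color_distance((0, 0, 0), target))    # ~312.7, as in the example
print(color_distance((0, 0, 255), target))  # ~312.3
print(color_distance((0, 255, 0), target))  # ~181.0, the closer pixel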

GSP 380 Week 1 Complete

I just finished my homework for week 1 of GSP 380, Multimedia Programming with Lab. This week it was pretty simple though I’m really looking forward to the weeks ahead.

This week I had to get VS 2008 Express and the DirectX SDK installed and then compile and run two programs, one using DirectX and one using OpenGL.

For the OpenGL assignment we had to modify the program, which is why you see a different color and an extra triangle.

Game For Review, Board@Work

Ben submitted our iPhone game on Friday last week. He expects it to take about a week to be released, since his last game took about that long to be approved. Since then, we’ve made a video to showcase some of the gameplay.



I also helped him update his website. I’m not much for web design, because it’s often frustrating and time-consuming, but I think simplicity is nice. Hence, the design of his site conforms to that. You can check it out, along with info about the game, at SpigotGames.com.

New Logo for Spigot Games


I offered to make Ben at Spigot Games a new logo for his game company. It was selfish of me because I didn’t like his existing logo very much. This is an improvement.

The drop shadow one is the one he is using. It helped give it some depth I think, though I’m 50/50 on which one I like more.

iAlley Ball Release


I did some art for a local iPhone developer a while back. He just released the game. It’s called iAlley Ball. I don’t have an iPhone/Touch so I haven’t played it yet.

This was rendered in Carrara (old) with 4 layers, which I blended in Photoshop to get the results I wanted for shadows and lighting. It turned out better than I thought my old software could do.

I would have used Blender, but at the time I was struggling with the materials and camera, so I fell back on my old familiar. Note that the Carrara I have is 3.0 (circa 2003), and I made it do some things it isn’t capable of via creative blending of multiple renders.

2003! Man my PC is getting old.

Torque Moonbase

I wrote a crazy complicated pitch for my class, something ambitious requiring a lot of determination, sweat and coding. Problem right there: I don’t have all the time I need. So I switched gears entirely and spent much of the night getting my world ready.

This is based off the starter.fps project. The character is default, but the terrain and skybox (the planet and space background) are custom. Check out the crater in the lower right; I’m able to paint those around. Looks pretty cool, I think.

Torque Me

In my Modification and Level Design class (GSP 340 at Devry) we’re using the Torque Game Engine to “modify” a game. I’m behind in the class because I don’t put my nose to the grindstone until 11:00 pm, and then I get distracted.

We’re using the book 3D Game Programming All in One by Ken Finney, and I’m through chapter 5. Chapters 4 and 5 get the template game up and running with levels. I couldn’t resist, so I took their default player and put my face on him.

iPhone Games

I think the iPhone is a really neat device, regardless of my distaste for Apple marketing and consumer milking (which I won’t go into). I see the potential for independent developers and wish I had the skills (and the time and money) to develop games for the device. I’m getting close though. I’ve managed to wrangle a freelance gig doing some 3D level model design and graphics work for an iPhone app. I’m signing an NDA so I can’t talk about it and I don’t know much yet, but hopefully it will be cool.

If you’re an iPhone developer looking for someone to do art, graphics, 2D or 3D models, level design, textures, icons, etc., please give me a buzz and I’ll see what I can do.