The Blog of Ryan Foss

It's a start!

Rocket Builder Prototype

This Rocket Builder Prototype is something I’ve been playing with. Inspired by Kerbal Space Program and my kids’ interest in Bad Piggies’ sandbox mode, I built a quick prototype for testing. When I showed my kids it was a hit, and I’ve never experienced better motivation.

CLICK TO PLAY

This prototype was built in Unity 3.5 and works on Android, PC and web. I used Unity’s built-in physics of rigid bodies and joints for the parts, which I think works great. I’m aware of a number of bugs and strange behaviors, and the interface needs work, but it is a prototype after all.

There is still a lot I want to do with this, and my kids have more ideas too. Hopefully I can get back to it in the near future.

If you have any ideas or comments, or a cool design you want to share, send me a note!

Automatic Road Geometry Generation in Unity

I’ve had an awesome project at work developing a simulation in Unity3D that builds a test track, a road essentially, right before your eyes. Technically, we’re using some pretty hefty data, including OpenCRG and Power Spectral Density data as inputs, but I also added a lighter-weight random algorithm that can generate roads more appropriate for gameplay.

In the image above you can see a long length of track that I generated with a few inputs. It also creates road rails to help keep the vehicle on the road, and a simple ditch on each side of the road built from a basic cross-section profile. Additional options include material selection, which changes the road appearance.
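Without getting into the tool’s actual code, the basic idea behind the geometry, sweeping a cross-section profile along a centerline, is easy to sketch. Here’s a rough, hypothetical Python illustration; the profile points, the flat-ground assumption and the function below are made up for the example, not what the tool really does with its OpenCRG/PSD inputs:

# Rough sketch (not the actual tool): sweep a 2D cross-section profile
# along a centerline to build road mesh vertices and triangles.
# The profile is a list of (lateral offset, height) pairs: here a flat
# road surface with a shallow ditch on each side.
import math

profile = [(-6.0, -0.5), (-4.0, 0.0), (4.0, 0.0), (6.0, -0.5)]

def sweep_profile(centerline, profile):
    # centerline: list of (x, y) points along the road, flat ground assumed
    vertices = []
    for i, (cx, cy) in enumerate(centerline):
        # travel direction estimated from neighbouring points
        nx, ny = centerline[min(i + 1, len(centerline) - 1)]
        px, py = centerline[max(i - 1, 0)]
        dx, dy = nx - px, ny - py
        length = math.hypot(dx, dy) or 1.0
        # left-pointing normal, used to push profile points sideways
        lx, ly = -dy / length, dx / length
        for offset, height in profile:
            # (x, up, y): the profile height becomes the vertical axis
            vertices.append((cx + lx * offset, height, cy + ly * offset))
    # connect consecutive cross-sections into two triangles per quad
    triangles, n = [], len(profile)
    for i in range(len(centerline) - 1):
        for j in range(n - 1):
            a, b = i * n + j, i * n + j + 1
            c, d = a + n, b + n
            triangles += [(a, c, b), (b, c, d)]
    return vertices, triangles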

It’s not often that I get to share stuff from work, but this is only a taste of what it can do. The project will eventually be released to the public. Hopefully some day I can share a video of everything it does.

The vehicles are from the Car Tutorial project provided by Unity.

Hungry Monsters Prototype

While on my recent business trip, I spent my time at the airport and on the plane programming a game concept I’ve been thinking about a lot lately: Hungry Monsters (the first prototype). Fair warning: there is no gameplay yet, merely a rapid prototype of the gameplay elements, specifically the resource management of employees, work stations, ammunition and the actual playing field.

Hungry Monsters is very similar to Plants vs. Zombies, but with my own engineering spin on it. What’s different, at least in my design, is that the player has to decide how to spend their resources: running a bakery to provide the food (the ammunition) as well as placing weapons to fend off (feed) the onslaught of hungry monsters. Players will have to decide whether to place another muffin shooter, an oven, or another employee, for instance.

The prototype at this stage doesn’t allow the player to do anything yet, but was built to allow me to investigate the idea. Through the Unity Editor I’m able to try different combinations of things. Considering this took me about 6 hours (built from scratch in airports and on the flight for a recent business trip), I’m pretty happy with the outcome and excited to move on. Please excuse the Microsoft Paint artwork and simple geometry; it is a first-pass prototype!

In the next version I hope to have the basic interactive elements working to allow placement of work stations, workers and weapons. (I need a better word for weapons too!)

Booth Bunny Me

I was out of town this last week, employed as a booth bunny for a simulation demonstration we made to showcase a number of our systems. It’s fun and frustrating and exciting and boring, all in one!

Position from Depth

I’ve been doing some stuff at work using our visualization, shaders and some python scripting.  I normally don’t post stuff about work for many reasons, but this project has been a lot of fun and is worth blogging about (and I have permission).  I also want to document what I did as well as address some of the issues I encountered.

Essentially, long story short, we’re doing some human safety systems work where we need to detect where a human is in an environment. I’m not directly involved with that part of the effort, but the team that is uses depth cameras (like a Kinect, in a way) to evaluate the safety systems. Our role, and mine specifically, is to provide visualization elements that meld reality and simulation, and our first step is to generate some depth data for analysis.

We started with our Ogre3D visualization, and a co-worker got a depth of field sample shader working. This first image shows a typical view in our Ogre visualization. The scene has some basic elements in world space (the floor, frame and man) and others in local space (the floating boxes) that we can test against.

A sample scene, showing the typical camera view.  The upper-right cut-in is a depth preview.

The next image shows the modifications I made to the depth shader. Instead of using a typical black-and-white depth image, I decided to use two channels, red and green. The blue channel is reserved for any geometry beyond the sensor’s range. Pure black means depthless; essentially no geometry exists there.

Two color channel depth output image.

I decided to use two color channels for depth to improve the accuracy. That’s why you see color banding: I hop around both channels. If I only used one channel, at 8 bits, that would be 256 values. A depth range of 10 meters would mean the accuracy is only about 4 cm (10.0 m / 256). By using two color channels I’m effectively using 16 bits, for a total of 65536 values (256 * 256), which improves the accuracy to about 0.15 mm (10.0 m / 65536). In retrospect, perhaps I could have used a 16 bit image format instead.

This sort of math is surprisingly easy. Essentially you take the depth value right from the shader as a range of 0 to 1, with 1 being the max depth. Since we are using two channels, we want the range to be between 0 and 65536, so just take the depth and multiply by 65536. Determining the 256 values for each channel is pretty easy too, using the modulus. (A quick explanation of modulus: it’s what happens when numbers wrap around, like 1 pm being hour 13. The modulus of 13 by 12 is 1, for example, as is the modulus of 25 by 12. You could also consider it the remainder after division.) So the red channel is the modulus of the scaled depth by 256. The green channel is done similarly, but is the modulus of depth/256 by 256.

red channel = modulus(depth, 256)
green channel = modulus(depth/256, 256)

Here’s an example.  Lets say the depth is 0.9.  That would result in a color value of 58982.4 (0.9 * 65536).  The red channel color would be the modulus of 58982.4 by 256, which equals 102.  The green channel would be the modulus of 58982.4/256 by 256, which is 230.
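Here’s roughly what that looks like as a little Python sketch of the encode/decode round trip; the 10 meter max range and the names are just for illustration, not our actual shader or scripts:

MAX_DEPTH = 10.0   # maximum sensor range in meters (example value)
SCALE = 65536      # two 8-bit channels = 256 * 256 values

def encode_depth(depth_m):
    # map a depth in meters to (red, green) channel values, 0-255 each
    d = int(depth_m / MAX_DEPTH * SCALE)   # normalize to 0..65535
    red = d % 256             # low byte
    green = (d // 256) % 256  # high byte
    return red, green

def decode_depth(red, green):
    # reverse the encoding: recover depth in meters from the two channels
    d = green * 256 + red
    return d / SCALE * MAX_DEPTH

# the worked example from above: a normalized depth of 0.9
r, g = encode_depth(0.9 * MAX_DEPTH)
print(r, g)                 # roughly 102, 230
print(decode_depth(r, g))   # roughly 9.0 meters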

With that done, I save out the image representing the depth with two channels as I illustrate above.

Next I calculate the position from the image and depth information. This particular aspect caused me lots of headaches because I was over-complicating my calculations with unnecessary trigonometry. It also requires that you know some basic information about the image. First off, it has to be a symmetric view frustum. Next, you need to know your fields of view, both horizontal and vertical, or at least one of them and the aspect ratio. From there it’s pretty easy, so long as you realize the depth is flat (not curved like a real camera). Many of the samples out there that cover this sort of thing assume the far clip is the cutoff, but in my case I designed the depth to be a function of a specified maximum depth.

I know the depth by taking the color of a pixel in the image and reversing the process I outlined above. To find the x and y positions in the scene, I take a pixel’s image position as a percentage (like a UV coordinate), then determine that position relative to the center of the image. This is really quite easy, though it may sound confusing. For example, take pixel 700, 120 in a 1000 x 1000 pixel image. The position is 0.70, 0.12. The position based on center is 0.40, -0.76. That means the pixel is 40% right of center and 76% down from center. The easiest way to calculate it is to double the value and subtract 1.

pixelx = pixelx * 2 - 1
pixely = pixely * 2 - 1

To find the x and y positions, in coordinates local to the view, it’s some easy math.

x = tan(horizontal FOV / 2) * pixelx * depth
y = tan(vertical FOV / 2) * pixely * depth
z = depth

This assumes that positive X values are on the right, and positive Y values are down (or up, depending on which corner 0,0 is in your image).  Positive Z values are projected out from the view.
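Put together, the whole pixel-to-position step looks roughly like this in Python (the image size and FOV numbers are just example values, not our sensor’s):

import math

H_FOV = math.radians(60.0)   # horizontal field of view (example value)
V_FOV = math.radians(60.0)   # vertical field of view (example value)
WIDTH, HEIGHT = 1000, 1000   # image size in pixels

def position_from_pixel(px, py, depth):
    # depth is the flat (planar) distance along the view axis, in meters
    u = px / WIDTH               # pixel position as a percentage, like a UV
    v = py / HEIGHT
    u = u * 2 - 1                # re-center so 0,0 is the middle of the image
    v = v * 2 - 1
    x = math.tan(H_FOV / 2) * u * depth
    y = math.tan(V_FOV / 2) * v * depth
    z = depth
    return x, y, z

# the pixel from the example above, at a depth of 9 meters
print(position_from_pixel(700, 120, 9.0))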

To confirm that all my math was correct I took a sample depth image (the one above) and calculated the xyz for each pixel then projected those positions back into the environment.  The following images are the result.

Resulting depth data to position, from the capture angle.
The position data from depth from a different angle.
The position data from depth from a different angle.

The results speak for themselves.  It was a learning process, but I loved it.  You may notice the frame rate drop in the later images.  That’s because I represent the pixel data with a lot of planes, over 900,000.  It isn’t an efficient way to display the data, but all I wanted was confirmation that the real scene and the calculated positions correspond.
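If you want to try the same round trip yourself, here’s a hypothetical vectorized sketch of the whole-image version using numpy (again with made-up FOV and range values; blue marks geometry beyond the sensor range and pure black marks empty pixels, as described above):

import numpy as np

MAX_DEPTH = 10.0            # example sensor range in meters
H_FOV = np.radians(60.0)    # example fields of view
V_FOV = np.radians(60.0)

def depth_image_to_points(rgb):
    # rgb: an (H, W, 3) uint8 array read from the two-channel depth image
    h, w, _ = rgb.shape
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    depth = (g * 256 + r) / 65536.0 * MAX_DEPTH

    # pixel coordinates re-centered to the -1..1 range
    u, v = np.meshgrid(np.arange(w) / w * 2 - 1, np.arange(h) / h * 2 - 1)

    x = np.tan(H_FOV / 2) * u * depth
    y = np.tan(V_FOV / 2) * v * depth

    # keep only pixels that hit geometry inside the sensor range
    valid = (rgb[..., 2] == 0) & (depth > 0)
    return np.stack([x[valid], y[valid], depth[valid]], axis=-1)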

Sound Served

At work I played around with our sound server and sound messaging, and I was excited to get variable engine sounds working. When we first started doing sound on the PC, the system couldn’t keep up with the simulation and play the sound in sync. None of us had any sound programming experience, and we couldn’t get it to work well. We had some balancing issues, so we made a sound server that could run on a remote machine where the processor wasn’t already consumed. This was good for a lot of reasons, but ultimately it is ugly today IMO because it adds complexity and lots of messaging. Hopefully I can integrate sound directly into our simulation this summer. Meanwhile, since we are simulating some boats, I added the ability to control volume and pitch on indexed sounds that are attached to vehicles, and to send messages with the corresponding values. The code looks something like this.

pitch = 0.75 + 0.5*currentSpeed/maxSpeed;
volume = 0.50 + 0.5*currentSpeed/maxSpeed;

//send message “vehicle index soundname x y z h p r volume pitch”

It’s pretty simple and it works quite well. Essentially, as the vehicle increases in speed, the volume and pitch of the motor sound increase. It proved the concept, which I’m sure is how some games do it. Next I want to add an idling sound and modify it to something like this.

if (currentSpeed < 0.25*maxSpeed) {
volumeIdle = 1.0 - (1.0/0.25)*currentSpeed/maxSpeed; //goes from 100% to 0%
volumeEngine = 0.5 + 1.0*currentSpeed/maxSpeed; //this should max at 0.75
}
else {
volumeIdle = 0.0;
volumeEngine = 0.75 + 0.25*currentSpeed/maxSpeed;
}
pitch = 0.75 + 0.5*currentSpeed/maxSpeed;

//send message “vehicle index soundnameIdle x y z h p r volume pitch”
//send message “vehicle index soundnameEngine x y z h p r volume pitch”

This should let the engine idle sound be heard when the vehicle is not moving or is moving very slowly. I don’t know if this will work, but I bet with some tweaking it will sound ok.

Also, since this is for a boat it makes sense, as there is no shifting or gears; the engine just ramps up. But if I want to apply this to another type of vehicle with a more complicated engine, it could be modified like this:

pitch = 0.5 + 0.25*gear + 0.75*currentSpeed/maxSpeed;
volume = 0.5 + 0.75*gear + 0.25*currentSpeed/maxSpeed;

Where gear is used to make the sounds see-saw some for that vroom-vrooooom-vrooooom sound of changing gears.

Flash Class


I know it isn’t much, but this is what we had to do in my Simulation class. For some reason we have Flash labs. This one basically had us make objects and then move something. Pretty lame, but I assume it’s going to get better.

I’m deadly (and geeky) too!


I spent the last four days in DC at the Navy League something or other trade show standing on the trade show floor, demonstrating our modeling and simulation to VIPs and other not-so-VIPs. Turns out the schedule I conned my co-worker/friend Brent into coincided with an interview. Brent got a stunning write up on the Popular Mechanics technology blog and all I got was an extra hour of sleep.