Thursday, April 13, 2006

Chosen First design method - Videotracking and mesh insertion

Just to clarify how the application is supposed to work, I'll give a summary here.

First, in my thesis partner's part of the application, coordinates in a video are chosen. At this early stage, these coordinates are the four corners of the area where a building should be inserted. The coordinates are found and clicked on in a few frames, and the application then interpolates between those frames. This is sometimes called keyframing.
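As a rough sketch of the keyframing idea (hypothetical data and function name - this is not my partner's actual code): linear interpolation between two clicked keyframes gives the corner positions for any frame in between.

```python
def interpolate_corners(kf_a, kf_b, frame):
    """Linearly interpolate four (x, y) corner points between two keyframes.

    kf_a, kf_b: (frame_number, [(x, y), ...]) with four clicked corners each.
    frame: the in-between frame number to compute corners for.
    """
    fa, corners_a = kf_a
    fb, corners_b = kf_b
    t = (frame - fa) / float(fb - fa)  # 0.0 at kf_a, 1.0 at kf_b
    return [(xa + t * (xb - xa), ya + t * (yb - ya))
            for (xa, ya), (xb, yb) in zip(corners_a, corners_b)]

# Corners clicked at frame 0 and frame 10; frame 5 is halfway in between.
kf0 = (0, [(100, 200), (300, 200), (320, 340), (90, 330)])
kf10 = (10, [(110, 210), (310, 210), (330, 350), (100, 340)])
mid = interpolate_corners(kf0, kf10, 5)
```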

My part of the application takes the four coordinates for the current frame, creates a homography matrix (used for calculating coordinate correspondence between different coordinate systems) and uses the homography to:
  1. Set texture coordinates for the ground plane - basically calculating the corner coordinates and then normalizing them.
  2. Find the position of where the house should be inserted on top of the ground plane.
  3. Find the rotation of the house, from the received four coordinates.
This sounds fairly simple, but there are some stumbling blocks - as described in previous posts. Currently, the greatest obstacle is that the video texture doesn't dispose properly - but I expect to have that solved within a few days (I'm also working on other stuff in parallel, I'm not that slow ;p).
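The homography step can be sketched like this (a minimal Python example with made-up corner values, not the actual MDX code): the closed-form homography mapping the unit square to four clicked corners, as commonly used in projective texture mapping.

```python
def square_to_quad(p0, p1, p2, p3):
    """Homography H mapping the unit square (0,0),(1,0),(1,1),(0,1)
    to the four clicked corners p0..p3 (closed form, no linear solver)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = p0, p1, p2, p3
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    dx1, dx2 = x1 - x2, x3 - x2
    dy1, dy2 = y1 - y2, y3 - y2
    den = dx1 * dy2 - dx2 * dy1
    g = (sx * dy2 - sy * dx2) / den
    h = (dx1 * sy - dy1 * sx) / den
    return [[x1 - x0 + g * x1, x3 - x0 + h * x3, x0],
            [y1 - y0 + g * y1, y3 - y0 + h * y3, y0],
            [g, h, 1.0]]

def apply(H, u, v):
    """Map (u, v) through H, dividing out the projective w coordinate."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Example: map the unit square onto the quad (2,1),(5,1),(6,4),(1,3).
H = square_to_quad((2, 1), (5, 1), (6, 4), (1, 3))
```

Once H is known, the same mapping gives both the ground-plane texture coordinates and the position of any point of the square.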

I expect to later move a lot of the calculations to HLSL, hopefully making it faster in the process. But for now I've decided to stick to the simplest ways of doing things and just make them work...




If we have time for the second method, it will differ in a number of ways. In that method we won't use any video in the 3D world, but instead only calculate the position and orientation of the inserted building. While in the first method the building is only rotated around the Y-axis (meaning the building always keeps the same sides facing up/down, but can turn in the other directions), the second method also considers the other axes, making us calculate orientation and position in X, Y and Z coordinates. As a comparison, the first method calculates the position in X and Z coordinates only, and as mentioned the rotation is only around the Y-axis.
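To illustrate the difference (a hypothetical sketch, not thesis code): a Y-axis rotation leaves the building's up direction untouched, while a rotation around one of the other axes tilts it - which is exactly what the second method has to account for.

```python
import math

def rot_y(a):
    """3x3 rotation matrix around the Y (up) axis, angle in radians."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_x(a):
    """3x3 rotation matrix around the X axis, angle in radians."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mul(M, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

up = [0.0, 1.0, 0.0]
# Method 1: only Y rotation -- the up vector stays exactly the same.
up_after_y = mul(rot_y(math.radians(30)), up)
# Method 2: rotation around another axis tilts the up vector.
up_after_x = mul(rot_x(math.radians(30)), up)
```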

Wednesday, April 12, 2006

MDX Video texture code

Since I've found that a lot of people have had trouble with creating video textures with Managed DirectX, I decided to post some of the code for it here. The code basically loads the video and then renders when the current frame has been copied.

The following can be put, for example, where you create the mesh, as I did - or of course wherever else you find suitable:
video = Video.FromFile("test.avi"); //I'll set this to open a video with the file menu later
video.TextureReadyToRender += new TextureRenderEventHandler(onTextureReadyToRender); //Set an event handler to fire when the texture is ready to render.
video.RenderToTexture(_device); // Render the texture with the device (graphics card)
video.Play();
About an hour after writing the post, it's time to update it... I changed some things in the event handler, giving the following code as a result:
public void onTextureReadyToRender(object sender, TextureRenderEventArgs e)
{
    if (e.Texture == null) // If there's no texture (video frame) in 'e', then get out of here
        return;

    SurfaceDescription ds = e.Texture.GetLevelDescription(0);
    if (ds.Pool == Pool.Default)
    {
        sysSurf = _device.CreateOffscreenPlainSurface(ds.Width, ds.Height,
            ds.Format, Pool.SystemMemory);
    }

    using (Surface vidSurf = e.Texture.GetSurfaceLevel(0))
    {
        if (_tex == null) // If no texture has been assigned to "_tex" yet, create it
        {
            _tex = new Texture(_device, ds.Width, ds.Height,
                1, Usage.Dynamic, ds.Format, ds.Pool);
        }

        using (Surface texSurf = _tex.GetSurfaceLevel(0))
        {
            // Copy the current video frame into our own texture
            SurfaceLoader.FromSurface(texSurf, vidSurf, Filter.Linear, unchecked((int)0xffffffff));
        }
    }

    Invalidate();
}

To give some quick comments:
  • The update problem (application not updating without user interaction) was fixed when I put in the Invalidate() method.
  • The updating looks quicker than before - but of course that's hard to judge with the naked eye.
  • The changes were made after I found some useful things in the book ("Managed DirectX 9 Kick Start" by Tom Miller of the MDX team).
I believe most is pretty obvious. Just get the video file, check when it's loaded and ready to render, render it to a texture and watch the result...


I still have a lot of trouble disposing the video texture when closing the application, but since I can't find a fix after trying different approaches and searching in different places, I'll leave it for now...

The next things I'll do are just minor issues, like stretching the video texture in different ways, just to see the effect, and to see whether it can be used as an easy and quick way to rectify the frame images.

Tuesday, April 11, 2006

Video perspective distortion rectification

Well... the title says it all - need I say more?

This Thursday - two days away - I need to have the next step of the MDX application ready!! This means, at least, the following substeps, since my last post:
  • Creating a ground plane (a quad in Managed DirectX), which uses a video as texture. This substep was finished today, after a lot of trouble - mostly because there's next to no information about video textures with Managed DirectX. I almost gave up and changed to OpenGL, but since I want to learn as much MDX as possible I let it take the time it took.
  • Creating a new house mesh, which has the same size (at least proportions) as the place in the video, and a texture of its own.
  • Finding the homography (coordinate correspondences) between "real world coordinates" and the current frame in the video texture. This means calculating a new homography matrix for each frame, which of course may slow down the application. This can hopefully later be moved to HLSL instead, to let the graphics hardware take care of it whichever way it likes.
  • Setting texture coordinates of ground plane, according to the homography matrix.
  • Setting location and orientation of the house mesh, according to coordinates from the video texture, combined with the homography. However, the coordinates come from clicks in my thesis partner's application, meaning I'll have to wait for those before doing anything final.
  • Fixing a lot of smaller and bigger issues (same texture showing on house and ground, video texture not updating without user input, application not completely stopping/disposing all when closing window etc.).
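The texture-coordinate substep can be sketched as follows (hypothetical pixel values - the real corner coordinates come from the homography): pixel positions in the video frame are normalized to the [0, 1] UV range that Direct3D texture coordinates use.

```python
def to_uv(corners_px, width, height):
    """Normalize pixel-space corner coordinates to [0, 1] texture coordinates."""
    return [(x / float(width), y / float(height)) for x, y in corners_px]

# Four corner positions (in pixels) in a 640x480 video frame.
corners = [(64, 48), (576, 48), (576, 432), (64, 432)]
uvs = to_uv(corners, 640, 480)
```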
Future TODOs include:
  • Doing all matrix calculations in graphics hardware - hopefully speeding the application up considerably.
  • Rendering the result and creating a "de-rectification" on the rendered video.
  • Obviously, clean up and comment the code more :-)

The image shows a sample frame of the latest version. The house mesh is positioned on top of the red square, but with the same texture as the ground plane.

As previously described, the ground plane will be changed to make the red square completely straight (perspective distortion rectification) in the current frame. This will have the effect of the image sides becoming crooked, while making it easier to place the house on top of the ground plane. The house will then, obviously, be placed with location and orientation according to the square.
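The rectification can be sketched by inverting a homography (the matrix below is an assumed example, not measured from the video): the clicked square maps back onto a straight unit square, while other image points - such as the frame corners - end up outside it, which is what makes the image sides look crooked.

```python
def invert3(H):
    """Invert a 3x3 homography via its adjugate (overall scale doesn't matter)."""
    (a, b, c), (d, e, f), (g, h, i) = H
    return [[e * i - f * h, c * h - b * i, b * f - c * e],
            [f * g - d * i, a * i - c * g, c * d - a * f],
            [d * h - e * g, b * g - a * h, a * e - b * d]]

def apply(H, x, y):
    """Map (x, y) through H, dividing out the projective w coordinate."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# Example homography mapping the unit square to the quad (2,1),(5,1),(6,4),(1,3).
H = [[27 / 14, -19 / 14, 2.0],
     [-3 / 14, 13 / 14, 1.0],
     [-3 / 14, -5 / 14, 1.0]]
Hinv = invert3(H)
# The quad corners rectify to the straight unit square...
straight = [apply(Hinv, *p) for p in [(2, 1), (5, 1), (6, 4), (1, 3)]]
# ...while a point outside the quad, e.g. the image origin, lands outside [0,1]x[0,1].
outside = apply(Hinv, 0.0, 0.0)
```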