
Calling All Smart Programmers



  • Right now I am in a robotics club that will be competing in this: http://www.nasa.gov/offices/education/centers/kennedy/technology/lunabotics.html

    LONG STORY (not very) LONG:
    If you wade through all of that, you will see that we get extra points if our robot is autonomous. I am teamed with a few other guys, and it is our job to try to make it autonomous. We are using the Xbox 360 Kinect to see; the Kinect captures both depth and RGB images. I wrote some code that takes the depth image, builds a heightmap of what it can see, and adds to the map every frame, provided it knows how much the robot has rotated since the last picture (let me know if you want to see it; a rough sketch of the idea is at the end of this post). This gives us a map of our environment, which our path-planning code can use to decide where to go. This part uses the depth sensor on the Kinect.

    That part is working perfectly. But there is another problem: we don’t know how to measure how much we have turned. The motors can give us feedback, but it’s extremely inaccurate. There is software out there, http://reconstructme.net/, which stitches images together and does exactly what we need our program to do: figure out how much it has turned and moved. I looked at the source code, but it’s a little too complicated for me to understand. I understand the underlying mechanics of stitching images together, but I don’t know how I would actually make an algorithm do it consistently and reliably, especially with how distorted the images can be relative to each other. I also found this: http://research.microsoft.com/pubs/70092/tr-2004-92.pdf, but it is too abstract for me to implement, and since I have only completed Calc 2 so far, I haven’t learned the kind of math they are using.

    So as it stands, we have to manually tell it how much it has turned and moved since the last picture for the map to stay accurate. We need it to work that out by itself via image stitching.

    LONG STORY SHORT:
    The Xbox 360 Kinect gives us an RGB image AND a depth image. That is more than enough to stitch one image onto another, and once that is done it is fairly easy to work out how much we have rotated since the first image was taken. I need an algorithm to do that, or guidance on making one. The same goes for moving forwards and backwards.

    There is a lot of stitching software out there; it’s just a matter of being able to understand it and implement it.
    Thanks in advance, I hope.
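
    A stripped-down sketch of the heightmap idea, for anyone curious: back-project each depth pixel with a pinhole model, rotate it by the known yaw, and drop it into a grid. The intrinsics below are the Kinect’s nominal values, and the grid size and cell resolution are made up for illustration, so treat this as the shape of the code rather than the real thing.

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Illustrative constants -- real intrinsics should come from calibration.
    const int   W = 640, H = 480;          // depth image size
    const float FX = 580.0f, FY = 580.0f;  // approximate Kinect focal lengths (px)
    const float CX = 319.5f, CY = 239.5f;  // principal point
    const int   GRID = 200;                // heightmap cells per side
    const float CELL_M = 0.05f;            // 5 cm per cell

    // map[z][x] holds the highest point seen in that cell (metres); NAN = unseen.
    // Create with: HeightMap map(GRID, std::vector<float>(GRID, NAN));
    using HeightMap = std::vector<std::vector<float>>;

    // Fold one raw 16-bit Kinect depth frame (millimetres, 0 = no reading) into
    // the map, given the robot's current yaw (radians) and position (metres).
    void addFrame(HeightMap& map, const uint16_t* depth_mm,
                  float yaw, float robotX, float robotZ)
    {
        const float c = std::cos(yaw), s = std::sin(yaw);
        for (int v = 0; v < H; ++v) {
            for (int u = 0; u < W; ++u) {
                uint16_t d = depth_mm[v * W + u];
                if (d == 0) continue;                  // no depth at this pixel
                float z = d / 1000.0f;                 // forward distance (m)
                float x = (u - CX) * z / FX;           // right (m), pinhole model
                float y = (CY - v) * z / FY;           // up (m)

                // Rotate into the map frame by the known yaw, then translate.
                float mx =  c * x + s * z + robotX;
                float mz = -s * x + c * z + robotZ;

                int gx = static_cast<int>(mx / CELL_M) + GRID / 2;
                int gz = static_cast<int>(mz / CELL_M) + GRID / 2;
                if (gx < 0 || gx >= GRID || gz < 0 || gz >= GRID) continue;

                float& cell = map[gz][gx];
                cell = std::isnan(cell) ? y : std::max(cell, y);  // keep tallest point
            }
        }
    }
    ```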



  • What hardware and what language are you using? Have you done any testing/research on corrective feedback with gyroscopes?



  • We are using C++ up to this point.

    This is all going to run on a dual-core laptop, under either Linux or Windows. Currently I am using OpenNI to interact with the Kinect, although everything I have written so far could easily be ported to a different framework such as libfreenect. The reason I’m using OpenNI is that it was the first one I got working correctly; I tried libfreenect, but it kept having problems. (A minimal depth-grab loop in that style is sketched at the end of this post.)

    We haven’t done anything with gyroscopes yet, but that was one possibility we considered, except that we would have to buy them and then get them working. Another person on the project is investigating whether to use gyroscopes, and I am in charge of investigating the stitching method. I will talk with him ASAP to find out how much progress he has made, because so far this stitching approach seems a little too complicated for my current skill set.
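
    For what it’s worth, the OpenNI side is pretty small. A minimal depth-grab loop looks something like the sketch below; it is written against the OpenNI 2 C++ API (the 1.x API uses xn::Context / xn::DepthGenerator instead, but follows the same pattern), and most error handling is stripped, so treat it as a shape rather than drop-in code.

    ```cpp
    #include <OpenNI.h>
    #include <cstdio>

    int main()
    {
        if (openni::OpenNI::initialize() != openni::STATUS_OK) {
            std::printf("init failed: %s\n", openni::OpenNI::getExtendedError());
            return 1;
        }

        openni::Device device;
        if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK) return 1;

        openni::VideoStream depth;
        depth.create(device, openni::SENSOR_DEPTH);
        depth.start();

        openni::VideoFrameRef frame;
        for (int i = 0; i < 30; ++i) {                      // grab a few frames
            depth.readFrame(&frame);
            const openni::DepthPixel* px =
                static_cast<const openni::DepthPixel*>(frame.getData());
            // centre pixel depth in millimetres (0 means no reading)
            int c = frame.getHeight() / 2 * frame.getWidth() + frame.getWidth() / 2;
            std::printf("frame %d: centre depth = %d mm\n", i, (int)px[c]);
        }

        depth.stop();
        depth.destroy();
        device.close();
        openni::OpenNI::shutdown();
        return 0;
    }
    ```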



  • compass



  • Visual processing is a very complicated thing to handle. I’m assuming this project is aimed at working on the moon; with how monotonous the lunar terrain is, you’ll end up with error margins around as bad as wheel slip. You’re using a camera, so why not do what the original astronauts did and take celestial bearings? You could record the shift of a star as you turn, and that point would be easier to track than terrain (the angle maths for that is sketched at the end of this post).

    @AtomiC:

    compass

    Whilst the moon doesn’t have a global dipolar magnetic field, there are localized regions of strong crustal magnetism on its surface. I don’t know of a commercially available magnetometer that would give suitable readings for directional information there.
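
    If the “track one distant point” idea ever becomes usable, the conversion from pixel shift to heading change is small. A rough sketch, assuming a pinhole model, a horizontal field of view of roughly 57 degrees (about what the Kinect RGB camera has), and a point far enough away that the robot’s own translation is negligible:

    ```cpp
    #include <cmath>

    const double PI = 3.14159265358979323846;

    // Yaw change (degrees) implied by a tracked point moving from pixel column x0
    // to pixel column x1, for a camera with horizontal FOV hfovDeg and an image
    // width of w pixels.  Sign convention: positive when the tracked point moves
    // toward lower pixel columns.
    double yawFromPixelShift(double x0, double x1, int w, double hfovDeg = 57.0)
    {
        double cx = (w - 1) / 2.0;                               // principal point
        double fx = (w / 2.0) / std::tan(hfovDeg * PI / 360.0);  // focal length (px)
        double a0 = std::atan((x0 - cx) / fx);                   // bearing in frame 0
        double a1 = std::atan((x1 - cx) / fx);                   // bearing in frame 1
        return (a0 - a1) * 180.0 / PI;
    }
    ```

    With a 640-pixel-wide image and a 57-degree FOV, a 20-pixel shift near the centre works out to roughly 2 degrees of turn, so the resolution is reasonable as long as the tracked point stays in view.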



  • Any modern smartphone has more than enough sensors to give very accurate spatial information; just interface your robot to an Android phone.



  • Yeah, a compass was my first thought actually, but I don’t think we’re allowed to use one. And unfortunately, this contest will be taking place during the daytime in a greenhouse-like structure, so we can’t track celestial bodies.

    We aren’t allowed to use things that we couldn’t use on the moon, yet ironically they’ve set it up so we can’t use things that would be available on the moon!  ???

    Skip to 4:20 in this video http://www.youtube.com/watch?v=c_jxKulILlQ to see what the actual course we are using looks like. This robot will never go to the moon, if that wasn’t already obvious.
    As for the smartphone, I’m pretty sure it uses gyroscopes, so we would just go for those directly. My mission is to decide whether visual processing would be feasible for determining rotation. I’m sensing a no.



  • Photoshop, and I think even the Android panorama mode, has the kind of “visual processing” you are looking for. In Photoshop it’s just a script, though I don’t know whether it’s compiled or human-readable; either way, I don’t think it’s feasible for real-time applications.

    Also, most modern smartphones have a magnetometer, a gyroscope, and an accelerometer.



  • @Samuel:

    Photoshop, and I think even the Android panorama mode, has the kind of “visual processing” you are looking for. In Photoshop it’s just a script, though I don’t know whether it’s compiled or human-readable; either way, I don’t think it’s feasible for real-time applications.

    He’s looking for a method of converting the number of stitches in a clean sweep into an appropriate angle of turn. I can only imagine that the maths involved is tricky at best (roughly, the standard approach is sketched below); not something I’d like an Arduino to handle.
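
    For what it’s worth, the usual recipe here is not counting stitches at all: match features between the two RGB frames, use the depth image to lift each match to a 3D point, then solve for the rigid rotation that best aligns the two point sets (the Kabsch/SVD method), and read the yaw off that rotation. A rough sketch with OpenCV is below; it assumes registered 8-bit BGR and 16-bit millimetre depth images, nominal Kinect intrinsics, and no outlier rejection (which you would definitely want in practice), so it is an illustration of the maths rather than something robust.

    ```cpp
    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/imgproc.hpp>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Back-project pixel (u,v) with depth d (mm) into camera coordinates (metres),
    // using approximate Kinect intrinsics.
    static cv::Point3f toCamera(float u, float v, uint16_t d)
    {
        const float fx = 580.f, fy = 580.f, cx = 319.5f, cy = 239.5f;
        float z = d / 1000.f;
        return cv::Point3f((u - cx) * z / fx, (v - cy) * z / fy, z);
    }

    // Estimate the yaw (degrees, about the camera's vertical axis) between two
    // registered BGR + 16-bit depth frames.  No outlier rejection -- sketch only.
    double estimateYaw(const cv::Mat& bgr0, const cv::Mat& depth0,
                       const cv::Mat& bgr1, const cv::Mat& depth1)
    {
        // 1. Detect and match ORB features between the two colour images.
        cv::Mat g0, g1;
        cv::cvtColor(bgr0, g0, cv::COLOR_BGR2GRAY);
        cv::cvtColor(bgr1, g1, cv::COLOR_BGR2GRAY);
        cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
        std::vector<cv::KeyPoint> k0, k1;
        cv::Mat d0, d1;
        orb->detectAndCompute(g0, cv::noArray(), k0, d0);
        orb->detectAndCompute(g1, cv::noArray(), k1, d1);
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(d0, d1, matches);

        // 2. Lift each match to a 3D point pair using the depth images.
        std::vector<cv::Point3f> P, Q;
        for (const cv::DMatch& m : matches) {
            cv::Point2f a = k0[m.queryIdx].pt, b = k1[m.trainIdx].pt;
            uint16_t da = depth0.at<uint16_t>((int)a.y, (int)a.x);
            uint16_t db = depth1.at<uint16_t>((int)b.y, (int)b.x);
            if (da == 0 || db == 0) continue;              // no depth at this pixel
            P.push_back(toCamera(a.x, a.y, da));
            Q.push_back(toCamera(b.x, b.y, db));
        }
        if (P.size() < 3) return 0.0;                      // not enough data

        // 3. Kabsch: centre both sets, SVD of the cross-covariance, R = V * U^T.
        cv::Point3f cp(0, 0, 0), cq(0, 0, 0);
        for (size_t i = 0; i < P.size(); ++i) { cp += P[i]; cq += Q[i]; }
        cp *= 1.0f / static_cast<float>(P.size());
        cq *= 1.0f / static_cast<float>(Q.size());

        cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
        for (size_t i = 0; i < P.size(); ++i) {
            cv::Mat p = (cv::Mat_<double>(3, 1) << P[i].x - cp.x, P[i].y - cp.y, P[i].z - cp.z);
            cv::Mat q = (cv::Mat_<double>(3, 1) << Q[i].x - cq.x, Q[i].y - cq.y, Q[i].z - cq.z);
            H += p * q.t();
        }
        cv::Mat U, S, Vt;
        cv::SVD::compute(H, S, U, Vt);
        cv::Mat R = Vt.t() * U.t();
        if (cv::determinant(R) < 0) {                      // fix a possible reflection
            Vt.row(2) *= -1;
            R = Vt.t() * U.t();
        }

        // 4. Read the rotation about the camera's vertical (y) axis off R.
        return std::atan2(R.at<double>(0, 2), R.at<double>(2, 2)) * 180.0 / CV_PI;
    }
    ```

    The centroid difference (after rotating) also gives the translation between the two frames, which would cover the forwards/backwards part of the question too.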



  • It doesn’t have to be a clean sweep. We might take a picture, rotate approximately 20 degrees, take another picture, and ideally use software to stitch them, but this isn’t looking good.
    I think I’m just going to tell them we should use gyroscopes. Thanks for the help, especially Benjy. I’ll let you know if we actually produce anything interesting.



  • No problem. Yeah, share a video if you can.



  • They should still work, though. Just maybe slowly. http://www.youtube.com/watch?v=-KQBFJECf6A



  • I was lurking around and happened to see this thread. Are you still working on this? It sounds like an exciting task for a robotics club, you lucky duck.

