If you’ve got a system that will correct parallax in worn (powered) sensors, you could win a prize of up to $50,000. US Special Operations Command (USSOCOM) is sponsoring a public prize challenge for a wearable system to correct optical parallax resulting from offset sensors.
Background.
USSOCOM is developing a multi-spectral visual system; however, fusing the inputs of offset optical sources results in parallax – a perceived change in the position of an object due to the offset positions of the optical sensors. To address this issue, USSOCOM is sponsoring the Algorithm for Real-Time Parallax Correction prize challenge. For additional details and to register to solve the challenge, visit www.innocentive.com/ar/challenge/9933759.
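For a rough sense of scale, the apparent shift grows with the sensor separation and shrinks with target range. Under a simple pinhole-camera assumption (a sketch only; the focal length, baseline, and range below are illustrative, not from the challenge), the pixel disparity is roughly f x B / Z:

# Minimal sketch, assuming a pinhole camera model; focal length in pixels,
# baseline and range in metres. All values are illustrative only.
def parallax_pixels(focal_length_px, baseline_m, range_m):
    """Approximate pixel shift of a target seen by two sensors offset by baseline_m."""
    return focal_length_px * baseline_m / range_m

# Example: sensors 6 cm apart, 800 px focal length, target at 20 m
print(parallax_pixels(800, 0.06, 20.0))  # about 2.4 px of apparent offset

The shift falls off with range, which is why the misalignment is most noticeable at close, troop-level distances.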
Challenge Structure.
Begin Date: 27 Jan 16
Submission Close Date: 29 Feb 16
Phase I winners will be announced approximately 2 weeks later, and invited to participate in Phase II
Phase II will be a live demonstration of the solvers' solutions.
Prize Structure.
$50,000 in total prize awards is available.
Phase I winners are eligible for up to $15,000
Phase II winners are eligible for up to an additional $35,000
Get the rest of the details on how to participate at www.fbo.gov.
If you can crack that code, the IP has broad application and is worth a lot more than $50k.
I have a feeling a job offer would immediately follow…
Good luck finding someone smart enough to actually figure this out, yet dumb enough to give it up for less than $50k.
Most of the people I know who are smart enough to crack this code don’t know what their energy is worth.
Requiring registration just to view the fundamental details of the challenge? Hmm…
Why did they put a kindergarten-level explanation of what parallax is in there, but not at least say which spectral ranges they're talking about, like VIS+NIR+TIR or something?
If they need to align two (or more) images from sensors worn on the body (of a single individual) then I would automically assume they're using a combination of sensor types. It wouldn't make sense to use two of the same sensors on one person – unless it were to be used for calculating range (which would be more discreet than pointing IR measurement lasers downrange like we are presently doing).
Overlaying thermal and optical images on long range devices is already being done – just nothing yet at troop-level distances.
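As a rough illustration of the passive-ranging point above: rearranging the same pinhole relation used in the Background sketch gives range from a measured pixel shift (again a sketch under illustrative assumptions, not the challenge's actual sensor parameters):

def range_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Estimate range by rearranging d = f * B / Z to Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Example: a 2.4 px measured shift with an 800 px focal length and 6 cm baseline
print(range_from_disparity(800, 0.06, 2.4))  # about 20 m, with no laser emission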
“automatically” not automically.
…touch-screen typing frequently misses taps.
Basic OPSEC? Why would they openly publish that information, rather than make the people who want to access this info jump through a small but reasonable hoop?
Because the people who can solve it for less are small shops and don't have the know-how or the bandwidth to jump through hoops.
They are taking a page out of DARPA's book by using crowdsourcing and offering prizes to civilians who can do the work for them. You know that if the government sought to solve the problem itself, the initial estimated cost would be $1 million, with the final cost being $5 million.
Use a multivariable affine transformation… except that's easier said than done (see the sketch at the end of this thread).
+1
I work with aerial photos as a photogrammetrist and use transformations to solve parallax all day.
Have a check with Trimble; they will know how to solve this.
Or some university with a maths/geomatics department.
The issue is combining multiple inputs in real time, and making it simple, small, and low-power enough for a Soldier to wear.
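For what the affine-transformation suggestion above amounts to in practice, here is a minimal sketch (assumed Python/NumPy; the matrix values are placeholders, and a real system would estimate the transform from calibration or matched features):

import numpy as np

# Minimal sketch: a 2x3 affine transform mapping pixel coordinates from one
# sensor's image plane toward another's. Matrix values are placeholders.
A = np.array([[1.01, 0.00, 12.5],   # slight scale plus a horizontal shift (px)
              [0.00, 1.01, -3.0]])  # slight scale plus a vertical shift (px)

def warp_points(points_xy, affine):
    """Apply a 2x3 affine transform to an (N, 2) array of pixel coordinates."""
    homogeneous = np.hstack([points_xy, np.ones((len(points_xy), 1))])
    return homogeneous @ affine.T

pts = np.array([[100.0, 200.0], [640.0, 360.0]])
print(warp_points(pts, A))

The catch, and the reason it is easier said than done, is that a single affine transform only lines up scene content at one depth; because parallax varies with range, the correction has to adapt per pixel or per depth, in real time, on a wearable power budget.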