Indoor positioning and tracking techniques using consumer mobile devices

Visual odometry is the process of using images, obtained by an onboard camera, to determine the position and orientation of a device. Although widely adopted in the fields of mobile robotics and space exploration, this method has several drawbacks, including the need for extensive calibration prior to each use.

Researchers at the University of Oxford have developed visual odometry algorithms based on a machine learning approach, which proves robust and does not require calibration. The algorithms can also incorporate inertial data from an IMU (inertial measurement unit), where present, to provide a more accurate estimation of position and orientation. In the absence of calibration, the Oxford approach proves significantly more effective than existing visual odometry algorithms. Furthermore, owing to their machine learning basis, the algorithms improve continually and can handle unknown features and low-quality images. This increased tolerance allows visual odometry to be applied to consumer devices such as mobile phones and wearable cameras.

Visual odometry – localising mobile devices

Odometry is the use of data collected during motion to calculate the relative location of a device in space. Visual odometry relies on images captured by onboard cameras to determine position and orientation. Visual odometry algorithms have been applied in a range of situations, from the MER (Mars Exploration Rover) missions to autonomous passenger vehicles.
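At its core, odometry dead-reckons a global pose by composing a sequence of relative motion estimates. The sketch below illustrates that idea in 2D; the relative estimates are assumed given (in practice they would come from frame-to-frame image matching), and the function names `compose` and `integrate` are illustrative, not part of any particular library or of the Oxford algorithms.

```python
import math

def compose(pose, delta):
    """Compose a global 2D pose (x, y, heading) with a relative
    motion estimate (dx, dy, dtheta) expressed in the device frame."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the body-frame translation into the world frame, then add.
    wx = x + dx * math.cos(th) - dy * math.sin(th)
    wy = y + dx * math.sin(th) + dy * math.cos(th)
    return (wx, wy, th + dth)

def integrate(deltas, start=(0.0, 0.0, 0.0)):
    """Dead-reckon a trajectory from a sequence of relative estimates."""
    pose = start
    trajectory = [pose]
    for d in deltas:
        pose = compose(pose, d)
        trajectory.append(pose)
    return trajectory

# Four steps of 1 m forward followed by a 90-degree left turn
# trace a square and return the device to its starting position.
square = [(1.0, 0.0, math.pi / 2)] * 4
final = integrate(square)[-1]
```

Because each step is composed onto the last, any error in a single relative estimate propagates into every subsequent pose, which is why the quality of the per-frame estimates matters so much in practice.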
Building on the visual approach, combining image data with inertial data from an IMU (visual-inertial odometry) generates a more accurate estimation of position and orientation; however, such algorithms are not as widely applied at present.
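The benefit of combining the two sensors can be illustrated with a minimal sketch: a variance-weighted average of two independent estimates of the same quantity yields a fused estimate with lower variance than either input. This is a simplified stand-in for sensor fusion in general, not the Oxford algorithms; the function name `fuse` and the numbers are illustrative.

```python
def fuse(visual, inertial, var_v, var_i):
    """Variance-weighted fusion of two independent estimates of the
    same quantity (a minimal stand-in for visual-inertial fusion)."""
    w = var_i / (var_v + var_i)          # weight on the visual estimate
    estimate = w * visual + (1.0 - w) * inertial
    # Fused variance is always smaller than either input variance.
    variance = (var_v * var_i) / (var_v + var_i)
    return estimate, variance

# A noisy visual position fix fused with an IMU dead-reckoned position:
est, var = fuse(visual=2.0, inertial=2.4, var_v=0.04, var_i=0.12)
```

The fused estimate leans toward whichever sensor is currently more reliable, which is why visual-inertial systems degrade gracefully when, for example, the camera is briefly occluded.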

Less calibration, better localisation

Current visual odometry approaches require large datasets and time-consuming calibration before they can function at an optimal level. This becomes an issue in visual-inertial odometry, where the two sensors must be calibrated independently and together.

From the Mars Rover to a mobile roamer

Researchers at the University of Oxford have developed new visual and visual-inertial odometry algorithms that utilise a machine learning approach. These robust algorithms can operate effectively without calibration and in low-light environments. They have been validated on automotive and pedestrian datasets and tested in real time.
We believe that the Oxford algorithms offer the following advantages over existing solutions:

  • Requires no calibration
  • Functions with or without IMU data
  • Can be trained on any image set
  • Tolerates unknown and previously unseen environments
  • Improves in performance over time
  • Applicable to VR, mobile phones and low-light environments

Patent protection

Oxford University Innovation have filed patents covering both the visual and visual-inertial odometry approaches and are seeking partners to aid in the commercialisation of this technology.

© Oxford University Innovation