Microsoft HoloLens Provides Access to Raw Image Sensor Streams with Research Mode
With Research Mode, available in the newest release of Windows 10 for HoloLens, the Microsoft HoloLens holographic computer can be used as a computer vision research device.
Application code can not only access video and audio streams but can also simultaneously leverage the results of built-in computer vision algorithms: SLAM (simultaneous localization and mapping) provides the motion of the device, and spatial mapping provides 3D meshes of the environment. These capabilities are made possible by several built-in image sensors that complement the color video camera normally accessible to applications.
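To give a concrete sense of how the tracking results can be combined with sensor data, the following Python sketch transforms a point observed in a camera's coordinate frame into the shared world coordinate frame using a per-frame camera-to-world pose of the kind the built-in tracking provides. The matrix and point values are placeholders for illustration, not actual HoloLens data.

```python
import numpy as np

# Hypothetical 4x4 camera-to-world pose of the kind provided per frame by the
# device's built-in tracking (rotation and translation values are placeholders).
camera_to_world = np.array([
    [1.0, 0.0, 0.0, 0.10],   # [R | t]: identity rotation, small translation
    [0.0, 1.0, 0.0, 1.60],
    [0.0, 0.0, 1.0, 0.05],
    [0.0, 0.0, 0.0, 1.00],
])

# A 3D point observed in camera coordinates (meters), in homogeneous form.
point_camera = np.array([0.2, -0.1, 1.5, 1.0])

# Transform into the world coordinate frame shared by holograms,
# spatial-mapping meshes, and the other sensors.
point_world = camera_to_world @ point_camera
print(point_world[:3])
```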
Specifically, HoloLens has four gray-scale environment-tracking cameras and a depth camera to sense its environment and capture user gestures. Two of the gray-scale cameras are configured as a stereo rig covering the area in front of the device, so that the absolute depth of tracked visual features can be determined through triangulation. Meanwhile, the two additional gray-scale cameras provide a wider field of view for keeping track of features. These synchronized global-shutter cameras are significantly more light-sensitive than the color camera and can capture images at up to 30 frames per second (FPS).
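As an illustration of the triangulation principle behind the stereo pair, here is a minimal Python sketch that recovers depth from disparity for a rectified stereo rig. The focal length, baseline, and pixel coordinates are made-up numbers, not actual HoloLens calibration values.

```python
# Depth from stereo disparity: for a rectified stereo pair with focal length
# f (in pixels) and baseline b (in meters), a feature seen at horizontal pixel
# positions x_left and x_right has depth Z = f * b / (x_left - x_right).

def depth_from_disparity(f_pixels, baseline_m, x_left, x_right):
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return f_pixels * baseline_m / disparity

# Illustrative numbers only (not actual HoloLens calibration values):
print(depth_from_disparity(f_pixels=450.0, baseline_m=0.1,
                           x_left=320.0, x_right=290.0))  # -> 1.5 meters
```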
The depth camera uses active infrared (IR) illumination to determine depth through time-of-flight. The camera can operate in two modes. The first mode enables high-frequency (30 FPS) near-depth sensing, commonly used for hand tracking, while the other is used for lower-frequency (1-5 FPS) far-depth sensing, currently used by spatial mapping. In addition to depth, this camera also delivers actively illuminated IR images that can be valuable in their own right: because the scene is illuminated by the HoloLens itself, these images are reasonably unaffected by ambient light.
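The basic time-of-flight relation can be sketched in a few lines of Python. This only illustrates the round-trip-time principle; it does not reflect the specific modulation scheme used by the HoloLens depth camera.

```python
# Time-of-flight principle: the camera measures how long its IR illumination
# takes to travel to the scene and back, so depth = c * round_trip_time / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 10 nanoseconds corresponds to roughly 1.5 meters.
print(depth_from_round_trip(10e-9))  # ~1.499 meters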
With the newest release of Windows 10 for HoloLens, researchers have the option to enable Research Mode on their HoloLens devices to gain access to all of these raw image sensor streams.
Researchers can still use the results of the built-in computer vision algorithms but can now also apply their own algorithms to the raw sensor data. The sensor streams can either be processed on the device or transferred wirelessly to another PC or to the cloud for more computationally demanding tasks. This opens up a wide range of new computer vision applications for HoloLens. In egocentric vision, HoloLens can be used to analyze the world from the perspective of the person wearing the device; for these applications, the ability of HoloLens to visualize the results of the algorithms in the 3D world in front of the user can be a key advantage. HoloLens sensing capabilities can also be very valuable in robotics, where they can, for example, enable a robot to navigate its environment.
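As a rough illustration of the off-device workflow, the Python sketch below receives frames streamed from the headset to a companion PC over TCP. The simple length-prefixed framing used here is an assumption made purely for illustration; it is not a HoloLens protocol or API.

```python
import socket
import struct
import numpy as np

# Hypothetical receiver for sensor frames streamed from the device over TCP.
# The framing (a 16-byte header of width, height, bytes per pixel, and payload
# length, followed by raw pixel data) is an assumption for illustration only.
HEADER = struct.Struct("<IIII")  # width, height, bytes_per_pixel, payload_len

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def receive_frames(host="0.0.0.0", port=9099):
    """Yield sensor frames as numpy arrays as they arrive from the headset."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    while True:
        width, height, bpp, payload_len = HEADER.unpack(recv_exact(conn, HEADER.size))
        pixels = recv_exact(conn, payload_len)
        dtype = np.uint16 if bpp == 2 else np.uint8  # e.g. depth vs. gray-scale
        frame = np.frombuffer(pixels, dtype=dtype).reshape(height, width)
        yield frame  # hand off to any vision algorithm running on the PC
```

From here, each frame can be fed into whatever vision pipeline the researcher chooses to run on the PC or forward to the cloud.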
Microsoft will demonstrate these new HoloLens capabilities at a tutorial on June 19, 2018, at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City.