Saturday 5 March 2016

Korean Researchers unveil Smart Glasses that can Type via Virtual Keyboard



K-Glass 3 is an upgraded model of the augmented reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014, with a second version released in 2015. Its processor is composed of a pre-processing core that implements stereo vision, seven deep-learning cores that accelerate real-time scene recognition to within 33 milliseconds, and one rendering engine for the display.

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed smart glasses that offer users a virtual keyboard for typing text. The glasses, called K-Glass 3, come with a stereo-vision camera located at the front of the device. The camera's two lenses work much like human eyes and can sense depth, allowing users to surf the internet and type text on the virtual keyboard, or even play a virtual piano in mid-air.
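To give a rough idea of how typing on a depth-sensed virtual keyboard could work, the Python sketch below maps a detected fingertip position onto a grid of key regions. The key layout, plane distance and press threshold are assumptions for illustration, not details of KAIST's implementation.

# Hypothetical sketch: mapping a fingertip's 3D position (from a depth camera)
# onto a virtual keyboard laid out on a plane in front of the user.
# The key layout, plane distance and press threshold are illustrative assumptions.

KEY_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
KEY_SIZE = 0.03          # metres per key (assumed)
PLANE_Z = 0.40           # virtual keyboard plane 40 cm from the camera (assumed)
PRESS_DEPTH = 0.015      # fingertip must cross the plane by 1.5 cm to register a press

def fingertip_to_key(x, y, z):
    """Return the key under the fingertip, or None if no key is pressed.

    (x, y, z) is the fingertip position in metres, camera-centred,
    with x to the right, y downward and z away from the camera.
    """
    if z > PLANE_Z - PRESS_DEPTH:      # fingertip has not pushed "through" the plane
        return None
    row = int(y // KEY_SIZE)
    col = int((x + 0.15) // KEY_SIZE)  # shift so the keyboard is centred on the camera axis
    if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
        return KEY_ROWS[row][col]
    return None

print(fingertip_to_key(-0.14, 0.01, 0.38))  # 'Q' with these assumed dimensions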






The stereo-vision camera, located on the front of K-Glass 3, works in a manner similar to three-dimensional (3D) sensing in human vision. The camera's two lenses, set horizontally apart from one another much like the left and right eyes that give us depth perception, capture the same objects or scenes and combine the two different images to extract the spatial depth information needed to reconstruct 3D environments.
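The depth extraction described above comes down to triangulation: the same point lands at slightly different horizontal positions in the left and right images, and that disparity is inversely proportional to distance. A minimal sketch of the relationship follows, with an assumed focal length and lens baseline rather than the actual K-Glass 3 camera parameters.

import numpy as np

# Minimal stereo-depth sketch: depth Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two lenses, and d the disparity in pixels.
# The focal length and baseline below are illustrative assumptions, not K-Glass 3 specs.

FOCAL_PX = 700.0      # assumed focal length in pixels
BASELINE_M = 0.06     # assumed 6 cm spacing between the two lenses

def disparity_to_depth(disparity_px):
    """Convert a per-pixel disparity map (pixels) to a depth map (metres)."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0                      # zero disparity means no match / infinite depth
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth

# A hand 0.4 m away would produce a disparity of roughly f*B/Z = 105 pixels here.
print(disparity_to_depth([[105.0, 0.0]]))      # ~[[0.4, inf]]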


The camera's vision algorithm has an energy efficiency of 20 milliwatts on average, allowing the Glass to operate for more than 24 hours without interruption. The research team adopted deep-learning multi-core technology dedicated to mobile devices to recognize the user's gestures based on the depth information.
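As an illustration of what gesture recognition on depth data can look like, the small convolutional network below classifies a single depth frame into a handful of gesture classes. The architecture and the gesture set are placeholders; the actual K-Glass 3 runs its own model on dedicated deep-learning cores rather than on a desktop framework.

import torch
import torch.nn as nn

# Hypothetical sketch of a small depth-map gesture classifier. The layer sizes and
# the four gesture classes are illustrative assumptions, not the KAIST design.

GESTURES = ["tap", "swipe_left", "swipe_right", "none"]

class DepthGestureNet(nn.Module):
    def __init__(self, num_classes=len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 1-channel depth map in
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, depth_map):
        x = self.features(depth_map)
        return self.classifier(x.flatten(1))

# One 96x96 depth frame, batch of 1.
frame = torch.rand(1, 1, 96, 96)
logits = DepthGestureNet()(frame)
print(GESTURES[logits.argmax(dim=1).item()])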





Additionally, K-Glass 3 pairs a pre-processing core that implements stereo vision with seven deep-learning cores that speed up real-time scene recognition.
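To make that division of labour concrete, here is a rough per-frame pipeline sketch with the 33-millisecond real-time budget checked at the end; the stage functions are stubs standing in for the actual cores and are not KAIST's code.

import time

# Hypothetical per-frame pipeline: a pre-processing (stereo) stage, a
# scene-recognition stage and a rendering stage, with the 33 ms budget checked.

FRAME_BUDGET_S = 0.033   # 33 milliseconds per frame

def preprocess_stereo(left, right):
    return {"depth": "depth-map"}          # stub for the pre-processing core

def recognise_scene(depth):
    return {"objects": []}                 # stub for the seven deep-learning cores

def render(scene):
    return "frame"                         # stub for the rendering engine

def run_frame(left, right):
    start = time.perf_counter()
    scene = recognise_scene(preprocess_stereo(left, right)["depth"])
    output = render(scene)
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"frame overran the 33 ms budget: {elapsed*1000:.1f} ms")
    return output

run_frame("left-image", "right-image")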

The team uses deep-learning multi-core technology dedicated to mobile devices to recognize gesture inputs. The multi-core processor becomes idle when it detects no motion from the user.



Otherwise, it executes complex deep-learning algorithms with minimal power to achieve high performance: the researchers succeeded in fabricating a low-power multi-core processor that consumes only 126.1 milliwatts with a high efficiency rate. This technology has greatly improved the Glass's recognition accuracy with images and speech, while shortening the time needed to process and analyze data.
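The idle behaviour described above amounts to gating the expensive recognition step behind a cheap motion check on successive frames. A simple sketch of that control flow, with an assumed frame-difference threshold and a stubbed recognition step:

import numpy as np

# Rough sketch of motion-gated processing: run the expensive recognition pipeline
# only when successive depth frames differ enough, otherwise stay idle.
# The threshold and the recognise() stub are illustrative assumptions.

MOTION_THRESHOLD = 0.02   # mean absolute frame difference (assumed units: metres)

def recognise(frame):
    """Placeholder for the heavy deep-learning recognition step."""
    return "gesture"

def process_stream(frames):
    previous = None
    for frame in frames:
        if previous is not None and np.mean(np.abs(frame - previous)) < MOTION_THRESHOLD:
            previous = frame
            continue              # no motion detected: processor stays idle this frame
        result = recognise(frame) # motion detected: wake up and run recognition
        previous = frame
        yield result

still = np.zeros((96, 96))
moving = still + 0.1
print(list(process_stream([still, still, moving])))  # recognition runs on frames 1 and 3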

Owing to the deep-learning multi-core technology, this gesture-controlled device has been greatly enhanced: the new version takes minimal time to analyse data and has greater sensing power. With all these desirable features, it is definitely a device to try out as soon as it launches on the market.


(Source and citation: http://www.crazyengineers.com/threads/k-glass-3-featuring-augmented-reality-to-be-launched-soon.87365/)
