LANGUAGE & ACCENT IDENTIFICATION (LID)
Emotech LID is an expandable language and accent detection system. It is designed so that users can change or expand the set of language and accent classes, and can adapt the system to a particular domain with a limited amount of data.
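The idea of an expandable class set can be illustrated with a minimal sketch (not Emotech's actual model): a nearest-centroid classifier over utterance embeddings, where adding a new language or accent class only requires computing one new centroid from a handful of examples. The class labels, 2-D embeddings, and `ExpandableLID` name below are all hypothetical.

```python
import math

class ExpandableLID:
    """Toy expandable language/accent classifier (illustrative only)."""

    def __init__(self):
        self.centroids = {}  # label -> mean embedding vector

    def add_class(self, label, embeddings):
        """Register (or replace) a class from a few example embeddings."""
        dim = len(embeddings[0])
        self.centroids[label] = [
            sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)
        ]

    def classify(self, embedding):
        """Return the label whose centroid is nearest (Euclidean distance)."""
        def dist(c):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(embedding, c)))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))

lid = ExpandableLID()
lid.add_class("en-GB", [[0.9, 0.1], [1.0, 0.0]])
lid.add_class("en-US", [[0.1, 0.9], [0.0, 1.0]])
print(lid.classify([0.8, 0.2]))  # -> en-GB
```

Expanding to a new accent is then a single `add_class` call with a small set of domain examples, which is the kind of low-data adaptation the blurb describes.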
MMPA (AUDIO-VISUAL CAPT)
Emotech MMPA is the world's first audio-visual Computer Assisted Pronunciation Training (CAPT) system. It combines state-of-the-art lipreading and acoustic processing technology to provide high-accuracy pronunciation assessment in noisy real-life conditions such as classrooms.
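One common way audio-visual systems stay accurate in noise is late fusion: combine per-utterance confidence scores from the acoustic and lipreading streams, trusting the visual stream more as the estimated signal-to-noise ratio drops. The sketch below shows that generic pattern only; the weighting scheme, SNR thresholds, and function name are assumptions, not Emotech's actual method.

```python
def fuse_scores(audio_score, visual_score, snr_db,
                snr_clean=20.0, snr_noisy=0.0):
    """Late fusion of audio and lipreading scores (illustrative sketch).

    The audio weight is interpolated linearly from 1.0 at snr_clean
    down to 0.5 at snr_noisy, so in heavy noise the lipreading score
    contributes up to half of the final assessment.
    """
    t = max(0.0, min(1.0, (snr_db - snr_noisy) / (snr_clean - snr_noisy)))
    w_audio = 0.5 + 0.5 * t
    return w_audio * audio_score + (1.0 - w_audio) * visual_score

# Quiet room: the acoustic score dominates.
print(fuse_scores(0.9, 0.6, snr_db=20.0))  # -> 0.9
# Classroom noise: audio and lipreading are weighted equally.
print(fuse_scores(0.9, 0.6, snr_db=0.0))   # -> 0.75
```

The design choice being illustrated is that lip movements are unaffected by acoustic noise, so a visual stream provides a floor on assessment reliability exactly where audio-only CAPT degrades.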
AUTO GWEEK AUDIO
AutoGweek is the back-end processing system of Gweek Speech Intelligence Analytics™. Gweek offers a transformational learning experience for life-long communication skills. At Gweek, we have developed automatic speech and natural language processing systems that analyze our users' communication skills at multiple levels, using both audio and visual information to generate personalized feedback for them.
A project on building tools for developing active hearing on the MiRo robot platform.
Speech and Hearing Research Group (SpandH), Department of Computer Science, University of Sheffield.
An API for the KUKA Intelligent Industrial Work Assistant (iiwa) Lightweight Robot (LBR). This API builds upon the safety features embedded within the KUKA iiwa to allow close working and interaction with operators. It brings the robot's functionality into the Robot Operating System (ROS), which provides a distributed development environment allowing multiple new modalities of devices to interface easily.
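The reason ROS makes it easy for new devices to interface is its publish/subscribe model: any node can attach to a named topic without the existing code changing. The minimal in-process sketch below illustrates that pattern in plain Python; it is not ROS code, and the topic name and message shape are invented for illustration, not the real iiwa interface.

```python
class Bus:
    """Tiny in-process stand-in for a pub/sub message bus."""

    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered on this topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)

bus = Bus()
log = []

# A new device or module interfaces just by subscribing to a topic;
# the publisher needs no knowledge of who is listening.
bus.subscribe("/iiwa/joint_states", log.append)
bus.publish("/iiwa/joint_states", {"position": [0.0, 0.5, -0.2]})
print(log)  # -> [{'position': [0.0, 0.5, -0.2]}]
```

In actual ROS the bus is distributed across processes and machines, which is what allows sensors, planners, and user interfaces to be developed and swapped independently.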
AUTOMATED LAMENESS SCORING (ALS) SYSTEM
A neural-network-based system for automatic lameness detection and scoring in dairy cows.
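As a hypothetical sketch of how such a scorer is structured (not the actual ALS model): a small feed-forward network maps gait features, such as back-arch and step-asymmetry measurements, to a lameness score in [0, 1]. The feature names and weights below are illustrative; a real system would learn the weights from annotated gait recordings.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lameness_score(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network (illustrative only)."""
    hidden = [
        sigmoid(sum(w * f for w, f in zip(row, features)) + b)
        for row, b in zip(w_hidden, b_hidden)
    ]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Toy weights for a 2-feature, 2-hidden-unit network.
W_H = [[2.0, -1.0], [-1.5, 2.5]]
B_H = [0.0, -0.5]
W_O = [1.8, 1.2]
B_O = -1.5

# features = [back_arch, step_asymmetry], both normalised to [0, 1].
score = lameness_score([0.7, 0.3], W_H, B_H, W_O, B_O)
print(round(score, 3))  # a score between 0 and 1
```

Thresholding or binning such a continuous output is one way to recover the discrete locomotion scores used in standard lameness scoring schemes.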
An open-source framework for controlling multiple mobile robots and humanoids in the USARSim simulator environment.