
Ono Lab Research – deepening array signal processing theory to extend it to asynchronous distributed channels and real-world applications

The Ono Laboratory in Japan is a leader in source separation, source localisation and the extraction of other spatial and spectral information from audio signals. In its latest project, the team is developing sophisticated means to analyse audio data obtained from asynchronous and distributed microphones.
The Ono Laboratory at Tokyo Metropolitan University, Japan, is led by Professor Nobutaka Ono and specialises in the processing of acoustic signals from distributed microphones. As he has pointed out, humans rely on auditory information to make judgements about what is happening outside our visual field. He aspires to develop digital sound processing to the level where it can provide richer environmental and spatial information than our own ears and brain derive.

Regarding the laboratory's current project to analyse signals from asynchronous microphone arrays, Ono says: 'For utilising spatial information of sound, we conventionally have to prepare a microphone array as a special device, in which microphones are mounted at known positions and are all synchronised. However, in our daily lives we use many different devices with recording functions and it would be more effective if we could use these as a microphone array.' The problems with analysing recordings from such heterogeneous devices include a lack of time synchronisation, unknown recording locations and a likely mismatch in sampling frequencies.
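The sampling-frequency mismatch mentioned above has a distinctive signature: a fixed clock offset shifts two recordings by a constant delay, whereas a rate mismatch makes the delay between them grow linearly over time. The following numpy sketch illustrates that idea on synthetic data. It is not the laboratory's own method, and all parameters (16 kHz rate, a 50 ppm offset, the window sizes) are invented for illustration: it estimates the relative mismatch from the slope of the cross-correlation lag between an early and a late window.

```python
import numpy as np

# Hypothetical setup: two devices record the same broadband source, but
# device B's clock runs 50 ppm fast relative to device A's.
fs = 16000          # nominal sampling rate (Hz)
eps = 50e-6         # true relative sampling-frequency mismatch (unknown in practice)
n = 10 * fs         # ten seconds of audio

rng = np.random.default_rng(0)
rec_a = rng.standard_normal(n)            # device A: the reference recording
t_a = np.arange(n) / fs                   # device A sample instants
t_b = np.arange(n) / (fs * (1 + eps))     # device B samples slightly faster
rec_b = np.interp(t_b, t_a, rec_a)        # simulate device B's recording

def delay_of(b, a, center, win):
    """Integer-sample delay of b relative to a around index `center`,
    taken as the peak of the local cross-correlation."""
    seg_a = a[center - win:center + win]
    seg_b = b[center - win:center + win]
    xc = np.correlate(seg_b, seg_a, mode="full")
    return np.argmax(xc) - (len(seg_a) - 1)

# A fixed time offset shifts early and late windows equally; a sampling-rate
# mismatch makes the delay grow linearly, so its slope between an early and
# a late window estimates the relative mismatch.
win = 2048
c0, c1 = 2 * win, n - 2 * win
eps_hat = (delay_of(rec_b, rec_a, c1, win) -
           delay_of(rec_b, rec_a, c0, win)) / (c1 - c0)
print(f"estimated mismatch: {eps_hat * 1e6:.1f} ppm")
```

Because the lags are quantised to whole samples, the estimate recovers the offset only to within a few ppm here; longer recordings, or sub-sample peak interpolation, tighten it further. Once estimated, the mismatch can be removed by resampling one recording onto the other's clock.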
Acoustic signal processing is applied to many different scenarios and has real-world applications in the enhancement of hearing aids, the analysis of ecological health, and music and cinema production. Ono explains: 'Various demands for sophisticated acoustic signal processing are coming from industry. One of them is to realise a system that will enable automatic production of meeting minutes and conference proceedings, with each speaker accurately identified and referenced.' Ono has already developed novel and swift algorithms for separating each individual human speaker's voice from overlapping and mixed audio, and is continuing to refine and enhance these processing techniques. The project is being funded by a Grant-in-Aid from the Japan Society for the Promotion of Science and includes collaboration with market end-users and a number of Japanese research institutes.
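The separation algorithms mentioned above are not reproduced here, but the underlying principle of blind source separation can be sketched on a toy problem. The example below is a generic independent-component-analysis illustration, not the laboratory's algorithm, and every signal and the mixing matrix are invented: two sources are mixed instantaneously (real speech mixtures are convolutive and far harder), the mixture is whitened, and the remaining rotation is found by maximising non-Gaussianity.

```python
import numpy as np

# Two hypothetical sources with different statistics:
rng = np.random.default_rng(1)
n = 20000
s1 = np.sign(np.sin(2 * np.pi * 5 * np.arange(n) / n))  # square wave (sub-Gaussian)
s2 = rng.laplace(size=n)                                # speech-like heavy tails
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],                               # invented mixing matrix
              [0.4, 1.0]])
X = A @ S                                               # the two "microphone" signals

# Step 1: whiten the mixture (decorrelate and equalise power).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Step 2: after whitening only a rotation remains; choose the angle whose
# output is maximally non-Gaussian (largest |excess kurtosis|).
def kurt(y):
    return np.mean(y ** 4) - 3.0

theta = max(np.linspace(0, np.pi, 360, endpoint=False),
            key=lambda t: abs(kurt(np.cos(t) * Z[0] + np.sin(t) * Z[1])))
W = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
Y = W @ Z   # recovered sources, up to order and sign
```

Each row of `Y` correlates strongly with one of the original sources. Practical speech separation must also handle room reverberation, moving speakers and, in the asynchronous setting above, the clock mismatches between devices, which is what makes the laboratory's fast algorithms non-trivial.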

Keywords: ACOUSTIC SIGNAL PROCESSING; ASYNCHRONOUS AND DISTRIBUTED MICROPHONES; ASYNCHRONOUS MICROPHONE ARRAYS; AUDIO SIGNALS; DIGITAL SOUND PROCESSING; ENVIRONMENTAL INFORMATION; PROCESSING OF ACOUSTIC SIGNALS; SAMPLING FREQUENCIES; SOURCE LOCALISATION; SOURCE SEPARATION; SPATIAL AND SPECTRAL INFORMATION

Document Type: Research Article

Publication date: June 1, 2019

More about this publication?
  • Impact is a series of high-quality, open access and free to access science reports designed to disseminate research impact to key stakeholders, communicating the impact and relevance of research projects across a large number of subjects in a format that is easily accessible to an academic and stakeholder audience. The publication features content from the world's leading research councils, policy groups, universities and research projects. Impact is published under a CC-BY Creative Commons licence.
