WiSSAP 2017 - SPATIAL AUDIO PROCESSING
Audio/speech processing technologies have mainly addressed signals and information acquired at the source, whereas most human experience, enjoyment, and need occur physically farther from the sources; speech thus becomes a spatial audio signal rather than a source signal. Processing spatial signals is more challenging because of room acoustics, source/receiver radiation patterns, interference from other sources, and the location and directionality of the sources. Spatial audio processing therefore encompasses new and larger signal processing problems, addressed through multi-sensor signals, source localization/tracking, source separation, source classification, estimation of source characteristics, sensor synchronization, calibration, etc. The corresponding issues of human perception and cognition also become important. The signal processing further includes sound delivery through multi-loudspeaker systems and issues related to an optimum listening experience.
Outline of Topics:
a) Spatial Audio Perception; binaural hearing, enhancement, localization, timbre, spatial dimensions, reverberation...
b) Binaural models, HRTF, HRTF modeling, IID, ITD, interaural coherence
c) Applications to hearing aids, virtual reality systems
d) Microphones, directionality, recording techniques, surround sound, sound field, acoustic enclosure
e) Speech enhancement, localization, separation; beamforming, microphone array recording, sound field modeling.
WELCOME TO WiSSAP 2017
WiSSAP - the Winter School on Speech and Audio Processing - provides a forum for students, researchers, and professionals to enhance their background and gain exposure to intricate research areas in speech and audio signal processing. Each school is built around a theme topic with tutorials surrounding it.
WiSSAP 2017, scheduled from 26th to 29th January 2017 at the Indian Institute of Science (IISc), Bangalore, is the twelfth in the series, following eleven very successful earlier winter schools.