Digital Signal Processing Technology Improves the Conference Experience
While modern video conferencing has become a multimedia experience – encompassing not only sound and video but collaboration technologies such as screen sharing, document sharing and messaging – at the core of the experience is still audio. You can carry on a remote conference without many of these elements, but clear audio isn’t one of them. Unfortunately, many of today’s affordably priced systems leave a lot to be desired when it comes to audio technology.
Getting the audio right isn’t easy over a unified communications solution. Connectivity can affect quality, as can poor speakers, a bad “huddle room” space, echo, hiss and volume considerations. According to a recent white paper by Revolabs entitled “Digital Signal Processing: Its Role in Unified Communications,” digital signal processing (DSP) can overcome the audio issues inherent in conferencing by manipulating digital signals to achieve a desired outcome.
“That could be finding a radar target amidst clutter or facial recognition in a photo. In audio conferencing applications, it removes unwanted noise and sound effects from the call. The DSP system contains hardware- and/or software-encoded algorithms that process electrical signals,” according to Revolabs. “It can filter out audio issues and send a clean digital signal to its destination for conversion back to analog (i.e., the loudspeaker). In a meeting, DSP is there to manage audio. It’s needed to provide intelligible, clear audio to remote participants.”
DSP technology uses algorithmic manipulation to remove any noise unnecessary to the video/audio conference experience, leaving only what’s pertinent. It does this by adapting to each specific conferencing situation (size and shape of room, number of occupants, background noise, etc.). One of DSP’s most important functions, acoustic echo cancellation (AEC), prevents the person who is speaking from hearing their own voice played back to them as an echo.
“When you hear your own voice echo in the local loudspeaker (defined as ‘the near end’), someone else’s communication device has failed to echo-cancel your signal,” according to the white paper’s authors. “As your voice is amplified on ‘the far end’ (the other meeting space you’re connected to), it’s getting picked up by the far end microphone(s) and sent back to you, played over your speaker. What should take place at the far end is that the AEC hardware and software compare the microphone’s signal to your own incoming voice being played out on the far end, and subtract it from that microphone’s signal before passing it on.”
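The subtraction the white paper describes – comparing the far-end microphone’s signal against the incoming voice being played out, and removing the predicted echo – is what AEC hardware and software do continuously. As a rough illustration only (the paper does not name a specific algorithm), here is a minimal sketch using a normalized least-mean-squares (NLMS) adaptive filter, a common textbook approach to echo cancellation; the filter lengths, step size, and simulated echo path below are all illustrative assumptions:

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=64, mu=0.5, eps=1e-8):
    """Toy AEC sketch: an NLMS adaptive filter learns an estimate of the
    room's echo path from the far-end reference signal and subtracts the
    predicted echo from the microphone signal, leaving near-end speech."""
    w = np.zeros(taps)        # adaptive filter weights (echo-path estimate)
    buf = np.zeros(taps)      # sliding window of recent far-end samples
    out = np.zeros(len(mic))  # echo-cancelled output
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_est = w @ buf                     # predicted echo this sample
        e = mic[n] - echo_est                  # mic minus predicted echo
        w += mu * e * buf / (buf @ buf + eps)  # NLMS weight update
        out[n] = e
    return out

# Simulate: far-end speech, a toy 3-tap echo path, no near-end talker.
rng = np.random.default_rng(0)
far = rng.standard_normal(8000)
echo_path = np.array([0.6, 0.3, 0.1])     # assumed room impulse response
mic = np.convolve(far, echo_path)[:8000]  # mic picks up only the echo
clean = nlms_echo_cancel(far, mic)
```

After the filter adapts, the residual echo in the second half of `clean` is far quieter than the raw echo reaching the microphone – which is exactly the “subtract it from that microphone’s signal before passing it on” step the paper describes. Production AEC systems add double-talk detection, nonlinear processing and much longer filters.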
Once the echo is canceled, DSP technology can begin equalizing the volume by amplifying each incoming microphone signal to approximately the same level, preventing some voices from coming through too soft and others too loud. From here, DSP technology moves on to what’s known as “mixing and gating,” functions that control which microphones are actively transmitting at any given time to specific output channels.
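The level-equalization step amounts to scaling each microphone toward a common loudness target. A minimal sketch, assuming a simple RMS-based gain (real systems apply time-varying automatic gain control rather than one static scale factor, and the target level here is an arbitrary assumption):

```python
import numpy as np

def level_match(signals, target_rms=0.1, eps=1e-12):
    """Scale each microphone signal toward a common target RMS level,
    so no talker sounds much louder or quieter than the others."""
    out = []
    for s in signals:
        rms = np.sqrt(np.mean(s**2)) + eps  # measured loudness
        out.append(s * (target_rms / rms))  # gain toward the target
    return out

t = np.linspace(0, 100, 16000)
quiet = 0.01 * np.sin(t)   # soft talker, far from the mic
loud = 0.8 * np.sin(t)     # loud talker, close to the mic
matched = level_match([quiet, loud])
```

After matching, both signals sit at the same RMS level, so remote listeners hear every participant at roughly equal volume.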
“If all the microphones have to feed to one output (such as a telephone line or a videoconference device), each microphone’s input has to be added to the output signal,” according to Revolabs. “This is called ‘mixing,’ and it defines which microphones send signals to which desired outputs. ‘Gating’ means the system can selectively deactivate certain microphones. For example, some advanced DSP algorithms can distinguish between human speech and noise like typing, and ignore signals from noisy microphones until human speech is detected on them again.”
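Mixing and gating can be sketched together: gate each microphone frame by frame, then sum the active ones into a single output channel. The short-term energy threshold below is a crude stand-in for the speech-versus-noise classifiers the white paper mentions; the frame size and threshold are illustrative assumptions:

```python
import numpy as np

def mix_with_gating(mics, frame=160, threshold=0.01):
    """Gate each mic frame-by-frame on short-term energy (a simple proxy
    for speech detection), then mix the active mics into one output."""
    n = min(len(m) for m in mics)
    out = np.zeros(n)
    for start in range(0, n, frame):
        end = min(start + frame, n)
        # keep only mics whose current frame carries enough energy
        active = [m[start:end] for m in mics
                  if np.mean(m[start:end]**2) > threshold]
        if active:
            out[start:end] = np.sum(active, axis=0)  # the "mixing" step
    return out

talker = 0.5 * np.sin(np.linspace(0, 200, 3200))  # mic with active speech
idle = 0.001 * np.random.default_rng(1).standard_normal(3200)  # hiss only
mixed = mix_with_gating([talker, idle])
```

The idle microphone’s low-level hiss never crosses the gate, so only the active talker reaches the output – the behavior the paper describes of ignoring noisy microphones until speech is detected on them again.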
Companies will require different DSP-based solutions depending on their needs (size of conference room, frequency of use, budget, etc.). Look for a vendor that can accommodate your specific requirements with a solution tailored to your facilities.
If you’d like to learn more about the power of DSP, download Revolabs’ free white paper HERE.
Edited by Alicia Young