Science Fair Project Encyclopedia
Audio signal processing
Audio signal processing, sometimes referred to as audio processing, is the processing of a representation of auditory signals, or sound. The representation can be digital or analog. An analog representation is usually electrical: a voltage level represents the air-pressure waveform of the sound. Similarly, a digital representation expresses the pressure waveform as a sequence of symbols, usually binary numbers.
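As a concrete sketch of the digital case, the snippet below samples a sine tone and quantizes each sample to a 16-bit signed integer, the usual PCM convention. The sample rate, tone frequency, and function name are illustrative choices, not anything specified by the article.

```python
import math

# Illustrative parameters: a 440 Hz sine tone sampled at 8 kHz,
# quantized to 16-bit signed integers (standard PCM).
SAMPLE_RATE = 8000   # samples per second
FREQ = 440.0         # tone frequency in Hz

def sample_and_quantize(n):
    """Return n 16-bit PCM samples of a sine tone: each analog
    amplitude in [-1, 1] is mapped to an integer in [-32767, 32767]."""
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE                           # time of the i-th sample
        amplitude = math.sin(2 * math.pi * FREQ * t)  # "analog" value in [-1, 1]
        samples.append(round(amplitude * 32767))      # quantize to 16 bits
    return samples

print(sample_and_quantize(4))
```

Writing these integers to a file (for example with Python's `wave` module) would yield an ordinary uncompressed audio stream.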
Audio signal processing most typically focuses on analysing which parts of the signal are audible. A signal can, for example, be modified for different purposes in such a way that the modification is controlled in the auditory domain. Which parts of the signal are heard and which are not is determined not merely by the physiology of the human hearing system, but very much by psychological properties as well. These properties are analysed within the field of psychoacoustics.
Processing methods and application areas include storage, level compression, data compression, transmission, enhancement (e.g., equalization, filtering, noise cancellation, echo or reverb removal or addition, etc.), source separation, sound effects and computer music.
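Of the enhancement methods listed, filtering is the simplest to illustrate. The sketch below is a moving-average low-pass filter, one of the most basic FIR designs; the function name and tap count are illustrative, not from the article.

```python
def moving_average(signal, taps=4):
    """Simple low-pass FIR filter: each output sample is the mean of the
    current input sample and the taps-1 preceding ones (treated as zero
    before the start of the signal)."""
    padded = [0.0] * (taps - 1) + list(signal)
    return [sum(padded[i:i + taps]) / taps for i in range(len(signal))]

# Averaging smooths rapid changes, attenuating high frequencies.
print(moving_average([1, 1, 1, 1], taps=2))
```

More selective equalization filters work on the same principle, just with more carefully chosen tap weights.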
- DFT: discrete Fourier transform
- DWG: digital waveguide, an efficient method of modeling the propagation of waves
- FDN: feedback delay network, used to simulate reverberation
- FFT: fast Fourier transform
- FIR: finite impulse response
- IIR: infinite impulse response
- LPC: linear predictive coding
- PSOLA: pitch-synchronous overlap-add, a method of audio time scaling
- SOLAFS: synchronous overlap-add, fixed synthesis, a method of audio time scaling
- TDHS: time-domain harmonic scaling, a method of audio time scaling
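To make the DFT entry above concrete, here is a deliberately naive O(N²) implementation in plain Python; the FFT listed alongside it computes the same transform in O(N log N).

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real or complex sequence.
    X[k] = sum over t of x[t] * exp(-2*pi*i*k*t/N)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A constant signal has all its energy in the zero-frequency (DC) bin.
print(dft([1, 1, 1, 1]))
```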
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.