Paul Adenot, @padenot && padenot@mozilla.com
What does the spec tell us?
[...] A high-level JavaScript API for processing and synthesizing audio in web applications. The primary paradigm is of an audio routing graph, where a number of AudioNode objects are connected together to define the overall audio rendering.
http://webaudio.github.io/web-audio-api/index.html#abstact
var ac = new AudioContext();
ac.decodeAudioData(ogg_arraybuffer, function(audiobuffer) {
  var source = ac.createBufferSource();
  source.buffer = audiobuffer;
  var d = ac.createDelay();
  var osc = ac.createOscillator();
  osc.type = "square";
  osc.frequency.value = 100; // Hz
  var g = ac.createGain();
  source.connect(d);
  source.connect(ac.destination);
  d.connect(ac.destination);
  g.connect(d);
  d.connect(g); // a cycle: allowed, because there is a DelayNode in it
  osc.connect(g);
  source.start(0); // sources must be started explicitly
  osc.start(0);
}, function error() {
  alert("This is broken");
});
- ArrayBuffers (formats: same as <audio>)
- MediaRecorder, WebRTC, <audio>
- AudioNodes cannot be connected between AudioContexts (AudioBuffers can be shared, though)
- Cycles are allowed (as long as there is a DelayNode in the cycle)
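Since a cycle is legal as long as it contains a DelayNode, a feedback echo can be built on that rule. A minimal sketch, assuming `ac` is an AudioContext and `source` is any source node (the function name and parameters are illustrative):

```javascript
// Feedback echo: source -> delay -> gain -> back into delay.
// The loop is legal because it contains a DelayNode.
function connectFeedbackEcho(ac, source, delaySeconds, feedback) {
  var delay = ac.createDelay();
  delay.delayTime.value = delaySeconds;
  var gain = ac.createGain();
  gain.gain.value = feedback; // keep < 1.0 so the echo decays
  source.connect(delay);
  delay.connect(gain);
  gain.connect(delay); // the cycle
  delay.connect(ac.destination); // wet signal
  source.connect(ac.destination); // dry signal
  return { delay: delay, gain: gain };
}
```

For example, `connectFeedbackEcho(ac, source, 0.25, 0.5)` gives a quarter-second echo at half volume per repeat.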
Source nodes:
- AudioBufferSourceNode: a sample player; plays an AudioBuffer. One shot, but cheap to create.
- OscillatorNode: sine, square, saw, triangle, custom from FFT. One shot (use a GainNode).
- ScriptProcessorNode (with no input): to generate arbitrary waveforms using JavaScript (but beware, it's broken!)
- MediaElementAudioSourceNode: to pipe the audio from an <audio> or <video> into an AudioContext.
- MediaStreamAudioSourceNode: from a PeerConnection, getUserMedia.

Processing nodes:
- GainNode: changes the volume.
- DelayNode: delays the input in time. Needed for cycles.
- ScriptProcessorNode (with both input and output connected): arbitrary processing (but beware, it's broken).
- PannerNode: positions a source and a listener in 3D and pans the sound accordingly.
- Channel{Splitter,Merger}Node: splits or merges multi-channel audio to/from mono.
- ConvolverNode: performs one-dimensional convolution between an AudioBuffer and the input (e.g. reverb).
- WaveShaperNode: non-linear wave shaping (e.g. guitar distortion).
- BiquadFilterNode: low-pass, high-pass, band-pass, all-pass, etc.
- DynamicsCompressorNode: adjusts the audio dynamics.

Output nodes:
- MediaStreamAudioDestinationNode: outputs to a MediaStream (send it to e.g. a WebRTC PeerConnection, or to a MediaRecorder to encode).
- AudioDestinationNode: outputs to the speakers/headphones.
- ScriptProcessorNode: arbitrary processing on the input audio.
- AnalyserNode: gets time-domain or frequency-domain audio data, in ArrayBuffers.

An AudioNode parameter is an AudioParam:
// g is a GainNode, ctx an AudioContext, curve a Float32Array
var ct = ctx.currentTime;
g.gain.value = 0.5;
g.gain.setValueAtTime(0.5, ct + 1.0);
g.gain.linearRampToValueAtTime(0.5, ct + 1.0);
g.gain.exponentialRampToValueAtTime(0.001, ct + 1.0); // target must be non-zero
g.gain.setTargetAtTime(0.0, ct + 1.0, 0.66 /* time constant, in seconds */);
g.gain.setValueCurveAtTime(curve, ct + 1.0, curve.length / ctx.sampleRate);
g.gain.cancelScheduledValues(ct);
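These scheduling calls compose into envelopes. A minimal sketch of an attack/hold/release gain envelope (the function name is illustrative; pass it e.g. `g.gain` and `ctx.currentTime`):

```javascript
// Schedules an attack/hold/release envelope on an AudioParam,
// starting at `now` (in seconds, on the AudioContext clock).
function scheduleEnvelope(param, now, attack, hold, release) {
  param.cancelScheduledValues(now); // drop any previous automation
  param.setValueAtTime(0.0, now); // start silent
  param.linearRampToValueAtTime(1.0, now + attack); // fade in
  param.setValueAtTime(1.0, now + attack + hold); // hold
  param.linearRampToValueAtTime(0.0, now + attack + hold + release); // fade out
  return now + attack + hold + release; // when the envelope ends
}
```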
var osc = ctx.createOscillator(); // default: sine
osc.frequency.value = 4; // Hz: the wobble rate
var lpf = ctx.createBiquadFilter(); // default: low-pass
var gain = ctx.createGain();
gain.gain.value = 1000; // scales the oscillator's [-1; 1] to [-1000; 1000]
osc.connect(gain);
gain.connect(lpf.frequency); // output -> AudioParam
// dubstep: route the audio to process through lpf, e.g.
// source.connect(lpf); lpf.connect(ctx.destination);
osc.start(0);
A fancy jsbin to play with the Web Audio API.
Example code is at the bottom of the "output" panel; a drum loop and a reverb impulse are included, feel free to add your own files!
(Fork it before using it, to keep your changes.)
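The reverb impulse mentioned above would go through a ConvolverNode. A sketch of fetching an impulse response and wiring it as a reverb send; the function name, the URL, and the `input` node are placeholders:

```javascript
// Fetches an impulse response, decodes it, and sets up
// a dry path plus a convolution reverb ("wet") path.
function setupReverb(ac, input, impulseUrl, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", impulseUrl);
  xhr.responseType = "arraybuffer";
  xhr.onload = function () {
    ac.decodeAudioData(xhr.response, function (impulse) {
      var convolver = ac.createConvolver();
      convolver.buffer = impulse;
      var wet = ac.createGain();
      wet.gain.value = 0.3; // reverb amount
      input.connect(convolver);
      convolver.connect(wet);
      wet.connect(ac.destination);
      input.connect(ac.destination); // dry path
      if (callback) callback(convolver);
    });
  };
  xhr.send();
}
```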
Gotchas:
- ScriptProcessorNode is broken by design (OfflineAudioContext trick to … same memory)
- AnalyserNode … (MediaSource for that)
- webkitAudioContext vs. AudioContext
- MediaElementAudioSourceNode output is silence when the media comes from another server and the CORS header is not there

Workaround for the last one: a Blob (from IndexedDB, XMLHttpRequest, a local file, an ArrayBuffer) + MediaElementAudioSourceNode, best of both worlds!
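A sketch of that Blob workflow, assuming `blob` holds encoded audio (e.g. pulled out of IndexedDB); the function name is illustrative:

```javascript
// In a page, first grab a context, handling the prefix gotcha:
//   var AudioContext = window.AudioContext || window.webkitAudioContext;
//   var ac = new AudioContext();
// Then let an <audio> element decode the Blob, and route it into the graph.
function playBlob(ac, blob) {
  var audio = new Audio();
  audio.src = URL.createObjectURL(blob); // the media element does the decoding
  var source = ac.createMediaElementSource(audio);
  source.connect(ac.destination); // or through any processing nodes
  audio.play();
  return source;
}
```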
Open the devtools, click the cog icon, enable the Web Audio editor, and reload a page that has an AudioContext.
In heavy development (made by Jordan Santell, @jsantell), available in Nightly builds. (The canvas editor can also be useful for the MHD.)
Possible demo page (but any page should work, or that's a bug).