Paul Adenot, @padenot && paul@paul.cx
Paul Adenot, DemoJS, Paris, 2014-10-11
What does the spec tell us?
[...] A high-level JavaScript API for processing and synthesizing audio in web applications. The primary paradigm is of an audio routing graph, where a number of AudioNode objects are connected together to define the overall audio rendering.
http://webaudio.github.io/web-audio-api/index.html#abstact
var ac = new AudioContext();
ac.decodeAudioData(ogg_arraybuffer, function(audiobuffer) {
  // Sample player, delay line with a feedback cycle, and a square wave.
  var source = ac.createBufferSource();
  source.buffer = audiobuffer;
  var d = ac.createDelay();
  var osc = ac.createOscillator();
  osc.type = "square";
  osc.frequency.value = 100; // Hz
  var g = ac.createGain();
  source.connect(d);
  source.connect(ac.destination);
  d.connect(ac.destination);
  g.connect(d); // feedback cycle: d -> g -> d
  d.connect(g);
  osc.connect(g);
  source.start(0); // nothing is audible until the sources are started
  osc.start(0);
}, function error() {
  alert("This is broken");
});
Sources:
- ArrayBuffers (formats: same as <audio>)
- AudioBufferSourceNode: sample player, plays an AudioBuffer. One-shot, but cheap to create.
- OscillatorNode: sine, square, saw, triangle, custom from FFT. One-shot (use a GainNode); see the sketch after this list.
- ScriptProcessorNode (with no input): generate arbitrary waveforms using JavaScript (but beware, it's deprecated!)
- MediaElementAudioSourceNode: pipe the audio from an <audio> or <video> into an AudioContext
- MediaStreamAudioSourceNode: from a PeerConnection, getUserMedia
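For instance, a one-shot square wave through a GainNode envelope (a minimal sketch, not from the original deck):

var ac = new AudioContext();
var osc = ac.createOscillator();
osc.type = "square";
osc.frequency.value = 440; // Hz
var g = ac.createGain();
g.gain.value = 0.5; // the volume envelope lives here
osc.connect(g);
g.connect(ac.destination);
osc.start(ac.currentTime);
osc.stop(ac.currentTime + 0.5); // one-shot: create a new OscillatorNode to replay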
Processing:
- GainNode: change the volume
- DelayNode: delays the input in time. Needed for cycles; see the sketch after this list.
- ScriptProcessorNode (with both input and output connected): arbitrary processing (but beware, it's deprecated!)
- PannerNode: position a source and a listener in 3D and get the sound panned accordingly
- Channel{Splitter,Merger}Node: merge or split multi-channel audio from/to mono
- ConvolverNode: perform one-dimensional convolution between an AudioBuffer and the input (e.g. reverb)
- WaveShaperNode: non-linear wave shaping (e.g. guitar distortion)
- BiquadFilterNode: low-pass, high-pass, band-pass, all-pass, etc.
- DynamicsCompressorNode: adjust audio dynamics
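Why DelayNode is needed for cycles: a minimal feedback-echo sketch (assuming `ac` is an existing AudioContext and `source` is any source node, both names assumed here):

var d = ac.createDelay();
d.delayTime.value = 0.25; // seconds between echoes
var feedback = ac.createGain();
feedback.gain.value = 0.5; // < 1.0 so the loop decays instead of blowing up
source.connect(ac.destination); // dry signal
source.connect(d);
d.connect(feedback);
feedback.connect(d); // the cycle: d -> feedback -> d
d.connect(ac.destination); // wet signal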
Outputs:
- MediaStreamAudioDestinationNode: outputs to a MediaStream (send to e.g. a WebRTC PeerConnection or a MediaRecorder to encode)
- AudioDestinationNode: outputs to speakers/headphones
- ScriptProcessorNode: arbitrary processing on input audio
- AnalyserNode: get time-domain or frequency-domain audio data, in ArrayBuffers; see the sketch after this list.

AnalyserNode + MediaElementAudioSourceNode: best of both worlds!
Web Audio API cheat sheet: here!
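A minimal sketch of pulling frequency-domain data from an AnalyserNode every frame, e.g. to drive a canvas visualisation (`ac` and `source` are assumed names):

var analyser = ac.createAnalyser();
source.connect(analyser); // `source` is any node, e.g. a MediaElementAudioSourceNode
analyser.connect(ac.destination);
var data = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  analyser.getByteFrequencyData(data); // one byte (0-255) per frequency bin
  // ... paint `data` on a <canvas> here ...
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);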
Open the devtools, click the cog icon, enable the Web Audio editor, and reload the page that has an AudioContext.
In heavy development (made by Jordan Santell, @jsantell), available in Nightly builds. (The canvas editor can also be useful for demos.)
Possible demo page (but any page should work, or that's a bug).