Paul Adenot
Mozilla
Web Audio Conference 2017
Queen Mary University, London
$ cd ~/src/repositories/web-audio-api/
$ git checkout gh-pages
$ git pull
$ git diff --stat @{2016-04-04}..@{2017-08-23}
$ git log --stat --since=2016-04-04
$ git log --since=2016-04-04 | grep "Date:" | wc -l
754
$ git log --color --since=2016-04-04 | grep "Author:" | sort | uniq | wc -l
21
Tons more discussion on issues!
$ git log --format="%at" --since=2016-04-04 | python datehist.py `git rev-parse --show-toplevel`
var ac = new AudioContext();
[...]
// Get a MediaStreamAudioDestinationNode
var msdn = ac.createMediaStreamDestination();
// NEW! connect the context destination to something!
ac.destination.connect(msdn);
// Feed it to a recorder, an RTCPeerConnection...
var rec = new MediaRecorder(msdn.stream);
Nominal ranges
value behaviour changed
Introspection: minValue, maxValue, defaultValue
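A minimal sketch of the introspection attributes and of clamping to the nominal range (the delay values here are only illustrative):

var ac = new AudioContext();
var delay = ac.createDelay(1.0); // maxDelayTime of 1 second
// The nominal range of delayTime is [0, maxDelayTime].
console.log(delay.delayTime.minValue);     // 0
console.log(delay.delayTime.maxValue);     // 1
console.log(delay.delayTime.defaultValue); // 0
// Values outside [minValue, maxValue] are clamped to the nominal range.
delay.delayTime.value = 2.0; // effectively 1.0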
Removed: setVelocity on AudioListener and PannerNode
Specification was fundamentally broken
Behaviour was weird
Usage was minimal
AudioContext.baseLatency
AudioContext.outputLatency (-> ear)
AudioContext.getOutputTimestamp(): Date.now() (system clock) & AudioContext.currentTime (audio subsystem clock)

new AudioContext({ latencyHint: "interactive" });
new AudioContext({ latencyHint: "playback" });
new AudioContext({ latencyHint: "balanced" });
new AudioContext({ latencyHint: 0.05 /* seconds */ });
Trade-off: CPU/battery usage vs. audio latency
Authors can determine the actual latency:
var ac = new AudioContext({ latencyHint: "playback" });
console.log("Base latency: " + ac.baseLatency);
console.log("Audio output latency: " + ac.outputLatency);
console.log("Total latency: " + (ac.baseLatency + ac.outputLatency));
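getOutputTimestamp() then relates the two clocks; a minimal sketch (the AudioTimestamp fields are contextTime and performanceTime):

var ac = new AudioContext();
var ts = ac.getOutputTimestamp();
// Position of the audio output in the AudioContext.currentTime coordinate system:
console.log(ts.contextTime);
// The same instant, expressed on the Performance.now() clock:
console.log(ts.performanceTime);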
AudioParams on the panner: PannerNode.{position{X,Y,Z}, orientation{X,Y,Z}}
AudioParams on the listener: AudioListener.{position{X,Y,Z}, forward{X,Y,Z}, up{X,Y,Z}}
a-rate parameters on "equal-power" PannerNodes, k-rate on "HRTF" nodes.
panner.setPosition(1, 2, 3);
// is now equivalent to:
panner.positionX.value = 1;
panner.positionY.value = 2;
panner.positionZ.value = 3;
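Since the position is now exposed as AudioParams, it can be automated like any other parameter; a minimal sketch, with arbitrary values:

var ac = new AudioContext();
var panner = new PannerNode(ac, { panningModel: "equal-power" });
// Sweep the source from left to right over two seconds.
panner.positionX.setValueAtTime(-1, ac.currentTime);
panner.positionX.linearRampToValueAtTime(1, ac.currentTime + 2);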
ap.setValueCurveAtTime(array, startTime, duration);
now does the equivalent of:
ap.setValueCurveAtTime(array, startTime, duration);
ap.setValueAtTime(array[array.length - 1], startTime + duration);
AudioContext.currentTime: incremented by a render quantum (128 sample-frames), atomically, without waiting for stable state, at the end of the render quantum.
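In other words, observed increments of currentTime are whole render quanta; a minimal sketch:

var ac = new AudioContext();
var t1 = ac.currentTime;
setTimeout(function() {
  var t2 = ac.currentTime;
  // The difference is always an integer multiple of a render quantum
  // (up to floating-point rounding).
  console.log((t2 - t1) / (128 / ac.sampleRate));
}, 100);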
var ac = new AudioContext();
var gain1 = ac.createGain();
var gain2 = new GainNode(ac);
var buf1 = ac.createBuffer(2, 128, 44100);
var buf2 = new AudioBuffer({ numberOfChannels: 2, length: 128, sampleRate: 44100 });
// Mono by default
var buf3 = new AudioBuffer({ length: 128, sampleRate: 44100 });
var pw = ac.createPeriodicWave([1, 2, 3], [3, 2, 1]);
var pw2 = new PeriodicWave(ac, { real: [1, 2, 3], imag: [3, 2, 1] });
Paves the way for subclassing AudioNodes.
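A hypothetical sketch of such a subclass (the class and its preset are illustrative, not from the spec):

// With real constructors, an AudioNode can be extended like any ES6 class.
class HalfGainNode extends GainNode {
  constructor(context) {
    // Forward to the GainNode constructor with a preset option.
    super(context, { gain: 0.5 });
  }
}
var ac = new AudioContext();
var half = new HalfGainNode(ac);
console.log(half instanceof GainNode); // true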
Allows sharing initialization objects, available for all AudioNodes:
var ac = new AudioContext();
var gain = new GainNode(ac, { gain: 3.0 });
var delay = new DelayNode(ac, { delayTime: 0.3 });
var source = new AudioBufferSourceNode(ac, {
  buffer: new AudioBuffer(...),
  playbackRate: 3.0,
  detune: 700,
  loop: true
});
var osc = new OscillatorNode(ac, { type: "square", detune: 700 });
The equivalent of:
var ac = new AudioContext();
var source = new AudioBufferSourceNode(ac);
var gain = new GainNode(ac);
var buffer = new AudioBuffer({ length: 1, sampleRate: ac.sampleRate });
buffer.getChannelData(0)[0] = 1.0;
source.buffer = buffer;
source.loop = true; // loop the single sample to get a constant signal
source.connect(gain);
and modifying gain.gain.
var ac = new AudioContext();
var c = new ConstantSourceNode(ac);
c.offset.value = 1000;
c.start(ac.currentTime + 3.5);
c.stop();
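A common use, driving several AudioParams from a single source (a sketch; the node names are illustrative):

var ac = new AudioContext();
var left = new GainNode(ac, { gain: 0 });
var right = new GainNode(ac, { gain: 0 });
var master = new ConstantSourceNode(ac, { offset: 0.5 });
// One offset AudioParam now controls both gains.
master.connect(left.gain);
master.connect(right.gain);
master.start();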
New AudioParam method that does what it says:
var ac = new AudioContext();
var gain = new GainNode(ac);
gain.gain.setValueAtTime(1.0, ac.currentTime);
gain.gain.setTargetAtTime(0.0, ac.currentTime, 0.1);
gain.gain.cancelAndHoldAtTime(ac.currentTime + 0.1);
// gain.gain.value is some value in between 0.0 and 1.0.
In practice, the initial "suspended" => "running" transition is allowed to be delayed; this makes it possible to know that no audio is currently being output.
More-or-less standardization of Safari's behaviour on mobile, but with more discoverability.
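A typical pattern under such a policy, resuming on a user gesture (a sketch; the button is assumed to exist in the page):

var ac = new AudioContext();
console.log(ac.state); // possibly "suspended" until a user gesture
document.querySelector("button").addEventListener("click", function() {
  ac.resume().then(function() {
    console.log(ac.state); // "running"
  });
});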
New AudioNode, allowing authors to decide precisely which MediaStreamTrack of a MediaStream is going to be routed to an AudioContext:
// ms is some MediaStream
var ac = new AudioContext();
// route the second audio track of a given MediaStream
var mstan = new MediaStreamTrackAudioSourceNode(ac, {
  mediaStreamTrack: ms.getAudioTracks()[1]
});
var ac = new AudioContext({sampleRate: 8000});
Allows lower CPU usage; useful for emulation, lo-fi audio work, and for very specific jobs (like AudioBufferSourceNode stitching).
AudioBufferSourceNode playback algorithm
DynamicsCompressorNode processing algorithm

var ac = new AudioContext();
var real = new Float32Array(2);
real[0] = 0.0;
real[1] = 0.0;
var imag = new Float32Array(2);
imag[0] = 0.0;
imag[1] = 1.0;
var wave = ac.createPeriodicWave(real, imag);
vs.
var ac = new AudioContext();
var wave = new PeriodicWave(ac, { real: [0, 0], imag: [0, 1] });
MediaStreamAudioSourceNode.mediaStream and MediaElementAudioSourceNode.mediaElement, instead of having to set it as an expando or keep it around elsewhere.
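A minimal sketch of reading the new attribute back (the <audio> element is assumed to exist in the page):

var ac = new AudioContext();
var element = document.querySelector("audio");
var source = ac.createMediaElementSource(element);
// The element is now reachable from the node itself:
console.log(source.mediaElement === element); // true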
(DynamicsCompressorNode)
web-platform-tests are lacking a bit (good test suites in implementations, interoperable)

v.next

SharedArrayBuffer + AudioWorklet + wasm = ❤️
AudioContext in Web Worker
AudioParam rate

padenot@mozilla.com