So far we have learned about some basic properties of sound: timing and volume. To do more complex things, such as sound equalization (e.g., increasing the bass and decreasing the treble), we need more complex tools. This section explains some of the tools that allow you to do these more interesting transformations, which include the ability to simulate different sorts of environments and manipulate sounds directly with JavaScript.
The Web Audio API provides a playbackRate parameter on each AudioBufferSourceNode. This value can be set to affect the pitch of any sound buffer. Note that both the pitch and the duration of the sample are affected in this case. There are sophisticated methods that try to change pitch independently of duration, but this is quite difficult to do in a general-purpose way without introducing blips, scratches, and other undesirable artifacts into the mix.
As discussed in Basics of Musical Pitch, to compute the frequencies of successive semitones, we simply multiply the frequency by the semitone ratio 2^(1/12). This is very useful if you are developing a musical instrument or using pitch for randomization in a game setting. The following code plays a tone at a given frequency offset in semitones:
function playNote(semitones) {
  // Assume a new source was created from a buffer.
  var semitoneRatio = Math.pow(2, 1/12);
  source.playbackRate.value = Math.pow(semitoneRatio, semitones);
  source.start(0);
}
As we discussed earlier, our ears perceive pitch exponentially. Treating pitch as an exponential quantity can be inconvenient, since we often end up dealing with awkward values such as the twelfth root of two. Instead, we can use the detune parameter to specify our offset in cents, where 100 cents make up one semitone. Thus you can rewrite the previous function using detune more simply:
function playNote(semitones) {
  // Assume a new source was created from a buffer.
  source.detune.value = semitones * 100;
  source.start(0);
}
If you pitch-shift by too many semitones (e.g., by calling playNote(24);), you will start to hear distortions. Because of this, digital pianos include multiple samples for each instrument. Good digital pianos avoid pitch bending altogether and include a separate sample recorded specifically for each key. Great digital pianos often include multiple samples for each key, which are played back depending on the velocity of the key press.
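To make this concrete, here is a minimal sketch of how such a multisampled instrument might choose among velocity layers. It is not taken from any particular piano library: the noteBuffers lookup, the makeSource() helper, and the velocity thresholds are all hypothetical.

// Hypothetical sketch of velocity layers for a multisampled key.
// Assumes noteBuffers[midiNote] holds {soft, medium, loud} AudioBuffers
// recorded at different key velocities, and makeSource(buffer) returns
// an AudioBufferSourceNode connected to the destination.
function playKey(midiNote, velocity) {
  var layers = noteBuffers[midiNote];
  var buffer;
  if (velocity < 0.4) {
    buffer = layers.soft;
  } else if (velocity < 0.8) {
    buffer = layers.medium;
  } else {
    buffer = layers.loud;
  }
  var source = makeSource(buffer);
  source.start(0);
}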
A key feature of sound effects in games is that there can be many of them simultaneously. Imagine you’re in the middle of a gunfight with multiple actors shooting machine guns. Each machine gun fires many times per second, causing tens of sound effects to be played at the same time. Playing back sound from multiple, precisely-timed sources simultaneously is one place the Web Audio API really shines.
Now, if all of the machine guns in your game sounded exactly the same, that would be pretty boring. Of course the sound would vary based on distance from the target and relative position [more on this later in Spatialized Sound], but even that might not be enough. Luckily, the Web Audio API makes it easy to tweak the previous example in at least two simple ways:
With a subtle shift in time between bullets firing
By changing pitch to better simulate the randomness of the real world
With our knowledge of timing and pitch, implementing these two effects is pretty straightforward:
function shootRound(numberOfRounds, timeBetweenRounds) {
  var time = context.currentTime;
  // Make multiple sources using the same buffer and play in quick succession.
  for (var i = 0; i < numberOfRounds; i++) {
    var source = this.makeSource(bulletBuffer);
    source.playbackRate.value = 1 + Math.random() * RANDOM_PLAYBACK;
    source.start(time + i * timeBetweenRounds + Math.random() * RANDOM_VOLUME);
  }
}
The Web Audio API automatically merges multiple sounds playing at once, essentially just adding the waveforms together. This can cause problems such as clipping, which we discuss in Clipping and Metering.
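If many simultaneous bullets push the mix into clipping, one simple mitigation (sketched here, not prescribed by the API) is to route all of the bullet sources through a shared GainNode set below unity to leave some headroom; the bulletGain name and the 0.5 value are purely illustrative:

// Sketch: a shared gain node that leaves headroom for many overlapping shots.
var bulletGain = context.createGain();
bulletGain.gain.value = 0.5; // arbitrary headroom; tune by ear or by metering
bulletGain.connect(context.destination);

// Inside makeSource(), connect each source to bulletGain instead of
// connecting it directly to context.destination:
//   source.connect(bulletGain);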
This example adds some variety to AudioBuffers loaded from sound files. In some cases, it is desirable to have fully synthesized sound effects and no buffers at all [see Procedurally Generated Sound].
{{jsbin width="100%" height="380px" src="http://orm-other.s3.amazonaws.com/webaudioapi/samples/rapid-sounds/index.html"}}
As we discussed early in this book, digital sound in the Web Audio API is represented as an array of floats in AudioBuffers. Most of the time, the buffer is created by loading a sound file or generated on the fly from some sound stream. In some cases, we might want to synthesize our own sounds. We can do this by creating audio buffers programmatically with JavaScript, simply evaluating a mathematical function at regular periods and assigning the values to an array. By taking this approach, we can manually change the amplitude and frequency of our sine wave, or even concatenate multiple sine waves together to create arbitrary sounds [recall the principles of Fourier transforms from Understanding the Frequency Domain].
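As a rough sketch of this approach (the 440 Hz frequency and one-second duration are arbitrary choices), the following fills a single-channel AudioBuffer with a sine wave and plays it back:

// Sketch: synthesize one second of a 440 Hz sine wave into an AudioBuffer.
var sampleRate = context.sampleRate;
var buffer = context.createBuffer(1, sampleRate, sampleRate); // 1 channel, 1 second
var data = buffer.getChannelData(0);
for (var i = 0; i < data.length; i++) {
  // Evaluate the sine function at each sample period.
  data[i] = Math.sin(2 * Math.PI * 440 * i / sampleRate);
}
var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start(0);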
Though possible, doing this work in JavaScript is inefficient and complex. Instead, the Web Audio API provides primitives that let you do this with oscillators: OscillatorNode. These nodes have configurable frequency and detune [see Basics of Musical Pitch]. They also have a type that represents the kind of wave to generate. Built-in types include sine, triangle, sawtooth, and square waves, as shown in Figure 4-4.
Oscillators can easily be used in audio graphs in place of AudioBufferSourceNodes. An example of this follows:
function play(semitone) {
  // Create some sweet sweet nodes.
  var oscillator = context.createOscillator();
  oscillator.connect(context.destination);
  // Play a sine type curve at A4 frequency (440hz).
  oscillator.frequency.value = 440;
  oscillator.detune.value = semitone * 100;
  // Note: this constant will be replaced with "sine".
  oscillator.type = oscillator.SINE;
  oscillator.start(0);
}
In addition to these basic wave types, you can create a custom wave table for your oscillator by using harmonic tables. This lets you efficiently create wave shapes that are much more complex than the previous ones. This topic is very important for musical synthesis applications but is outside the scope of this book.
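For the curious, a minimal sketch of the idea (reusing the oscillator from the previous example) looks something like the following. The handful of harmonic amplitudes chosen here is arbitrary; in current implementations these wave-table methods go by createPeriodicWave() and setPeriodicWave():

// Sketch: build a custom waveform from a few harmonics (arbitrary amplitudes).
// real holds the cosine terms and imag the sine terms; index 0 is the DC offset.
var real = new Float32Array([0, 0, 0.5, 0, 0.25]);
var imag = new Float32Array([0, 1, 0, 0.3, 0]);
var customWave = context.createPeriodicWave(real, imag);
oscillator.setPeriodicWave(customWave);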