Interpret audio in an engine using the AudioInterpret class.

WAGS ships with two engines: Effect Unit for real audio and Instruction for an ADT representation of audio.

#AudioInterpret Source

class AudioInterpret audio engine where

A class with all possible instructions for interpreting audio. The class is parameterized by two types:

  • audio: an audio context, which could be nothing (i.e. Unit) if there is no audio, or FFIAudio if there is audio.
  • engine: the output of the engine. For real audio, this is Effect Unit, as playing something from a loudspeaker is a side effect that doesn't return anything. For testing, this is the Instruction type, which is an ADT representation of instructions to an audio engine.



#AudioBuffer Source

data AudioBuffer

A multi-channel audio buffer.


#AudioContext Source

#FFIAudioWithBehaviors Source

type FFIAudioWithBehaviors =
  { buffers :: Behavior (Object BrowserAudioBuffer)
  , context :: AudioContext
  , floatArrays :: Behavior (Object BrowserFloatArray)
  , microphone :: Behavior (Nullable BrowserMicrophone)
  , periodicWaves :: Behavior (Object BrowserPeriodicWave)
  , recorders :: Behavior (Object (MediaRecorder -> Effect Unit))
  , units :: Foreign
  , writeHead :: Number
  }

#BrowserAudioBuffer Source

#BrowserCamera Source

data BrowserCamera :: Type

The MediaStream object for a camera.

#BrowserFloatArray Source

#BrowserMicrophone Source

data BrowserMicrophone :: Type

The MediaStream object for a microphone.

#BrowserPeriodicWave Source

#FFIAudio Source

#FFIAudioSnapshot' Source

type FFIAudioSnapshot' =
  { buffers :: Object BrowserAudioBuffer
  , context :: AudioContext
  , floatArrays :: Object BrowserFloatArray
  , microphone :: Nullable BrowserMicrophone
  , periodicWaves :: Object BrowserPeriodicWave
  , recorders :: Object (MediaRecorder -> Effect Unit)
  , units :: Foreign
  , writeHead :: Number
  }

The audio information that goes to the ffi during rendering.

  • context - the audio context. To create an AudioContext, use context.
  • writeHead - the moment in time in the audio context at which we are writing. The easing algorithm provided to run makes sure that this is always slightly ahead of the actual audio context time. When starting a scene, this should be set to 0.0.
  • units - an object in which audio units are cached and retrieved. To create this, use makeUnitCache.
  • microphone - the browser microphone or null if we do not have one.
  • recorders - an object of recorder rendering functions. Because media recorders do not yet exist when a scene starts, we provide rendering functions and then fill in the actual recorders once the rendering starts.
  • buffers - an object containing named audio buffers for playback using PlayBuf or LoopBuf. See the atari-speaks example to see how a buffer is used.
  • floatArrays - arrays of 32-bit floats used for wave shaping.
  • periodicWaves - an object of named periodic waves used for creating oscillator nodes.


#FFINumericAudioParameter Source

type FFINumericAudioParameter = { cancel :: Boolean, param :: Number, timeOffset :: Number, transition :: String }

An AudioParameter with the transition field stringly-typed for easier rendering in the FFI, and cancellation represented as a boolean.
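For instance, a parameter that ramps to a target value might look like this (field meanings are inferred from the record type; the "linearRamp" string is an assumption about how transitions are spelled on the FFI side):

```purescript
ramped :: FFINumericAudioParameter
ramped =
  { param: 440.0          -- the target value
  , timeOffset: 0.03      -- seconds ahead of the write head
  , transition: "linearRamp"
  , cancel: false         -- true would cancel scheduled values instead
  }
```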


#MediaRecorder Source

#audioBuffer Source

audioBuffer :: forall bch blen. Pos bch => Pos blen => Int -> Vec bch (Vec blen Number) -> AudioBuffer

Make a multi-channel audio buffer. Each vector into the multi-channel buffer must be the same length.
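For example, a tiny two-channel buffer at 44100 Hz, built with `+>` and `empty` from `Data.Vec` (an illustrative sketch; the sample values are arbitrary):

```purescript
import Data.Vec ((+>), empty)

tinyBuf :: AudioBuffer
tinyBuf =
  audioBuffer 44100
    ( (0.0 +> 0.5 +> 0.0 +> (-0.5) +> empty)   -- left channel
   +> (0.0 +> 0.5 +> 0.0 +> (-0.5) +> empty)   -- right channel, same length
   +> empty
    )
```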

#audioWorkletAddModule Source

audioWorkletAddModule :: AudioContext -> String -> Effect (Promise Unit)

For a given audio context, add the audio worklet module at a given URI.
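Because the result is an Effect (Promise Unit), it is typically awaited in Aff via `toAffE` from the aff-promise package (the module URL here is a placeholder):

```purescript
import Control.Promise (toAffE)
import Effect.Aff (Aff)

loadWorklet :: AudioContext -> Aff Unit
loadWorklet ctx = toAffE (audioWorkletAddModule ctx "./my-processor.js")
```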

#close Source

close :: AudioContext -> Effect Unit

Close an audio context.

#context Source

context :: Effect AudioContext

Make a new audio context.

#decodeAudioDataFromBase64EncodedString Source

decodeAudioDataFromBase64EncodedString :: AudioContext -> String -> Effect (Promise BrowserAudioBuffer)

Given an audio context and a base-64-encoded audio file, decode the content of the string to an audio buffer.

#decodeAudioDataFromUri Source

decodeAudioDataFromUri :: AudioContext -> String -> Effect (Promise BrowserAudioBuffer)

Given an audio context and a URI, decode the content of the URI to an audio buffer.
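As with the other promise-returning helpers, decoding is usually run in Aff (a sketch; the URI is a placeholder):

```purescript
import Control.Promise (toAffE)
import Effect.Aff (Aff)

loadBuffer :: AudioContext -> Aff BrowserAudioBuffer
loadBuffer ctx =
  toAffE (decodeAudioDataFromUri ctx "https://example.com/drum.ogg")
```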

#getAudioClockTime Source

getAudioClockTime :: AudioContext -> Effect Number

Gets the audio clock time from an audio context.

#getMicrophoneAndCamera Source

#isTypeSupported Source

isTypeSupported :: String -> Effect Boolean

Is this MIME type supported by this browser?

#makeAudioBuffer Source

makeAudioBuffer :: AudioContext -> AudioBuffer -> Effect BrowserAudioBuffer

For a given audio context, use an audio buffer to create a browser audio buffer. This is useful when doing DSP in the browser. Note that AudioBuffer is a purescript type whereas BrowserAudioBuffer is an optimized browser-based type. That means that, once you write to BrowserAudioBuffer, it is effectively a blob and its contents cannot be retrieved using the WAGS API.
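Combining this with audioBuffer above, a browser buffer can be produced from in-memory samples (a sketch; the single-sample click is arbitrary):

```purescript
import Data.Vec ((+>), empty)

makeClick :: AudioContext -> Effect BrowserAudioBuffer
makeClick ctx =
  makeAudioBuffer ctx
    -- one mono channel of three samples
    (audioBuffer 44100 ((1.0 +> 0.0 +> 0.0 +> empty) +> empty))
```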

#makeFloatArray Source

makeFloatArray :: Array Number -> Effect BrowserFloatArray

Make a float 32 array. Useful when creating a waveshaper node.
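For example, a soft-clipping curve for a waveshaper node (the cubic shape is just an illustrative choice; any array of numbers works):

```purescript
import Data.Array ((..))
import Data.Int (toNumber)

softClip :: Effect BrowserFloatArray
softClip =
  makeFloatArray
    -- map [0 .. 256] onto [-1.0, 1.0] and apply x - x^3/3
    (map (\i -> let x = toNumber i / 128.0 - 1.0 in x - x * x * x / 3.0) (0 .. 256))
```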

#makePeriodicWave Source

makePeriodicWave :: forall len. Pos len => AudioContext -> Vec len Number -> Vec len Number -> Effect BrowserPeriodicWave

Make a browser periodic wave. A PureScript-ified version of the periodic wave constructor from the Web Audio API. Given an audio context, a vector of real parts of complex numbers, and a vector of imaginary parts of complex numbers, build a periodic wave interpretable by the Web Audio API.
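For example, a wave with two sine partials (real parts zero, imaginary parts setting the partial amplitudes; both vectors must have the same length, which the `len` parameter enforces at compile time):

```purescript
import Data.Vec ((+>), empty)

myWave :: AudioContext -> Effect BrowserPeriodicWave
myWave ctx =
  makePeriodicWave ctx
    (0.0 +> 0.0 +> empty)   -- real parts
    (1.0 +> 0.5 +> empty)   -- imaginary parts
```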

#makeUnitCache Source

makeUnitCache :: Effect Foreign

Create a unit cache. This returns a fresh empty object {} that is used to cache audio units.

#mediaRecorderToUrl Source

mediaRecorderToUrl :: String -> (String -> Effect Unit) -> MediaRecorder -> Effect Unit

For a given MIME type, pass the URL-ified content of a media recorder as a string to a handler.

mediaRecorderToUrl "audio/ogg" setAudioTagUrlToThisContent recorder

#renderAudio Source

renderAudio :: Array (Effect Unit) -> Effect Unit

Render audio from an array of audio rendering instructions. This is conceptually the same as a function `Array (Effect Unit) -> Effect Unit` implemented as `fold` (or `sequence_`). This version is used because it is roughly 2x more computationally efficient, which is important in order to be able to hit audio deadlines.

#stopMediaRecorder Source

stopMediaRecorder :: MediaRecorder -> Effect Unit

Stops a media recorder.