Represents an audio data buffer, used with
XAudio2 audio data is interleaved; data from each channel is adjacent for a particular sample number. For example, if a 4-channel wave is playing into an XAudio2 source voice, the audio data is a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
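As a rough illustration of this layout, the following sketch (a hypothetical helper, not part of the XAudio2 API) shows how a sample for a given frame and channel is located in an interleaved float buffer.

```cpp
// Sketch: indexing interleaved audio, assuming 32-bit float samples.
// For frame f and channel c in an nChannels stream, the sample sits at
// index f * nChannels + c.
inline float GetInterleavedSample(const float* pData, unsigned int nChannels,
                                  unsigned int frame, unsigned int channel)
{
    return pData[frame * nChannels + channel];
}
```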
The AudioBytes and pAudioData members of
Memory allocated to hold a
Contains information about an XAPO for use in an effect chain.
XAPO instances are passed to XAudio2 as
For additional information on using XAPOs with XAudio2 see How to: Create an Effect Chain and How to: Use an XAPO in XAudio2.
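As a minimal sketch of building an effect chain (assuming a submix voice pSubmixVoice already exists and error handling is omitted), the built-in reverb APO can be wrapped in a one-entry chain like this:

```cpp
#include <xaudio2.h>
#include <xaudio2fx.h>

// Sketch: attach the built-in reverb APO to an existing submix voice.
IUnknown* pReverbAPO = nullptr;
HRESULT hr = XAudio2CreateReverb(&pReverbAPO);

XAUDIO2_EFFECT_DESCRIPTOR descriptor = {};
descriptor.pEffect = pReverbAPO;     // the XAPO instance
descriptor.InitialState = TRUE;      // effect starts enabled
descriptor.OutputChannels = 2;       // channels the effect produces

XAUDIO2_EFFECT_CHAIN chain = {};
chain.EffectCount = 1;
chain.pEffectDescriptors = &descriptor;

hr = pSubmixVoice->SetEffectChain(&chain);
pReverbAPO->Release();               // the voice holds its own reference now
```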
The
This interface should be implemented by the XAudio2 client. XAudio2 calls these methods via an interface reference provided by the client, using the XAudio2Create method. Methods in this interface return void, rather than an
See XAudio2 Callbacks for restrictions on callback implementation.
Describes I3DL2 (Interactive 3D Audio Rendering Guidelines Level 2.0) parameters for use in the ReverbConvertI3DL2ToNative function.
There are many preset values defined for the
Describes parameters for use in the reverb APO.
All parameters related to sampling rate or time are relative to a 48kHz voice and must be scaled for use with other sampling rates. For example, setting ReflectionsDelay to 300ms gives a true 300ms delay when the reverb is hosted in a 48kHz voice, but becomes a 150ms delay when hosted in a 24kHz voice.
Percentage of the output that will be reverb. Allowable values are from 0 to 100.
The delay time of the first reflection relative to the direct path. Permitted range is from 0 to 300 milliseconds. Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates; see the remarks section below for additional information.
Delay of reverb relative to the first reflection. Permitted range is from 0 to 85 milliseconds. Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates; see the remarks section below for additional information.
Delay for the left rear output and right rear output. Permitted range is from 0 to 5 milliseconds. Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates; see the remarks section below for additional information.
Delay for the left side output and right side output. Permitted range is from 0 to 5 milliseconds. Note: This value is supported beginning with Windows 10. Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates; see the remarks section below for additional information.
Position of the left input within the simulated space relative to the listener. With PositionLeft set to the minimum value, the left input is placed close to the listener. In this position, early reflections are dominant, and the reverb decay is set back in the sound field and reduced in amplitude. With PositionLeft set to the maximum value, the left input is placed at a maximum distance from the listener within the simulated room. PositionLeft does not affect the reverb decay time (liveness of the room), only the apparent position of the source relative to the listener. Permitted range is from 0 to 30 (no units).
Same as PositionLeft, but affecting only the right input. Permitted range is from 0 to 30 (no units). Note: PositionRight is ignored in mono-in/mono-out mode.
Gives a greater or lesser impression of distance from the source to the listener. Permitted range is from 0 to 30 (no units).
Gives a greater or lesser impression of distance from the source to the listener. Permitted range is from 0 to 30 (no units). Note: PositionMatrixRight is ignored in mono-in/mono-out mode.
Controls the character of the individual wall reflections. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Permitted range is from 0 to 15 (no units).
Controls the character of the individual wall reverberations. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Permitted range is from 0 to 15 (no units).
Adjusts the decay time of low frequencies relative to the decay time at 1 kHz. The values correspond to dB of gain as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Gain (dB) | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 | +1 | +2 | +3 | +4 |
Note: A LowEQGain value of 8 results in the decay time of low frequencies being equal to the decay time at 1 kHz. Permitted range is from 0 to 12 (no units).
Sets the corner frequency of the low pass filter that is controlled by the LowEQGain parameter. The values correspond to frequency in Hz as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
---|---|---|---|---|---|---|---|---|---|---|
Frequency (Hz) | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
Permitted range is from 0 to 9 (no units).
Adjusts the decay time of high frequencies relative to the decay time at 1 kHz. When set to zero, high frequencies decay at the same rate as 1 kHz. When set to maximum value, high frequencies decay at a much faster rate than 1 kHz.
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|---|
Gain (dB) | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 |
Permitted range is from 0 to 8 (no units).
Sets the corner frequency of the high pass filter that is controlled by the HighEQGain parameter. The values correspond to frequency in kHz as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Frequency (kHz) | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | 5.5 | 6 | 6.5 | 7 | 7.5 | 8 |
Permitted range is from 0 to 14 (no units).
Sets the corner frequency of the low pass filter for the room effect. Permitted range is from 20 to 20,000 Hz. Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates; see the remarks section below for additional information.
Sets the pass band intensity level of the low-pass filter for both the early reflections and the late field reverberation. Permitted range is from -100 to 0 dB.
Sets the intensity of the low-pass filter for both the early reflections and the late field reverberation at the corner frequency (RoomFilterFreq). Permitted range is from -100 to 0 dB.
Adjusts the intensity of the early reflections. Permitted range is from -100 to 20 dB.
Adjusts the intensity of the reverberations. Permitted range is from -100 to 20 dB.
Reverberation decay time at 1 kHz. This is the time that a full scale input signal decays by 60 dB. Permitted range is from 0.1 to infinity seconds.
Controls the modal density in the late field reverberation. For colorless spaces, Density should be set to the maximum value (100). As Density is decreased, the sound becomes hollow (comb filtered). This is an effect that can be useful if you are trying to model a silo. Permitted range as a percentage is from 0 to 100.
The apparent size of the acoustic space. Permitted range is from 1 to 100 feet.
If set to TRUE, disables late field reflection calculations. Disabling late field reflection calculations results in a significant CPU time savings. Note: The DirectX SDK versions of XAUDIO2 don't support this member.
Describes parameters for use with the volume meter APO.
This structure is used with the XAudio2
pPeakLevels and pRMSLevels are not returned by
ChannelCount must be set by the application to match the number of channels in the voice the effect is applied to.
Array that will be filled with the maximum absolute level for each channel during a processing pass. The array must be at least ChannelCount × sizeof(float) bytes. pPeakLevels may be
Array that will be filled with the root mean square level for each channel during a processing pass. The array must be at least ChannelCount × sizeof(float) bytes. pRMSLevels may be
Number of channels being processed.
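A minimal sketch of reading the meter, assuming the volume meter APO sits in effect slot 0 of pVoice and the voice has two channels:

```cpp
#include <xaudio2.h>
#include <xaudio2fx.h>

// Sketch: query peak and RMS levels from a volume meter APO in effect slot 0.
float peakLevels[2] = {};
float rmsLevels[2]  = {};

XAUDIO2FX_VOLUMEMETER_LEVELS levels = {};
levels.pPeakLevels  = peakLevels;   // may be NULL if peak values are not needed
levels.pRMSLevels   = rmsLevels;    // may be NULL if RMS values are not needed
levels.ChannelCount = 2;            // must match the voice's channel count

HRESULT hr = pVoice->GetEffectParameters(0, &levels, sizeof(levels));
```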
Represents an audio data buffer, used with
XAudio2 audio data is interleaved; data from each channel is adjacent for a particular sample number. For example, if a 4-channel wave is playing into an XAudio2 source voice, the audio data is a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
The AudioBytes and pAudioData members of
Memory allocated to hold a
Indicates the filter type.
Attenuates (reduces) frequencies above the cutoff frequency.
Attenuates frequencies outside a given range.
Attenuates frequencies below the cutoff frequency.
Attenuates frequencies inside a given range.
Attenuates frequencies above the cutoff frequency. This is a one-pole filter, and
Attenuates frequencies below the cutoff frequency. This is a one-pole filter, and
Contains information about the creation flags, input channels, and sample rate of a voice.
Note the DirectX SDK versions of XAUDIO2 do not support the ActiveFlags member.
Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
XAudio2 constants that specify default parameters, maximum values, and flags.
XAudio2 boundary values
A mastering voice is used to represent the audio output device.
Data buffers cannot be submitted directly to mastering voices, but data submitted to other types of voices must be directed to a mastering voice to be heard.
Returns the channel mask for this voice.
Returns the channel mask for this voice. This corresponds to the dwChannelMask member of the
This method does not return a value.
The pChannelMask argument is a bit-mask of the various channels in the speaker geometry reported by the audio system. This information is needed for the X3DAudioInitialize SpeakerChannelMask parameter.
The X3DAUDIO.H header declares a number of SPEAKER_ positional defines to decode these channel masks.
Note: For the DirectX SDK versions of XAudio2, the channel mask for the output device was obtained via the IXAudio2::GetDeviceDetails method, which doesn't exist in Windows 8 and later.
Returns the channel mask for this voice. (Only valid for XAudio 2.8, returns 0 otherwise)
The pChannelMask argument is a bit-mask of the various channels in the speaker geometry reported by the audio system. This information is needed for the
The X3DAUDIO.H header declares a number of SPEAKER_ positional defines to decode these channel masks.
Note: For the DirectX SDK versions of XAudio2, the channel mask for the output device was obtained via the IXAudio2::GetDeviceDetails method, which doesn't exist in Windows 8 and later.
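A minimal sketch (valid for XAudio 2.8 and later), assuming pMasteringVoice was created earlier; the mask is then handed to X3DAudioInitialize:

```cpp
#include <xaudio2.h>
#include <x3daudio.h>

// Sketch: fetch the output channel mask and use it to initialize X3DAudio.
DWORD channelMask = 0;
pMasteringVoice->GetChannelMask(&channelMask);

X3DAUDIO_HANDLE x3dInstance;
X3DAudioInitialize(channelMask, X3DAUDIO_SPEED_OF_SOUND, x3dInstance);
```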
Use a source voice to submit audio data to the XAudio2 processing pipeline. You must send voice data to a mastering voice to be heard, either directly or through intermediate submix voices.
Returns the frequency adjustment ratio of the voice.
GetFrequencyRatio always returns the voice's actual current frequency ratio. However, this may not match the ratio set by the most recent
For information on frequency ratios, see
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was created.
The SetSourceSampleRate method supports reuse of XAudio2 voices by allowing a voice to play sounds with a variety of sample rates. To use SetSourceSampleRate the voice must have been created without the
The typical use of SetSourceSampleRate is to support voice pooling. For example, to support voice pooling, an application would precreate all the voices it expects to use. Whenever a new sound will be played, the application chooses an inactive voice or, if all voices are busy, picks the least important voice and calls SetSourceSampleRate on the voice with the new sound's sample rate. After SetSourceSampleRate has been called on the voice, the application can immediately start submitting and playing buffers with the new sample rate. This allows the application to avoid the overhead of creating and destroying voices frequently during gameplay.
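A minimal sketch of that pooling pattern, assuming a hypothetical FindIdleVoice helper, a pool created without XAUDIO2_VOICE_NOSRC, and a prepared XAUDIO2_BUFFER:

```cpp
// Sketch: reuse an idle pooled source voice for a sound with a new sample rate.
IXAudio2SourceVoice* pVoice = FindIdleVoice(voicePool);  // hypothetical helper
if (pVoice)
{
    pVoice->SetSourceSampleRate(22050);   // the new sound's sample rate, in Hz
    pVoice->SubmitSourceBuffer(&buffer);  // buffer describing the new sound
    pVoice->Start(0);
}
```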
Starts consumption and processing of audio by the voice. Delivers the result to any connected submix or mastering voices, or to the output device.
Flags that control how the voice is started. Must be 0.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
If the XAudio2 engine is stopped, the voice stops running. However, it remains in the started state, so that it starts running again as soon as the engine starts.
When first created, source voices are in the stopped state. Submix and mastering voices are in the started state.
After Start is called, it has no further effect if called again before
Stops consumption of audio by the current voice.
Flags that control how the voice is stopped. Can be 0 or the following:
Value | Description |
---|---|
Continue emitting effect output after the voice is stopped. |
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
All source buffers that are queued on the voice and the current cursor position are preserved. This allows the voice to continue from where it left off, when it is restarted. The
By default, any pending output from voice effects (for example, reverb tails) is not played. Instead, the voice is immediately rendered silent. The
A voice stopped with the
Stop is always asynchronous, even if called within a callback.
Note: XAudio2 never calls any voice callbacks for a voice if the voice is stopped (even if it was stopped with
Adds a new audio buffer to the voice queue.
Pointer to an
Pointer to an additional
Returns
The voice processes and plays back the buffers in its queue in the order that they were submitted.
The
If the voice is started and has no buffers queued, the new buffer will start playing immediately. If the voice is stopped, the buffer is added to the voice's queue and will be played when the voice starts.
If only part of the given buffer should be played, the PlayBegin and PlayLength fields in the
If all or part of the buffer should be played in a continuous loop, the LoopBegin, LoopLength and LoopCount fields in
If an explicit play region is specified, it must begin and end within the given audio buffer (or, in the compressed case, within the set of samples that the buffer will decode to). In addition, the loop region cannot end past the end of the play region.
Xbox 360 |
---|
For certain audio formats, there may be additional restrictions on the valid endpoints of any play or loop regions; e.g. for XMA buffers, the regions can only begin or end at 128-sample boundaries in the decoded audio. |
The pBuffer reference can be reused or freed immediately after calling this method, but the actual audio data referenced by pBuffer must remain valid until the buffer has been fully consumed by XAudio2 (which is indicated by the
Up to
SubmitSourceBuffer takes effect immediately when called from an XAudio2 callback with an OperationSet of
Xbox 360 |
---|
This method can be called from an Xbox system thread (most other XAudio2 methods cannot). However, a maximum of two source buffers can be submitted from a system thread at a time. |
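A minimal sketch of submitting one PCM buffer on a source voice; pAudioData and audioBytes are assumed to be prepared elsewhere and must stay valid until the buffer has been consumed:

```cpp
#include <xaudio2.h>

// Sketch: submit a single buffer that plays once and ends the stream.
XAUDIO2_BUFFER buffer = {};
buffer.Flags      = XAUDIO2_END_OF_STREAM; // no more data follows this buffer
buffer.AudioBytes = audioBytes;            // size of the audio data, in bytes
buffer.pAudioData = pAudioData;            // must stay valid until OnBufferEnd
buffer.pContext   = nullptr;               // echoed back in voice callbacks

HRESULT hr = pSourceVoice->SubmitSourceBuffer(&buffer);
if (SUCCEEDED(hr))
    hr = pSourceVoice->Start(0);
```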
Removes all pending audio buffers from the voice queue.
Returns
If the voice is started, the buffer that is currently playing is not removed from the queue.
FlushSourceBuffers can be called regardless of whether the voice is currently started or stopped.
For every buffer removed, an OnBufferEnd callback will be made, but none of the other per-buffer callbacks (OnBufferStart, OnStreamEnd or OnLoopEnd) will be made.
FlushSourceBuffers does not change the voice's running state, so if the voice was playing a buffer prior to the call, it will continue to do so, and will deliver all the callbacks for the buffer normally. This means that the OnBufferEnd callback for this buffer will take place after the OnBufferEnd callbacks for the buffers that were removed. Thus, an XAudio2 client that calls FlushSourceBuffers cannot expect to receive OnBufferEnd callbacks in the order in which the buffers were submitted.
No warnings for starvation of the buffer queue will be emitted when the currently playing buffer completes; it is assumed that the client has intentionally removed the buffers that followed it. However, there may be an audio pop if this buffer does not end at a zero crossing. If the application must ensure that the flush operation takes place while a specific buffer is playing (perhaps because the buffer ends with a zero crossing), it must call FlushSourceBuffers from a callback, so that it executes synchronously.
Calling FlushSourceBuffers after a voice is stopped and then submitting new data to the voice resets all of the voice's internal counters.
A voice's state is not considered reset after calling FlushSourceBuffers until the OnBufferEnd callback occurs (if a buffer was previously submitted) or
Notifies an XAudio2 voice that no more buffers are coming after the last one that is currently in its queue.
Returns
Discontinuity suppresses the warnings that normally occur in the debug build of XAudio2 when a voice runs out of audio buffers to play. It is preferable to mark the final buffer of a stream by tagging it with the
Because calling Discontinuity is equivalent to applying the
Stops looping the voice when it reaches the end of the current loop region.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
If the cursor for the voice is not in a loop region, ExitLoop does nothing.
Returns the voice's current state and cursor position data.
Number of audio buffers currently queued on the voice, including the one that is processed currently.
For all encoded formats, including constant bit rate (CBR) formats such as adaptive differential pulse code modulation (ADPCM), SamplesPlayed is expressed in terms of decoded samples. For pulse code modulation (PCM) formats, SamplesPlayed is expressed in terms of either input or output samples. There is a one-to-one mapping from input to output for PCM formats.
If a client needs to get the correlated positions of several voices (that is, to know exactly which sample of a particular voice is playing when a specified sample of another voice is playing), it must make the
Sets the frequency adjustment ratio of the voice.
Frequency adjustment ratio. This value must be between
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
Frequency adjustment is expressed as source frequency / target frequency. Changing the frequency ratio changes the rate audio is played on the voice. A ratio greater than 1.0 will cause the audio to play faster and a ratio less than 1.0 will cause the audio to play slower. Additionally, the frequency ratio affects the pitch of audio on the voice. As an example, a value of 1.0 has no effect on the audio, whereas a value of 2.0 raises pitch by one octave and 0.5 lowers it by one octave.
If SetFrequencyRatio is called specifying a Ratio value outside the valid range, the method will set the frequency ratio to the nearest valid value. A warning also will be generated for debug builds.
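A minimal sketch using the XAudio2SemitonesToFrequencyRatio helper from xaudio2.h, assuming pSourceVoice already exists:

```cpp
#include <xaudio2.h>

// Sketch: raise the pitch by 7 semitones (12 semitones would be a ratio of 2.0).
float ratio = XAudio2SemitonesToFrequencyRatio(7.0f);  // roughly 1.498
pSourceVoice->SetFrequencyRatio(ratio);                // applied immediately
```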
Returns the frequency adjustment ratio of the voice.
Returns the current frequency adjustment ratio if successful.
GetFrequencyRatio always returns the voice's actual current frequency ratio. However, this may not match the ratio set by the most recent
For information on frequency ratios, see
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was created.
The new sample rate the voice should process submitted data at. Valid sample rates are 1 kHz to 200 kHz.
Returns
The SetSourceSampleRate method supports reuse of XAudio2 voices by allowing a voice to play sounds with a variety of sample rates. To use SetSourceSampleRate the voice must have been created without the
The typical use of SetSourceSampleRate is to support voice pooling. For example, to support voice pooling, an application would precreate all the voices it expects to use. Whenever a new sound will be played, the application chooses an inactive voice or, if all voices are busy, picks the least important voice and calls SetSourceSampleRate on the voice with the new sound's sample rate. After SetSourceSampleRate has been called on the voice, the application can immediately start submitting and playing buffers with the new sample rate. This allows the application to avoid the overhead of creating and destroying voices frequently during gameplay.
A submix voice is used primarily for performance improvements and effects processing.
Data buffers cannot be submitted directly to submix voices and will not be audible unless submitted to a mastering voice. A submix voice can be used to ensure that a particular set of voice data is converted to the same format and/or to have a particular effect chain processed on the collective result.
Designates a new set of submix or mastering voices to receive the output of the voice.
This method is only valid for source and submix voices. Mastering voices cannot send audio to another voice.
After calling SetOutputVoices a voice's current send levels will be replaced by a default send matrix. The
It is invalid to call SetOutputVoices from within a callback (that is,
Gets the voice's filter parameters.
GetFilterParameters will fail if the voice was not created with the
GetFilterParameters always returns this voice's actual current filter parameters. However, these may not match the parameters set by the most recent
Sets the overall volume level for the voice.
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB.
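A minimal sketch using the XAudio2DecibelsToAmplitudeRatio helper from xaudio2.h to express attenuation in decibels, assuming pVoice already exists:

```cpp
#include <xaudio2.h>

// Sketch: attenuate a voice by 6 dB (an amplitude multiplier of roughly 0.5).
float amplitude = XAudio2DecibelsToAmplitudeRatio(-6.0f);
pVoice->SetVolume(amplitude);   // OperationSet defaults to XAUDIO2_COMMIT_NOW
```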
Returns information about the creation flags, input channels, and sample rate of a voice.
Designates a new set of submix or mastering voices to receive the output of the voice.
Array of
Returns
This method is only valid for source and submix voices. Mastering voices cannot send audio to another voice.
After calling SetOutputVoices a voice's current send levels will be replaced by a default send matrix. The
It is invalid to call SetOutputVoices from within a callback (that is,
Replaces the effect chain of the voice.
Pointer to an
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
The number of output channels allowed for a voice's effect chain is locked at creation of the voice. If you create the voice with an effect chain, any new effect chain passed to SetEffectChain must have the same number of input and output channels as the original effect chain. If you create the voice without an effect chain, the number of output channels allowed for the effect chain will default to the number of input channels for the voice. If any part of effect chain creation fails, none of it is applied.
After you attach an effect to an XAudio2 voice, XAudio2 takes control of the effect, and the client should not make any further calls to it. The simplest way to ensure this is to release all references to the effect.
It is invalid to call SetEffectChain from within a callback (that is,
The
Enables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
Be careful when you enable an effect while the voice that hosts it is running. Such an action can result in a problem if the effect significantly changes the audio's pitch or volume.
The effects in a given XAudio2 voice's effect chain must consume and produce audio at that voice's processing sample rate. The only aspect of the audio format they can change is the channel count. For example, a reverb effect can convert mono data to 5.1. The client can use the
EnableEffect takes effect immediately when you call it from an XAudio2 callback with an OperationSet of
Disables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
The effects in a given XAudio2 voice's effect chain must consume and produce audio at that voice's processing sample rate. The only aspect of the audio format they can change is the channel count. For example, a reverb effect can convert mono data to 5.1. The client can use the
Disabling an effect immediately removes it from the processing graph. Any pending audio in the effect?such as a reverb tail?is not played. Be careful disabling an effect while the voice that hosts it is running. This can result in an audible artifact if the effect significantly changes the audio's pitch or volume.
DisableEffect takes effect immediately when called from an XAudio2 callback with an OperationSet of
Returns the running state of the effect at a specified position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
GetEffectState always returns the effect's actual current state. However, this may not be the state set by the most recent
Sets parameters for a given effect in the voice's effect chain.
Zero-based index of an effect within the voice's effect chain.
Returns the current values of the effect-specific parameters.
Size of the pParameters array in bytes.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
Fails with E_NOTIMPL if the effect does not support a generic parameter control interface.
The specific effect being used determines the valid size and format of the pParameters buffer. The call will fail if pParameters is invalid or if ParametersByteSize is not exactly the size that the effect expects. The client must take care to direct the SetEffectParameters call to the right effect. If this call is directed to a different effect that happens to accept the same parameter block size, the parameters will be interpreted differently. This may lead to unexpected results.
The memory pointed to by pParameters must not be freed immediately, because XAudio2 will need to refer to it later when the parameters actually are applied to the effect. This happens during the next audio processing pass if the OperationSet argument is
SetEffectParameters takes effect immediately when called from an XAudio2 callback with an OperationSet of
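A minimal sketch updating the built-in reverb's parameters, assuming the reverb is effect 0 in pVoice's chain; only a few fields are shown and the remaining members would also need valid values:

```cpp
#include <xaudio2.h>
#include <xaudio2fx.h>

// Sketch: push new reverb parameters to effect slot 0.
XAUDIO2FX_REVERB_PARAMETERS reverbParams = {};
reverbParams.WetDryMix = 50.0f;   // percentage of the output that is reverb
reverbParams.RoomSize  = 20.0f;   // apparent room size, in feet
reverbParams.DecayTime = 1.5f;    // decay time at 1 kHz, in seconds
reverbParams.Density   = 100.0f;  // maximum modal density
// ... remaining members should also be set to valid values ...

HRESULT hr = pVoice->SetEffectParameters(0, &reverbParams, sizeof(reverbParams));
```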
Returns the current effect-specific parameters of a given effect in the voice's effect chain.
Zero-based index of an effect within the voice's effect chain.
Returns the current values of the effect-specific parameters.
Size, in bytes, of the pParameters array.
Returns
Fails with E_NOTIMPL if the effect does not support a generic parameter control interface.
GetEffectParameters always returns the effect's actual current parameters. However, these may not match the parameters set by the most recent call to
Sets the voice's filter parameters.
Pointer to an
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetFilterParameters will fail if the voice was not created with the
This method is usable only on source and submix voices and has no effect on mastering voices.
Gets the voice's filter parameters.
Pointer to an
GetFilterParameters will fail if the voice was not created with the
GetFilterParameters always returns this voice's actual current filter parameters. However, these may not match the parameters set by the most recent
Sets the filter parameters on one of this voice's sends.
Pointer to an
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetOutputFilterParameters will fail if the send was not created with the
Returns the filter parameters from one of this voice's sends.
Pointer to an
GetOutputFilterParameters will fail if the send was not created with the
Sets the overall volume level for the voice.
Overall volume level to use. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB.
Sets the overall volume level for the voice.
Overall volume level to use. See Remarks for more information on volume levels.
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB.
Sets the volume levels for the voice, per channel.
Number of channels in the voice.
Array containing the new volumes of each channel in the voice. The array must have Channels elements. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetChannelVolumes controls a voice's per-channel output levels and is applied just after the voice's final SRC and before its sends.
This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB.
Returns the volume levels for the voice, per channel.
Confirms the channel count of the voice.
Returns the current volume level of each channel in the voice. The array must have at least Channels elements. See Remarks for more information on volume levels.
These settings are applied after the effect chain is applied. This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB. A volume of 1 means there is no attenuation or gain, 0 means silence, and negative levels can be used to invert the audio's phase. See XAudio2 Volume and Pitch Control for additional information on volume control.
Note: GetChannelVolumes always returns the volume levels most recently set by
Sets the volume level of each channel of the final output for the voice. These channels are mapped to the input channels of a specified destination voice.
Pointer to a destination
Confirms the output channel count of the voice. This is the number of channels that are produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice. The level sent from source channel S to destination channel D is specified in the form pLevelMatrix[SourceChannels × D + S].
For example, when rendering two-channel stereo input into 5.1 output that is weighted toward the front channels, but is absent from the center and low-frequency channels, the matrix might have the values shown in the following table.
Output | Left Input [Array Index] | Right Input [Array Index] |
---|---|---|
Left | 1.0 [0] | 0.0 [1] |
Right | 0.0 [2] | 1.0 [3] |
Front Center | 0.0 [4] | 0.0 [5] |
LFE | 0.0 [6] | 0.0 [7] |
Rear Left | 0.8 [8] | 0.0 [9] |
Rear Right | 0.0 [10] | 0.8 [11] |
Note: The left and right input are fully mapped to the output left and right channels; 80 percent of the left and right input is mapped to the rear left and right channels. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
This method is valid only for source and submix voices, because mastering voices write directly to the device with no matrix mixing.
Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB.
The X3DAudio function X3DAudioCalculate can produce an output matrix for use with SetOutputMatrix based on a sound's position and a listener's position.
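A minimal sketch applying the stereo-to-5.1 matrix shown in the table above, assuming pSourceVoice and a 6-channel pMasteringVoice:

```cpp
// Sketch: send 2-channel output into a 6-channel (5.1) destination.
// Layout is pLevelMatrix[SourceChannels * D + S]; rows below are destinations.
float levelMatrix[2 * 6] =
{
    1.0f, 0.0f,   // front left   <- {left, right}
    0.0f, 1.0f,   // front right
    0.0f, 0.0f,   // front center
    0.0f, 0.0f,   // low-frequency (LFE)
    0.8f, 0.0f,   // rear left
    0.0f, 0.8f,   // rear right
};

pSourceVoice->SetOutputMatrix(pMasteringVoice, 2, 6, levelMatrix);
```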
Gets the volume level of each channel of the final output for the voice. These channels are mapped to the input channels of a specified destination voice.
Pointer specifying the destination
Confirms the output channel count of the voice. This is the number of channels that are produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice. The level sent from source channel S to destination channel D is returned in the form pLevelMatrix[DestinationChannels × S + D]. See Remarks for more information on volume levels.
This method applies only to source and submix voices, because mastering voices write directly to the device with no matrix mixing. Volume levels are expressed as floating-point amplitude multipliers between -2²⁴ and 2²⁴, with a maximum gain of 144.5 dB. A volume level of 1 means there is no attenuation or gain and 0 means silence. Negative levels can be used to invert the audio's phase. See XAudio2 Volume and Pitch Control for additional information on volume control.
See
Destroys the voice. If necessary, stops the voice and removes it from the XAudio2 graph.
If any other voice is currently sending audio to this voice, the method fails.
DestroyVoice waits for the audio processing thread to be idle, so it can take a little while (typically no more than a couple of milliseconds). This is necessary to guarantee that the voice will no longer make any callbacks or read any audio data, so the application can safely free up these resources as soon as the call returns.
To avoid title thread interruptions from a blocking DestroyVoice call, the application can destroy voices on a separate non-critical thread, or the application can use voice pooling strategies to reuse voices rather than destroying them. Note that voices can only be reused with audio that has the same data format and the same number of channels the voice was created with. A voice can play audio data with different sample rates than that of the voice by calling
It is invalid to call DestroyVoice from within a callback (that is,
Returns information about the creation flags, input channels, and sample rate of a voice.
The
This interface should be implemented by the XAudio2 client. XAudio2 calls these methods through an interface reference provided by the client in the
See the XAudio2 Callbacks topic for restrictions on callback implementation.
This is the only XAudio2 interface that is derived from the COM
The DirectX SDK versions of XAUDIO2 included three member functions that are not present in the Windows 8 version: GetDeviceCount, GetDeviceDetails, and Initialize. These enumeration methods are no longer provided and standard Windows Audio APIs should be used for device enumeration instead.
Returns current resource usage details, such as available memory or CPU usage.
For specific information on the statistics returned by GetPerformanceData, see the
Adds an
Returns
This method can be called multiple times, allowing different components or layers of the same application to manage their own engine callback implementations separately.
It is invalid to call RegisterForCallbacks from within a callback (that is,
Removes an
It is invalid to call UnregisterForCallbacks from within a callback (that is,
Creates and configures a source voice.
If successful, returns a reference to the new
Pointer to one of the structures in the table below. This structure contains the expected format for all audio buffers submitted to the source voice. XAudio2 supports PCM and ADPCM voice types.
Format tag | Wave format structure | Size (in bytes) |
---|---|---|
WAVE_FORMAT_PCM | PCMWAVEFORMAT -or- WAVEFORMATEX | 16 -or- 18 |
WAVE_FORMAT_IEEE_FLOAT | PCMWAVEFORMAT | 18 |
WAVE_FORMAT_ADPCM | ADPCMWAVEFORMAT | 50 |
WAVE_FORMAT_EXTENSIBLE | WAVEFORMATEXTENSIBLE | 40 |
XAudio2 supports the following PCM formats.
The number of channels in a source voice must be less than or equal to
Flags that specify the behavior of the source voice. A flag can be 0 or a combination of one or more of the following:
Value | Description |
---|---|
No pitch control is available on the voice. | |
No sample rate conversion is available on the voice. The voice's outputs must have the same sample rate. Note: The | |
The filter effect should be available on this voice. |
Note: The XAUDIO2_VOICE_MUSIC flag is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this argument must be between
If MaxFrequencyRatio is less than 1.0, the voice will use that ratio immediately after being created (rather than the default of 1.0).
Xbox 360 |
---|
For XMA voices, there is one more restriction on the MaxFrequencyRatio argument and the voice's sample rate. The product of these two numbers cannot exceed XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MONO for one-channel voices or XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MULTICHANNEL for voices with any other number of channels. If the value specified for MaxFrequencyRatio is too high for the specified format, the call to CreateSourceVoice fails and produces a debug message. |
Note: You can use the lowest possible MaxFrequencyRatio value to reduce XAudio2's memory usage.
Pointer to a client-provided callback interface,
Pointer to a list of
Pointer to a list of
Returns
See XAudio2 Error Codes for descriptions of XAudio2-specific error codes.
Source voices read audio data from the client. They process the data and send it to the XAudio2 processing graph.
A source voice includes a variable-rate sample rate conversion, to convert data from the source format sample rate to the output rate required for the voice send list. If you use a
You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.
Source voices are always processed before any submix or mastering voices. This means that you do not need a ProcessingStage parameter to control the processing order.
When first created, source voices are in the stopped state.
XAudio2 uses an internal memory pooler for voices with the same format. This means memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).
It is invalid to call CreateSourceVoice from within a callback (that is,
The
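A minimal sketch creating a source voice for 16-bit stereo PCM at 44.1 kHz, assuming pXAudio2 already exists; error handling is omitted:

```cpp
#include <xaudio2.h>

// Sketch: create a source voice for 44.1 kHz, 16-bit, 2-channel PCM data.
WAVEFORMATEX wfx = {};
wfx.wFormatTag      = WAVE_FORMAT_PCM;
wfx.nChannels       = 2;
wfx.nSamplesPerSec  = 44100;
wfx.wBitsPerSample  = 16;
wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

IXAudio2SourceVoice* pSourceVoice = nullptr;
HRESULT hr = pXAudio2->CreateSourceVoice(&pSourceVoice, &wfx);
```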
Creates and configures a submix voice.
On success, returns a reference to the new
Number of channels in the input audio data of the submix voice. InputChannels must be less than or equal to
Sample rate of the input audio data of submix voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between
Flags that specify the behavior of the submix voice. It can be 0 or the following:
Value | Description |
---|---|
The filter effect should be available on this voice. |
An arbitrary number that specifies when this voice is processed with respect to other submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices that include a smaller ProcessingStage value and before all other voices that include a larger ProcessingStage value. Voices that include the same ProcessingStage value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal ProcessingStage value. This prevents audio being lost due to a submix cycle.
Pointer to a list of
Pointer to a list of
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
Submix voices receive the output of one or more source or submix voices. They process the output, and then send it to another submix voice or to a mastering voice.
A submix voice performs a sample rate conversion from the input sample rate to the input rate of its output voices in pSendList. If you specify multiple voice sends, they must all have the same input sample rate.
You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.
When first created, submix voices are in the started state.
XAudio2 uses an internal memory pooler for voices with the same format. This means that memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).
It is invalid to call CreateSubmixVoice from within a callback (that is,
The
Creates and configures a mastering voice.
If successful, returns a reference to the new
Number of channels the mastering voice expects in its input audio. InputChannels must be less than or equal to
You can set InputChannels to
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between
You can set InputSampleRate to
Windows XP defaults to 44100.
Windows Vista and Windows 7 default to the setting specified in the Sound Control Panel. The default for this setting is 44100 (or 48000 if required by the driver).
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio. Specifying the default value of
Pointer to an
The audio stream category to use for this mastering voice.
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
Mastering voices receive the output of one or more source or submix voices. They process the data, and send it to the audio output device.
Typically, you should create a mastering voice with an input sample rate that will be used by the majority of the title's audio content. The mastering voice performs a sample rate conversion from this input sample rate to the actual device output rate.
You cannot create source or submix voices until a mastering voice exists. You cannot destroy a mastering voice if any source or submix voices still exist.
Mastering voices are always processed after all source and submix voices. This means that you need not specify a ProcessingStage parameter to control the processing order.
XAudio2 only allows one mastering voice to exist at once. If you attempt to create more than one voice,
When first created, mastering voices are in the started state.
It is invalid to call CreateMasteringVoice from within a callback (that is,
The
Note that the DirectX SDK XAUDIO2 version of CreateMasteringVoice took a DeviceIndex argument instead of a szDeviceId and a StreamCategory argument. This reflects the changes needed for the standard Windows device enumeration model.
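A minimal sketch of creating the engine and a default mastering voice using the Windows 8+ signature; COM initialization is assumed to have happened already:

```cpp
#include <xaudio2.h>

// Sketch: create the XAudio2 engine and a mastering voice on the default device.
IXAudio2* pXAudio2 = nullptr;
HRESULT hr = XAudio2Create(&pXAudio2, 0, XAUDIO2_DEFAULT_PROCESSOR);

IXAudio2MasteringVoice* pMasteringVoice = nullptr;
if (SUCCEEDED(hr))
    hr = pXAudio2->CreateMasteringVoice(&pMasteringVoice); // default channels/rate
```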
Starts the audio processing thread.
Returns
After StartEngine is called, all started voices begin to consume audio. All enabled effects start running, and the resulting audio is sent to any connected output devices. When XAudio2 is first initialized, the engine is already in the started state.
It is invalid to call StartEngine from within a callback (that is,
Stops the audio processing thread.
When StopEngine is called, all output is stopped immediately. However, the audio graph is left untouched, preserving effect parameters, effect histories (for example, the data stored by a reverb effect in order to emit echoes of a previous sound), voice states, pending source buffers, cursor positions, and so forth. When the engine is restarted, the resulting audio output will be identical (apart from a period of silence) to the output that would have been produced if the engine had never been stopped.
It is invalid to call StopEngine from within a callback (that is,
Atomically applies a set of operations that are tagged with a given identifier.
Identifier of the set of operations to be applied. To commit all pending operations, pass
Returns
CommitChanges does nothing if no operations are tagged with the given identifier.
See the XAudio2 Operation Sets overview about working with CommitChanges and XAudio2 interface methods that may be deferred.
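A minimal sketch of a deferred batch, assuming pSourceVoice and pXAudio2 exist; both changes are tagged with the same operation set and take effect together at CommitChanges:

```cpp
// Sketch: apply a volume change and a start atomically in one audio quantum.
const UINT32 opSet = 1;                 // any nonzero tag chosen by the client
pSourceVoice->SetVolume(0.5f, opSet);   // deferred until CommitChanges
pSourceVoice->Start(0, opSet);          // deferred until CommitChanges
pXAudio2->CommitChanges(opSet);         // both operations applied together
```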
Returns current resource usage details, such as available memory or CPU usage.
On success, reference to an
For specific information on the statistics returned by GetPerformanceData, see the
Changes global debug logging options for XAudio2.
Pointer to a
This parameter is reserved and must be
SetDebugConfiguration sets the debug configuration for the given instance of XAudio2 engine. See
Used with
When streaming an xWMA file a few packets at a time,
In addition, when streaming an xWMA file a few packets at a time, the application should subtract pDecodedPacketCumulativeBytes[PacketCount-1] of the previous packet from all the entries of the currently submitted packet.
The members of
Memory allocated to hold a
XAUDIO 2.8 in Windows 8.x does not support xWMA decoding. Use Windows Media Foundation APIs to perform the decoding from WMA to PCM instead. This functionality is available in the DirectX SDK versions of XAUDIO and in XAUDIO 2.9 in Windows?10.
Contains the new global debug configuration for XAudio2. Used with the SetDebugConfiguration function.
Debugging messages can be completely turned off by initializing
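A minimal sketch enabling error and warning logging, assuming pXAudio2 exists:

```cpp
#include <xaudio2.h>

// Sketch: log errors and warnings, including the source function name.
XAUDIO2_DEBUG_CONFIGURATION debugConfig = {};
debugConfig.TraceMask       = XAUDIO2_LOG_ERRORS | XAUDIO2_LOG_WARNINGS;
debugConfig.BreakMask       = 0;        // never break into the debugger
debugConfig.LogFunctionName = TRUE;

pXAudio2->SetDebugConfiguration(&debugConfig, nullptr);
```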
Defines an effect chain.
Number of effects in the effect chain for the voice.
Array of
Defines filter parameters for a source voice.
Setting
XAUDIO2_FILTER_PARAMETERS FilterParams;
FilterParams.Frequency = 1.0f;
FilterParams.OneOverQ = 1.0f;
FilterParams.Type = LowPassFilter;
The following formulas show the relationship between the members of
yl( n ) = F1 yb( n ) + yl( n - 1 )
yb( n ) = F1 yh( n ) + yb( n - 1 )
yh( n ) = x( n ) - yl( n ) - OneOverQ yb( n - 1 )
yn( n ) = yl( n ) + yh( n )
Where:
yl = lowpass output
yb = bandpass output
yh = highpass output
yn = notch output
F1 = XAUDIO2_FILTER_PARAMETERS.Frequency
OneOverQ = XAUDIO2_FILTER_PARAMETERS.OneOverQ
The
Filter radian frequency calculated as (2 * sin(pi * (desired filter cutoff frequency) / sampleRate)). The frequency must be greater than or equal to 0 and less than or equal to
Reciprocal of Q factor. Controls how quickly frequencies beyond Frequency are dampened. Larger values result in quicker dampening while smaller values cause dampening to occur more gradually. Must be greater than 0 and less than or equal to
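A minimal sketch converting a cutoff in hertz to the radian Frequency value with the XAudio2CutoffFrequencyToRadians helper from xaudio2.h, assuming the voice was created with XAUDIO2_VOICE_USEFILTER:

```cpp
#include <xaudio2.h>

// Sketch: apply a 1 kHz low-pass filter to a 44.1 kHz voice.
XAUDIO2_FILTER_PARAMETERS filterParams = {};
filterParams.Type      = LowPassFilter;
filterParams.Frequency = XAudio2CutoffFrequencyToRadians(1000.0f, 44100);
filterParams.OneOverQ  = 1.0f;

pSourceVoice->SetFilterParameters(&filterParams);
```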
Contains performance information.
CPU cycles are recorded using QueryPerformanceCounter. Use QueryPerformanceFrequency to convert these values.
CPU cycles spent on audio processing since the last call to the
Total CPU cycles elapsed since the last call. Note: This only counts cycles on the CPU on which XAudio2 is running.
Fewest CPU cycles spent on processing any single audio quantum since the last call.
Most CPU cycles spent on processing any single audio quantum since the last call.
Total memory currently in use.
Minimum delay that occurs between the time a sample is read from a source buffer and the time it reaches the speakers.
Windows |
---|
The delay reported is a variable value equal to the rough distance between the last sample submitted to the driver by XAudio2 and the sample currently playing. The following factors can affect the delay: playing multichannel audio on a hardware-accelerated device; the type of audio device (WavePci, WaveCyclic, or WaveRT); and, to a lesser extent, audio hardware implementation. |
Xbox 360 |
---|
The delay reported is a fixed value, which is normally 1,024 samples (21.333 ms at 48 kHz). If XOverrideSpeakerConfig has been called using the XAUDIOSPEAKERCONFIG_LOW_LATENCY flag, the delay reported is 512 samples (10.667 ms at 48 kHz). |
Total audio dropouts since the engine started.
Number of source voices currently playing.
Total number of source voices currently in existence.
Number of submix voices currently playing.
Number of resampler xAPOs currently active.
Number of matrix mix xAPOs currently active.
Windows |
---|
Unsupported. |
Xbox 360 |
---|
Number of source voices decoding XMA data. |
Windows |
---|
Unsupported. |
Xbox 360 |
---|
A voice can use more than one XMA stream. |
Contains information about the creation flags, input channels, and sample rate of a voice.
Note the DirectX SDK versions of XAUDIO2 do not support the ActiveFlags member.
Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
Defines a destination voice that is the target of a send from another voice and specifies whether a filter should be used.
Indicates whether a filter should be used on data sent to the voice pointed to by pOutputVoice. Flags can be 0 or
A reference to an
Defines a set of voices to receive data from a single output voice.
If pSends is not
Setting SendCount to 0 is useful for certain effects such as volume meters or file writers that don't generate any audio output to pass on to another voice.
If needed, a voice will perform a single sample rate conversion, from the voice's input sample rate to the input sample rate of the voice's output voices. Because only one sample rate conversion will be performed, all the voice's output voices must have the same input sample rate. If the input sample rates of the voice and its output voices are the same, no sample rate conversion is performed.
Number of voices to receive the output of the voice. An OutputCount value of 0 indicates the voice should not send output to any voices.
Array of
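A minimal sketch of a two-destination send list (a dry submix plus a filtered reverb send), assuming the voices already exist:

```cpp
#include <xaudio2.h>

// Sketch: route one source voice to two submix voices.
XAUDIO2_SEND_DESCRIPTOR sends[2] = {};
sends[0].Flags        = 0;
sends[0].pOutputVoice = pDrySubmixVoice;
sends[1].Flags        = XAUDIO2_SEND_USEFILTER;  // allow per-send filtering
sends[1].pOutputVoice = pReverbSubmixVoice;

XAUDIO2_VOICE_SENDS sendList = {};
sendList.SendCount = 2;
sendList.pSends    = sends;

pSourceVoice->SetOutputVoices(&sendList);
```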
Returns the voice's current state and cursor position data.
For all encoded formats, including constant bit rate (CBR) formats such as adaptive differential pulse code modulation (ADPCM), SamplesPlayed is expressed in terms of decoded samples. For pulse code modulation (PCM) formats, SamplesPlayed is expressed in terms of either input or output samples. There is a one-to-one mapping from input to output for PCM formats.
If a client needs to get the correlated positions of several voices?that is, to know exactly which sample of a particular voice is playing when a specified sample of another voice is playing?it must make the
Pointer to a buffer context provided in the
Number of audio buffers currently queued on the voice, including the one that is processed currently.
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as marked with the
Creates a new XAudio2 object and returns a reference to its
Returns
The DirectX SDK versions of XAUDIO2 supported a flag
Note: No versions of the DirectX SDK contain the xaudio2.lib import library. DirectX SDK versions use COM to create a new XAudio2 object.
Creates a new reverb audio processing object (APO), and returns a reference to it.
Contains a reference to the reverb APO that is created.
If this function succeeds, it returns
XAudio2CreateReverb creates an effect performing Princeton Digital Reverb. The XAPO effect library (XAPOFX) includes an alternate reverb effect. Use CreateFX to create this alternate effect.
The reverb APO has the following restrictions:
For information about creating new effects for use with XAudio2, see the XAPO Overview.
Windows |
---|
Because XAudio2CreateReverb calls CoCreateInstance on Windows, the application must have called the CoInitializeEx method before calling XAudio2CreateReverb. A typical calling pattern on Windows would be as follows: #ifndef _XBOX CoInitializeEx( |
The xaudio2fx.h header defines the AudioReverb class
class __declspec(uuid("C2633B16-471B-4498-B8C5-4F0959E2EC09")) AudioReverb;
XAudio2CreateReverb returns this object as a reference to a reference to
The reverb uses the
Note: XAudio2CreateReverb is an inline function in xaudio2fx.h that calls CreateAudioReverb:
XAUDIO2FX_STDAPI CreateAudioReverb(_Outptr_ IUnknown** ppApo);
__inline HRESULT XAudio2CreateReverb(_Outptr_ IUnknown** ppApo, UINT32 /*Flags*/ DEFAULT(0))
{
    return CreateAudioReverb(ppApo);
}
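A minimal sketch of the Windows calling pattern referred to above: initialize COM before creating the reverb (the _XBOX guard mirrors the truncated example in the table):

```cpp
#include <objbase.h>
#include <xaudio2fx.h>

// Sketch: typical Windows call sequence for creating the reverb APO.
#ifndef _XBOX
CoInitializeEx(nullptr, COINIT_MULTITHREADED);
#endif

IUnknown* pReverbAPO = nullptr;
HRESULT hr = XAudio2CreateReverb(&pReverbAPO);
```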
Creates a new volume meter audio processing object (APO) and returns a reference to it.
Contains the created volume meter APO.
If this function succeeds, it returns
For information on creating new effects for use with XAudio2, see the XAPO Overview.
Windows |
---|
Because XAudio2CreateVolumeMeter calls CoCreateInstance on Windows, the application must have called the CoInitializeEx method before calling XAudio2CreateVolumeMeter. A typical calling pattern on Windows would be as follows: #ifndef _XBOX CoInitializeEx( |
The xaudio2fx.h header defines the AudioVolumeMeter class
class __declspec(uuid("4FC3B166-972A-40CF-BC37-7DB03DB2FBA3")) AudioVolumeMeter;
XAudio2CreateVolumeMeter returns this object as a reference to a reference to
The volume meter uses the
Note: XAudio2CreateVolumeMeter is an inline function in xaudio2fx.h that calls CreateAudioVolumeMeter:
XAUDIO2FX_STDAPI CreateAudioVolumeMeter(_Outptr_ IUnknown** ppApo);
__inline HRESULT XAudio2CreateVolumeMeter(_Outptr_ IUnknown** ppApo, UINT32 /*Flags*/ DEFAULT(0))
{
    return CreateAudioVolumeMeter(ppApo);
}
Specifies directionality for a single-channel non-LFE emitter by scaling DSP behavior with respect to the emitter's orientation.
For a detailed explanation of sound cones see Sound Cones.
Inner cone angle in radians. This value must be within 0.0f to X3DAUDIO_2PI.
Outer cone angle in radians. This value must be within InnerAngle to X3DAUDIO_2PI.
Volume scaler on/within inner cone. This value must be within 0.0f to 2.0f.
Volume scaler on/beyond outer cone. This value must be within 0.0f to 2.0f.
LPF direct-path or reverb-path coefficient scaler on/within inner cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
LPF direct-path or reverb-path coefficient scaler on or beyond outer cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
Reverb send level scaler on or within inner cone. This must be within 0.0f to 2.0f.
Reverb send level scaler on/beyond outer cone. This must be within 0.0f to 2.0f.
Defines a DSP setting at a given normalized distance.
Normalized distance. This must be within 0.0f to 1.0f.
DSP control setting.
Defines an explicit piecewise curve made up of linear segments, directly defining DSP behavior with respect to normalized distance.
Number of distance curve points. There must be two or more points since all curves must have at least two endpoints defining values at 0.0f and 1.0f normalized distance, respectively.
Receives the results from a call to X3DAudioCalculate.
The following members must be initialized before passing this structure to the X3DAudioCalculate function:
The following members are returned by passing this structure to the X3DAudioCalculate function:
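A minimal sketch of initializing the required members and consuming the results, assuming x3dInstance, listener, and emitter were set up elsewhere for a mono emitter rendered into 6 output channels:

```cpp
#include <x3daudio.h>

// Sketch: run X3DAudioCalculate for a 1-channel emitter into 6 output channels.
float matrix[1 * 6] = {};
X3DAUDIO_DSP_SETTINGS dsp = {};
dsp.SrcChannelCount     = 1;
dsp.DstChannelCount     = 6;
dsp.pMatrixCoefficients = matrix;

X3DAudioCalculate(x3dInstance, &listener, &emitter,
                  X3DAUDIO_CALCULATE_MATRIX | X3DAUDIO_CALCULATE_DOPPLER, &dsp);

// Feed the results back into XAudio2 (voices assumed to exist).
pSourceVoice->SetOutputMatrix(pMasteringVoice, 1, 6, matrix);
pSourceVoice->SetFrequencyRatio(dsp.DopplerFactor);
```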
Defines a single-point or multiple-point 3D audio source that is used with an arbitrary number of sound channels.
The parameter type
X3DAudio uses a left-handed Cartesian coordinate system, with values on the x-axis increasing from left to right, on the y-axis from bottom to top, and on the z-axis from near to far. Azimuths are measured clockwise from a given reference direction. To use X3DAudio with right-handed coordinates, you must negate the .z element of OrientFront, OrientTop, Position, and Velocity.
For user-defined distance curves, the distance field of the first point must be 0.0f and the distance field of the last point must be 1.0f.
If an emitter moves beyond a distance of (CurveDistanceScaler × 1.0f), the last point on the curve is used to compute the volume output level. The last point is determined by the following:
Emitter.pVolumeCurve->pPoints[PointCount-1].DSPSetting
Pointer to a sound cone. Used only with single-channel emitters for matrix, LPF (both direct and reverb paths), and reverb calculations.
Orientation of the front direction. This value must be orthonormal with OrientTop. OrientFront must be normalized when used. For single-channel emitters without cones, OrientFront is only used for emitter angle calculations. For multi-channel emitters, or for single-channel emitters with cones, OrientFront is used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Orientation of the top direction. This value must be orthonormal with OrientFront. OrientTop is only used with multi-channel emitters for matrix calculations.
Position in user-defined world units. This value does not affect Velocity.
Velocity vector in user-defined world units per second. This value is used only for Doppler calculations. It does not affect Position.
Value to be used for the inner radius calculations. If InnerRadius is 0, then no inner radius is used, but InnerRadiusAngle may still be used. This value must be between 0.0f and FLT_MAX.
Value to be used for the inner radius angle calculations. This value must be between 0.0f and X3DAUDIO_PI/4.0.
Number of emitters defined by the
Distance from Position that channels will be placed if ChannelCount is greater than 1. ChannelRadius is only used with multi-channel emitters for matrix calculations. Must be greater than or equal to 0.0f.
Table of channel positions, expressed as an azimuth in radians along the channel radius with respect to the front orientation vector in the plane orthogonal to the top orientation vector. An azimuth of X3DAUDIO_2PI specifies a channel is a low-frequency effects (LFE) channel. LFE channels are positioned at the emitter base and are calculated with respect to pLFECurve only, never pVolumeCurve. pChannelAzimuths must have at least ChannelCount elements, but can be
Volume-level distance curve, which is used only for matrix calculations.
LFE roll-off distance curve, or
Low-pass filter (LPF) direct-path coefficient distance curve, or
LPF reverb-path coefficient distance curve, or
Reverb send level distance curve, or
Curve distance scaler that is used to scale normalized distance curves to user-defined world units, and/or to exaggerate their effect. This does not affect any other calculations. The value must be within the range FLT_MIN to FLT_MAX. CurveDistanceScaler is only used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Doppler shift scaler that is used to exaggerate Doppler shift effect. DopplerScaler is only used for Doppler calculations and does not affect any other calculations. The value must be within the range 0.0f to FLT_MAX.
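A minimal single-channel emitter sketch under the conventions above (left-handed coordinates, no cone, default curves; the numeric values are assumptions for illustration):

X3DAUDIO_EMITTER emitter = {};
emitter.OrientFront         = { 0.0f, 0.0f, 1.0f };   // must be normalized
emitter.OrientTop           = { 0.0f, 1.0f, 0.0f };   // orthonormal with OrientFront
emitter.Position            = { 0.0f, 0.0f, 0.0f };
emitter.Velocity            = { 0.0f, 0.0f, 0.0f };
emitter.ChannelCount        = 1;                      // single-point emitter
emitter.CurveDistanceScaler = 10.0f;                  // 1.0f normalized distance == 10 world units
emitter.DopplerScaler       = 1.0f;                   // unexaggerated Doppler
// pCone, pVolumeCurve, pLFECurve, and the other curve pointers left NULL to use defaults.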
Defines a point of 3D audio reception.
X3DAudio uses a left-handed Cartesian coordinate system, with values on the x-axis increasing from left to right, on the y-axis from bottom to top, and on the z-axis from near to far. Azimuths are measured clockwise from a given reference direction. To use X3DAudio with right-handed coordinates, you must negate the .z element of OrientFront, OrientTop, Position, and Velocity.
The parameter type
A listener's front and top vectors must be orthonormal. To be considered orthonormal, a pair of vectors must have a magnitude of 1 ± 1x10^-5 and a dot product of 0 ± 1x10^-5.
Orientation of front direction. When pCone is
Orientation of top direction, used only for matrix and delay calculations. This value must be orthonormal with OrientFront when used.
Position in user-defined world units. This value does not affect Velocity.
Velocity vector in user-defined world units per second, used only for Doppler calculations. This value does not affect Position.
Pointer to an
Calculates DSP settings with respect to 3D parameters.
3D audio instance handle. Call X3DAudioInitialize to initialize this handle before passing it to X3DAudioCalculate.
Pointer to an X3DAUDIO_LISTENER structure representing the point of reception.
Pointer to an X3DAUDIO_EMITTER structure representing the sound source.
Value | Description
---|---
X3DAUDIO_CALCULATE_MATRIX | Enables matrix coefficient table calculation.
X3DAUDIO_CALCULATE_DELAY | Enables delay time array calculation (stereo only).
X3DAUDIO_CALCULATE_LPF_DIRECT | Enables low pass filter (LPF) direct-path coefficient calculation.
X3DAUDIO_CALCULATE_LPF_REVERB | Enables LPF reverb-path coefficient calculation.
X3DAUDIO_CALCULATE_REVERB | Enables reverb send level calculation.
X3DAUDIO_CALCULATE_DOPPLER | Enables Doppler shift factor calculation.
X3DAUDIO_CALCULATE_EMITTER_ANGLE | Enables emitter-to-listener interior angle calculation.
X3DAUDIO_CALCULATE_ZEROCENTER | Fills the center channel with silence. This flag allows you to keep a 6-channel matrix so you do not have to remap the channels, but the center channel will be silent. This flag is only valid if you also set X3DAUDIO_CALCULATE_MATRIX.
X3DAUDIO_CALCULATE_REDIRECT_TO_LFE | Applies an equal mix of all source channels to a low frequency effect (LFE) destination channel. It only applies to matrix calculations with a source that does not have an LFE channel and a destination that does have an LFE channel. This flag is only valid if you also set X3DAUDIO_CALCULATE_MATRIX.
Pointer to an X3DAUDIO_DSP_SETTINGS structure that receives the calculation results.
You typically call X3DAudioCalculate periodically (for example, once per rendering frame) for each active emitter, then apply the resulting settings to the corresponding source voice.
Important: The listener and emitter values must be valid. Floating-point specials (NaN, QNaN, +INF, -INF) can cause the entire audio output to go silent if introduced into a running audio graph.
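A hedged sketch of the per-frame flow (x3dInstance, listener, emitter, pSourceVoice, and pMasterVoice are assumed to be set up elsewhere; the results are applied with SetOutputMatrix and SetFrequencyRatio):

// Mono emitter mixed into a stereo output.
FLOAT32 matrix[1 * 2] = {};                 // SrcChannelCount * DstChannelCount coefficients
X3DAUDIO_DSP_SETTINGS dsp = {};
dsp.SrcChannelCount     = 1;
dsp.DstChannelCount     = 2;
dsp.pMatrixCoefficients = matrix;

X3DAudioCalculate(x3dInstance, &listener, &emitter,
                  X3DAUDIO_CALCULATE_MATRIX | X3DAUDIO_CALCULATE_DOPPLER,
                  &dsp);

pSourceVoice->SetOutputMatrix(pMasterVoice, 1, 2, dsp.pMatrixCoefficients);
pSourceVoice->SetFrequencyRatio(dsp.DopplerFactor);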
Sets all global 3D audio constants.
Assignment of channels to speaker positions. This value must not be zero. The only permissible value on Xbox 360 is SPEAKER_XBOX.
Speed of sound, in user-defined world units per second. Use this value only for Doppler calculations. It must be greater than or equal to FLT_MIN.
3D audio instance handle. Use this handle when you call X3DAudioCalculate.
This function does not return a value.
X3DAUDIO_HANDLE is an opaque data structure. Because the operating system doesn't allocate any additional storage for the 3D audio instance handle, you don't need to free or close it.
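A minimal one-time setup sketch (SPEAKER_STEREO is assumed here; in practice the channel mask should match the mastering voice's channel configuration):

X3DAUDIO_HANDLE x3dInstance;
// X3DAUDIO_SPEED_OF_SOUND is the header's default (in meters per second); any
// consistent user-defined world unit works as long as velocities use the same unit.
X3DAudioInitialize(SPEAKER_STEREO, X3DAUDIO_SPEED_OF_SOUND, x3dInstance);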
Describes the contents of a stream buffer.
This metadata can be used to implement optimizations that require knowledge of a stream buffer's contents. For example, XAPOs that always produce silent output from silent input can check the flag on the input stream buffer to determine if any signal processing is necessary. If silent, the XAPO can simply set the flag on the output stream buffer to silent and return, thus averting the work of processing silent data.
Likewise, XAPOs that receive valid input data, but generate silence (for any reason), may set the output stream buffer's flag accordingly, rather than writing silent samples to the buffer.
These flags represent what should be assumed is in the respective buffer. The flags may not reflect what is actually stored in memory. For example, the
Stream buffer contains only silent samples.
Stream buffer contains audio data to be processed.
Initialization parameters for use with the FXECHO XAPOFX.
Use of this structure is optional. The default MaxDelay is
Parameters for use with the FXECHO XAPOFX.
Echo only supports FLOAT32 audio formats.
Parameters for use with the FXEQ XAPO.
Each band ranges from FrequencyCenterN - (BandwidthN / 2) to FrequencyCenterN + (BandwidthN / 2).
Center frequency in Hz for band 0. Must be between
The boost or decrease to frequencies in band 0. Must be between
Width of band 0. Must be between
Center frequency in Hz for band 1. Must be between
The boost or decrease to frequencies in band 1. Must be between
Width of band 1. Must be between
Center frequency in Hz for band 2. Must be between
The boost or decrease to frequencies in band 2. Must be between
Width of band 2. Must be between
Center frequency in Hz for band 3. Must be between
The boost or decrease to frequencies in band 3. Must be between
Width of band 3. Must be between
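A hedged sketch of filling in the four bands and pushing them to a voice that already hosts the FXEQ effect (the band values are illustrative; pVoice and effect index 0 are assumptions):

FXEQ_PARAMETERS eq = {};
eq.FrequencyCenter0 = 100.0f;    eq.Gain0 = 1.5f;   eq.Bandwidth0 = 1.0f;   // low-frequency boost
eq.FrequencyCenter1 = 800.0f;    eq.Gain1 = 1.0f;   eq.Bandwidth1 = 1.0f;   // unity gain (no change)
eq.FrequencyCenter2 = 3000.0f;   eq.Gain2 = 1.0f;   eq.Bandwidth2 = 1.0f;
eq.FrequencyCenter3 = 10000.0f;  eq.Gain3 = 1.5f;   eq.Bandwidth3 = 2.0f;   // broad high boost
pVoice->SetEffectParameters(0, &eq, sizeof(eq));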
Parameters for use with the FXMasteringLimiter XAPO.
Parameters for use with the FXReverb XAPO.
Controls the character of the individual wall reflections. Set to the minimum value to simulate a hard flat surface and to the maximum value to simulate a diffuse surface. Value must be between
Size of the room. Value must be between
The interface for an Audio Processing Object that can be used in an XAudio2 effect chain.
The interface for an Audio Processing Object that can be used in an XAudio2 effect chain.
Returns the registration properties of an XAPO.
Receives a reference to a
Returns
Queries if a specific input format is supported for a given output format.
Output format.
Input format to check for being supported.
If not
Returns
The
Queries if a specific output format is supported for a given input format.
Input format.
Output format to check for being supported.
If not
Returns
The
Performs any effect-specific initialization.
Effect-specific initialization parameters, may be
Size of pData in bytes, may be 0 if pData is
Returns
The contents of pData are defined by a given XAPO. Immutable parameters (constant for the lifetime of the XAPO) should be set in this method. Once initialized, an XAPO cannot be initialized again. An XAPO should be initialized before passing it to XAudio2 as part of an effect chain.
Note: XAudio2 does not call this method; it should be called by the client before passing the XAPO to XAudio2.
Resets variables dependent on frame history.
Constant and locked parameters such as the input and output formats remain unchanged. Variables set by
For example, an effect with delay should zero out its delay line during this method, but should not reallocate anything as the XAPO remains locked with a constant input and output configuration.
XAudio2 only calls this method if the XAPO is locked.
This method is called from the realtime thread and should not block.
Called by XAudio2 to lock the input and output configurations of an XAPO allowing it to do any final initialization before Process is called on the realtime thread.
Returns
Once locked, the input and output configuration and any other locked parameters remain constant until UnLockForProcess is called. After an XAPO is locked, further calls to LockForProcess have no effect until the UnLockForProcess function is called.
An XAPO indicates what specific formats it supports through its implementation of the IsInputFormatSupported and IsOutputFormatSupported methods. An XAPO should assert the input and output configurations are supported and that any required effect-specific initialization is complete. The IsInputFormatSupported, IsOutputFormatSupported, and Initialize methods should be used as necessary before calling this method.
Because Process is a nonblocking method, all internal memory buffers required for Process should be allocated in LockForProcess.
Process is never called before LockForProcess returns successfully.
LockForProcess is called directly by XAudio2 and should not be called by the client code.
Deallocates variables that were allocated with the LockForProcess method.
Unlocking an XAPO instance allows it to be reused with different input and output formats.
Runs the XAPO's digital signal processing (DSP) code on the given input and output buffers.
Number of elements in pInputProcessParameters.
Note: XAudio2 currently supports only one input stream and one output stream. Input array of XAPO_PROCESS_BUFFER_PARAMETERS structures.
Number of elements in pOutputProcessParameters.
Note: XAudio2 currently supports only one input stream and one output stream. Output array of XAPO_PROCESS_BUFFER_PARAMETERS structures.
TRUE to process normally; FALSE to process thru (pass the input to the output without applying the effect).
Implementations of this function should not block, as the function is called from the realtime audio processing thread.
All code that could cause a delay, such as format validation and memory allocation, should be put in the
For in-place processing, the pInputProcessParameters parameter will not necessarily be the same as pOutputProcessParameters. Rather, their pBuffer members will point to the same memory.
Multiple input and output buffers may be used with in-place XAPOs, though the input buffer count must equal the output buffer count. For in-place processing when multiple input and output buffers are used, the XAPO may assume the number of input buffers equals the number of output buffers.
In addition to writing to the output buffer, as appropriate, an XAPO is responsible for setting the output stream's buffer flags and valid frame count.
When IsEnabled is
When writing a Process method, it is important to note XAudio2 audio data is interleaved, which means data from each channel is adjacent for a particular sample number. For example, if there was a 4-channel wave playing into an XAudio2 source voice, the audio data would be a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
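As a hedged sketch of that layout, the loop below scales interleaved FLOAT32 frames and propagates the stream buffer flags described earlier; the helper name, channel count, and gain parameter are assumptions, not part of the IXAPO contract:

void ScaleInterleaved(const XAPO_PROCESS_BUFFER_PARAMETERS* pIn,
                      XAPO_PROCESS_BUFFER_PARAMETERS* pOut,
                      UINT32 channels, float gain)
{
    if (pIn->BufferFlags == XAPO_BUFFER_SILENT)
    {
        pOut->BufferFlags     = XAPO_BUFFER_SILENT;   // nothing to process
        pOut->ValidFrameCount = pIn->ValidFrameCount;
        return;
    }

    const float* src = static_cast<const float*>(pIn->pBuffer);
    float*       dst = static_cast<float*>(pOut->pBuffer);   // same memory for in-place XAPOs

    // Interleaved layout: frame 0 = ch0, ch1, ..., chN-1; then frame 1, and so on.
    for (UINT32 frame = 0; frame < pIn->ValidFrameCount; ++frame)
        for (UINT32 ch = 0; ch < channels; ++ch)
            dst[frame * channels + ch] = src[frame * channels + ch] * gain;

    pOut->BufferFlags     = XAPO_BUFFER_VALID;
    pOut->ValidFrameCount = pIn->ValidFrameCount;
}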
Returns the number of input frames required to generate the given number of output frames.
The number of output frames desired.
Returns the number of input frames required.
XAudio2 calls this method to determine what size input buffer an XAPO requires to generate the given number of output frames. This method only needs to be called once while an XAPO is locked. CalcInputFrames is only called by XAudio2 if the XAPO is locked.
This function should not block, because it may be called from the realtime audio processing thread.
Returns the number of output frames that will be generated from a given number of input frames.
The number of input frames.
Returns the number of output frames that will be produced.
XAudio2 calls this method to determine how large of an output buffer an XAPO will require for a certain number of input frames. CalcOutputFrames is only called by XAudio2 if the XAPO is locked.
This function should not block, because it may be called from the realtime audio processing thread.
An optional interface that allows an XAPO to use effect-specific parameters.
An optional interface that allows an XAPO to use effect-specific parameters.
Sets effect-specific parameters.
Effect-specific parameter block.
Size of pParameters, in bytes.
The data in pParameters is completely effect-specific and determined by the implementation of the
SetParameters can only be called on the real-time audio processing thread; no synchronization between SetParameters and the XAPO's Process method is needed.
Gets the current values for any effect-specific parameters.
Receives an effect-specific parameter block.
Size of pParameters, in bytes.
The data in pParameters is completely effect-specific and determined by the implementation of the
Unlike SetParameters, XAudio2 does not call this method on the realtime audio processing thread. Thus, the XAPO must protect variables shared with
XAudio2 calls this method from the
This method may block and should never be called from the realtime audio processing thread; instead, get the current parameters from CXAPOParametersBase::BeginProcess.
Defines stream buffer parameters that may change from one call to the next. Used with the Process method.
Although the format and maximum size values of a particular stream buffer are constant, as defined by the
Defines stream buffer parameters that remain constant while an XAPO is locked. Used with the
The byte size of the respective stream buffer must be at least MaxFrameCount × pFormat->nBlockAlign bytes. For example, with a stereo FLOAT32 format (nBlockAlign = 8 bytes) and a MaxFrameCount of 512 frames, the buffer must be at least 4096 bytes.
Describes general characteristics of an XAPO. Used with