Enables the application to defer the creation of an object. This interface is exposed by activation objects.
Typically, the application calls some function that returns an IMFActivate reference and then passes that reference to another component. The component that receives the reference creates the object by calling ActivateObject.
The class identifier that is associated with the activatable runtime class.
An optional friendly name for the activation object. The friendly name is stored in the object's
To create the Windows Runtime object, call ActivateObject.
Creates the object associated with this activation object.
Interface identifier (IID) of the requested interface.
A reference to the requested interface. The caller must release the interface.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or DetachObject.
Creates the object associated with this activation object. The riid is provided via reflection on the COM object type.
A reference to the requested interface. The caller must release the interface.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or DetachObject.
Creates the object associated with this activation object.
Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or DetachObject.
Shuts down the created object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If you create an object by calling ActivateObject, call ShutdownObject when you are done using the object.
The component that calls ActivateObject, not the component that creates the activation object, is responsible for calling ShutdownObject. For example, in a typical playback application, the application creates activation objects for the media sinks, but the Media Session calls ActivateObject. Therefore, the Media Session, not the application, calls ShutdownObject.
After ShutdownObject is called, the activation object releases all of its internal references to the created object. If you call ActivateObject again, the activation object will create a new instance of the object.
Detaches the created object from the activation object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_NOTIMPL | Not implemented. |
The activation object releases all of its internal references to the created object. If you call ActivateObject again, the activation object will create a new instance of the object.
The DetachObject method does not shut down the created object. If the DetachObject method succeeds, the client must shut down the created object. This rule applies only to objects that have a shutdown method or that support the IMFShutdown interface.
Implementation of this method is optional. If the activation object does not support this method, the method returns E_NOTIMPL.
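The lifetime contract described above can be sketched with a small portable mock. The names Activation and CreatedObject below are illustrative stand-ins for IMFActivate and the Media Foundation object it creates, not the real types:

```cpp
#include <memory>
#include <cassert>

// Illustrative stand-in for a Media Foundation object that has a
// shutdown method (for example, a media source or media sink).
struct CreatedObject {
    bool shutDown = false;
    void Shutdown() { shutDown = true; }
};

// Illustrative stand-in for an activation object.
class Activation {
    std::shared_ptr<CreatedObject> cached_;  // internal reference to the created object
public:
    // First call creates the object; subsequent calls return the same
    // instance until ShutdownObject or DetachObject is called.
    std::shared_ptr<CreatedObject> ActivateObject() {
        if (!cached_) cached_ = std::make_shared<CreatedObject>();
        return cached_;
    }
    // Shuts down the created object and releases the internal reference;
    // a later ActivateObject creates a new instance.
    void ShutdownObject() {
        if (cached_) { cached_->Shutdown(); cached_.reset(); }
    }
    // Releases the internal reference WITHOUT shutting the object down;
    // the caller who still holds the object must shut it down.
    void DetachObject() { cached_.reset(); }
};
```

Note how DetachObject in this sketch simply drops the internal reference without calling Shutdown, which is why the client, not the activation object, becomes responsible for shutdown after a successful detach.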
Provides information about the result of an asynchronous operation.
Use this interface to complete an asynchronous operation. You get a reference to this interface when your callback object's Invoke method is called.
If you are implementing an asynchronous method, call MFCreateAsyncResult to create an instance of this object.
Any custom implementation of this interface must inherit the MFASYNCRESULT structure.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The caller of the asynchronous method specifies the state object, and can use it for any caller-defined purpose. The state object can be NULL.
If you are implementing an asynchronous method, set the state object through the punkState parameter of the MFCreateAsyncResult function.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets or sets the status of the asynchronous operation.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The operation completed successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Returns an object associated with the asynchronous operation. The type of object, if any, depends on the asynchronous method that was called.
Receives a reference to the object's IUnknown interface. The caller must release the interface.
Typically, this object is used by the component that implements the asynchronous method. It provides a way for the function that invokes the callback to pass information to the asynchronous End... method that completes the operation.
If you are implementing an asynchronous method, you can set the object through the punkObject parameter of the MFCreateAsyncResult function.
If the asynchronous result object's internal IUnknown pointer is NULL, the method returns E_POINTER.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the state object specified by the caller in the asynchronous Begin method.
Receives a reference to the state object's IUnknown interface. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_POINTER | There is no state object associated with this asynchronous result. |
The caller of the asynchronous method specifies the state object, and can use it for any caller-defined purpose. The state object can be NULL.
If you are implementing an asynchronous method, set the state object through the punkState parameter of the MFCreateAsyncResult function.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the status of the asynchronous operation.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The operation completed successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the status of the asynchronous operation.
The status of the asynchronous operation.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If you implement an asynchronous method, call SetStatus to set the status code for the operation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns an object associated with the asynchronous operation. The type of object, if any, depends on the asynchronous method that was called.
Receives a reference to the object's IUnknown interface. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_POINTER | There is no object associated with this asynchronous result. |
Typically, this object is used by the component that implements the asynchronous method. It provides a way for the function that invokes the callback to pass information to the asynchronous End... method that completes the operation.
If you are implementing an asynchronous method, you can set the object through the punkObject parameter of the MFCreateAsyncResult function.
If the asynchronous result object's internal IUnknown pointer is NULL, the method returns E_POINTER.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the state object specified by the caller in the asynchronous Begin method, without incrementing the object's reference count.
Returns a reference to the state object's IUnknown interface, or NULL if no state object was set. The caller does not release the interface.
This method cannot be called remotely.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
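As a rough sketch of how the pieces above fit together, the following portable mock mimics the result object's three slots (status, object, state) and the Begin/Invoke/End flow. AsyncResult, BeginOperation, and EndOperation are hypothetical names, not the real IMFAsyncResult API:

```cpp
#include <functional>
#include <memory>
#include <cassert>

using HResult = long;
constexpr HResult kOk = 0;  // stand-in for S_OK

// Illustrative stand-in for IMFAsyncResult: carries the operation status,
// the implementer's object, and the caller's state object.
struct AsyncResult {
    HResult status = kOk;
    std::shared_ptr<void> object;  // set by the component implementing the method
    std::shared_ptr<void> state;   // set by the caller of the Begin... method
};

// The Begin... method stores the caller's state (the punkState analogue),
// performs the work, records the status, and invokes the callback.
void BeginOperation(std::shared_ptr<void> callerState,
                    std::function<void(AsyncResult&)> callback) {
    AsyncResult result;
    result.state = std::move(callerState);
    result.status = kOk;   // set when the operation completes
    callback(result);      // the Invoke analogue
}

// The End... method completes the operation by reading the stored status.
HResult EndOperation(const AsyncResult& result) {
    return result.status;
}
```

In the real API the callback receives the result object in Invoke and passes it straight to the corresponding End... method, exactly as the lambda does here.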
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The IMFByteStream interface supports the typical stream operations, such as reading, writing, and seeking.
The following functions return IMFByteStream references.
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the characteristics of the byte stream.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the length of the stream.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the current read or write position in the stream.
The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Queries whether the current position has reached the end of the stream.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Reads data from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
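The "check the count, not the return code" rule for detecting end-of-stream can be illustrated with an in-memory stand-in for the stream. MemoryStream and ReadAll below are hypothetical helpers, not the real IMFByteStream implementation:

```cpp
#include <algorithm>
#include <cstring>
#include <string>
#include <cassert>

// Illustrative in-memory stand-in for IMFByteStream::Read: copies at most
// cb bytes from the current position and reports the count via pcbRead.
// Reaching end-of-stream is NOT an error; *pcbRead is simply short or zero.
struct MemoryStream {
    std::string data;
    size_t pos = 0;
    void Read(unsigned char* pb, unsigned long cb, unsigned long* pcbRead) {
        size_t n = std::min<size_t>(cb, data.size() - pos);
        std::memcpy(pb, data.data() + pos, n);
        pos += n;  // Read advances the current position
        *pcbRead = static_cast<unsigned long>(n);
    }
};

// Drain the stream: loop until a read reports zero bytes.
size_t ReadAll(MemoryStream& s) {
    unsigned char buf[4];
    unsigned long read = 0;
    size_t total = 0;
    do {
        s.Read(buf, sizeof(buf), &read);
        total += read;
    } while (read > 0);
    return total;
}
```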
Applies to: desktop apps | Metro style apps
Begins an asynchronous read operation from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the IMFAsyncCallback interface of a callback object. The caller must implement this interface.
Pointer to the IUnknown interface of a state object, defined by the caller. This parameter can be NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When all of the data has been read into the buffer, the callback object's Invoke method is called. At that point, call EndRead to complete the asynchronous request.
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous read operation.
Pointer to the IMFAsyncResult interface. Pass in the same reference that your callback object received in the Invoke method.
Call this method after the callback object's Invoke method is called.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Writes data to the stream.
Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Begins an asynchronous write operation to the stream.
Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the IMFAsyncCallback interface of a callback object. The caller must implement this interface.
Pointer to the IUnknown interface of a state object, defined by the caller. This parameter can be NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When all of the data has been written to the stream, the callback object's Invoke method is called. At that point, call EndWrite to complete the asynchronous request.
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous write operation.
Pointer to the IMFAsyncResult interface. Pass in the same reference that your callback object received in the Invoke method.
Call this method after the callback object's Invoke method is called.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Moves the current position in the stream by a specified offset.
Specifies the origin of the seek as a member of the MFBYTESTREAM_SEEK_ORIGIN enumeration.
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.

| Value | Meaning |
|---|---|
| MFBYTESTREAM_SEEK_FLAG_CANCEL_PENDING_IO | All pending I/O requests are canceled after the seek request completes successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the characteristics of the byte stream.
Receives a bitwise OR of zero or more flags. The following flags are defined.

| Value | Meaning |
|---|---|
| MFBYTESTREAM_IS_READABLE | The byte stream can be read. |
| MFBYTESTREAM_IS_WRITABLE | The byte stream can be written to. |
| MFBYTESTREAM_IS_SEEKABLE | The byte stream can be seeked. |
| MFBYTESTREAM_IS_REMOTE | The byte stream is from a remote source, such as a network. |
| MFBYTESTREAM_IS_DIRECTORY | The byte stream represents a file directory. |
| MFBYTESTREAM_HAS_SLOW_SEEK | Seeking within this stream might be slow. For example, the byte stream might download from a network. |
| MFBYTESTREAM_IS_PARTIALLY_DOWNLOADED | The byte stream is currently downloading data to a local cache. Read operations on the byte stream might take longer until the data is completely downloaded. This flag is cleared after all of the data has been downloaded. If the MFBYTESTREAM_HAS_SLOW_SEEK flag is also set, it means the byte stream must download the entire file sequentially. Otherwise, the byte stream can respond to seek requests by restarting the download from a new point in the stream. |
| MFBYTESTREAM_SHARE_WRITE | Another thread or process can open this byte stream for writing. If this flag is present, the length of the byte stream could change while it is being read. This flag can affect the behavior of byte-stream handlers. |
| MFBYTESTREAM_DOES_NOT_USE_NETWORK | The byte stream is not currently using the network to receive the content. Networking hardware may enter a power saving state when this bit is set. Note: Requires Windows 8 or later. |

If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
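As an illustration of testing these capability flags, the sketch below hard-codes flag values matching those in mfobjects.h (in real code, include the Windows SDK header instead of redefining them). CanArchiveTo is a hypothetical helper reflecting the earlier note that an archive sink may need both read and write access:

```cpp
#include <cassert>

// Flag values as defined in mfobjects.h, reproduced here for illustration.
enum : unsigned long {
    MFBYTESTREAM_IS_READABLE             = 0x00000001,
    MFBYTESTREAM_IS_WRITABLE             = 0x00000002,
    MFBYTESTREAM_IS_SEEKABLE             = 0x00000004,
    MFBYTESTREAM_IS_REMOTE               = 0x00000008,
    MFBYTESTREAM_IS_DIRECTORY            = 0x00000080,
    MFBYTESTREAM_HAS_SLOW_SEEK           = 0x00000100,
    MFBYTESTREAM_IS_PARTIALLY_DOWNLOADED = 0x00000200,
    MFBYTESTREAM_SHARE_WRITE             = 0x00000400,
    MFBYTESTREAM_DOES_NOT_USE_NETWORK    = 0x00000800,
};

// Hypothetical helper: an archive media sink should be given a stream
// opened with both read and write access.
bool CanArchiveTo(unsigned long caps) {
    const unsigned long needed = MFBYTESTREAM_IS_READABLE | MFBYTESTREAM_IS_WRITABLE;
    return (caps & needed) == needed;
}
```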
Retrieves the length of the stream.
Receives the length of the stream, in bytes. If the length is unknown, this value is -1.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the length of the stream.
Length of the stream in bytes.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current read or write position in the stream.
Receives the current position, in bytes.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the current read or write position.
New position in the stream, as a byte offset from the start of the stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_INVALIDARG | Invalid argument. |
If the new position is larger than the length of the stream, the method returns E_INVALIDARG.
Implementation notes: This method should set the current position in the stream to the value of the qwPosition parameter. Other methods that can update the current position are Read, BeginRead, Write, BeginWrite, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the current position has reached the end of the stream.
Receives the value TRUE if the end of the stream has been reached, or FALSE otherwise.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Reads data from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Receives the number of bytes that are copied into the buffer. This parameter cannot be NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that were read, which is specified by the value returned in the pcbRead parameter, to the current position. Other methods that can update the current position are BeginRead, Write, BeginWrite, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous read operation from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the IMFAsyncCallback interface of a callback object. The caller must implement this interface.
Pointer to the IUnknown interface of a state object, defined by the caller. This parameter can be NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When all of the data has been read into the buffer, the callback object's Invoke method is called. At that point, call EndRead to complete the asynchronous request.
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that will be read, as reported by the EndRead method, to the current position. Other methods that can update the current position are Read, Write, BeginWrite, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous read operation.
Pointer to the IMFAsyncResult interface. Pass in the same reference that your callback object received in the Invoke method.
Receives the number of bytes that were read.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Call this method after the callback object's Invoke method is called.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Writes data to the stream.
Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
Receives the number of bytes that are written.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that were written to the stream, which is specified by the value returned in the pcbWritten parameter, to the current position. Other methods that can update the current position are Read, BeginRead, BeginWrite, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous write operation to the stream.
Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the IMFAsyncCallback interface of a callback object. The caller must implement this interface.
Pointer to the IUnknown interface of a state object, defined by the caller. This parameter can be NULL.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When all of the data has been written to the stream, the callback object's Invoke method is called. At that point, call EndWrite to complete the asynchronous request.
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that will be written to the stream, as reported by the EndWrite method, to the current position. Other methods that can update the current position are Read, BeginRead, Write, Seek, and SetCurrentPosition.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous write operation.
Pointer to the IMFAsyncResult interface. Pass in the same reference that your callback object received in the Invoke method.
Receives the number of bytes that were written.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Call this method after the callback object's Invoke method is called.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Moves the current position in the stream by a specified offset.
Specifies the origin of the seek as a member of the MFBYTESTREAM_SEEK_ORIGIN enumeration.
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.

| Value | Meaning |
|---|---|
| MFBYTESTREAM_SEEK_FLAG_CANCEL_PENDING_IO | All pending I/O requests are canceled after the seek request completes successfully. |
Receives the new position after the seek.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Implementation notes: This method should update the current position in the stream by applying the qwSeekOffset to the position specified by SeekOrigin; the result should be the same value returned in the pqwCurrentPosition parameter. Other methods that can update the current position are Read, BeginRead, Write, BeginWrite, and SetCurrentPosition.
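The position rule in the implementation notes can be expressed as a small portable sketch. SeekOrigin and ApplySeek are illustrative names; the real enumeration is MFBYTESTREAM_SEEK_ORIGIN:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for MFBYTESTREAM_SEEK_ORIGIN.
enum class SeekOrigin { Begin, Current };

// The new position is the seek offset applied to the origin. The same
// value would be reported back through the pqwCurrentPosition parameter;
// here it is simply the return value.
uint64_t ApplySeek(uint64_t current, SeekOrigin origin, int64_t offset) {
    uint64_t base = (origin == SeekOrigin::Begin) ? 0 : current;
    return base + offset;
}
```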
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Controls one or more capture devices. The capture engine implements this interface. To get a reference to this interface, call either MFCreateCaptureEngine or IMFCaptureEngineClassFactory::CreateInstance.
Creates an instance of the capture engine.
The CLSID of the object to create. Currently, this parameter must equal CLSID_MFCaptureEngine.
The IID of the requested interface. The capture engine supports the IMFCaptureEngine interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Before calling this method, call the MFStartup function.
Initializes the capture engine.
A reference to the IMFCaptureEngineOnEventCallback interface. The caller must implement this interface. The capture engine uses this interface to send asynchronous events to the caller.
A reference to the IMFAttributes interface of an attribute store. This parameter can be NULL.
You can use this parameter to configure the capture engine. Call MFCreateAttributes to create the attribute store, and then set any of the capture engine attributes.
An IUnknown reference that specifies the audio-capture device. This parameter can be NULL.
If you set the MF_CAPTURE_ENGINE_USE_VIDEO_DEVICE_ONLY attribute to TRUE in pAttributes, the capture engine does not use an audio device, and the pAudioSource parameter is ignored.
Otherwise, if pAudioSource is NULL, the capture engine selects the default audio-capture device.
To override the default audio device, set pAudioSource to an IMFMediaSource or IMFActivate reference for the device.
An IUnknown reference that specifies the video-capture device. This parameter can be NULL.
If you set the MF_CAPTURE_ENGINE_USE_AUDIO_DEVICE_ONLY attribute to TRUE in pAttributes, the capture engine does not use a video device, and the pVideoSource parameter is ignored.
Otherwise, if pVideoSource is NULL, the capture engine selects the default video-capture device.
To override the default video device, set pVideoSource to an IMFMediaSource or IMFActivate reference for the device.
This method can return one of these values.
| Return code | Description |
|---|---|
| S_OK | Success. |
| MF_E_ALREADY_INITIALIZED | The Initialize method was already called. |
| MF_E_NO_CAPTURE_DEVICES_AVAILABLE | No capture devices are available. |
You must call this method once before using the capture engine. Calling the method a second time returns MF_E_ALREADY_INITIALIZED.
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_INITIALIZED event through the IMFCaptureEngineOnEventCallback::OnEvent method.
Gets a reference to the capture source object. Use the capture source to configure the capture devices.
Initializes the capture engine.
A reference to the IMFCaptureEngineOnEventCallback interface. The caller must implement this interface. The capture engine uses this interface to send asynchronous events to the caller.
A reference to the IMFAttributes interface of an attribute store. This parameter can be NULL.
You can use this parameter to configure the capture engine. Call MFCreateAttributes to create the attribute store, and then set any of the capture engine attributes.
An IUnknown reference that specifies the audio-capture device. This parameter can be NULL.
If you set the MF_CAPTURE_ENGINE_USE_VIDEO_DEVICE_ONLY attribute to TRUE in pAttributes, the capture engine does not use an audio device, and the pAudioSource parameter is ignored.
Otherwise, if pAudioSource is NULL, the capture engine selects the default audio-capture device.
To override the default audio device, set pAudioSource to an IMFMediaSource or IMFActivate reference for the device.
An IUnknown reference that specifies the video-capture device. This parameter can be NULL.
If you set the MF_CAPTURE_ENGINE_USE_AUDIO_DEVICE_ONLY attribute to TRUE in pAttributes, the capture engine does not use a video device, and the pVideoSource parameter is ignored.
Otherwise, if pVideoSource is NULL, the capture engine selects the default video-capture device.
To override the default video device, set pVideoSource to an IMFMediaSource or IMFActivate reference for the device.
This method can return one of these values.
| Return code | Description |
|---|---|
| S_OK | Success. |
| MF_E_ALREADY_INITIALIZED | The Initialize method was already called. |
| MF_E_NO_CAPTURE_DEVICES_AVAILABLE | No capture devices are available. |
You must call this method once before using the capture engine. Calling the method a second time returns MF_E_ALREADY_INITIALIZED.
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_INITIALIZED event through the IMFCaptureEngineOnEventCallback::OnEvent method.
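The initialize-once contract can be mocked portably. CaptureEngineMock and InitResult are illustrative stand-ins for the capture engine and its HRESULT codes, not the real Media Foundation types:

```cpp
#include <cassert>

// Illustrative status codes; the real values come from mferror.h.
enum class InitResult { Ok, AlreadyInitialized };

// Sketch of the documented contract: Initialize succeeds exactly once,
// a second call fails, and successful completion is reported
// asynchronously through an event on the app's callback.
struct CaptureEngineMock {
    bool initialized = false;
    bool eventFired = false;
    InitResult Initialize() {
        if (initialized) return InitResult::AlreadyInitialized;
        initialized = true;
        eventFired = true;  // stands in for MF_CAPTURE_ENGINE_INITIALIZED
        return InitResult::Ok;
    }
};
```

In the real API, a success return only means the request was accepted; the app must wait for the MF_CAPTURE_ENGINE_INITIALIZED event before using the engine.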
Starts preview.
This method can return one of these values.
| Return code | Description |
|---|---|
| S_OK | Success. |
| MF_E_INVALIDREQUEST | The preview sink was not initialized. |
Before calling this method, configure the preview sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PREVIEW_STARTED event through the IMFCaptureEngineOnEventCallback::OnEvent method.
After the preview sink is configured, you can stop and start preview by calling StopPreview and StartPreview.
Stops preview.
This method can return one of these values.
| Return code | Description |
|---|---|
| S_OK | Success. |
| MF_E_INVALIDREQUEST | The capture engine is not currently previewing. |
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PREVIEW_STOPPED event through the IMFCaptureEngineOnEventCallback::OnEvent method.
Starts recording audio and/or video to a file.
This method can return one of these values.
| Return code | Description |
|---|---|
| S_OK | Success. |
| MF_E_INVALIDREQUEST | The recording sink was not initialized. |
Before calling this method, configure the recording sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_RECORD_STARTED event through the IMFCaptureEngineOnEventCallback::OnEvent method.
To stop recording, call StopRecord.
Stops recording.
A Boolean value that specifies whether to finalize the output file. To create a valid output file, specify TRUE. Specify FALSE only if you want to interrupt the recording and discard the output file.
A Boolean value that specifies if the unprocessed samples waiting to be encoded should be flushed.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_RECORD_STOPPED event through the IMFCaptureEngineOnEventCallback::OnEvent method.
Captures a still image from the video stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Before calling this method, configure the photo sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PHOTO_TAKEN event through the IMFCaptureEngineOnEventCallback::OnEvent method.
Gets a reference to one of the capture sink objects. You can use the capture sinks to configure preview, recording, or still-image capture.
An MF_CAPTURE_ENGINE_SINK_TYPE value that specifies which capture sink to retrieve.
Receives a reference to the IMFCaptureSink interface. The caller must release the interface.
This method can return one of these values.
| Return code | Description |
|---|---|
| S_OK | Success. |
| E_INVALIDARG | Invalid argument. |
Gets a reference to the capture source object. Use the capture source to configure the capture devices.
Receives a reference to the IMFCaptureSource interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates an instance of the capture engine.
To get a reference to this interface, call the CoCreateInstance function and specify the CLSID equal to CLSID_MFCaptureEngineClassFactory.
Calling the MFCreateCaptureEngine function is equivalent to calling IMFCaptureEngineClassFactory::CreateInstance.
Creates an instance of the capture engine.
The CLSID of the object to create. Currently, this parameter must equal CLSID_MFCaptureEngine.
The IID of the requested interface. The capture engine supports the IMFCaptureEngine interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Before calling this method, call the MFStartup function.
Callback interface for receiving events from the capture engine.
To set the callback interface on the capture engine, call the IMFCaptureEngine::Initialize method.
Callback interface to receive data from the capture engine.
To set the callback interface, call one of the following methods.
Extensions for the
Controls the photo sink. The photo sink captures still images from the video stream.
The photo sink can deliver samples to one of the following destinations:
The application must specify a single destination. Multiple destinations are not supported.
To capture an image, call TakePhoto.
Specifies a byte stream that will receive the still image data.
A reference to the IMFByteStream interface of the byte stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetOutputFileName or SetSampleCallback.
Sets a callback to receive the still-image data.
A reference to the IMFCaptureEngineOnSampleCallback interface. The caller must implement this interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetOutputByteStream or SetOutputFileName.
Specifies the name of the output file for the still image.
Calling this method overrides any previous call to SetOutputByteStream or SetSampleCallback.
Specifies the name of the output file for the still image.
A null-terminated string that contains the URL of the output file.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetOutputByteStream or SetSampleCallback.
Sets a callback to receive the still-image data.
A reference to the IMFCaptureEngineOnSampleCallback interface. The caller must implement this interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetOutputByteStream or SetOutputFileName.
Specifies a byte stream that will receive the still image data.
A reference to the IMFByteStream interface of the byte stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetOutputFileName or SetSampleCallback.
Controls the preview sink. The preview sink enables the application to preview audio and video from the camera.
To start preview, call
Sets a callback to receive the preview data for one stream.
The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the AddStream method.
A reference to the IMFCaptureEngineOnSampleCallback interface. The caller must implement this interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetRenderHandle or SetRenderSurface.
Specifies a window for preview.
Calling this method overrides any previous call to SetRenderSurface or SetSampleCallback.
Specifies a Microsoft DirectComposition visual for preview.
Gets or sets the current mirroring state of the video preview stream.
Sets a custom media sink for preview.
This method overrides the default selection of the media sink for preview.
Specifies a window for preview.
A handle to the window. The preview sink draws the video frames inside this window.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetRenderSurface or SetSampleCallback.
Specifies a Microsoft DirectComposition visual for preview.
A reference to a DirectComposition visual that implements the IDCompositionVisual interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Updates the video frame. Call this method when the preview window receives a WM_PAINT or WM_SIZE message.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets a callback to receive the preview data for one stream.
The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the AddStream method.
A reference to the IMFCaptureEngineOnSampleCallback interface. The caller must implement this interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Calling this method overrides any previous call to SetRenderHandle or SetRenderSurface.
Gets the current mirroring state of the video preview stream.
Receives the value TRUE if mirroring is enabled, or FALSE otherwise.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables or disables mirroring of the video preview stream.
If TRUE, mirroring is enabled. If FALSE, mirroring is disabled.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the rotation of the video preview stream.
The zero-based index of the stream. You must specify a video stream.
Receives the image rotation, in degrees.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Rotates the video preview stream.
The zero-based index of the stream to rotate. You must specify a video stream.
The amount to rotate the video, in degrees. Valid values are 0, 90, 180, and 270. The value zero restores the video to its original orientation.
If this method succeeds, it returns
Sets a custom media sink for preview.
A reference to the
If this method succeeds, it returns
This method overrides the default selection of the media sink for preview.
Controls the recording sink. The recording sink creates compressed audio/video files or compressed audio/video streams.
The recording sink can deliver samples to one of the following destinations:
The application must specify a single destination. Multiple destinations are not supported. (However, if a callback is used, you can provide a separate callback for each stream.)
If the destination is a byte stream or an output file, the application specifies a container type, such as MP4 or ASF. The capture engine then multiplexes the audio and video to produce the format defined by the container type. If the destination is a callback interface, however, the capture engine does not multiplex or otherwise interleave the samples. The callback option gives you the most control over the recorded output, but requires more work by the application.
To start the recording, call
Specifies a byte stream that will receive the data for the recording.
A reference to the
A
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the recording data for one stream.
The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the recording.
The capture engine uses the file name extension to select the container type for the output file. For example, if the file name extension is ".mp4", the capture engine creates an MP4 file.
Calling this method overrides any previous call to
Sets a custom media sink for recording.
This method overrides the default selection of the media sink for recording.
Specifies a byte stream that will receive the data for the recording.
A reference to the
A
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the recording.
A null-terminated string that contains the URL of the output file.
If this method succeeds, it returns
The capture engine uses the file name extension to select the container type for the output file. For example, if the file name extension is ".mp4", the capture engine creates an MP4 file.
Calling this method overrides any previous call to
Sets a callback to receive the recording data for one stream.
The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a custom media sink for recording.
A reference to the
If this method succeeds, it returns
This method overrides the default selection of the media sink for recording.
Gets the rotation that is currently being applied to the recorded video stream.
The zero-based index of the stream. You must specify a video stream.
Receives the image rotation, in degrees.
If this method succeeds, it returns
Rotates the recorded video stream.
The zero-based index of the stream to rotate. You must specify a video stream.
The amount to rotate the video, in degrees. Valid values are 0, 90, 180, and 270. The value zero restores the video to its original orientation.
If this method succeeds, it returns
Controls a capture sink, which is an object that receives one or more streams from a capture device.
The capture engine creates the following capture sinks.
To get a reference to a capture sink, call
Sink | Interface |
---|---|
Photo sink | |
Preview sink | |
Recording sink | |
Applications cannot directly create the capture sinks.
If an image stream's native media type is set to JPEG, the photo sink should be configured with a format identical to the native source format. The JPEG native type is passthrough only.
If an image stream's native type is set to JPEG and you want to add an effect, change the native type on the image stream to an uncompressed video media type (such as NV12 or RGB32) and then add the effect.
If the native type is H.264 for the record stream, the record sink should be configured with the same media type. H.264 native type is passthrough only and cannot be decoded.
Record streams that expose H.264 do not expose any other type. H.264 record streams cannot be used in conjunction with effects. To add effects, instead connect the preview stream to the record sink using AddStream.
Queries the underlying Sink Writer object for an interface.
Gets the output format for a stream on this capture sink.
The zero-based index of the stream to query. The index is returned in the pdwSinkStreamIndex parameter of the
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSinkStreamIndex parameter is invalid. |
Queries the underlying Sink Writer object for an interface.
Connects a stream from the capture source to this capture sink.
The source stream to connect. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
An
A reference to the
Receives the index of the new stream on the capture sink. Note that this index will not necessarily match the value of dwSourceStreamIndex.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The format specified in pMediaType is not valid for this capture sink. |
| The dwSourceStreamIndex parameter is invalid, or the specified source stream was already connected to this sink. |
Prepares the capture sink by loading any required pipeline components, such as encoders, video processors, and media sinks.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
Calling this method is optional. This method gives the application an opportunity to configure the pipeline components before they are used. The method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_SINK_PREPARED event through the
Before calling this method, configure the capture sink by adding at least one stream. To add a stream, call
The Prepare method fails if the capture sink is currently in use. For example, calling Prepare on the preview sink fails if the capture engine is currently previewing.
Removes all streams from the capture sink.
If this method succeeds, it returns
You can use this method to reconfigure the sink.
Receives state-change notifications from the presentation clock.
To receive state-change notifications from the presentation clock, implement this interface and call
This interface must be implemented by:
Presentation time sources. The presentation clock uses this interface to notify the time source when the clock state changes.
Media sinks. Media sinks use this interface to get notifications when the presentation clock changes.
Other objects that need to be notified can implement this interface.
Applies to: desktop apps only
Enables two threads to share the same Direct3D 9 device, and provides access to the DirectX Video Acceleration (DXVA) features of the device.
This interface is exposed by the Direct3D Device Manager. To create the Direct3D device manager, call
To get this interface from the Enhanced Video Renderer (EVR), call
The Direct3D Device Manager supports Direct3D 9 devices only. It does not support DXGI devices.
Enables two threads to share the same Direct3D 9 device, and provides access to the DirectX Video Acceleration (DXVA) features of the device.
This interface is exposed by the Direct3D Device Manager. To create the Direct3D device manager, call
To get this interface from the Enhanced Video Renderer (EVR), call
The Direct3D Device Manager supports Direct3D 9 devices only. It does not support DXGI devices.
Windows Store apps must use IMFDXGIDeviceManager and Direct3D 11 Video APIs.
Applies to: desktop apps only
Creates an instance of the Direct3D Device Manager.
If this function succeeds, it returns
Sets the Direct3D device or notifies the device manager that the Direct3D device was reset.
Pointer to the
Token received in the pResetToken parameter of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid token |
| Direct3D device error. |
When you first create the Direct3D device manager, call this method with a reference to the Direct3D device. The device manager does not create the device; the caller must provide the device reference initially.
Also call this method if the Direct3D device becomes lost and you need to reset the device or create a new device. This occurs if
The resetToken parameter ensures that only the component which originally created the device manager can invalidate the current device.
If this method succeeds, all open device handles become invalid.
Gets a handle to the Direct3D device.
Receives the device handle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Direct3D device manager was not initialized. The owner of the device must call |
To get the Direct3D device's
To test whether a device handle is still valid, call
Closes a Direct3D device handle. Call this method to release a device handle retrieved by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid handle. |
Tests whether a Direct3D device handle is valid.
Handle to a Direct3D device. To get a device handle, call
The method returns an
Return code | Description |
---|---|
| The device handle is valid. |
| The specified handle is not a Direct3D device handle. |
| The device handle is invalid. |
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
Gives the caller exclusive access to the Direct3D device.
A handle to the Direct3D device. To get the device handle, call
Receives a reference to the device's
Specifies whether to wait for the device lock. If the device is already locked and this parameter is TRUE, the method blocks until the device is unlocked. Otherwise, if the device is locked and this parameter is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The device handle is invalid. |
| The Direct3D device manager was not initialized. The owner of the device must call |
| The device is locked and fBlock is |
| The specified handle is not a Direct3D device handle. |
When you are done using the Direct3D device, call
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
If fBlock is TRUE, this method can potentially deadlock. For example, it will deadlock if a thread calls LockDevice and then waits on another thread that calls LockDevice. It will also deadlock if a thread calls LockDevice twice without calling UnlockDevice in between.
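To make the pairing of LockDevice and UnlockDevice robust against early returns and exceptions, a scoped guard helps. Below is a minimal sketch using a hypothetical MockDeviceManager in place of the real device manager interface; the type and its members are invented for illustration only:

```cpp
#include <cassert>

// Hypothetical stand-in for the device manager: LockDevice/UnlockDevice
// bookkeeping only, so the pattern can be shown without a real Direct3D device.
// A non-blocking LockDevice that fails when already locked models fBlock = FALSE.
struct MockDeviceManager {
    int lockCount = 0;
    bool LockDevice()   { if (lockCount > 0) return false; ++lockCount; return true; }
    void UnlockDevice() { --lockCount; }
};

// RAII guard: the destructor releases the lock on every code path,
// so UnlockDevice is called exactly once per successful LockDevice.
class DeviceLock {
    MockDeviceManager* mgr_;
    bool locked_;
public:
    explicit DeviceLock(MockDeviceManager* mgr)
        : mgr_(mgr), locked_(mgr->LockDevice()) {}
    ~DeviceLock() { if (locked_) mgr_->UnlockDevice(); }
    bool locked() const { return locked_; }
    DeviceLock(const DeviceLock&) = delete;
    DeviceLock& operator=(const DeviceLock&) = delete;
};
```

Because the guard never blocks, a second lock attempt on the same thread fails cleanly instead of deadlocking, which is the failure mode described above.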
Unlocks the Direct3D device. Call this method to release the device after calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified device handle is not locked, or is not a valid handle. |
Gets a DirectX Video Acceleration (DXVA) service interface.
A handle to a Direct3D device. To get a device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device might support the following DXVA service interfaces:
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The device handle is invalid. |
| The Direct3D device does not support video acceleration. |
| The Direct3D device manager was not initialized. The owner of the device must call |
| The specified handle is not a Direct3D device handle. |
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
Specifies how the output alpha values are calculated for Microsoft DirectX Video Acceleration High Definition (DXVA-HD) blit operations.
The Mode member of the
To find out which modes the device supports, call the
Alpha values inside the target rectangle are set to opaque.
Alpha values inside the target rectangle are set to the alpha value specified in the background color. See
Existing alpha values remain unchanged in the output surface.
Alpha values from the input stream are scaled and copied to the corresponding destination rectangle for that stream. If the input stream does not have alpha data, the DXVA-HD device sets the alpha values in the target rectangle to an opaque value. If the input stream is disabled or the source rectangle is empty, the alpha values in the target rectangle are not modified.
Specifies state parameters for blit operations when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
To set a state parameter, call the
Defines video processing capabilities for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The device can blend video content in linear color space. Most video content is gamma corrected, resulting in nonlinear values. If the DXVA-HD device sets this flag, it means the device converts colors to linear space before blending, which produces better results.
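The difference this flag describes can be demonstrated with a small sketch: blending two gamma-corrected values directly versus decoding to linear light, mixing, and re-encoding. The sRGB curves are assumed here purely for illustration:

```cpp
#include <cassert>
#include <cmath>

// sRGB transfer functions (values normalized to [0, 1]).
double srgbToLinear(double v) {
    return (v < 0.04045) ? v / 12.92 : std::pow((v + 0.055) / 1.055, 2.4);
}
double linearToSrgb(double v) {
    return (v < 0.0031308) ? v * 12.92 : 1.055 * std::pow(v, 1.0 / 2.4) - 0.055;
}

// Blend two gamma-corrected values directly (what a device without
// linear-space support effectively does).
double blendGamma(double a, double b) { return 0.5 * (a + b); }

// Blend in linear light: decode, mix, re-encode.
double blendLinear(double a, double b) {
    return linearToSrgb(0.5 * (srgbToLinear(a) + srgbToLinear(b)));
}
```

Mixing black and white gives 0.5 in gamma space but roughly 0.74 after a linear-space blend, which is why linear blending looks noticeably brighter and more correct.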
The device supports the xvYCC color space for YCbCr data.
The device can perform range conversion when the input and output are both RGB but use different color ranges (0-255 or 16-235, for 8-bit RGB).
The device can apply a matrix conversion to YCbCr values when the input and output are both YCbCr. For example, the driver can convert colors from BT.601 to BT.709.
Specifies the type of Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
Hardware device. Video processing is performed in the GPU by the driver.
Software device. Video processing is performed in the CPU by a software plug-in.
Reference device. Video processing is performed in the CPU by a software plug-in.
Other. The device is neither a hardware device nor a software plug-in.
Specifies the intended use for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The graphics driver uses one of these enumeration constants as a hint when it creates the DXVA-HD device.
Normal video playback. The graphics driver should expose a set of capabilities that are appropriate for real-time video playback.
Optimal speed. The graphics driver should expose a minimal set of capabilities that are optimized for performance.
Use this setting if you want better performance and can accept some reduction in video quality. For example, you might use this setting in power-saving mode or to play video thumbnails.
Optimal quality. The graphics driver should expose its maximum set of capabilities.
Specify this setting to get the best video quality possible. It is appropriate for tasks such as video editing, when quality is more important than speed. It is not appropriate for real-time playback.
Defines features that a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device can support.
The device can set the alpha values on the video output. See
The device can downsample the video output. See
The device can perform luma keying. See
The device can apply alpha values from color palette entries. See
Defines the range of supported values for an image filter.
The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [-180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
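The hue example above can be sketched as code. The struct below mirrors the shape of the documented filter-range data; the field names are assumed for illustration:

```cpp
#include <cassert>

// Shape of the filter-range data described above: integral
// Minimum/Maximum/Default plus a fractional Multiplier that maps
// the stored setting to the actual filter value.
struct FilterRange {
    int   Minimum;
    int   Maximum;
    int   Default;
    float Multiplier;
};

// The formula from the text: Actual Value = Set Value * Multiplier.
float actualValue(const FilterRange& r, int setValue) {
    return setValue * r.Multiplier;
}

// Hue filter from the example: real range [-180.0 ... +180.0] with a
// step of 0.25, reported as [-720, +720] with a multiplier of 0.25.
const FilterRange kHueRange = { -720, 720, 0, 0.25f };
```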
Defines capabilities related to image adjustment and filtering for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The device can adjust the brightness level.
The device can adjust the contrast level.
The device can adjust hue.
The device can adjust the saturation level.
The device can perform noise reduction.
The device can perform edge enhancement.
The device can perform anamorphic scaling. Anamorphic scaling can be used to stretch 4:3 content to a widescreen 16:9 aspect ratio.
Describes how a video stream is interlaced.
Frames are progressive.
Frames are interlaced. The top field of each frame is displayed first.
Frames are interlaced. The bottom field of each frame is displayed first.
Defines capabilities related to input formats for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
These flags define video processing capabilities that are usually not needed, and therefore are not required for DXVA-HD devices to support.
The first three flags relate to RGB support for functions that are normally applied to YCbCr video: deinterlacing, color adjustment, and luma keying. A DXVA-HD device that supports these functions for YCbCr is not required to support them for RGB input. Supporting RGB input for these functions is an additional capability, reflected by these constants. The driver might convert the input to another color space, perform the indicated function, and then convert the result back to RGB.
Similarly, a device that supports deinterlacing is not required to support deinterlacing of palettized formats. This capability is indicated by the
The device can deinterlace an input stream that contains interlaced RGB video.
The device can perform color adjustment on RGB video.
The device can perform luma keying on RGB video.
The device can deinterlace input streams with palettized color formats.
Specifies the inverse telecine (IVTC) capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
The video processor can reverse 3:2 pulldown.
The video processor can reverse 2:2 pulldown.
The video processor can reverse 2:2:2:4 pulldown.
The video processor can reverse 2:3:3:2 pulldown.
The video processor can reverse 3:2:3:2:2 pulldown.
The video processor can reverse 5:5 pulldown.
The video processor can reverse 6:4 pulldown.
The video processor can reverse 8:7 pulldown.
The video processor can reverse 2:2:2:2:2:2:2:2:2:2:2:3 pulldown.
The video processor can reverse other telecine modes not listed here.
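As an illustration of what reversing 3:2 pulldown means, the sketch below telecines synthetic frame IDs and then removes the repeats. A real video processor matches repeated fields by content; the integer IDs here are purely illustrative:

```cpp
#include <cassert>
#include <vector>

// 3:2 pulldown: alternate film frames contribute 3 fields and 2 fields,
// turning 4 film frames (24 fps) into 10 fields (60i).
std::vector<int> applyPulldown32(int filmFrames) {
    std::vector<int> fields;
    for (int f = 0; f < filmFrames; ++f) {
        int copies = (f % 2 == 0) ? 3 : 2;   // 3, 2, 3, 2, ... cadence
        for (int c = 0; c < copies; ++c) fields.push_back(f);
    }
    return fields;
}

// Naive inverse telecine for this synthetic stream: collapse each run of
// fields from the same source frame back into one progressive frame.
std::vector<int> inverseTelecine(const std::vector<int>& fields) {
    std::vector<int> frames;
    for (int id : fields)
        if (frames.empty() || frames.back() != id) frames.push_back(id);
    return frames;
}
```

Running 4 film frames through the cadence yields 10 fields, and the inverse step recovers exactly the original 4 frames.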
Describes how to map color data to a normalized [0...1] range.
These flags are used in the
For YUV colors, these flags specify how to convert between Y'CbCr and Y'PbPr. The Y'PbPr color space has a range of [0...1] for Y' (luma) and [-0.5...0.5] for Pb/Pr (chroma).
Value | Description |
---|---|
Should not be used for YUV data. | |
For 8-bit Y'CbCr components:
For samples with n bits of precision, the general equations are:
The inverse equations to convert from Y'CbCr to Y'PbPr are:
| |
For 8-bit Y'CbCr values, Y' range of [0..1] maps to [48...208]. |
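The per-precision quantization equations referenced above are not reproduced in this text. As a hedged sketch, the common studio-swing convention for 8-bit Y'CbCr (Y' in [16, 235], Cb/Cr in [16, 240]) looks like this:

```cpp
#include <cassert>
#include <cmath>

// Studio-range 8-bit quantization commonly used for Y'CbCr:
//   Y'     in [0, 1]       -> [16, 235]  (excursion 219)
//   Pb, Pr in [-0.5, 0.5]  -> [16, 240]  (excursion 224, offset 128)
int quantizeLuma8(double y)    { return (int)std::lround(16.0 + 219.0 * y); }
int quantizeChroma8(double pb) { return (int)std::lround(128.0 + 224.0 * pb); }

double dequantizeLuma8(int y)    { return (y - 16) / 219.0; }
double dequantizeChroma8(int cb) { return (cb - 128) / 224.0; }
```

For higher bit depths the offsets and excursions scale by 2^(n-8), which is the shape the general n-bit equations take.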
For RGB colors, the flags differentiate various RGB spaces.
Value | Description |
---|---|
sRGB | |
Studio RGB; ITU-R BT.709 | |
ITU-R BT.1361 RGB |
Video data might contain values above or below the nominal range.
Note: The values named
This enumeration is equivalent to the DXVA_NominalRange enumeration used in DXVA 1.0, although it defines additional values.
If you are using the
Specifies the output frame rates for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
This enumeration type is used in the
Specifies the processing capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
The video processor can perform blend deinterlacing.
In blend deinterlacing, the two fields from an interlaced frame are blended into a single progressive frame. A video processor uses blend deinterlacing when it deinterlaces at half rate, as when converting 60i to 30p. Blend deinterlacing does not require reference frames.
The video processor can perform bob deinterlacing.
In bob deinterlacing, missing field lines are interpolated from the lines above and below. Bob deinterlacing does not require reference frames.
The video processor can perform adaptive deinterlacing.
Adaptive deinterlacing uses spatial or temporal interpolation, and switches between the two on a field-by-field basis, depending on the amount of motion. If the video processor does not receive enough reference frames to perform adaptive deinterlacing, it falls back to bob deinterlacing.
The video processor can perform motion-compensated deinterlacing.
Motion-compensated deinterlacing uses motion vectors to recreate missing lines. If the video processor does not receive enough reference frames to perform motion-compensated deinterlacing, it falls back to bob deinterlacing.
The video processor can perform inverse telecine (IVTC).
If the video processor supports this capability, the ITelecineCaps member of the
The video processor can convert the frame rate by interpolating frames.
Describes the content of a video sample. These flags are used in the
This enumeration is equivalent to the DXVA_SampleFormat enumeration used in DXVA 1.0.
The following table shows the mapping from
No exact match. Use |
With the exception of
The value
Specifies the luma key for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
To use this state, the device must support luma keying, indicated by the
If the device does not support luma keying, the
If the input format is RGB, the device must also support the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
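The bound calculation above, including the clamp to [0...1], can be written as a small helper. This is a sketch, not the device's exact arithmetic:

```cpp
#include <cassert>
#include <algorithm>

// Convert a normalized luma-key bound f in [0, 1] to an n-bit luma value:
// val = f * ((1 << n) - 1), with f clamped to [0, 1] first.
unsigned lumaBound(double f, int n) {
    f = std::max(0.0, std::min(1.0, f));
    return (unsigned)(f * ((1u << n) - 1));
}

// A pixel is transparent when its luma lies within [lower, upper] inclusive.
bool isTransparent(unsigned luma, unsigned lower, unsigned upper) {
    return luma >= lower && luma <= upper;
}
```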
If TRUE, luma keying is enabled. Otherwise, luma keying is disabled. The default value is
The lower bound for the luma key. The range is [0...1]. The default state value is 0.0.
The upper bound for the luma key. The range is [0...1]. The default state value is 0.0.
Describes a DirectX surface type for DirectX Video Acceleration (DXVA).
The surface is a decoder render target.
The surface is a video processor render target.
The surface is a Direct3D texture render target.
Specifies the type of video surface created by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
If the DXVA-HD device is a software plug-in and the surface type is
A surface for an input stream. This surface type is equivalent to an off-screen plain surface in Microsoft Direct3D. The application can use the surface in Direct3D calls.
A private surface for an input stream. This surface type is equivalent to an off-screen plain surface, except that the application cannot use the surface in Direct3D calls.
A surface for an output stream. This surface type is equivalent to an off-screen plain surface in Direct3D. The application can use the surface in Direct3D calls.
This surface type is recommended for video processing applications that need to lock the surface and access the surface memory. For video playback with optimal performance, a render-target surface or swap chain is recommended instead.
Describes how chroma values are positioned relative to the luma samples in a YUV video frame. These flags are used in the
The following diagrams show the most common arrangements.
Describes the intended lighting conditions for viewing video content. These flags are used in the
This enumeration is equivalent to the DXVA_VideoLighting enumeration used in DXVA 1.0.
If you are using the
Specifies the color primaries of a video source. These flags are used in the
Color primaries define how to convert RGB colors into the CIE XYZ color space, and can be used to translate colors between different RGB color spaces. An RGB color space is defined by the chromaticity coordinates (x,y) of the RGB primaries plus the white point, as listed in the following table.
Color space | (Rx, Ry) | (Gx, Gy) | (Bx, By) | White point (Wx, Wy) |
---|---|---|---|---|
BT.709 | (0.64, 0.33) | (0.30, 0.60) | (0.15, 0.06) | D65 (0.3127, 0.3290) |
BT.470-2 System M; EBU 3212 | (0.64, 0.33) | (0.29, 0.60) | (0.15, 0.06) | D65 (0.3127, 0.3290) |
BT.470-4 System B,G | (0.67, 0.33) | (0.21, 0.71) | (0.14, 0.08) | CIE III.C (0.310, 0.316) |
SMPTE 170M; SMPTE 240M; SMPTE C | (0.63, 0.34) | (0.31, 0.595) | (0.155, 0.07) | D65 (0.3127, 0.3290) |
The z coordinates can be derived from x and y as follows: z = 1 - x - y. To convert RGB colors to CIE XYZ tristimulus values, compute a matrix T as follows:
Given T, you can use the following formulas to convert between an RGB color value and a CIE XYZ tristimulus value. These formulas assume that the RGB components are linear (not gamma corrected) and are normalized to the range [0...1].
To convert colors directly from one RGB color space to another, use the following formula, where T1 is the matrix for color space RGB1, and T2 is the matrix for color space RGB2.
For a derivation of these formulas, refer to Charles Poynton, Digital Video and HDTV Algorithms and Interfaces (Morgan Kaufmann, 2003).
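The derivation above can be sketched as code: build the columns of T from the primaries' XYZ directions, then scale them so that RGB = (1,1,1) maps exactly to the white point. With the BT.709 primaries from the table, the middle row of T comes out as the familiar luma coefficients (approximately 0.2126, 0.7152, 0.0722):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };

// Chromaticity (x, y) to XYZ with Y normalized to 1: z = 1 - x - y.
Vec3 xyToXYZ(double x, double y) {
    return { x / y, 1.0, (1.0 - x - y) / y };
}

double det3(const Mat3& a) {
    return a.m[0][0]*(a.m[1][1]*a.m[2][2] - a.m[1][2]*a.m[2][1])
         - a.m[0][1]*(a.m[1][0]*a.m[2][2] - a.m[1][2]*a.m[2][0])
         + a.m[0][2]*(a.m[1][0]*a.m[2][1] - a.m[1][1]*a.m[2][0]);
}

// Solve M * s = w by Cramer's rule (adequate for a 3x3 example).
Vec3 solve3(const Mat3& M, const Vec3& w) {
    double d = det3(M);
    Mat3 a = M, b = M, c = M;
    a.m[0][0] = w.x; a.m[1][0] = w.y; a.m[2][0] = w.z;
    b.m[0][1] = w.x; b.m[1][1] = w.y; b.m[2][1] = w.z;
    c.m[0][2] = w.x; c.m[1][2] = w.y; c.m[2][2] = w.z;
    return { det3(a)/d, det3(b)/d, det3(c)/d };
}

// Build the RGB->XYZ matrix T: columns are the primaries' XYZ vectors,
// scaled so that RGB = (1,1,1) maps exactly to the white point.
Mat3 rgbToXyzMatrix(Vec3 r, Vec3 g, Vec3 b, Vec3 w) {
    Mat3 M = {{ {r.x, g.x, b.x}, {r.y, g.y, b.y}, {r.z, g.z, b.z} }};
    Vec3 s = solve3(M, w);
    Mat3 T = M;
    for (int i = 0; i < 3; ++i) {
        T.m[i][0] *= s.x; T.m[i][1] *= s.y; T.m[i][2] *= s.z;
    }
    return T;
}
```

Converting directly between two RGB spaces then amounts to applying T2⁻¹ · T1, as the formula above states.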
This enumeration is equivalent to the DXVA_VideoPrimaries enumeration used in DXVA 1.0.
If you are using the
Specifies the conversion function from linear RGB to non-linear RGB (R'G'B'). These flags are used in the
The following table shows the formulas for the most common transfer functions. In these formulas, L is the linear value and L' is the non-linear (gamma corrected) value. These values are relative to a normalized range [0...1].
Color space | Transfer function |
---|---|
sRGB (8-bit) | L' = 12.92L, for L < 0.031308 L' = 1.055L^(1/2.4) - 0.055, for L >= 0.031308 |
BT.470-2 System B, G | L' = L^0.36 |
BT.470-2 System M | L' = L^0.45 |
BT.709 | L' = 4.50L, for L < 0.018 L' = 1.099L^0.45 - 0.099, for L >= 0.018 |
scRGB | L' = L |
SMPTE 240M | L' = 4.0L, for L < 0.0228 L' = 1.1115L^0.45 - 0.1115, for L >= 0.0228 |
The following table shows the inverse formulas to obtain the original gamma-corrected values:
Color space | Transfer function |
---|---|
sRGB (8-bit) | L = L'/12.92, for L' < 0.03928 L = ((L' + 0.055)/1.055)^2.4, for L' >= 0.03928 |
BT.470-2 System B, G | L = L'^(1/0.36) |
BT.470-2 System M | L = L'^(1/0.45) |
BT.709 | L = L'/4.50, for L' < 0.081 L = ((L' + 0.099)/1.099)^(1/0.45), for L' >= 0.081 |
scRGB | L = L' |
SMPTE 240M | L = L'/4.0, for L' < 0.0913 L = ((L' + 0.1115)/1.1115)^(1/0.45), for L' >= 0.0913 |
This enumeration is equivalent to the DXVA_VideoTransferFunction enumeration used in DXVA 1.0.
If you are using the
Bitmask to validate flag values. This value is not a valid flag.
Unknown. Treat as
Linear RGB (gamma = 1.0).
True 1.8 gamma, L' = L^(1/1.8).
True 2.0 gamma, L' = L^(1/2.0).
True 2.2 gamma, L' = L^(1/2.2). This transfer function is used in ITU-R BT.470-2 System M (NTSC).
ITU-R BT.709 transfer function. Gamma 2.2 curve with a linear segment in the lower range. This transfer function is used in BT.709, BT.601, SMPTE 296M, SMPTE 170M, BT.470, and SMPTE 274M. In addition, BT.1361 uses this function within the range [0...1].
SMPTE 240M transfer function. Gamma 2.2 curve with a linear segment in the lower range.
sRGB transfer function. Gamma 2.4 curve with a linear segment in the lower range.
True 2.8 gamma, L' = L^(1/2.8). This transfer function is used in ITU-R BT.470-2 System B, G (PAL).
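The BT.709 curve from the tables above, written directly as code; the forward and inverse forms should round-trip:

```cpp
#include <cassert>
#include <cmath>

// BT.709 transfer function: linear segment below 0.018, gamma curve above.
double bt709Encode(double L) {
    return (L < 0.018) ? 4.50 * L
                       : 1.099 * std::pow(L, 0.45) - 0.099;
}

// Inverse: the breakpoint 0.081 is the encoded image of 0.018 (4.50 * 0.018).
double bt709Decode(double Lp) {
    return (Lp < 0.081) ? Lp / 4.50
                        : std::pow((Lp + 0.099) / 1.099, 1.0 / 0.45);
}
```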
Describes the conversion matrices between Y'PbPr (component video) and studio R'G'B'. These flags are used in the
The transfer matrices are defined as follows.
BT.709 transfer matrices:
Y'    0.212600   0.715200   0.072200     R'
Pb = -0.114572  -0.385428   0.500000  x  G'
Pr    0.500000  -0.454153  -0.045847     B'

R'   1.000000   0.000000   1.574800     Y'
G' = 1.000000  -0.187324  -0.468124  x  Pb
B'   1.000000   1.855600   0.000000     Pr
BT.601 transfer matrices:
Y'    0.299000   0.587000   0.114000     R'
Pb = -0.168736  -0.331264   0.500000  x  G'
Pr    0.500000  -0.418688  -0.081312     B'

R'   1.000000   0.000000   1.402000     Y'
G' = 1.000000  -0.344136  -0.714136  x  Pb
B'   1.000000   1.772000   0.000000     Pr
SMPTE 240M (SMPTE RP 145) transfer matrices:
Y'    0.212000   0.701000   0.087000     R'
Pb = -0.116000  -0.384000   0.500000  x  G'
Pr    0.500000  -0.445000  -0.055000     B'

R'   1.000000  -0.000000   1.576000     Y'
G' = 1.000000  -0.227000  -0.477000  x  Pb
B'   1.000000   1.826000   0.000000     Pr
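The BT.709 matrices above, applied directly. With the coefficients rounded to six decimal places the forward and inverse transforms round-trip to within roughly 1e-4, and R' = G' = B' = 1 yields Y' = 1 with zero chroma:

```cpp
#include <cassert>
#include <cmath>

// BT.709 R'G'B' -> Y'PbPr, using the forward matrix listed above.
void rgbToYPbPr709(double r, double g, double b,
                   double& y, double& pb, double& pr) {
    y  =  0.212600*r + 0.715200*g + 0.072200*b;
    pb = -0.114572*r - 0.385428*g + 0.500000*b;
    pr =  0.500000*r - 0.454153*g - 0.045847*b;
}

// BT.709 Y'PbPr -> R'G'B', using the inverse matrix listed above.
void yPbPrToRgb709(double y, double pb, double pr,
                   double& r, double& g, double& b) {
    r = y                 + 1.574800*pr;
    g = y - 0.187324*pb   - 0.468124*pr;
    b = y + 1.855600*pb;
}
```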
This enumeration is equivalent to the DXVA_VideoTransferMatrix enumeration used in DXVA 1.0.
If you are using the
Creates an instance of the Direct3D Device Manager.
If this function succeeds, it returns
Windows Store apps must use IMFDXGIDeviceManager and Direct3D 11 Video APIs.
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
A reference to the
The interface identifier (IID) of the requested interface. Any of the following interfaces might be supported by the Direct3D device:
Receives a reference to the interface. The caller must release the interface.
If this function succeeds, it returns
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
A reference to the
A reference to a
A member of the
A reference to an initialization function for a software device. Set this reference if you are using a software plug-in device. Otherwise, set this parameter to
The function reference type is PDXVAHDSW_Plugin.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Direct3D device does not support DXVA-HD. |
Use the
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
To find out which image filters the device supports, check the FilterCaps member of the
Applies to: desktop apps only
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
To find out which image filters the device supports, check the FilterCaps member of the
Gets the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
Creates one or more Microsoft Direct3D video surfaces.
The width of each surface, in pixels.
The height of each surface, in pixels.
The pixel format, specified as a
The memory pool in which the surface is created. This parameter must equal the InputPool member of the
Reserved. Set to 0.
The type of surface to create, specified as a member of the
The number of surfaces to create.
A reference to an array of
Reserved. Set to
If this method succeeds, it returns
Gets the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
A reference to a
If this method succeeds, it returns
Gets a list of the output formats supported by the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The number of formats to retrieve. This parameter must equal the OutputFormatCount member of the
A reference to an array of
If this method succeeds, it returns
The list of formats can include both
Gets a list of the input formats supported by the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The number of formats to retrieve. This parameter must equal the InputFormatCount member of the
A reference to an array of
If this method succeeds, it returns
The list of formats can include both
Gets the capabilities of one or more Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processors.
The number of elements in the pCaps array. This parameter must equal the VideoProcessorCount member of the
A reference to an array of
If this method succeeds, it returns
Gets a list of custom rates that a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor supports. Custom rates are used for frame-rate conversion and inverse telecine (IVTC).
A
The number of rates to retrieve. This parameter must equal the CustomRateCount member of the
A reference to an array of
If this method succeeds, it returns
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
The type of image filter, specified as a member of the
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Filter parameter is invalid or the device does not support the specified filter. |
To find out which image filters the device supports, check the FilterCaps member of the
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
A
Receives a reference to the
If this method succeeds, it returns
Applies to: desktop apps only
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
A reference to the
A reference to a
A member of the
Use the
Represents a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
To get a reference to this interface, call the
Sets a state parameter for a blit operation by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The state parameter to set, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer that contains the state data. The meaning of the data depends on the State parameter. Each state has a corresponding data structure; for more information, see
If this method succeeds, it returns
Gets the value of a state parameter for blit operations performed by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The state parameter to query, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer allocated by the caller. The method copies the state data into the buffer. The buffer must be large enough to hold the data structure that corresponds to the state parameter. For more information, see
If this method succeeds, it returns
Sets a state parameter for an input stream on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The zero-based index of the input stream. To get the maximum number of streams, call
The state parameter to set, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer that contains the state data. The meaning of the data depends on the State parameter. Each state has a corresponding data structure; for more information, see
If this method succeeds, it returns
Call this method to set state parameters that apply to individual input streams.
Gets the value of a state parameter for an input stream on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
The zero-based index of the input stream. To get the maximum number of streams, call
The state parameter to query, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer allocated by the caller. The method copies the state data into the buffer. The buffer must be large enough to hold the data structure that corresponds to the state parameter. For more information, see
If this method succeeds, it returns
Performs a video processing blit on one or more input samples and writes the result to a Microsoft Direct3D surface.
A reference to the
Frame number of the output video frame, indexed from zero.
Number of input streams to process.
Pointer to an array of
If this method succeeds, it returns
The maximum value of StreamCount is given in the MaxStreamStates member of the
Provides DirectX Video Acceleration (DXVA) services from a Direct3D device. To get a reference to this interface, call
This is the base interface for DXVA services. The Direct3D device can support any of the following DXVA services, which derive from
Applies to: desktop apps only
Provides DirectX Video Acceleration (DXVA) services from a Direct3D device. To get a reference to this interface, call
This is the base interface for DXVA services. The Direct3D device can support any of the following DXVA services, which derive from
Creates a DirectX Video Acceleration (DXVA) video processor or DXVA decoder render target.
The width of the surface, in pixels.
The height of the surface, in pixels.
The number of back buffers. The method creates BackBuffers + 1 surfaces.
The pixel format, specified as a
The memory pool in which to create the surface, specified as a
Reserved. Set this value to zero.
The type of surface to create. Use one of the following values.
Value | Meaning |
---|---|
Video decoder render target. | |
Video processor render target. Used for | |
Software render target. This surface type is for use with software DXVA devices. |
The address of an array of
A reference to a handle that is used to share the surfaces between Direct3D devices. Set this parameter to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid parameter. |
| The DirectX Video Acceleration Manager is not initialized. |
| |
If the method returns E_FAIL, try calling
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
A reference to the
If this function succeeds, it returns
Represents a DirectX Video Acceleration (DXVA) video decoder device.
To get a reference to this interface, call
The
Retrieves the DirectX Video Acceleration (DXVA) decoder service that created this decoder device.
Retrieves the DirectX Video Acceleration (DXVA) decoder service that created this decoder device.
Receives a reference to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the parameters that were used to create this device.
Receives the device
Pointer to a
Pointer to a
Receives an array of
Receives the number of elements in the pppDecoderRenderTargets array. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. At least one parameter must be non- |
You can set any parameter to
If you specify a non-
Retrieves a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
Type of buffer to retrieve. Use one of the following values.
Value | Meaning |
---|---|
Picture decoding parameter buffer. | |
Macroblock control command buffer. | |
Residual difference block data buffer. | |
Deblocking filter control command buffer. | |
Inverse quantization matrix buffer. | |
Slice-control buffer. | |
Bitstream data buffer. | |
Motion vector buffer. | |
Film grain synthesis data buffer. |
Receives a reference to the start of the memory buffer.
Receives the size of the buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The method locks the Direct3D surface that contains the buffer. When you are done using the buffer, call
This method might block if too many operations have been queued on the GPU. The method unblocks when a free buffer becomes available.
Releases a buffer that was obtained by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Starts the decoding operation.
Pointer to the
Reserved; set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid surface type. See Remarks. |
After this method is called, call
Each call to BeginFrame must have a matching call to EndFrame, and BeginFrame calls cannot be nested.
DXVA 1.0 migration note: Unlike the IAMVideoAccelerator::BeginFrame method, which specifies the buffer as an index, this method takes a reference directly to the uncompressed buffer.
The surface pointed to by pRenderTarget must be created by calling
Signals the end of the decoding operation.
Reserved.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Executes a decoding operation on the current frame.
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You must call
Provides access to DirectX Video Acceleration (DXVA) decoder services. Use this interface to query which hardware-accelerated decoding operations are available and to create DXVA video decoder devices.
To get a reference to this interface, call
Applies to: desktop apps only
Provides access to DirectX Video Acceleration (DXVA) decoder services. Use this interface to query which hardware-accelerated decoding operations are available and to create DXVA video decoder devices.
To get a reference to this interface, call
Retrieves an array of GUIDs that identifies the decoder devices supported by the graphics hardware.
Receives the number of GUIDs.
Receives an array of GUIDs. The size of the array is retrieved in the Count parameter. The method allocates the memory for the array. The caller must free the memory by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Error from the Direct3D device. |
| The Microsoft Basic Display Adapter is being used, or the Direct3D 11 device type is the reference rasterizer. These devices do not support video decoders. |
The following decoder GUIDs are defined. Some of these GUIDs have alternate names, shown in parentheses.
Description | |
---|---|
DXVA2_ModeH264_A (DXVA2_ModeH264_MoComp_NoFGT) | H.264 motion compensation (MoComp), no film grain technology (FGT). |
DXVA2_ModeH264_B (DXVA2_ModeH264_MoComp_FGT) | H.264 MoComp, FGT. |
DXVA2_ModeH264_C (DXVA2_ModeH264_IDCT_NoFGT) | H.264 inverse discrete cosine transform (IDCT), no FGT. |
DXVA2_ModeH264_D (DXVA2_ModeH264_IDCT_FGT) | H.264 IDCT, FGT. |
DXVA2_ModeH264_E (DXVA2_ModeH264_VLD_NoFGT) | H.264 VLD, no FGT. |
DXVA2_ModeH264_F (DXVA2_ModeH264_VLD_FGT) | H.264 variable-length decoder (VLD), FGT. |
DXVA2_ModeMPEG2_IDCT | MPEG-2 IDCT. |
DXVA2_ModeMPEG2_MoComp | MPEG-2 MoComp. |
DXVA2_ModeMPEG2_VLD | MPEG-2 VLD. |
DXVA2_ModeVC1_A (DXVA2_ModeVC1_PostProc) | VC-1 post processing. |
DXVA2_ModeVC1_B (DXVA2_ModeVC1_MoComp) | VC-1 MoComp. |
DXVA2_ModeVC1_C (DXVA2_ModeVC1_IDCT) | VC-1 IDCT. |
DXVA2_ModeVC1_D (DXVA2_ModeVC1_VLD) | VC-1 VLD. |
DXVA2_ModeWMV8_A (DXVA2_ModeWMV8_PostProc) | Windows Media Video 8 post processing. |
DXVA2_ModeWMV8_B (DXVA2_ModeWMV8_MoComp) | Windows Media Video 8 MoComp. |
DXVA2_ModeWMV9_A (DXVA2_ModeWMV9_PostProc) | Windows Media Video 9 post processing. |
DXVA2_ModeWMV9_B (DXVA2_ModeWMV9_MoComp) | Windows Media Video 9 MoComp. |
DXVA2_ModeWMV9_C (DXVA2_ModeWMV9_IDCT) | Windows Media Video 9 IDCT. |
Retrieves the supported render targets for a specified decoder device.
Receives the number of formats.
Receives an array of formats, specified as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets the configurations that are available for a decoder device.
A
A reference to a
Reserved. Set to
Receives the number of configurations.
Receives an array of
If this method succeeds, it returns
Creates a video decoder device.
Pointer to a
Pointer to a
Pointer to an array of
Size of the ppDecoderRenderTargets array. This value cannot be zero.
Receives a reference to the decoder's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a video decoder device.
Pointer to a
Pointer to a
Pointer to an array of
Size of the ppDecoderRenderTargets array. This value cannot be zero.
Receives a reference to the decoder's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
A reference to the
If this function succeeds, it returns
Sets the type of video memory for uncompressed video surfaces. This interface is used by video decoders and transforms.
The DirectShow enhanced video renderer (EVR) filter exposes this interface as a service on the filter's input pins. To obtain a reference to this interface, call
A video decoder can use this interface to enumerate the EVR filter's preferred surface types and then select the surface type. The decoder should then create surfaces of that type to hold the results of the decoding operation.
This interface does not define a way to clear the surface type. In the case of DirectShow, disconnecting two filters invalidates the surface type.
Sets the video surface type that a decoder will use for DirectX Video Acceleration (DXVA) 2.0.
By calling this method, the caller agrees to create surfaces of the type specified in the dwType parameter.
In DirectShow, during pin connection, a video decoder that supports DXVA 2.0 should call SetSurface with the value
The only way to undo the setting is to break the pin connection.
Retrieves a supported video surface type.
Zero-based index of the surface type to retrieve. Surface types are indexed in order of preference, starting with the most preferred type.
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index was out of range. |
Sets the video surface type that a decoder will use for DirectX Video Acceleration (DXVA) 2.0.
Member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The renderer does not support the specified surface type. |
By calling this method, the caller agrees to create surfaces of the type specified in the dwType parameter.
In DirectShow, during pin connection, a video decoder that supports DXVA 2.0 should call SetSurface with the value
The only way to undo the setting is to break the pin connection.
Retrieves the parameters that were used to create this device.
You can set any parameter to
Retrieves the DirectX Video Acceleration (DXVA) video processor service that created this video processor device.
Retrieves the capabilities of the video processor device.
Retrieves the DirectX Video Acceleration (DXVA) video processor service that created this video processor device.
Receives a reference to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the parameters that were used to create this device.
Receives the device
Pointer to a
Receives the render target format, specified as a
Receives the maximum number of streams supported by the device. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. At least one parameter must be non- |
You can set any parameter to
Retrieves the capabilities of the video processor device.
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the range of values for a video processor (ProcAmp) setting on this video processor device.
The ProcAmp setting to query. See ProcAmp Settings.
Pointer to a
If this method succeeds, it returns
Retrieves the range of values for an image filter supported by this device.
Filter setting to query. For more information, see DXVA Image Filter Settings.
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Performs a video process operation on one or more input samples and writes the result to a Direct3D9 surface.
A reference to the
A reference to a
A reference to an array of
The maximum number of input samples is given by the constant MAX_DEINTERLACE_SURFACES, defined in the header file dxva2api.h.
The number of elements in the pSamples array.
Reserved; set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Internal driver error. |
| Invalid arguments. |
When the method returns, the operation might not be complete.
If the method returns E_INVALIDARG, check for the following:
Provides access to DirectX Video Acceleration (DXVA) video processing services.
Use this interface to query which hardware-accelerated video processing operations are available and to create DXVA video processor devices. To obtain a reference to this interface, call
Applies to: desktop apps only
Provides access to DirectX Video Acceleration (DXVA) video processing services.
Use this interface to query which hardware-accelerated video processing operations are available and to create DXVA video processor devices. To obtain a reference to this interface, call
Registers a software video processing device.
Pointer to an initialization function.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets an array of GUIDs which identify the video processors supported by the graphics hardware.
Pointer to a
Receives the number of GUIDs.
Receives an array of GUIDs. The size of the array is retrieved in the pCount parameter. The method allocates the memory for the array. The caller must free the memory by calling CoTaskMemFree.
If this method succeeds, it returns
The following video processor GUIDs are predefined.
Description | |
---|---|
DXVA2_VideoProcBobDevice | Bob deinterlace device. This device uses a "bob" algorithm to deinterlace the video. Bob algorithms create missing field lines by interpolating the lines in a single field. |
DXVA2_VideoProcProgressiveDevice | Progressive video device. This device is available for progressive video, which does not require a deinterlace algorithm. |
DXVA2_VideoProcSoftwareDevice | Reference (software) device. |
The graphics device may define additional vendor-specific GUIDs. The driver provides the list of GUIDs in descending quality order. The mode with the highest quality is first in the list. To get the capabilities of each mode, call
Gets the render target formats that a video processor device supports. The list may include RGB and YUV formats.
A
A reference to a
Receives the number of formats.
Receives an array of formats, specified as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets a list of substream formats supported by a specified video processor device.
A
A reference to a
The format of the render target surface, specified as a
Receives the number of elements returned in the ppFormats array.
Receives an array of
If this method succeeds, it returns
Gets the capabilities of a specified video processor device.
A
A reference to a
The format of the render target surface, specified as a
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets the range of values for a video processor (ProcAmp) setting.
A
A reference to a
The format of the render target surface, specified as a
The ProcAmp setting to query. See ProcAmp Settings.
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the range of values for an image filter supported by a video processor device.
A
A reference to a
The format of the render target surface, specified as a
The filter setting to query. See DXVA Image Filter Settings.
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a video processor device.
A
A reference to a
The format of the render target surface, specified as a
The maximum number of substreams that will be used with this device.
Receives a reference to the video processor's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
A reference to the
If this function succeeds, it returns
Contains an initialization vector (IV) for 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher encryption.
For AES-CTR encyption, the pvPVPState member of the
The D3DAES_CTR_IV structure and the
The IV, in big-endian format.
The block count, in big-endian format.
Defines a 16-bit AYUV pixel value.
Contains the Cr chroma value (also called V).
Contains the Cb chroma value (also called U).
Contains the luma value.
Contains the alpha value.
Defines an 8-bit AYUV pixel value.
Contains the Cr chroma value (also called V).
Contains the Cb chroma value (also called U).
Contains the luma value.
Contains the alpha value.
Specifies how the output alpha values are calculated for blit operations when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
Specifies the alpha fill mode, as a member of the
If the FeatureCaps member of the
The default state value is
Zero-based index of the input stream to use for the alpha values. This member is used when the alpha fill mode is
To get the maximum number of streams, call
Specifies the background color for blit operations, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
The background color is used to fill the target rectangle wherever no video image appears. Areas outside the target rectangle are not affected. See
The color space of the background color is determined by the color space of the output. See
The alpha value of the background color is used only when the alpha fill mode is
The default background color is full-range RGB black, with opaque alpha.
If TRUE, the BackgroundColor member specifies a YCbCr color. Otherwise, it specifies an RGB color. The default device state is
A
Specifies whether the output is downsampled in a blit operation, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
If the Enable member is TRUE, the device downsamples the composed target rectangle to the size given in the Size member, and then scales it back to the size of the target rectangle.
The width and height of Size must be greater than zero. If the size is larger than the target rectangle, downsampling does not occur.
To use this state, the device must support downsampling, indicated by the
If the device does not support downsampling, the
Downsampling is sometimes used to reduce the quality of premium content when other forms of content protection are not available.
If TRUE, downsampling is enabled. Otherwise, downsampling is disabled and the Size member is ignored. The default state value is
The sampling size. The default value is (1,1).
Specifies the output color space for blit operations, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
The RGB_Range member applies to RGB output, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr (YUV) output. If the device performs color-space conversion on the background color, it uses the values that apply to both color spaces.
Extended YCbCr can be used with either transfer matrix. Extended YCbCr does not change the black point or white point: the black point is still 16 and the white point is still 235. However, extended YCbCr explicitly allows blacker-than-black values in the range 1-15, and whiter-than-white values in the range 236-254. When extended YCbCr is used, the driver should not clip the luma values to the nominal 16-235 range.
If the device supports extended YCbCr, it sets the
If the output format is a wide-gamut RGB format, output might fall outside the nominal [0...1] range of sRGB. This is particularly true if one or more input streams use extended YCbCr.
Specifies whether the output is intended for playback or video processing (such as editing or authoring). The device can optimize the processing based on the type. The default state value is 0 (playback).
Value | Meaning |
---|---|
| Playback. |
| Video processing. |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255). |
| Limited range (16-235). |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601. |
| ITU-R BT.709. |
Specifies whether the output uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is zero (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr. |
| Extended YCbCr (xvYCC). |
Contains data for a private blit state for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
Use this structure for proprietary or device-specific state parameters.
The caller allocates the pData array. Set the DataSize member to the size of the array in bytes. When retrieving the state data, you can set pData to
A
The size, in bytes, of the buffer pointed to by the pData member.
A reference to a buffer that contains the private state data. The DXVA-HD runtime passes this buffer directly to the device without validation.
Specifies the target rectangle for blitting, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
Specifies whether to use the target rectangle. The default state value is
Value | Meaning |
---|---|
| Use the target rectangle specified by the TargetRect member. |
| Use the entire destination surface as the target rectangle. Ignore the TargetRect member. |
Specifies the target rectangle. The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface. The default state value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Defines a color value for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
This union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
A
A
Specifies an RGB color value.
The RGB values have a nominal range of [0...1]. For an RGB format with n bits per channel, the value of each color component is calculated as follows:
val = f * ((1 << n) - 1)
For example, for RGB-32 (8 bits per channel), val = BYTE(f * 255.0).
For full-range RGB, reference black is (0.0, 0.0, 0.0), which corresponds to (0, 0, 0) in an 8-bit representation. For limited-range RGB, reference black is (0.0625, 0.0625, 0.0625), which corresponds to (16, 16, 16) in an 8-bit representation. For wide-gamut formats, the values might fall outside of the [0...1] range.
The red value.
The green value.
The blue value.
The alpha value. Values range from 0 (transparent) to 1 (opaque).
Defines a color value for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
This union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
A
A
Describes the configuration of a DXVA decoder device.
Defines the encryption protocol type for bit-stream data buffers. If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 0, the value must be DXVA_NoEncrypt.
Defines the encryption protocol type for macroblock control data buffers. If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 1, the value must be DXVA_NoEncrypt.
Defines the encryption protocol type for residual difference decoding data buffers (buffers containing spatial-domain data or sets of transform-domain coefficients for accelerator-based IDCT). If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 1, the value must be DXVA_NoEncrypt.
Indicates whether the host decoder sends raw bit-stream data. If the value is 1, the data for the pictures will be sent in bit-stream buffers as raw bit-stream content. If the value is 0, picture data will be sent using macroblock control command buffers. If either ConfigResidDiffHost or ConfigResidDiffAccelerator is 1, the value must be 0.
Specifies whether macroblock control commands are in raster scan order or in arbitrary order. If the value is 1, the macroblock control commands within each macroblock control command buffer are in raster-scan order. If the value is 0, the order is arbitrary. For some types of bit streams, forcing raster order either greatly increases the number of required macroblock control buffers that must be processed, or requires host reordering of the control information. Therefore, supporting arbitrary order can be more efficient.
Contains the host residual difference configuration. If the value is 1, some residual difference decoding data may be sent as blocks in the spatial domain from the host. If the value is 0, spatial domain data will not be sent.
Indicates the word size used to represent residual difference spatial-domain blocks for predicted (non-intra) pictures when using host-based residual difference decoding.
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 1, the host will send residual difference spatial-domain blocks for non-intra macroblocks using 8-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 0, the host will send residual difference spatial-domain blocks of data for non-intra macroblocks using 16-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 0, ConfigSpatialResid8 must be 0.
For intra pictures, spatial-domain blocks must be sent using 8-bit samples if bits-per-pixel (BPP) is 8, and using 16-bit samples if BPP > 8. If ConfigIntraResidUnsigned is 0, these samples are sent as signed integer values relative to a constant reference value of 2^(BPP-1), and if ConfigIntraResidUnsigned is 1, these samples are sent as unsigned integer values relative to a constant reference value of 0.
If the value is 1, 8-bit difference overflow blocks are subtracted rather than added. The value must be 0 unless ConfigSpatialResid8 is 1.
The ability to subtract differences rather than add them enables 8-bit difference decoding to be fully compliant with the full ±255 range of values required in video decoder specifications, because +255 cannot be represented as the addition of two signed 8-bit numbers, but any number in the range ±255 can be represented as the difference between two signed 8-bit numbers (+255 = +127 minus -128).
If the value is 1, spatial-domain blocks for intra macroblocks must be clipped to an 8-bit range on the host and spatial-domain blocks for non-intra macroblocks must be clipped to a 9-bit range on the host. If the value is 0, no such clipping is necessary by the host.
The value must be 0 unless ConfigSpatialResid8 is 0 and ConfigResidDiffHost is 1.
If the value is 1, any spatial-domain residual difference data must be sent in a chrominance-interleaved form matching the YUV format chrominance interleaving pattern. The value must be 0 unless ConfigResidDiffHost is 1 and the YUV format is NV12 or NV21.
Indicates the method of representation of spatial-domain blocks of residual difference data for intra blocks when using host-based difference decoding.
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 0, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 1, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
The value of the member must be 0 unless ConfigResidDiffHost is 1.
If the value is 1, transform-domain blocks of coefficient data may be sent from the host for accelerator-based IDCT. If the value is 0, accelerator-based IDCT will not be used. If both ConfigResidDiffHost and ConfigResidDiffAccelerator are 1, this indicates that some residual difference decoding will be done on the host and some on the accelerator, as indicated by macroblock-level control commands.
The value must be 0 if ConfigBitstreamRaw is 1.
If the value is 1, the inverse scan for transform-domain block processing will be performed on the host, and absolute indices will be sent instead for any transform coefficients. If the value is 0, the inverse scan will be performed on the accelerator.
The value must be 0 if ConfigResidDiffAccelerator is 0 or if Config4GroupedCoefs is 1.
If the value is 1, the IDCT specified in Annex W of ITU-T Recommendation H.263 is used. If the value is 0, any compliant IDCT can be used for off-host IDCT.
The H.263 annex does not comply with the IDCT requirements of MPEG-2 corrigendum 2, so the value must not be 1 for use with MPEG-2 video.
The value must be 0 if ConfigResidDiffAccelerator is 0, indicating purely host-based residual difference decoding.
If the value is 1, transform coefficients for off-host IDCT will be sent using the DXVA_TCoef4Group structure. If the value is 0, the DXVA_TCoefSingle structure is used. The value must be 0 if ConfigResidDiffAccelerator is 0 or if ConfigHostInverseScan is 1.
Specifies how many frames the decoder device processes at any one time.
Contains decoder-specific configuration information.
Describes a video stream for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
The display driver can use the information in this structure to optimize the capabilities of the video processor. For example, some capabilities might not be exposed for high-definition (HD) content, for performance reasons.
Frame rates are expressed as ratios. For example, 30 frames per second (fps) is expressed as 30:1, and 29.97 fps is expressed as 30000:1001. For interlaced content, a frame consists of two fields, so the frame rate is half the field rate.
If the application will composite two or more input streams, use the largest stream for the values of InputWidth and InputHeight.
A member of the
The frame rate of the input video stream, specified as a
The width of the input frames, in pixels.
The height of the input frames, in pixels.
The frame rate of the output video stream, specified as a
The width of the output frames, in pixels.
The height of the output frames, in pixels.
Specifies a custom rate for frame-rate conversion or inverse telecine (IVTC).
The CustomRate member gives the rate conversion factor, while the remaining members define the pattern of input and output samples.
Here are some example uses for this structure:
Frame rate conversion from 60p to 120p (doubling the frame rate).
Reverse 2:3 pulldown (IVTC) from 60i to 24p.
(Ten interlaced fields are converted into four progressive frames.)
The ratio of the output frame rate to the input frame rate, expressed as a
The number of output frames that will be generated for every N input samples, where N = InputFramesOrFields.
If TRUE, the input stream must be interlaced. Otherwise, the input stream must be progressive.
The number of input fields or frames for every N output frames that will be generated, where N = OutputFrames.
Describes a buffer sent from a decoder to a DirectX Video Acceleration (DXVA) device.
This structure corresponds closely to the DXVA_BufferDescription structure in DXVA 1, but some of the fields are no longer used in DXVA 2.
Identifies the type of buffer passed to the accelerator. Must be one of the following values.
Value | Meaning |
---|---|
Picture decoding parameter buffer. | |
Macroblock control command buffer. | |
Residual difference block data buffer. | |
Deblocking filter control command buffer. | |
Inverse quantization matrix buffer. | |
Slice-control buffer. | |
Bitstream data buffer. | |
Motion vector buffer. | |
Film grain synthesis data buffer. | |
Reserved. Set to zero.
Specifies the offset of the relevant data from the beginning of the buffer, in bytes. Currently this value must be zero.
Specifies the amount of relevant data in the buffer, in bytes. The location of the last byte of content in the buffer is DataOffset + DataSize − 1.
Specifies the macroblock address of the first macroblock in the buffer. The macroblock address is given in raster scan order.
Specifies the number of macroblocks of data in the buffer. This count includes skipped macroblocks. This value must be zero if the data buffer type is one of the following: picture decoding parameters, inverse-quantization matrix, AYUV, IA44/AI44, DPXD, Highlight, or DCCMD.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Pointer to a byte array that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
Contains parameters for the
Contains private data for the
This structure corresponds to parameters of the IAMVideoAccelerator::Execute method in DirectX Video Acceleration (DXVA) version 1.
Describes the format of a video stream.
Most of the values in this structure can be translated directly to and from
Describes the interlacing of the video frames. Contains a value from the
Describes the chroma siting. Contains a value from the
Describes the nominal range of the Y'CbCr or RGB color data. Contains a value from the
Describes the transform from Y'PbPr (component video) to studio R'G'B'. Contains a value from the
Describes the intended viewing conditions. Contains a value from the
Describes the color primaries. Contains a value from the
Describes the gamma correction transfer function. Contains a value from the
Use this member to access all of the bits in the union.
Defines the range of supported values for an image filter.
The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [-180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (or 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
Contains parameters for a DirectX Video Acceleration (DXVA) image filter.
Filter level.
Filter threshold.
Filter radius.
Returns a
You can use this function for DirectX Video Acceleration (DXVA) operations that require alpha values expressed as fixed-point numbers.
Defines a video frequency.
The value 0/0 indicates an unknown frequency. Values of the form n/0, where n is not zero, are invalid. Values of the form 0/n, where n is not zero, indicate a frequency of zero.
Numerator of the frequency.
Denominator of the frequency.
Contains values for DirectX Video Acceleration (DXVA) video processing operations.
Brightness value.
Contrast value.
Hue value.
Saturation value.
Contains a rational number (ratio).
Values of the form 0/n are interpreted as zero. The value 0/0 is interpreted as zero. However, these values are not necessarily valid in all contexts.
Values of the form n/0, where n is nonzero, are invalid.
The numerator of the ratio.
The denominator of the ratio.
Contains per-stream data for the
Specifies the planar alpha value for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
For each pixel, the destination color value is computed as follows:
Cd = Cs * (As * Ap * Ae) + Cd * (1.0 - As * Ap * Ae)
where:

- Cd = Color value of the destination pixel.
- Cs = Color value of the source pixel.
- As = Per-pixel source alpha.
- Ap = Planar alpha value.
- Ae = Palette-entry alpha value, or 1.0 (see Note).

Note: Palette-entry alpha values apply only to palettized color formats, and only when the device supports the
The destination alpha value is computed according to the
To get the device capabilities, call
If TRUE, alpha blending is enabled. Otherwise, alpha blending is disabled. The default state value is
Specifies the planar alpha value as a floating-point number from 0.0 (transparent) to 1.0 (opaque).
If the Enable member is
Specifies the pixel aspect ratio (PAR) for the source and destination rectangles.
Pixel aspect ratios of the form 0/n and n/0 are not valid.
If the Enable member is
If TRUE, the SourceAspectRatio and DestinationAspectRatio members contain valid values. Otherwise, the pixel aspect ratios are unspecified.
A
A
Specifies the format for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
The surface format, specified as a
The default state value is
Specifies the destination rectangle for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
Specifies whether to use the destination rectangle, or use the entire output surface. The default state value is
Value | Meaning |
---|---|
| Use the destination rectangle given in the DestinationRect member. |
| Use the entire output surface as the destination rectangle. |
The destination rectangle, which defines the portion of the output surface where the source rectangle is blitted. The destination rectangle is given in pixel coordinates, relative to the output surface. The default value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Specifies the level for a filtering operation on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
For a list of image filters that are defined for DXVA-HD, see
If TRUE, the filter is enabled. Otherwise, the filter is disabled.
The level for the filter. The meaning of this value depends on the implementation. To get the range and default value of a particular filter, call the
If the Enable member is
Specifies how a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream is interlaced.
Some devices do not support interlaced RGB. Interlaced RGB support is indicated by the
Some devices do not support interlaced formats with palettized color. This support is indicated by the
To get the device's capabilities, call
The video interlacing, specified as a
The default state value is
Specifies the color space for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
The RGB_Range member applies to RGB input, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr (YUV) input.
In some situations, the device might perform an intermediate color conversion on the input stream. If so, it uses the flags that apply to both color spaces. For example, suppose the device converts from RGB to YCbCr. If the RGB_Range member is 0 and the YCbCr_Matrix member is 1, the device will convert from full-range RGB to BT.709 YCbCr.
If the device supports xvYCC, it returns the
Specifies whether the input stream contains video or graphics. The device can optimize the processing based on the type. The default state value is 0 (video).
Value | Meaning |
---|---|
| Video. |
| Graphics. |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255). |
| Limited range (16-235). |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601. |
| ITU-R BT.709. |
Specifies whether the input stream uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is 0 (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr. |
| Extended YCbCr (xvYCC). |
Specifies the luma key for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
To use this state, the device must support luma keying, indicated by the
If the device does not support luma keying, the
If the input format is RGB, the device must also support the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
If TRUE, luma keying is enabled. Otherwise, luma keying is disabled. The default value is
The lower bound for the luma key. The range is [0...1]. The default state value is 0.0.
The upper bound for the luma key. The range is [0...1]. The default state value is 0.0.
Specifies the output frame rate for an input stream when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
The output rate might require the device to convert the frame rate of the input stream. If so, the value of RepeatFrame controls whether the device creates interpolated frames or simply repeats input frames.
Specifies how the device performs frame-rate conversion, if required. The default state value is
Value | Meaning |
---|---|
| The device repeats frames. |
| The device interpolates frames. |
Specifies the output rate, as a member of the
Specifies a custom output rate, as a
To get the list of custom rates supported by the video processor, call
Contains the color palette entries for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
This stream state is used for input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use the first 16 entries in the list. Formats with 8 bpp use the first 256 entries.
If a pixel has a palette index greater than the number of entries, the device treats the pixel as being white with opaque alpha. For full-range RGB, this value will be (255, 255, 255, 255); for YCbCr the value will be (255, 235, 128, 128).
The caller allocates the pEntries array. Set the Count member to the number of elements in the array. When retrieving the state data, you can set the pEntries member to
If the DXVA-HD device does not have the
To get the device capabilities, call
The number of palette entries. The default state value is 0.
A reference to an array of
Contains data for a private stream state, for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
Use this structure for proprietary or device-specific state parameters.
The caller allocates the pData array. Set the DataSize member to the size of the array in bytes. When retrieving the state data, you can set the pData member to
A
Value | Meaning |
---|---|
| Retrieves statistics about inverse telecine. The state data (pData) is a |
A device can define additional GUIDs for use with custom stream states. The interpretation of the data is then defined by the device.
The size, in bytes, of the buffer pointed to by the pData member.
A reference to a buffer that contains the private state data. The DXVA-HD runtime passes this buffer directly to the device, without validation.
Contains inverse telecine (IVTC) statistics from a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
If the DXVA-HD device supports IVTC statistics, it can detect when the input video contains telecined frames. You can use this information to enable IVTC in the device.
To enable IVTC statistics, do the following:
sizeof( ).
To get the most recent IVTC statistics from the device, call the
Typically, an application would use this feature as follows:
Specifies whether IVTC statistics are enabled. The default state value is
If the driver detects that the frames are telecined, and is able to perform inverse telecine, this field contains a member of the
The number of consecutive telecined frames that the device has detected.
The index of the most recent input field. The value of this member equals the most recent value of the InputFrameOrField member of the
Specifies the source rectangle for an input stream when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
Specifies whether to blit the entire input surface or just the source rectangle. The default state value is
Value | Meaning |
---|---|
| Use the source rectangle specified in the SourceRect member. |
| Blit the entire input surface. Ignore the SourceRect member. |
The source rectangle, which defines the portion of the input sample that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface. The default state value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Contains references to functions implemented by a software plug-in for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
If you provide a software plug-in for DXVA-HD, the plug-in must implement a set of functions that are defined by the function reference types in this structure.
At initialization, the DXVA-HD runtime calls the plug-in device's PDXVAHDSW_Plugin function. This function fills in a
Function reference of type PDXVAHDSW_CreateDevice.
Function reference of type PDXVAHDSW_ProposeVideoPrivateFormat.
Function reference of type PDXVAHDSW_GetVideoProcessorDeviceCaps.
Function reference of type PDXVAHDSW_GetVideoProcessorOutputFormats.
Function reference of type PDXVAHDSW_GetVideoProcessorInputFormats.
Function reference of type PDXVAHDSW_GetVideoProcessorCaps.
Function reference of type PDXVAHDSW_GetVideoProcessorCustomRates.
Function reference of type PDXVAHDSW_GetVideoProcessorFilterRange.
Function reference of type PDXVAHDSW_DestroyDevice.
Function reference of type PDXVAHDSW_CreateVideoProcessor.
Function reference of type PDXVAHDSW_SetVideoProcessBltState.
Function reference of type PDXVAHDSW_GetVideoProcessBltStatePrivate.
Function reference of type PDXVAHDSW_SetVideoProcessStreamState.
Function reference of type PDXVAHDSW_GetVideoProcessStreamStatePrivate.
Function reference of type PDXVAHDSW_VideoProcessBltHD.
Function reference of type PDXVAHDSW_DestroyVideoProcessor.
Defines the range of supported values for a DirectX Video Acceleration (DXVA) operation.
All values in this structure are specified as
Minimum supported value.
Maximum supported value.
Default value.
Minimum increment between values.
Describes a video stream for a DXVA decoder device or video processor device.
The InputSampleFreq member gives the frame rate of the decoded video stream, as received by the video renderer. The OutputFrameFreq member gives the frame rate of the video that is displayed after deinterlacing. If the input video is interlaced and the samples contain interleaved fields, the output frame rate is twice the input frame rate. If the input video is progressive or contains single fields, the output frame rate is the same as the input frame rate.
Decoders should set the values of InputSampleFreq and OutputFrameFreq if the frame rate is known. Otherwise, set these members to 0/0 to indicate an unknown frame rate.
Width of the video frame, in pixels.
Height of the video frame, in pixels.
Additional details about the video format, specified as a
Surface format, specified as a
Frame rate of the input video stream, specified as a
Frame rate of the output video, specified as a
Level of data protection required when the user accessible bus (UAB) is present. If TRUE, the video must be protected when a UAB is present. If
Reserved. Must be zero.
Contains parameters for the
Describes the capabilities of a DirectX Video Acceleration (DXVA) video processor mode.
Identifies the type of device. The following values are defined.
Value | Meaning |
---|---|
DXVA 2.0 video processing is emulated by using DXVA 1.0. An emulated device may be missing significant processing capabilities and have lower image quality and performance. | |
Hardware device. | |
Software device. | |
The Direct3D memory pool used by the device.
Number of forward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Number of backward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Reserved. Must be zero.
Identifies the deinterlacing technique used by the device. This value is a bitwise OR of one or more of the following flags.
Value | Meaning |
---|---|
The algorithm is unknown or proprietary. | |
The algorithm creates missing lines by repeating the line either above or below the missing line. This algorithm produces a jagged image and is not recommended. | |
The algorithm creates missing lines by averaging two lines. Slight vertical adjustments are made so that the resulting image does not bob up and down. | |
The algorithm creates missing lines by applying a [−1, 9, 9, −1]/16 filter across four lines. Slight vertical adjustments are made so that the resulting image does not bob up and down. | |
The algorithm uses median filtering to recreate the pixels in the missing lines. | |
The algorithm uses an edge filter to create the missing lines. In this process, spatial directional filters are applied to determine the orientation of edges in the picture content. Missing pixels are created by filtering along (rather than across) the detected edges. | |
The algorithm uses spatial or temporal interpolation, switching between the two on a field-by-field basis, depending on the amount of motion. | |
The algorithm uses spatial or temporal interpolation, switching between the two on a pixel-by-pixel basis, depending on the amount of motion. | |
The algorithm identifies objects within a sequence of video fields. Before it recreates the missing pixels, it aligns the movement axes of the individual objects in the scene to make them parallel with the time axis. | |
The device can undo the 3:2 pulldown process used in telecine. | |
Specifies the available video processor (ProcAmp) operations. The value is a bitwise OR of ProcAmp Settings constants.
Specifies operations that the device can perform concurrently with the
Value | Meaning |
---|---|
The device can convert the video from YUV color space to RGB color space, with at least 8 bits of precision for each RGB component. | |
The device can stretch or shrink the video horizontally. If this capability is present, aspect ratio correction can be performed at the same time as deinterlacing. | |
The device can stretch or shrink the video vertically. If this capability is present, image resizing and aspect ratio correction can be performed at the same time. | |
The device can alpha blend the video. | |
The device can operate on a subrectangle of the video frame. If this capability is present, source images can be cropped before further processing occurs. | |
The device can accept substreams in addition to the primary video stream, and can composite them. | |
The device can perform color adjustments on the primary video stream and substreams, at the same time that it deinterlaces the video and composites the substreams. The destination color space is defined in the DestFormat member of the | |
The device can convert the video from YUV to RGB color space when it writes the deinterlaced and composited pixels to the destination surface. An RGB destination surface could be an off-screen surface, texture, Direct3D render target, or combined texture/render target surface. An RGB destination surface must use at least 8 bits for each color channel. | |
The device can perform an alpha blend operation with the destination surface when it writes the deinterlaced and composited pixels to the destination surface. | |
The device can downsample the output frame, as specified by the ConstrictionSize member of the | |
The device can perform noise filtering. | |
The device can perform detail filtering. | |
The device can perform a constant alpha blend to the entire video stream when it composites the video stream and substreams. | |
The device can perform accurate linear RGB scaling, rather than performing it in nonlinear gamma space. | |
The device can correct the image to compensate for artifacts introduced when performing scaling in nonlinear gamma space. | |
The deinterlacing algorithm preserves the original field lines from the interlaced field picture, unless scaling is also applied. For example, in deinterlacing algorithms such as bob and median filtering, the device copies the original field into every other scan line and then applies a filter to reconstruct the missing scan lines. As a result, the original field can be recovered by discarding the scan lines that were interpolated. If the image is scaled vertically, however, the original field lines cannot be recovered. If the image is scaled horizontally (but not vertically), the resulting field lines will be equivalent to scaling the original field picture. (In other words, discarding the interpolated scan lines will yield the same result as stretching the original picture without deinterlacing.) |
Specifies the supported noise filters. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
Noise filtering is not supported. | |
Unknown or proprietary filter. | |
Median filter. | |
Temporal filter. | |
Block noise filter. | |
Mosquito noise filter. | |
Specifies the supported detail filters. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
Detail filtering is not supported. | |
Unknown or proprietary filter. | |
Edge filter. | |
Sharpen filter. | |
Specifies an input sample for the
Specifies the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
A
The number of past reference frames required to perform the optimal video processing.
The number of future reference frames required to perform the optimal video processing.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of custom output frame rates. To get the list of custom frame rates, call the
Specifies the capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
In DXVA-HD, the device stores state information for each input stream. These states persist between blits. With each blit, the application selects which streams to enable or disable. Disabling a stream does not affect the state information for that stream.
The MaxStreamStates member gives the maximum number of stream states that can be set by the application. The MaxInputStreams member gives the maximum number of streams that can be enabled during a blit. These two values can differ.
To set the state data for a stream, call
Specifies the device type, as a member of the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The memory pool that is required for the input video surfaces.
The number of supported output formats. To get the list of output formats, call the
The number of supported input formats. To get the list of input formats, call the
The number of video processors. Each video processor represents a distinct set of processing capabilities. To get the capabilities of each video processor, call the
The maximum number of input streams that can be enabled at the same time.
The maximum number of input streams for which the device can store state data.
Enables two threads to share the same Microsoft Direct3D 11 device.
This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Creates an instance of the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
Sets the Microsoft Direct3D device or notifies the device manager that the Direct3D device was reset.
A reference to the
When you first create the DXGI Device Manager, call this method with a reference to the Direct3D device. (The device manager does not create the device; the caller must provide the device reference initially.) Also call this method if the Direct3D device becomes lost and you need to reset the device or create a new device.
The resetToken parameter ensures that only the component that originally created the device manager can invalidate the current device.
If this method succeeds, all open device handles become invalid.
Unlocks the Microsoft Direct3D device.
A handle to the Direct3D device. To get the device handle, call
Call this method to release the device after calling
Enables two threads to share the same Microsoft Direct3D 11 device.
This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
Queries the Microsoft Direct3D device for an interface.
A handle to the Direct3D device. To get the device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device supports the following interfaces:
Receives a reference to the requested interface. The caller must release the interface.
If the method returns
For more info see, Supporting Direct3D 11 Video Decoding in Media Foundation.
Gives the caller exclusive access to the Microsoft Direct3D device.
A handle to the Direct3D device. To get the device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device will support the following interfaces:
Specifies whether to wait for the device lock. If the device is already locked and this parameter is TRUE, the method blocks until the device is unlocked. Otherwise, if the device is locked and this parameter is
Receives a reference to the requested interface. The caller must release the interface.
When you are done using the Direct3D device, call
If the method returns
If fBlock is TRUE, this method can potentially deadlock. For example, it will deadlock if a thread calls LockDevice and then waits on another thread that calls LockDevice. It will also deadlock if a thread calls LockDevice twice without calling UnlockDevice in between.
Gets a handle to the Microsoft Direct3D device.
Receives the device handle.
Enables two threads to share the same Microsoft Direct3D 11 device.
This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
Tests whether a Microsoft Direct3D device handle is valid.
A handle to the Direct3D device. To get the device handle, call
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The specified handle is not a Direct3D device handle. |
| The device handle is invalid. |
If the method returns
Unlocks the Microsoft Direct3D device.
A handle to the Direct3D device. To get the device handle, call
Reserved.
If this method succeeds, it returns
Call this method to release the device after calling
Defines the ASF indexer options.
The indexer creates a new index object.
The indexer returns values for reverse playback.
The indexer creates an index object for a live ASF stream.
Defines the ASF multiplexer options.
The multiplexer automatically adjusts the bit rate of the ASF content in response to the characteristics of the streams being multiplexed.
Defines the selection options for an ASF stream.
No samples from the stream are delivered.
Only samples from the stream that are clean points are delivered.
All samples from the stream are delivered.
Defines the ASF splitter options.
The splitter delivers samples for the ASF content in reverse order to accommodate reverse playback.
The splitter delivers samples for streams that are protected with Windows Media Digital Rights Management.
Defines status conditions for the
Defines the ASF stream selector options.
The stream selector will not set thinning. Thinning is the process of removing samples from a stream to reduce the bit rate.
The stream selector will use the average bit rate of streams when selecting streams.
Specifies the type of work queue for the
Defines flags for serializing and deserializing attribute stores.
If this flag is set,
Specifies how to compare the attributes on two objects.
Check whether all the attributes in pThis exist in pTheirs and have the same data, where pThis is the object whose Compare method is being called and pTheirs is the object given in the pTheirs parameter.
Check whether all the attributes in pTheirs exist in pThis and have the same data, where pThis is the object whose Compare method is being called and pTheirs is the object given in the pTheirs parameter.
Check whether both objects have identical attributes with the same data.
Check whether the attributes that exist in both objects have the same data.
Find the object with the fewest number of attributes, and check if those attributes exist in the other object and have the same data.
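The five match types above can be sketched over plain key/value maps (a simplified model; the real stores are IMFAttributes objects, and these helper names are hypothetical):

```cpp
#include <cassert>
#include <map>
#include <string>

using AttrStore = std::map<std::string, int>;

// "Our items": every attribute in pThis must exist in pTheirs with equal data.
static bool MatchOurItems(const AttrStore& ths, const AttrStore& theirs) {
    for (const auto& [key, value] : ths) {
        auto it = theirs.find(key);
        if (it == theirs.end() || it->second != value) return false;
    }
    return true;
}

// "Their items": every attribute in pTheirs must exist in pThis with equal data.
static bool MatchTheirItems(const AttrStore& ths, const AttrStore& theirs) {
    return MatchOurItems(theirs, ths);
}

// "All items": both stores have identical attributes with the same data.
static bool MatchAllItems(const AttrStore& ths, const AttrStore& theirs) {
    return MatchOurItems(ths, theirs) && MatchTheirItems(ths, theirs);
}

// "Intersection": attributes present in both stores must have equal data.
static bool MatchIntersection(const AttrStore& ths, const AttrStore& theirs) {
    for (const auto& [key, value] : ths) {
        auto it = theirs.find(key);
        if (it != theirs.end() && it->second != value) return false;
    }
    return true;
}

// "Smaller": the store with fewer attributes must be a subset of the other.
static bool MatchSmaller(const AttrStore& ths, const AttrStore& theirs) {
    return ths.size() <= theirs.size() ? MatchOurItems(ths, theirs)
                                       : MatchTheirItems(ths, theirs);
}
```

For example, a 640x480 store compared against a 640x480 store that also carries a frame-rate attribute matches under "our items", "intersection", and "smaller", but not under "their items" or "all items".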
Defines the data type for a key/value pair.
Unsigned 32-bit integer.
Unsigned 64-bit integer.
Floating-point number.
Byte array.
Specifies values for audio constriction.
Values defined by the
Audio is not constricted.
Audio is down sampled to 48 kHz/16-bit.
Audio is down sampled to 44 kHz/16-bit.
Audio is down sampled to 14 kHz/16-bit.
Audio is muted.
Contains flags for the
Specifies the origin for a seek request.
The seek position is specified relative to the start of the stream.
The seek position is specified relative to the current read/write position in the stream.
Specifies a type of capture device.
An audio capture device, such as a microphone.
A video capture device, such as a webcam.
Specifies a type of capture sink.
A recording sink, for capturing audio and video to a file.
A preview sink, for previewing live audio or video.
A photo sink, for capturing still images.
Defines the values for the source stream category.
Specifies a video preview stream.
Specifies a video capture stream.
Specifies an independent photo stream.
Specifies a dependent photo stream.
Specifies an audio stream.
Specifies an unsupported stream.
Contains flags that describe the characteristics of a clock. These flags are returned by the
Defines properties of a clock.
Jitter values are always negative. In other words, the time returned by
Defines the state of a clock.
The clock is invalid. A clock might be invalid for several reasons. Some clocks return this state before the first start. This state can also occur if the underlying device is lost.
The clock is running. While the clock is running, the time advances at the clock's frequency and current rate.
The clock is stopped. While stopped, the clock reports a time of 0.
The clock is paused. While paused, the clock reports the time it was paused.
Specifies how the topology loader connects a topology node. This enumeration is used with the
The SetOutputStreamState method sets the Device MFT output stream state and media type.
This method transitions the output stream to a specified state, with a specified media type set on the stream. The DTM uses it when the Device Source requests a change to a specific output stream's state and media type. The Device MFT should change the specified output stream's media type and state to the requested values.
If the incoming media type and stream state are the same as the current media type and stream state, the method returns
If the incoming media type and the current media type of the stream are the same, the Device MFT must change the stream's state to the requested value and return the appropriate
When a change in the output stream's media type requires a corresponding change in the input, the Device MFT must post the
As an example, consider a Device MFT that has two input streams and three output streams, where Output 1 and Output 2 source from Input 1 and stream at 720p. Now suppose that Output 2's media type changes to 1080p. To satisfy this request, the Device MFT must change the Input 1 media type to 1080p, by posting
Stream ID of the input stream where the state and media type need to be changed.
Preferred media type for the input stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState which the input stream should transition to.
Must be zero.
The DMO_INPUT_DATA_BUFFER_FLAGS enumeration defines flags that describe an input buffer.
The beginning of the data is a synchronization point.
The buffer's time stamp is valid.
The buffer's indicated time length is valid.
Media Foundation transforms (MFTs) are an evolution of the transform model first introduced with DirectX Media Objects (DMOs). This topic summarizes the main ways in which MFTs differ from DMOs. Read this topic if you are already familiar with the DMO interfaces, or if you want to convert an existing DMO into an MFT.
This topic contains the following sections:
The DMO_INPUT_STREAM_INFO_FLAGS enumeration defines flags that describe an input stream.
The stream requires whole samples. Samples must not span multiple buffers, and buffers must not contain partial samples.
Each buffer must contain exactly one sample.
All the samples in this stream must be the same size.
The DMO performs lookahead on the incoming data, and may hold multiple input buffers for this stream.
The DMO_PROCESS_OUTPUT_FLAGS enumeration defines flags that specify output processing requests.
Discard the output when the reference to the output buffer is
The DMO_SET_TYPE_FLAGS enumeration defines flags for setting the media type on a stream.
The
Test the media type but do not set it.
Clear the media type that was set for the stream.
Contains flags that are used to configure the Microsoft DirectShow enhanced video renderer (EVR) filter.
Enables dynamic adjustments to video quality during playback.
Specifies the requested access mode for opening a file.
Read mode.
Write mode.
Read and write mode.
Specifies the behavior when opening a file.
Use the default behavior.
Open the file with no system caching.
Subsequent open operations can have write access to the file.
Note: Requires Windows 7 or later.
Specifies how to open or create a file.
Open an existing file. Fail if the file does not exist.
Create a new file. Fail if the file already exists.
Open an existing file and truncate it, so that the size is zero bytes. Fail if the file does not already exist.
If the file does not exist, create a new file. If the file exists, open it.
Create a new file. If the file exists, overwrite the file.
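The five open/create behaviors above correspond closely to POSIX open(2) flag combinations. The mapping below is an illustration under that assumption; the enum and helper names are hypothetical stand-ins, not the actual Media Foundation constants.

```cpp
#include <cassert>
#include <fcntl.h>

// Hypothetical mapping from the five open/create behaviors to POSIX open() flags.
enum class OpenMode {
    FailIfNotExist,  // open existing; fail if the file does not exist
    FailIfExist,     // create new; fail if the file already exists
    ResetIfExist,    // truncate existing to zero bytes; fail if missing
    AppendIfExist,   // open existing or create new
    DeleteIfExist    // always start from an empty file
};

static int ToPosixFlags(OpenMode mode) {
    switch (mode) {
        case OpenMode::FailIfNotExist: return 0;                  // plain open
        case OpenMode::FailIfExist:    return O_CREAT | O_EXCL;   // exclusive create
        case OpenMode::ResetIfExist:   return O_TRUNC;            // truncate, no create
        case OpenMode::AppendIfExist:  return O_CREAT;            // open or create
        case OpenMode::DeleteIfExist:  return O_CREAT | O_TRUNC;  // create or overwrite
    }
    return 0;
}
```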
Describes the type of data provided by a frame source.
The values of this enumeration are used with the MF_DEVICESTREAM_ATTRIBUTE_FRAMESOURCE_TYPES attribute.
The frame source provides color data.
The frame source provides infrared data.
The frame source provides depth data.
The frame source provides custom data.
Specifies the likelihood that the Media Engine can play a specified type of media resource.
The Media Engine cannot play the resource.
The Media Engine might be able to play the resource.
The Media Engine can probably play the resource.
Contains flags for the
Defines error status codes for the Media Engine.
The values greater than zero correspond to error codes defined for the MediaError object in HTML5.
No error.
The process of fetching the media resource was stopped at the user's request.
A network error occurred while fetching the media resource.
An error occurred while decoding the media resource.
The media resource is not supported.
An error occurred while encrypting the media resource.
Supported in Windows 8.1 and later.
Defines event codes for the Media Engine.
The application receives Media Engine events through the
Values below 1000 correspond to events defined in HTML 5 for media elements.
The Media Engine has started to load the source. See
The Media Engine is loading the source.
The Media Engine has suspended a load operation.
The Media Engine cancelled a load operation that was in progress.
An error occurred.
Event Parameter | Description |
---|---|
param1 | A member of the |
param2 | An |
The Media Engine has switched to the
The Load algorithm is stalled, waiting for data.
The Media Engine is switching to the playing state. See
The Media Engine has paused. See
The Media Engine has loaded enough source data to determine the duration and dimensions of the source.
The Media Engine has loaded enough data to render some content (for example, a video frame).
Playback has stopped because the next frame is not available.
Playback has started. See
Playback can start, but the Media Engine might need to stop to buffer more data.
The Media Engine can probably play through to the end of the resource, without stopping to buffer data.
The Media Engine has started seeking to a new playback position. See
The Media Engine has seeked to a new playback position. See
The playback position has changed. See
Playback has reached the end of the source. This event is not sent if GetLoop is TRUE.
The playback rate has changed. See
The duration of the media source has changed. See
The audio volume changed. See
The output format of the media source has changed.
Event Parameter | Description |
---|---|
param1 | Zero if the video format changed, 1 if the audio format changed. |
param2 | Zero. |
The Media Engine flushed any pending events from its queue.
The playback position reached a timeline marker. See
The audio balance changed. See
The Media Engine has finished downloading the source data.
The media source has started to buffer data.
The media source has stopped buffering data.
The
The Media Engine's Load algorithm is waiting to start.
Event Parameter | Description |
---|---|
param1 | A handle to a waitable event, of type HANDLE. |
param2 | Zero. |
If Media Engine is created with the
If the Media Engine is not created with the
The first frame of the media source is ready to render.
Raised when a new track is added or removed.
Supported in Windows 8.1 and later.
Raised when there is new information about the Output Protection Manager (OPM).
This event will be raised when an OPM failure occurs, but ITA allows fallback without the OPM. In this case, constriction can be applied.
This event will not be raised when there is an OPM failure and the fallback also fails. For example, if ITA blocks playback entirely when OPM cannot be established.
Supported in Windows 8.1 and later.
Raised when one of the component streams of a media stream fails. This event is only raised if the media stream contains other component streams that did not fail.
Raised when one of the component streams of a media stream fails. This event is only raised if the media stream contains other component streams that did not fail.
Specifies media engine extension types.
Specifies the content protection requirements for a video frame.
The video frame should be protected.
Direct3D surface protection must be applied to any surface that contains the frame.
Direct3D anti-screen-scrape protection must be applied to any surface that contains the frame.
Defines media key error codes for the media engine.
Unknown error occurred.
An error with the client occurred.
An error with the service occurred.
An error with the output occurred.
An error occurred related to a hardware change.
An error with the domain occurred.
Defines network status codes for the Media Engine.
The initial state.
The Media Engine has started the resource selection algorithm, and has selected a media resource, but is not using the network.
The Media Engine is loading a media resource.
The Media Engine has started the resource selection algorithm, but has not selected a media resource.
Defines the status of the Output Protection Manager (OPM).
Defines preload hints for the Media Engine. These values correspond to the preload attribute of the HTMLMediaElement interface in HTML5.
The preload attribute is missing.
The preload attribute is an empty string. This value is equivalent to
The preload attribute is "none". This value is a hint to the user agent not to preload the resource.
The preload attribute is "metadata". This value is a hint to the user agent to fetch the resource metadata.
The preload attribute is "auto". This value is a hint to the user agent to preload the entire resource.
Contains flags that specify whether the Media Engine will play protected content, and whether the Media Engine will use the Protected Media Path (PMP).
These flags are used with the
Defines ready-state values for the Media Engine.
These values correspond to constants defined for the HTMLMediaElement.readyState attribute in HTML5.
No data is available.
Some metadata is available, including the duration and, for video files, the video dimensions. No media data is available.
There is media data for the current playback position, but not enough data for playback or seeking.
There is enough media data to enable some playback or seeking. The amount of data might be as little as the next video frame.
There is enough data to play the resource, based on the current rate at which the resource is being fetched.
Specifies the layout for a packed 3D video frame.
None.
The views are packed side-by-side in a single frame.
The views are packed top-to-bottom in a single frame.
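For a packed frame, each view occupies a fixed sub-rectangle of the frame. A small sketch (the helper and type names are hypothetical, not part of the API) computes the base view's rectangle for each layout:

```cpp
#include <cassert>

struct Rect { int x, y, w, h; };
enum class Packing { None, SideBySide, TopBottom };

// Returns the sub-rectangle holding the base (left or top) view of a
// packed 3D frame. With Packing::None, the whole frame is a single view.
static Rect BaseViewRect(Packing packing, int frameW, int frameH) {
    switch (packing) {
        case Packing::SideBySide: return {0, 0, frameW / 2, frameH}; // left half
        case Packing::TopBottom:  return {0, 0, frameW, frameH / 2}; // top half
        default:                  return {0, 0, frameW, frameH};
    }
}
```

For a 1920x1080 frame, side-by-side packing yields a 960x1080 base view and top-to-bottom packing yields a 1920x540 base view.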
Defines values for the media engine seek mode.
This enumeration is used with the IMFMediaEngineEx::SetCurrentTimeEx method.
Specifies normal seek.
Specifies an approximate seek.
Identifies statistics that the Media Engine tracks during playback. To get a playback statistic from the Media Engine, call
In the descriptions that follow, the data type and value-type tag for the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Identifies the kind of media stream that failed.
The stream type is unknown.
The stream is an audio stream.
The stream is a video stream.
Defines the characteristics of a media source. These flags are retrieved by the
To skip forward or backward in a playlist, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies options for the
The following typedef is defined for combining flags from this enumeration.
typedef UINT32 MFP_CREATION_OPTIONS;
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains flags for the
Some of these flags, marked [out], convey information back to the MFPlay player object. The application should set or clear these flags as appropriate, before returning from the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains flags that describe a media item.
The following typedef is defined for combining flags from this enumeration.
typedef UINT32 MFP_MEDIAITEM_CHARACTERISTICS;
Not supported.
Note: Earlier versions of this documentation described the _MFT_DRAIN_TYPE enumeration incorrectly. The enumeration is not supported. For more information, see
Defines flags for the
Indicates the status of an input stream on a Media Foundation transform (MFT).
The input stream can receive more data at this time. To deliver more input data, call
Describes an input stream on a Media Foundation transform (MFT).
Before the client sets the media types on the transform, the only flags guaranteed to be accurate are the
In the default processing model, an MFT holds a reference count on the sample that it receives in ProcessInput. It does not process the sample immediately inside ProcessInput. When ProcessOutput is called, the MFT produces output data and then discards the input sample. The following variations on this model are defined:
If an MFT never holds onto input samples between ProcessInput and ProcessOutput, it can set the
If an MFT holds some input samples beyond the next call to ProcessOutput, it can set the
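The default model described above can be pictured as a one-deep queue between ProcessInput and ProcessOutput. This is a toy sketch, not a real MFT: the class and the string "samples" are stand-ins for COM interfaces and media samples.

```cpp
#include <cassert>
#include <optional>
#include <string>

// Toy model of the default MFT processing pattern: ProcessInput holds a
// reference to the sample; ProcessOutput produces output, then discards it.
class ToyTransform {
public:
    // Returns false ("not accepting") while an input sample is still held.
    bool ProcessInput(const std::string& sample) {
        if (m_held) return false;
        m_held = sample;
        return true;
    }
    // Returns no value ("need more input") when nothing is held.
    std::optional<std::string> ProcessOutput() {
        if (!m_held) return std::nullopt;
        std::string out = "processed:" + *m_held;
        m_held.reset();   // the input sample is released only here
        return out;
    }
private:
    std::optional<std::string> m_held;
};
```

An MFT that never held the sample past ProcessInput would clear m_held before returning from ProcessInput, which is the variation the first flag above describes.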
Each media sample (
For uncompressed audio formats, this flag is always implied. (It is valid to set the flag, but not required.) An uncompressed audio frame should never span more than one media sample.
Each media sample that the client provides as input must contain exactly one unit of data, as defined for the
If this flag is present, the
An MFT that processes uncompressed audio should not set this flag. The MFT should accept buffers that contain more than a single audio frame, for efficiency.
All input samples must be the same size. The size is given in the cbSize member of the
The MFT might hold one or more input samples after
The MFT does not hold input samples after the
If this flag is absent, the MFT might hold a reference count on the samples that are passed to the ProcessInput method. The client must not re-use or delete the buffer memory until the MFT releases the sample's
If this flag is absent, it does not guarantee that the MFT holds a reference count on the input samples. It is valid for an MFT to release input samples in ProcessInput even if the MFT does not set this flag. However, setting this flag might enable the client to optimize how it re-uses buffers.
An MFT should not set this flag if it ever holds onto an input sample after returning from ProcessInput.
This input stream can be removed by calling
This input stream is optional. The transform can produce output without receiving input from this stream. The caller can deselect the stream by not setting a media type or by setting a
The MFT can perform in-place processing. In this mode, the MFT directly modifies the input buffer. When the client calls ProcessOutput, the same sample that was delivered to this stream is returned in the output stream that has a matching stream identifier. This flag implies that the MFT holds onto the input buffer, so this flag cannot be combined with the
If this flag is present, the MFT must set the
Defines flags for the
The values in this enumeration are not bit flags, so they should not be combined with a bitwise OR. Also, the caller should test for these flags with the equality operator, not a bitwise AND:
// Correct.
if (Buffer.dwStatus == )
{
    ...
}

// Incorrect.
if ((Buffer.dwStatus & ) != 0)
{
    ...
}
Indicates whether a Media Foundation transform (MFT) can produce output data.
There is a sample available for at least one output stream. To retrieve the available output samples, call
Describes an output stream on a Media Foundation transform (MFT).
Before the client sets the media types on the MFT, the only flag guaranteed to be accurate is the
The
MFT_OUTPUT_STREAM_DISCARDABLE: The MFT discards output data only if the client calls ProcessOutput with the
MFT_OUTPUT_STREAM_LAZY_READ: If the client continues to call ProcessInput without collecting the output from this stream, the MFT eventually discards the output. If all output streams have the
If neither of these flags is set, the MFT never discards output data.
Each media sample (
For uncompressed audio formats, this flag is always implied. (It is valid to set the flag, but not required.) An uncompressed audio frame should never span more than one media sample.
Each output sample contains exactly one unit of data, as defined for the
If this flag is present, the
An MFT that outputs uncompressed audio should not set this flag. For efficiency, it should output more than one audio frame at a time.
All output samples are the same size.
The MFT can discard the output data from this output stream, if requested by the client. To discard the output, set the
This output stream is optional. The client can deselect the stream by not setting a media type or by setting a
The MFT provides the output samples for this stream, either by allocating them internally or by operating directly on the input samples. The MFT cannot use output samples provided by the client for this stream.
If this flag is not set, the MFT must set cbSize to a nonzero value in the
The MFT can either provide output samples for this stream or it can use samples that the client allocates. This flag cannot be combined with the
If the MFT does not set this flag or the
The MFT does not require the client to process the output for this stream. If the client continues to send input data without getting the output from this stream, the MFT simply discards the previous input.
The MFT might remove this output stream during streaming. This flag typically applies to demultiplexers, where the input data contains multiple streams that can start and stop during streaming. For more information, see
Defines flags for the setting or testing the media type on a Media Foundation transform (MFT).
Test the proposed media type, but do not set it.
Defines the different error states of the Media Source Extension.
Specifies no error.
Specifies an error with the network.
Specifies an error with decoding.
Specifies an unknown error.
Defines the different ready states of the Media Source Extension.
The media source is closed.
The media source is open.
The media source is ended.
Specifies how the user's credentials will be used.
The credentials will be used to authenticate with a proxy.
The credentials will be sent over the network unencrypted.
The credentials must be from a user who is currently logged on.
Describes options for the caching network credentials.
Allow the credential cache object to save credentials in persistent storage.
Do not allow the credential cache object to cache the credentials in memory. This flag cannot be combined with the
The user allows credentials to be sent over the network in clear text.
By default,
Do not set this flag without notifying the user that credentials might be sent in clear text.
Specifies how the credential manager should obtain user credentials.
The application implements the credential manager, which must expose the
The credential cache object sets the
The credential manager should prompt the user to provide the credentials.
Note: Requires Windows 7 or later.
The credentials are saved to persistent storage. This flag acts as a hint for the application's UI. If the application prompts the user for credentials, the UI can indicate that the credentials have already been saved.
Specifies how the default proxy locator will specify the connection settings to a proxy server. The application must set these values in the MFNETSOURCE_PROXYSETTINGS property.
Defines the status of the cache for a media file or entry.
The cache for a file or entry does not exist.
The cache for a file or entry is growing.
The cache for a file or entry is completed.
Indicates the type of control protocol that is used in streaming or downloading.
The protocol type has not yet been determined.
The protocol type is HTTP. This includes HTTPv9, WMSP, and HTTP download.
The protocol type is Real Time Streaming Protocol (RTSP).
The content is read from a file. The file might be local or on a remote share.
The protocol type is multicast.
Note: Requires Windows 7 or later. Defines statistics collected by the network source. The values in this enumeration define property identifiers (PIDs) for the MFNETSOURCE_STATISTICS property.
To retrieve statistics from the network source, call
In the descriptions that follow, the data type and value-type tag for the
Describes the type of transport used in streaming or downloading data (TCP or UDP).
The data transport type is UDP.
The data transport type is TCP.
Specifies whether color data includes headroom and toeroom. Headroom allows for values beyond 1.0 white ("whiter than white"), and toeroom allows for values below reference 0.0 black ("blacker than black").
This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_NominalRange enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
Unknown nominal range.
Equivalent to
Equivalent to
The normalized range [0...1] maps to [0...255] for 8-bit samples or [0...1023] for 10-bit samples.
The normalized range [0...1] maps to [16...235] for 8-bit samples or [64...940] for 10-bit samples.
The normalized range [0...1] maps to [48...208] for 8-bit samples or [64...940] for 10-bit samples.
The normalized range [0...1] maps to [64...127] for 8-bit samples or [256...508] for 10-bit samples. This range is used in the xRGB color space.
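Each 8-bit mapping above is a simple linear scale from the normalized range onto the stated code range. A sketch (illustrative helper; the name is not part of the API):

```cpp
#include <cassert>
#include <cmath>

// Maps a normalized value in [0...1] to an 8-bit code for a given nominal
// range. Full range (0-255) has no headroom or toeroom; studio range
// (16-235) reserves both.
static int ToEightBit(double normalized, int rangeMin, int rangeMax) {
    double code = rangeMin + normalized * (rangeMax - rangeMin);
    return static_cast<int>(std::lround(code));
}
```

With the studio range, reference black (0.0) lands on code 16 and reference white (1.0) on code 235, leaving codes below 16 for toeroom and above 235 for headroom.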
Note: Requires Windows 7 or later.
Defines the object types that are created by the source resolver.
Media source. You can query the object for the
Byte stream. You can query the object for the
Invalid type.
Defines protection levels for MFPROTECTION_ACP.
Specifies ACP is disabled.
Specifies ACP is level one.
Specifies ACP is level two.
Specifies ACP is level three.
Reserved.
Defines protection levels for MFPROTECTION_CGMSA.
These flags are equivalent to the OPM_CGMSA_Protection_Level enumeration constants used in the Output Protection Protocol (OPM).
CGMS-A is disabled.
The protection level is Copy Freely.
The protection level is Copy No More.
The protection level is Copy One Generation.
The protection level is Copy Never.
Redistribution control (also called the broadcast flag) is required. This flag can be combined with the other flags.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Defines event types for the
For each event type, the
In your implementation of OnMediaPlayerEvent, you must cast the pEventHeader parameter to the correct structure type. A set of macros is defined for this purpose. These macros check the value of the event type and return
Event type | Event structure | Pointer cast macro |
---|---|---|
MFP_GET_PLAY_EVENT | |
MFP_GET_PAUSE_EVENT | |
MFP_GET_STOP_EVENT | |
MFP_GET_POSITION_SET_EVENT | |
MFP_GET_RATE_SET_EVENT | |
MFP_GET_MEDIAITEM_CREATED_EVENT | |
MFP_GET_MEDIAITEM_SET_EVENT | |
MFP_GET_FRAME_STEP_EVENT | |
MFP_GET_MEDIAITEM_CLEARED_EVENT | |
MFP_GET_MF_EVENT | |
MFP_GET_ERROR_EVENT | |
MFP_GET_PLAYBACK_ENDED_EVENT | |
MFP_GET_ACQUIRE_USER_CREDENTIAL_EVENT |
Defines policy settings for the
Specifies the object type for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies the current playback state.
Contains flags that define the behavior of the
Defines actions that can be performed on a stream.
No action.
Play the stream.
Copy the stream.
Export the stream to another format.
Extract the data from the stream and pass it to the application. For example, acoustic echo cancellation requires this action.
Reserved.
Reserved.
Reserved.
Last member of the enumeration.
Contains flags for the
If the decoder sets the
Specifies how aggressively a pipeline component should drop samples.
In drop mode, a component drops samples more or less aggressively, depending on the level of the drop mode. The specific algorithm used depends on the component. Mode 1 is the least aggressive mode, and mode 5 is the most aggressive. A component is not required to implement all five levels.
For example, suppose an encoded video stream has three B-frames between each pair of P-frames. A decoder might implement the following drop modes:
Mode 1: Drop one out of every three B frames.
Mode 2: Drop one out of every two B frames.
Mode 3: Drop all delta frames.
Modes 4 and 5: Unsupported.
The enhanced video renderer (EVR) can drop video frames before sending them to the EVR mixer.
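The decoder example above can be sketched as a per-frame decision (hypothetical logic; real components choose their own algorithms):

```cpp
#include <cassert>

enum class FrameKind { I, P, B };

// Decides whether to drop a frame under the example drop modes.
// bIndex counts B-frames seen so far (0-based) and drives the
// one-in-three / one-in-two patterns.
static bool ShouldDrop(int dropMode, FrameKind kind, int bIndex) {
    switch (dropMode) {
        case 0:  return false;                                   // drop mode disabled
        case 1:  return kind == FrameKind::B && bIndex % 3 == 0; // 1 of every 3 B frames
        case 2:  return kind == FrameKind::B && bIndex % 2 == 0; // 1 of every 2 B frames
        default: return kind != FrameKind::I;                    // mode 3+: all delta frames
    }
}
```

A component that does not support modes 4 and 5 would treat them as mode 3, as the example notes; that choice is modeled by the default case here.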
Normal processing of samples. Drop mode is disabled.
First drop mode (least aggressive).
Second drop mode.
Third drop mode.
Fourth drop mode.
Fifth drop mode (most aggressive, if it is supported; see Remarks).
Maximum number of drop modes. This value is not a valid flag.
Specifies the quality level for a pipeline component. The quality level determines how the component consumes or produces samples.
Each successive quality level decreases the amount of processing that is needed, while also reducing the resulting quality of the audio or video. The specific algorithm used to reduce quality depends on the component. Mode 1 is the least aggressive mode, and mode 5 is the most aggressive. A component is not required to implement all five levels. Also, the same quality level might not be comparable between two different components.
Video decoders can often reduce quality by leaving out certain post-processing steps. The enhanced video renderer (EVR) can sometimes reduce quality by switching to a different deinterlacing mode.
Normal quality.
One level below normal quality.
Two levels below normal quality.
Three levels below normal quality.
Four levels below normal quality.
Five levels below normal quality.
Maximum number of quality levels. This value is not a valid flag.
Specifies the direction of playback (forward or reverse).
Forward playback.
Reverse playback.
Defines the version number for sample protection.
No sample protection.
Version 1.
Version 2.
Version 3.
Specifies how a video stream is interlaced.
In the descriptions that follow, upper field refers to the field that contains the leading half scan line. Lower field refers to the field that contains the first full scan line.
Scan lines in the lower field are 0.5 scan line lower than those in the upper field. In NTSC television, a frame consists of a lower field followed by an upper field. In PAL television, a frame consists of an upper field followed by a lower field.
The upper field is also called the even field, the top field, or field 2. The lower field is also called the odd field, the bottom field, or field 1.
If the interlace mode is
The type of interlacing is not known.
Progressive frames.
Specifies how to open or create a file.
Open an existing file. Fail if the file does not exist.
Create a new file. Fail if the file already exists.
Open an existing file and truncate it, so that the size is zero bytes. Fail if the file does not already exist.
If the file does not exist, create a new file. If the file exists, open it.
Create a new file. If the file exists, overwrite the file.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Specifies whether a stream associated with an
Contains flags for adding a topology to the sequencer source, or updating a topology already in the queue.
This topology is the last topology in the sequence.
Retrieves an interface from the enhanced video renderer (EVR), or from the video mixer or video presenter.
This method can be called only from inside the
The presenter can use this method to query the EVR and the mixer. The mixer can use it to query the EVR and the presenter. Which objects are queried depends on the caller and the service
Caller | Service | Objects queried |
---|---|---|
Presenter | MR_VIDEO_RENDER_SERVICE | EVR |
Presenter | MR_VIDEO_MIXER_SERVICE | Mixer |
Mixer | MR_VIDEO_RENDER_SERVICE | Presenter and EVR |
The following interfaces are available from the EVR:
IMediaEventSink. This interface is documented in the DirectShow SDK documentation.
The following interfaces are available from the mixer:
Specifies the scope of the search. Currently this parameter is ignored. Use the value
Reserved, must be zero.
Service
Interface identifier of the requested interface.
Array of interface references. If the method succeeds, each member of the array contains either a valid interface reference or
Pointer to a value that specifies the size of the ppvObjects array. The value must be at least 1. In the current implementation, there is no reason to specify an array size larger than one element. The value is not changed on output.
Defines flags for the
Defines the behavior of the
These flags are optional, and are not mutually exclusive. If no flags are set, the Media Session resolves the topology and then adds it to the queue of pending presentations.
Describes the current status of a call to the
Specifies how the ASF file sink should apply Windows Media DRM.
Undefined action.
Encode the content using Windows Media DRM. Use this flag if the source content does not have DRM protection.
Transcode the content using Windows Media DRM. Use this flag if the source content has Windows Media DRM protection and you want to change the encoding parameters but not the DRM protection.
Transcrypt the content. Use this flag if the source content has DRM protection and you want to change the DRM protection; for example, if you want to convert from Windows Media DRM version 1 to Windows Media DRM version 7 or later.
Reserved. Do not use.
Contains flags for the
Contains flags that indicate the status of the
Contains values that specify common video formats.
Reserved; do not use.
NTSC television (720 x 480i).
PAL television (720 x 576i).
DVD, NTSC standard (720 x 480).
DVD, PAL standard (720 x 576).
DV video, PAL standard.
DV video, NTSC standard.
ATSC digital television, SD (480i).
ATSC digital television, HD interlaced (1080i)
ATSC digital television, HD progressive (720p)
Defines stream marker information for the
If the Streaming Audio Renderer receives an
Specifies how text is aligned in its parent block element.
Text is aligned at the start of its parent block element.
Text is aligned at the end of its parent block element.
Text is aligned in the center of its parent block element.
Specifies the type of a timed text cue event.
The cue has become active.
The cue has become inactive.
All cues have been deactivated.
Specifies how text is decorated (underlined and so on).
Text isn't decorated.
Text is underlined.
Text has a line through it.
Text has a line over it.
Specifies how text is aligned with the display.
Text is aligned before an element.
Text is aligned after an element.
Text is aligned in the center between elements.
Specifies the kind of error that occurred with a timed text track.
This enumeration is used to return error information from the
No error occurred.
A fatal error occurred.
An error occurred with the data format of the timed text track.
A network error occurred when trying to load the timed text track.
An internal error occurred.
Specifies the font style of the timed text.
The font style is normal, sometimes referred to as roman.
The font style is oblique.
The font style is italic.
Specifies how text appears when the parent element is scrolled.
Text pops on when the parent element is scrolled.
Text rolls up when the parent element is scrolled.
Specifies the kind of timed text track.
The kind of timed text track is unknown.
The kind of timed text track is subtitles.
The kind of timed text track is closed captions.
The kind of timed text track is metadata.
Specifies the units in which the timed text is measured.
The timed text is measured in pixels.
The timed text is measured as a percentage.
Specifies the sequence in which text is written on its parent element.
Text is written from left to right and top to bottom.
Text is written from right to left and top to bottom.
Text is written from top to bottom and right to left.
Text is written from top to bottom and left to right.
Text is written from left to right.
Text is written from right to left.
Text is written from top to bottom.
Contains flags for the
Defines messages for a Media Foundation transform (MFT). To send a message to an MFT, call
Some messages require specific actions from the MFT. These messages have "MESSAGE" in the message name. Other messages are informational; they notify the MFT of some action by the client, and do not require any particular response from the MFT. These messages have "NOTIFY" in the message name. Except where noted, an MFT should not rely on the client sending notification messages.
Specifies whether the topology loader enables Microsoft DirectX Video Acceleration (DXVA) in the topology.
This enumeration is used with the
If an MFT supports DXVA, the MFT must return TRUE for the
Previous versions of Microsoft Media Foundation supported DXVA only for decoders.
The topology loader enables DXVA on the decoder if possible, and drops optional Media Foundation transforms (MFTs) that do not support DXVA.
The topology loader disables all video acceleration. This setting forces software processing, even when the decoder supports DXVA.
The topology loader enables DXVA on every MFT that supports it.
Specifies whether the topology loader will insert hardware-based Media Foundation transforms (MFTs) into the topology.
This enumeration is used with the
Use only software MFTs. Do not use hardware-based MFTs. This mode is the default, for backward compatibility with existing applications.
Use hardware-based MFTs when possible, and software MFTs otherwise. This mode is the recommended one.
If hardware-based MFTs are available, the topology loader will insert them. If not, the connection will fail.
Supported in Windows 8.1 and later.
Defines status flags for the
Specifies the status of a topology during playback.
This enumeration is used with the
For a single topology, the Media Session sends these status flags in numerical order, starting with
This value is not used.
The topology is ready to start. After this status flag is received, you can use the Media Session's
The Media Session has started to read data from the media sources in the topology.
The Media Session modified the topology, because the format of a stream changed.
The media sinks have switched from the previous topology to this topology. This status value is not sent for the first topology that is played. For the first topology, the
Playback of this topology is complete. The Media Session might still use the topology internally. The Media Session does not completely release the topology until it sends the next
Defines the type of a topology node.
Output node. Represents a media sink in the topology.
Source node. Represents a media stream in the topology.
Transform node. Represents a Media Foundation Transform (MFT) in the topology.
Tee node. A tee node does not hold a reference to an object. Instead, it represents a fork in the stream. A tee node has one input and multiple outputs, and samples from the upstream node are delivered to all of the downstream nodes.
Reserved.
Defines at what times a transform in a topology is drained.
The transform is drained when the end of a stream is reached. It is not drained when markout is reached at the end of a segment.
The transform is drained whenever a topology ends.
The transform is never drained.
Defines when a transform in a topology is flushed.
The transform is flushed whenever the stream changes, including seeks and new segments.
The transform is flushed when seeking is performed on the stream.
The transform is never flushed during streaming. It is flushed only when the object is released.
Defines the profile flags that are set in the
These flags are checked by
For more information about the stream settings that an application can specify, see Using the Transcode API.
If the
The
For the video stream, the required attributes are as follows:
If these attributes are not set,
Use the
For example, assume that your input source is an MP3 file. You set the container to be
Defines flags for the
Contains flags for registering and enumerating Media Foundation transforms (MFTs).
These flags are used in the following functions:
For registration, these flags describe the MFT that is being registered. Some flags do not apply in that context. For enumeration, these flags control which MFTs are selected in the enumeration. For more details about the precise meaning of these flags, see the reference topics for
For registration, the
Defines flags for processing output samples in a Media Foundation transform (MFT).
Do not produce output for streams in which the pSample member of the
Regenerates the last output sample.
Note: Requires Windows 8.
Indicates the status of a call to
If the MFT sets this flag, the ProcessOutput method returns
Call
Call
Call
Until these steps are completed, all further calls to ProcessOutput return
Indicates whether the URL is from a trusted source.
The validity of the URL cannot be guaranteed because it is not signed. The application should warn the user.
The URL is the original one provided with the content.
The URL was originally signed and has been tampered with. The file should be considered corrupted, and the application should not navigate to the URL without issuing a strong warning to the user.
Specifies how 3D video frames are stored in memory.
This enumeration is used with the
The base view is stored in a single buffer. The other view is discarded.
Each media sample contains multiple buffers, one for each view.
Each media sample contains one buffer, with both views packed side-by-side into a single frame.
Each media sample contains one buffer, with both views packed top-and-bottom into a single frame.
Specifies how to output a 3D stereoscopic video stream.
This enumeration is used with the
Output the base view only. Discard the other view.
Output a stereo view (two buffers).
Specifies how a 3D video frame is stored in a media sample.
This enumeration is used with the
The exact layout of the views in memory is specified by the following media type attributes:
Each view is stored in a separate buffer. The sample contains one buffer per view.
All of the views are stored in the same buffer. The sample contains a single buffer.
Specifies the aspect-ratio mode.
Do not maintain the aspect ratio of the video. Stretch the video to fit the output rectangle.
Preserve the aspect ratio of the video by letterboxing or pillarboxing within the output rectangle.
Correct the aspect ratio if the physical size of the display device does not match the display resolution. For example, if the native resolution of the monitor is 1600 by 1200 (4:3) but the display resolution is 1280 by 1024 (5:4), the monitor will display non-square pixels.
If this flag is set, you must also set the
Apply a non-linear horizontal stretch if the aspect ratio of the destination rectangle does not match the aspect ratio of the source rectangle.
The non-linear stretch algorithm preserves the aspect ratio in the middle of the picture and stretches (or shrinks) the image progressively more toward the left and right. This mode is useful when viewing 4:3 content full-screen on a 16:9 display, instead of pillar-boxing. Non-linear vertical stretch is not supported, because the visual results are generally poor.
This mode may cause performance degradation.
If this flag is set, you must also set the
Contains flags that define the chroma encoding scheme for Y'Cb'Cr' data.
These flags are used with the
For more information about these values, see the remarks for the DXVA2_VideoChromaSubSampling enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
Unknown encoding scheme.
Chroma should be reconstructed as if the underlying video was progressive content, rather than skipping fields or applying chroma filtering to minimize artifacts from reconstructing 4:2:0 interlaced chroma.
Chroma samples are aligned horizontally with the luma samples, or with multiples of the luma samples. If this flag is not set, chroma samples are located 1/2 pixel to the right of the corresponding luma sample.
Chroma samples are aligned vertically with the luma samples, or with multiples of the luma samples. If this flag is not set, chroma samples are located 1/2 pixel down from the corresponding luma sample.
The U and V planes are aligned vertically. If this flag is not set, the chroma planes are assumed to be out of phase by 1/2 chroma sample, alternating between a line of U followed by a line of V.
Specifies the chroma encoding scheme for MPEG-2 video. Chroma samples are aligned horizontally with the luma samples, but are not aligned vertically. The U and V planes are aligned vertically.
Specifies the chroma encoding scheme for MPEG-1 video.
Specifies the chroma encoding scheme for PAL DV video.
Chroma samples are aligned vertically and horizontally with the luma samples. YUV formats such as 4:4:4, 4:2:2, and 4:1:1 are always cosited in both directions and should use this flag.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Specifies the type of copy protection required for a video stream.
Use these flags with the
No copy protection is required.
Analog copy protection should be applied.
Digital copy protection should be applied.
Contains flags that describe a video stream.
These flags are used in the
Developers are encouraged to use media type attributes instead of using the
Flags | Media Type Attribute |
---|---|
Use the |
The following flags were defined to describe per-sample interlacing information, but are obsolete:
Instead, components should use sample attributes to describe per-sample interlacing information, as described in the topic Video Interlacing.
Specifies how a video stream is interlaced.
In the descriptions that follow, upper field refers to the field that contains the leading half scan line. Lower field refers to the field that contains the first full scan line.
Scan lines in the lower field are 0.5 scan line lower than those in the upper field. In NTSC television, a frame consists of a lower field followed by an upper field. In PAL television, a frame consists of an upper field followed by a lower field.
The upper field is also called the even field, the top field, or field 2. The lower field is also called the odd field, the bottom field, or field 1.
If the interlace mode is
The type of interlacing is not known.
Progressive frames.
Interlaced frames. Each frame contains two fields. The field lines are interleaved, with the upper field appearing on the first line.
Interlaced frames. Each frame contains two fields. The field lines are interleaved, with the lower field appearing on the first line.
Interlaced frames. Each frame contains one field, with the upper field appearing first.
Interlaced frames. Each frame contains one field, with the lower field appearing first.
The stream contains a mix of interlaced and progressive modes.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Describes the optimal lighting for viewing a particular set of video content.
This enumeration is used with the
The optimal lighting is unknown.
Bright lighting; for example, outdoors.
Medium brightness; for example, normal office lighting.
Dim; for example, a living room with a television and additional low lighting.
Dark; for example, a movie theater.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Contains flags that are used to configure how the enhanced video renderer (EVR) performs deinterlacing.
To set these flags, call the
These flags control some trade-offs between video quality and rendering speed. The constants named "MFVideoMixPrefs_Allow..." enable lower-quality settings, but only when the quality manager requests a drop in quality. The constants named "MFVideoMixPrefs_Force..." force the EVR to use lower-quality settings regardless of what the quality manager requests. (For more information about the quality manager, see
Currently two lower-quality modes are supported, as described in the following table. Either is preferable to dropping an entire frame.
Mode | Description |
---|---|
Half interlace | The EVR's video mixer skips the second field (relative to temporal order) of each interlaced frame. The video mixer still deinterlaces the first field, and this operation typically interpolates data from the second field. The overall frame rate is unaffected. |
Bob deinterlacing | The video mixer uses bob deinterlacing, even if the driver supports a higher-quality deinterlacing algorithm. |
Force the EVR to skip the second field (in temporal order) of every interlaced frame.
If the EVR is falling behind, allow it to skip the second field (in temporal order) of every interlaced frame.
If the EVR is falling behind, allow it to use bob deinterlacing, even if the driver supports a higher-quality deinterlacing mode.
Force the EVR to use bob deinterlacing, even if the driver supports a higher-quality mode.
The bitmask of valid flag values. This constant is not itself a valid flag.
Specifies whether to pad a video image so that it fits within a specified aspect ratio.
Use these flags with the
Do not pad the image.
Pad the image so that it can be displayed in a 4:3 area.
Pad the image so that it can be displayed in a 16:9 area.
Specifies the color primaries of a video source. The color primaries define how to convert colors from RGB color space to CIE XYZ color space.
This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_VideoPrimaries enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
The color primaries are unknown.
Reserved.
ITU-R BT.709. Also used for sRGB and scRGB.
ITU-R BT.470-4 System M (NTSC).
ITU-R BT.470-4 System B,G (NTSC).
SMPTE 170M.
SMPTE 240M.
EBU 3213.
SMPTE C (SMPTE RP 145).
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Defines algorithms for the video processor, which is used by the MF_VIDEO_PROCESSOR_ALGORITHM attribute.
Specifies how to flip a video image.
Do not flip the image.
Flip the image horizontally.
Flip the image vertically.
Specifies how to rotate a video image.
Do not rotate the image.
Rotate the image to the correct viewing orientation.
Contains flags that define how the enhanced video renderer (EVR) displays the video.
To set these flags, call
The flags named "MFVideoRenderPrefs_Allow..." cause the EVR to use lower-quality settings only when requested by the quality manager. (For more information, see
If this flag is set, the EVR does not draw the border color. By default, the EVR draws a border on areas of the destination rectangle that have no video. See
If this flag is set, the EVR does not clip the video when the video window straddles two monitors. By default, if the video window straddles two monitors, the EVR clips the video to the monitor that contains the largest area of video.
Note: Requires Windows 7 or later.
Allow the EVR to limit its output to match GPU bandwidth.
Note: Requires Windows 7 or later.
Force the EVR to limit its output to match GPU bandwidth.
Note: Requires Windows 7 or later.
Force the EVR to batch Direct3D Present calls. This optimization enables the system to enter idle states more frequently, which can reduce power consumption.
Note: Requires Windows 7 or later.
Allow the EVR to batch Direct3D Present calls.
Note: Requires Windows 7 or later.
Force the EVR to mix the video inside a rectangle that is smaller than the output rectangle. The EVR will then scale the result to the correct output size. The effective resolution will be lower if this setting is applied.
Note: Requires Windows 7 or later.
Allow the EVR to mix the video inside a rectangle that is smaller than the output rectangle.
Note: Requires Windows 7 or later.
Prevent the EVR from repainting the video window after a stop command. By default, the EVR repaints the video window black after a stop command.
Describes the rotation of the video image in the counter-clockwise direction.
This enumeration is used with the
The image is not rotated.
The image is rotated 90 degrees counter-clockwise.
The image is rotated 180 degrees.
The image is rotated 270 degrees counter-clockwise.
Describes the intended aspect ratio for a video stream.
Use these flags with the
The aspect ratio is unknown.
The source is 16:9 content encoded within a 4:3 area.
The source is 2.35:1 content encoded within a 16:9 or 4:3 area.
Specifies the conversion function from linear RGB to non-linear RGB (R'G'B').
These flags are used with the
For more information about these values, see the remarks for the DXVA2_VideoTransferFunction enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
Unknown. Treat as
Linear RGB (gamma = 1.0).
True 1.8 gamma, L' = L^(1/1.8).
True 2.0 gamma, L' = L^(1/2.0).
True 2.2 gamma, L' = L^(1/2.2). This transfer function is used in ITU-R BT.470-2 System M (NTSC).
ITU-R BT.709 transfer function. Gamma 2.2 curve with a linear segment in the lower range. This transfer function is used in BT.709, BT.601, SMPTE 296M, SMPTE 170M, BT.470, and SMPTE 274M. In addition, BT.1361 uses this function within the range [0...1].
SMPTE 240M transfer function. Gamma 2.2 curve with a linear segment in the lower range.
sRGB transfer function. Gamma 2.4 curve with a linear segment in the lower range.
True 2.8 gamma, L' = L^(1/2.8). This transfer function is used in ITU-R BT.470-2 System B, G (PAL).
Logarithmic transfer (100:1 range); for example, as used in H.264 video. Note: Requires Windows 7 or later.
Logarithmic transfer (316.22777:1 range); for example, as used in H.264 video. Note: Requires Windows 7 or later.
Symmetric ITU-R BT.709. Note: Requires Windows 7 or later.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Describes the conversion matrices between Y'PbPr (component video) and studio R'G'B'.
This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_VideoTransferMatrix enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
Unknown transfer matrix. Treat as
ITU-R BT.709 transfer matrix.
ITU-R BT.601 transfer matrix. Also used for SMPTE 170 and ITU-R BT.470-2 System B,G.
SMPTE 240M transfer matrix.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Defines messages for an enhanced video renderer (EVR) presenter. This enumeration is used with the
Contains flags that specify how to convert an audio media type.
Convert the media type to a
Convert the media type to a
Provides configuration information to the dispatching thread for a callback.
The GetParameters method returns information about the callback so that the dispatching thread can optimize the process that it uses to invoke the callback.
If the method returns a value other than zero in the pdwFlags parameter, your Invoke method must meet the requirements described here. Otherwise, the callback might delay the pipeline.
If you want default values for both parameters, return E_NOTIMPL. The default values are given in the parameter descriptions on this page.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Receives a flag indicating the behavior of the callback object's
Value | Meaning |
---|---|
| The callback does not take a long time to complete, but has no specific restrictions on what system calls it makes. The callback generally takes less than 30 milliseconds to complete. |
The callback does very minimal processing. It takes less than 1 millisecond to complete. The callback must be invoked from one of the following work queues: | |
Implies The callback must be invoked from one of the following work queues: | |
Blocking callback. | |
Reply callback. |
Receives the identifier of the work queue on which the callback is dispatched.
This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
If the work queue is not compatible with the value returned in pdwFlags, the Media Foundation platform returns
Creates the default video presenter for the enhanced video renderer (EVR).
Pointer to the owner of the object. If the object is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the video device interface that will be used for processing the video. Currently the only supported value is IID_IDirect3DDevice9.
IID of the requested interface on the video presenter. The video presenter exposes the
Receives a reference to the requested interface on the video presenter. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates the default video mixer for the enhanced video renderer (EVR).
Pointer to the owner of this object. If the object is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the video device interface that will be used for processing the video. Currently the only supported value is IID_IDirect3DDevice9.
IID of the requested interface on the video mixer. The video mixer exposes the
Receives a reference to the requested interface. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the default video mixer and video presenter for the enhanced video renderer (EVR).
Pointer to the owner of the video mixer. If the mixer is aggregated, pass a reference to the aggregating object's
Pointer to the owner of the video presenter. If the presenter is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the requested interface on the video mixer. The video mixer exposes the
Receives a reference to the requested interface on the video mixer. The caller must release the interface.
IID of the requested interface on the video presenter. The video presenter exposes the
Receives a reference to the requested interface on the video presenter. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates an instance of the enhanced video renderer (EVR) media sink.
Interface identifier (IID) of the requested interface on the EVR.
Receives a reference to the requested interface. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
This function creates the Media Foundation version of the EVR. To create the DirectShow EVR filter, call CoCreateInstance with the class identifier CLSID_EnhancedVideoRenderer.
Creates a media sample that manages a Direct3D surface.
A reference to the
Receives a reference to the sample's
If this function succeeds, it returns
The media sample created by this function exposes the following interfaces in addition to
If pUnkSurface is non-
Alternatively, you can set pUnkSurface to
Creates an object that allocates video samples.
The identifier of the interface to retrieve. Specify one of the following values:
Value | Meaning |
---|---|
| Retrieve an |
| Retrieve an |
| Retrieve an |
Receives a reference to the requested interface. The caller must release the interface.
If the function succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a new instance of the MFPlay player object.
If this function succeeds, it returns
Before calling this function, call CoInitialize(Ex) from the same thread to initialize the COM library.
Internally,
Creates the ASF Header Object object.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF profile object.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an ASF profile object from a presentation descriptor.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a presentation descriptor from an ASF profile object.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Splitter.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Multiplexer.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Indexer object.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a byte stream to access the index in an ASF stream.
Pointer to the
Byte offset of the index within the ASF stream. To get this value, call
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The call succeeded. |
| The offset specified in cbIndexStartOffset is invalid. |
Creates the ASF stream selector.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF media sink.
Pointer to a byte stream that will be used to write the ASF stream.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object that can be used to create the ASF media sink.
Null-terminated wide-character string that contains the output file name.
A reference to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object that can be used to create a Windows Media Video (WMV) encoder.
A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates an activation object that can be used to create a Windows Media Audio (WMA) encoder.
A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates an activation object for the ASF streaming sink.
The ASF streaming sink enables an application to write streaming Advanced Systems Format (ASF) packets to an HTTP byte stream.
A reference to a byte stream object in which the ASF media sink writes the streamed content.
Receives a reference to the
If this function succeeds, it returns
To create the ASF streaming sink in another process, call
An application can get a reference to the ASF ContentInfo Object by calling IUnknown::QueryInterface on the media sink object received in the ppIMediaSink parameter. The ContentInfo object is used to set the encoder configuration settings, provide stream properties supplied by an ASF profile, and add metadata information. These configuration settings populate the various ASF header objects of the encoded ASF file. For more information, see Setting Properties in the ContentInfo Object.
Creates an activation object for the ASF streaming sink.
The ASF streaming sink enables an application to write streaming Advanced Systems Format (ASF) packets to an HTTP byte stream. The activation object can be used to create the ASF streaming sink in another process.
A reference to the
A reference to an ASF ContentInfo Object that contains the properties that describe the ASF content. These settings can contain stream settings, encoding properties, and metadata. For more information about these properties, see Setting Properties in the ContentInfo Object.
Receives a reference to the
If this function succeeds, it returns
Starting in Windows 7, Media Foundation provides an ASF streaming sink that writes the content in a live streaming scenario. This function should be used in secure transcode scenarios where the media sink needs to be created and configured in a remote process. Like the ASF file sink, the new media sink performs ASF-related tasks such as writing the ASF header and generating data packets (muxing). The content is written to a caller-implemented byte stream such as an HTTP byte stream. The caller must also provide an activation object that the media sink can use to create the byte stream remotely.
In addition, it performs transcryption for streaming protected content. It hosts the Windows Media Digital Rights Management (DRM) for Network Devices Output Trust Authority (OTA) that handles the license request and response. For more information, see
The new media sink does not perform any time adjustments. If the clock seeks, the timestamps are not changed.
Initializes Microsoft Media Foundation.
Version number. Use the value
This parameter is optional when using C++ but required in C. The value must be one of the following flags:
Value | Meaning |
---|---|
| Do not initialize the sockets library. |
| Equivalent to MFSTARTUP_NOSOCKET. |
| Initialize the entire Media Foundation platform. This is the default value when dwFlags is not specified. |
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Version parameter requires a newer version of Media Foundation than the version that is running. |
| The Media Foundation platform is disabled because the system was started in "Safe Mode" (fail-safe boot). |
| Media Foundation is not implemented on the system. This error can occur if the media components are not present (See KB2703761 for more info). |
An application must call this function before using Media Foundation. Before your application quits, call
Do not call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Shuts down the Microsoft Media Foundation platform. Call this function once for every call to
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Blocks the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function prevents work queue threads from being shut down when
This function holds a lock on the Media Foundation platform. To unlock the platform, call
The
The default implementation of the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks the Media Foundation platform after it was locked by a call to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The application must call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Puts an asynchronous operation on a work queue.
The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
A reference to the
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue. For more information, see |
| The |
This function creates an asynchronous result object and puts the result object on the work queue. The work queue calls the
Puts an asynchronous operation on a work queue, with a specified priority.
The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
The priority of the work item. Work items are performed in order of priority.
A reference to the
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. |
| The |
Puts an asynchronous operation on a work queue.
The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. For more information, see |
| The |
To invoke the work item, this function passes pResult to the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Puts an asynchronous operation on a work queue, with a specified priority.
The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
The priority of the work item. Work items are performed in order of priority.
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. |
| The |
To invoke the work item, this function passes pResult to the
Queues a work item that waits for an event to be signaled.
A handle to an event object. To create an event object, call CreateEvent or CreateEventEx.
The priority of the work item. Work items are performed in order of priority.
A reference to the
Receives a key that can be used to cancel the wait. To cancel the wait, call
If this function succeeds, it returns
This function enables a component to wait for an event without blocking the current thread.
The function puts a work item on the specified work queue. This work item waits for the event given in hEvent to be signaled. When the event is signaled, the work item invokes a callback. (The callback is contained in the result object given in pResult. For more information, see
The work item is dispatched on a work queue by the
Do not use any of the following work queues:
Creates a work queue that is guaranteed to serialize work items. The serial work queue wraps an existing multithreaded work queue. The serial work queue enforces a first-in, first-out (FIFO) execution order.
The identifier of an existing work queue. This must be either a multithreaded queue or another serial work queue. Any of the following can be used:
Receives an identifier for the new serial work queue. Use this identifier when queuing work items.
This function can return one of these values.
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| The application did not call |
When you are done using the work queue, call
Multithreaded queues use a thread pool, which can reduce the total number of threads in the pipeline. However, they do not serialize work items. A serial work queue enables the application to get the benefits of the thread pool, without needing to perform manual serialization of its own work items.
Schedules an asynchronous operation to be completed after a specified interval.
Pointer to the
Time-out interval, in milliseconds. Set this parameter to a negative value. The callback is invoked after -Timeout milliseconds. For example, if Timeout is -5000, the callback is invoked after 5000 milliseconds.
Receives a key that can be used to cancel the timer. To cancel the timer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the timer interval elapses, the timer calls
Schedules an asynchronous operation to be completed after a specified interval.
Pointer to the
Pointer to the
Time-out interval, in milliseconds. Set this parameter to a negative value. The callback is invoked after -Timeout milliseconds. For example, if Timeout is -5000, the callback is invoked after 5000 milliseconds.
Receives a key that can be used to cancel the timer. To cancel the timer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function creates an asynchronous result object. When the timer interval elapses, the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Attempts to cancel an asynchronous operation that was scheduled with
If this function succeeds, it returns
Because work items are asynchronous, the work-item callback might still be invoked after
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the timer interval for the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Sets a callback function to be called at a fixed interval.
Pointer to the callback function, of type MFPERIODICCALLBACK.
Pointer to a caller-provided object that implements
Receives a key that can be used to cancel the callback. To cancel the callback, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To get the timer interval for the periodic callback, call
Cancels a callback function that was set by the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The callback is dispatched on another thread, and this function does not attempt to synchronize with the callback thread. Therefore, it is possible for the callback to be invoked after this function returns.
Creates a new work queue. This function extends the capabilities of the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| Invalid argument. |
| The application did not call |
When you are done using the work queue, call
The
This function is available on Windows Vista if the Platform Update Supplement for Windows Vista is installed.
Creates a new work queue.
Receives an identifier for the work queue.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| The application did not call |
When you are done using the work queue, call
Locks a work queue.
The identifier for the work queue. The identifier is returned by the
If this function succeeds, it returns
This function prevents the
Call
Note: The
Unlocks a work queue.
Identifier for the work queue to be unlocked. The identifier is returned by the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The application must call
Associates a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
The identifier of the work queue. For private work queues, the identifier is returned by the
The name of the MMCSS task. For more information, see Multimedia Class Scheduler Service.
The unique task identifier. To obtain a new task identifier, set this value to zero.
A reference to the
A reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS task, call
Associates a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
The identifier of the work queue. For private work queues, the identifier is returned by the
The name of the MMCSS task. For more information, see Multimedia Class Scheduler Service.
The unique task identifier. To obtain a new task identifier, set this value to zero.
The base relative priority for the work-queue threads. For more information, see AvSetMmThreadPriority.
A reference to the
A reference to the
If this function succeeds, it returns
This function extends the
This function is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS task, call
Completes an asynchronous request to associate a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
Pointer to the
The unique task identifier.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
To unregister the work queue from the MMCSS class, call
Unregisters a work queue from a Multimedia Class Scheduler Service (MMCSS) task.
The identifier of the work queue. For private work queues, the identifier is returned by the
Pointer to the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function unregisters a work queue that was associated with an MMCSS class through the
This function is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister a work queue from a Multimedia Class Scheduler Service (MMCSS) task.
Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class currently associated with this work queue.
Identifier for the work queue. The identifier is retrieved by the
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The pwszClass buffer is too small to receive the task name. |
If the work queue is not associated with an MMCSS task, the function retrieves an empty string.
To associate a work queue with an MMCSS task, call
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier currently associated with this work queue.
Identifier for the work queue. The identifier is retrieved by the
Receives the task identifier.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To associate a work queue with an MMCSS task, call
Registers the standard Microsoft Media Foundation platform work queues with the Multimedia Class Scheduler Service (MMCSS).
The name of the MMCSS task.
The MMCSS task identifier. On input, specify an existing MMCSS task group ID, or use the value zero to create a new task group. On output, receives the actual task group ID.
The base priority of the work-queue threads.
If this function succeeds, it returns
To unregister the platform work queues from the MMCSS class, call
Unregisters the Microsoft Media Foundation platform work queues from a Multimedia Class Scheduler Service (MMCSS) task.
If this function succeeds, it returns
Obtains and locks a shared work queue.
The name of the MMCSS task.
The base priority of the work-queue threads. If the regular-priority queue is being used (wszClass=""), then the value 0 must be passed in.
The MMCSS task identifier. On input, specify an existing MMCSS task group ID, or use the value zero to create a new task group. If the regular-priority queue is being used (wszClass=""), then
Receives an identifier for the new work queue. Use this identifier when queuing work items.
If this function succeeds, it returns
A multithreaded work queue uses a thread pool to dispatch work items. Whenever a thread becomes available, it dequeues the next work item from the queue. Work items are dequeued in first-in-first-out order, but work items are not serialized. In other words, the work queue does not wait for a work item to complete before it starts the next work item.
Within a single process, the Microsoft Media Foundation platform creates up to one multithreaded queue for each Multimedia Class Scheduler Service (MMCSS) task. The
The
If the regular priority queue is being used (wszClass=""), then
Gets the relative thread priority of a work queue.
The identifier of the work queue. For private work queues, the identifier is returned by the
Receives the relative thread priority.
If this function succeeds, it returns
This function returns the relative thread priority set by the
Creates an asynchronous result object. Use this function if you are implementing an asynchronous method.
Pointer to the object stored in the asynchronous result. This reference is returned by the
Pointer to the
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To invoke the callback specified in pCallback, call the
Invokes a callback method to complete an asynchronous operation.
Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Invalid work queue. For more information, see |
| The |
If you are implementing an asynchronous method, use this function to invoke the caller's
The callback is invoked from a Media Foundation work queue. For more information, see Writing an Asynchronous Method.
The
Creates a byte stream from a file.
The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Pointer to a null-terminated string that contains the file name.
Receives a reference to the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a byte stream that is backed by a temporary local file.
The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Receives a reference to the
If this function succeeds, it returns
This function creates a file in the system temporary folder, and then returns a byte stream object for that file. The full path name of the file is stored in the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous request to create a byte stream from a file.
The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Pointer to a null-terminated string containing the file name.
Pointer to the
Pointer to the
Receives an
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the request is completed, the callback object's
Completes an asynchronous request to create a byte stream from a file.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
Cancels an asynchronous request to create a byte stream from a file.
A reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
You can use this function to cancel a previous call to
Allocates system memory and creates a media buffer to manage it.
Size of the buffer, in bytes.
Receives a reference to the
The function allocates a buffer with a 1-byte memory alignment. To allocate a buffer that is aligned to a larger memory boundary, call
When the media buffer object is destroyed, it releases the allocated memory.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a media buffer that wraps an existing media buffer. The new media buffer points to the same memory as the original media buffer, or to an offset from the start of the memory.
A reference to the
The start of the new buffer, as an offset in bytes from the start of the original buffer.
The size of the new buffer. The value of cbOffset + dwLength must be less than or equal to the size of the valid data in the original buffer. (The size of the valid data is returned by the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The requested offset or the requested length is not valid. |
The maximum size of the wrapper buffer is limited to the size of the valid data in the original buffer. This might be less than the allocated size of the original buffer. To set the size of the valid data, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Converts a Media Foundation media buffer into a buffer that is compatible with DirectX Media Objects (DMOs).
Pointer to the
Pointer to the
Offset in bytes from the start of the Media Foundation buffer. This offset defines where the DMO buffer starts. If this parameter is zero, the DMO buffer starts at the beginning of the Media Foundation buffer.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Invalid argument. The pIMFMediaBuffer parameter must not be |
The DMO buffer created by this function also exposes the
If the Media Foundation buffer specified by pIMFMediaBuffer exposes the
Converts a Microsoft Direct3D 9 format identifier to a Microsoft DirectX Graphics Infrastructure (DXGI) format identifier.
The D3DFORMAT value or FOURCC code to convert.
Returns a
Converts a Microsoft DirectX Graphics Infrastructure (DXGI) format identifier to a Microsoft Direct3D 9 format identifier.
The
Returns a D3DFORMAT value or FOURCC code.
Locks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
This function obtains a reference to a DXGI Device Manager instance that can be shared between components. The Microsoft Media Foundation platform creates this instance of the DXGI Device Manager as a singleton object. Alternatively, you can create a new DXGI Device Manager by calling
The first time this function is called, the Media Foundation platform creates the shared DXGI Device Manager.
When you are done using the
Unlocks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
If this function succeeds, it returns
Call this function after a successful call to the
Creates a media buffer object that manages a Direct3D 9 surface.
Identifies the type of Direct3D 9 surface. Currently this value must be IID_IDirect3DSurface9.
A reference to the
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
This function creates a media buffer object that holds a reference to the Direct3D surface specified in punkSurface. Locking the buffer gives the caller access to the surface memory. When the buffer object is destroyed, it releases the surface. For more information about media buffers, see Media Buffers.
Note: This function does not allocate the Direct3D surface itself. The buffer object created by this function also exposes the
This function does not support DXGI surfaces.
Creates a media buffer object that manages a Windows Imaging Component (WIC) bitmap.
Set this parameter to __uuidof(
.
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates a media buffer to manage a Microsoft DirectX Graphics Infrastructure (DXGI) surface.
Identifies the type of DXGI surface. This value must be IID_ID3D11Texture2D.
A reference to the
The zero-based index of a subresource of the surface. The media buffer object is associated with this subresource.
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
If this function succeeds, it returns
The returned buffer object supports the following interfaces:
Creates an object that allocates video samples that are compatible with Microsoft DirectX Graphics Infrastructure (DXGI).
The identifier of the interface to retrieve. Specify one of the following values.
Value | Meaning |
---|---|
| Retrieve an |
| Retrieve an |
| Retrieve an |
| Retrieve an |
Receives a reference to the requested interface. The caller must release the interface.
If this function succeeds, it returns
This function creates an allocator for DXGI video surfaces. The buffers created by this allocator expose the
Creates an instance of the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
When you create an
Allocates system memory with a specified byte alignment and creates a media buffer to manage the memory.
Size of the buffer, in bytes.
Specifies the memory alignment for the buffer. Use one of the following constants.
Value | Meaning |
---|---|
| Align to 1 byte. |
| Align to 2 bytes. |
| Align to 4 bytes. |
| Align to 8 bytes. |
| Align to 16 bytes. |
| Align to 32 bytes. |
| Align to 64 bytes. |
| Align to 128 bytes. |
| Align to 256 bytes. |
| Align to 512 bytes. |
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the media buffer object is destroyed, it releases the allocated memory.
Creates a media event object.
The event type. See
The extended type. See
The event status. See
The value associated with the event, if any. See
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an event queue.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function creates a helper object that you can use to implement the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty media sample.
Receives a reference to the
Initially the sample does not contain any media buffers.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty attribute store.
Receives a reference to the
The initial number of elements allocated for the attribute store. The attribute store grows as needed.
If this function succeeds, it returns
Attributes are used throughout Microsoft Media Foundation to configure objects, describe media formats, query object properties, and other purposes. For more information, see Attributes in Media Foundation.
For a complete list of all the defined attribute GUIDs in Media Foundation, see Media Foundation Attributes.
Initializes the contents of an attribute store from a byte array.
Pointer to the
Pointer to the array that contains the initialization data.
Size of the pBuf array, in bytes.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The buffer is not valid. |
Use this function to deserialize an attribute store that was serialized with the
This function deletes any attributes that were previously stored in pAttributes.
Retrieves the size of the buffer needed for the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Use this function to find the size of the array that is needed for the
Converts the contents of an attribute store to a byte array.
Pointer to the
Pointer to an array that receives the attribute data.
Size of the pBuf array, in bytes. To get the required size of the buffer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The buffer given in pBuf is too small. |
The function skips any attributes with
To convert the byte array back into an attribute store, call
To write an attribute store to a stream, call the
Adds information about a Media Foundation transform (MFT) to the registry.
Applications can enumerate the MFT by calling the
If this function succeeds, it returns
The registry entries created by this function are read by the following functions:
Function | Description |
---|---|
| Enumerates MFTs by media type and category. |
| Extended version of |
| Looks up an MFT by CLSID and retrieves the registry information. |
This function does not register the CLSID of the MFT for the CoCreateInstance or CoGetClassObject functions.
To remove the entries from the registry, call
The formats given in the pInputTypes and pOutputTypes parameters are intended to help applications search for MFTs by format. Applications can use the
It is recommended to specify at least one input type in pInputTypes and one output type in the pOutputTypes parameter. Otherwise, the MFT might be skipped in the enumeration.
On 64-bit Windows, the 32-bit version of this function registers the MFT in the 32-bit node of the registry. For more information, see 32-bit and 64-bit Application Data in the Registry.
Unregisters a Media Foundation transform (MFT).
The CLSID of the MFT.
If this function succeeds, it returns
This function removes the registry entries created by the
It is safe to call
Registers a Media Foundation transform (MFT) in the caller's process.
A reference to the
A
A wide-character null-terminated string that contains the friendly name of the MFT.
A bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
The number of elements in the pInputTypes array.
A reference to an array of
The number of elements in the pOutputTypes array.
A reference to an array of
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The primary purpose of this function is to make an MFT available for automatic topology resolution without making the MFT available to other processes or applications.
After you call this function, the MFT can be enumerated by calling the
The pClassFactory parameter specifies a class factory object that creates the MFT. The class factory's IClassFactory::CreateInstance method must return an object that supports the
To unregister the MFT from the current process, call
If you need to register an MFT in the Protected Media Path (PMP) process, use the
Unregisters one or more Media Foundation transforms (MFTs) from the caller's process.
A reference to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The MFT specified by the pClassFactory parameter was not registered in this process. |
Use this function to unregister a local MFT that was previously registered through the
If the pClassFactory parameter is
Registers a Media Foundation transform (MFT) in the caller's process.
The class identifier (CLSID) of the MFT.
A
A wide-character null-terminated string that contains the friendly name of the MFT.
A bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
The number of elements in the pInputTypes array.
A reference to an array of
The number of elements in the pOutputTypes array.
A reference to an array of
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The primary purpose of this function is to make an MFT available for automatic topology resolution without making the MFT available to other processes or applications.
After you call this function, the MFT can be enumerated by calling the
To unregister the MFT from the current process, call
If you need to register an MFT in the Protected Media Path (PMP) process, use the
Unregisters a Media Foundation transform (MFT) from the caller's process.
The class identifier (CLSID) of the MFT.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The MFT specified by the clsidMFT parameter was not registered in this process. |
Use this function to unregister a local MFT that was previously registered through the
Enumerates Media Foundation transforms (MFTs) in the registry.
Starting in Windows 7, applications should use the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function returns a list of all the MFTs in the specified category that match the search criteria given by the pInputType, pOutputType, and pAttributes parameters. Any of those parameters can be
If no MFTs match the criteria, the method succeeds but returns the value zero in pcMFTs.
Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. This function extends the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The Flags parameter controls which MFTs are enumerated, and the order in which they are returned. The flags for this parameter fall into several groups.
The first set of flags specifies how an MFT processes data.
Flag | Description |
---|---|
| The MFT performs synchronous data processing in software. This is the original MFT processing model, and is compatible with Windows Vista. |
| The MFT performs asynchronous data processing in software. This processing model requires Windows 7. For more information, see Asynchronous MFTs. |
| The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously. For more information, see Hardware MFTs. |
Every MFT falls into exactly one of these categories. To enumerate a category, set the corresponding flag in the Flags parameter. You can combine these flags to enumerate more than one category. If none of these flags is specified, the default category is synchronous MFTs (
Next, the following flags include MFTs that are otherwise excluded from the results. By default, MFTs that match these criteria are excluded from the results. Use any of these flags to include them.
Flag | Description |
---|---|
| Include MFTs that must be unlocked by the application. |
| Include MFTs that are registered in the caller's process through either the |
| Include MFTs that are optimized for transcoding rather than playback. |
The last flag is used to sort and filter the results:
Flag | Description |
---|---|
| Sort and filter the results. |
If the
If you do not set the
Setting the Flags parameter to zero is equivalent to using the value
Setting Flags to
If no MFTs match the search criteria, the function returns
Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. This function extends the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The Flags parameter controls which MFTs are enumerated, and the order in which they are returned. The flags for this parameter fall into several groups.
The first set of flags specifies how an MFT processes data.
Flag | Description |
---|---|
| The MFT performs synchronous data processing in software. This is the original MFT processing model, and is compatible with Windows Vista. |
| The MFT performs asynchronous data processing in software. This processing model requires Windows 7. For more information, see Asynchronous MFTs. |
| The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously. For more information, see Hardware MFTs. |
Every MFT falls into exactly one of these categories. To enumerate a category, set the corresponding flag in the Flags parameter. You can combine these flags to enumerate more than one category. If none of these flags is specified, the default category is synchronous MFTs (
Next, the following flags include MFTs that are otherwise excluded from the results. By default, MFTs that match these criteria are excluded from the results. Use any of these flags to include them.
Flag | Description |
---|---|
| Include MFTs that must be unlocked by the application. |
| Include MFTs that are registered in the caller's process through either the |
| Include MFTs that are optimized for transcoding rather than playback. |
The last flag is used to sort and filter the results:
Flag | Description |
---|---|
| Sort and filter the results. |
If the
If you do not set the
Setting the Flags parameter to zero is equivalent to using the value
Setting Flags to
If no MFTs match the search criteria, the function returns
Gets information from the registry about a Media Foundation transform (MFT).
The CLSID of the MFT.
Receives a reference to a wide-character string containing the friendly name of the MFT. The caller must free the string by calling CoTaskMemFree. This parameter can be
Receives a reference to an array of
Receives the number of elements in the ppInputTypes array. If ppInputTypes is
Receives a reference to an array of
Receives the number of elements in the ppOutputType array. If ppOutputTypes is
Receives a reference to the
This parameter can be
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets a reference to the Microsoft Media Foundation plug-in manager.
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the merit value of a hardware codec.
A reference to the
The size, in bytes, of the verifier array.
The address of a buffer that contains one of the following:
Receives the merit value.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The function fails if the MFT does not represent a hardware device with a valid Output Protection Manager (OPM) certificate.
Registers a scheme handler in the caller's process.
A string that contains the scheme. The scheme includes the trailing ':' character; for example, "http:".
A reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Scheme handlers are used in Microsoft Media Foundation during the source resolution process, which creates a media source from a URL. For more information, see Scheme Handlers and Byte-Stream Handlers.
Within a process, local scheme handlers take precedence over scheme handlers that are registered in the registry. Local scheme handlers are not visible to other processes.
Use this function if you want to register a custom scheme handler for your application, but do not want the handler available to other applications.
Registers a byte-stream handler in the caller's process.
A string that contains the file name extension for this handler.
A string that contains the MIME type for this handler.
A reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Byte-stream handlers are used in Microsoft Media Foundation during the source resolution process, which creates a media source from a URL. For more information, see Scheme Handlers and Byte-Stream Handlers.
Within a process, local byte-stream handlers take precedence over byte-stream handlers that are registered in the registry. Local byte-stream handlers are not visible to other processes.
Use this function if you want to register a custom byte-stream handler for your application, but do not want the handler available to other applications.
Either szFileExtension or szMimeType can be
Creates a wrapper for a byte stream.
A reference to the
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The
Creates an activation object for a Windows Runtime class.
The class identifier that is associated with the activatable runtime class.
A reference to an optional IPropertySet object, which is used to configure the Windows Runtime class. This parameter can be
The interface identifier (IID) of the interface being requested. The activation object created by this function supports the following interfaces:
Receives a reference to the requested interface. The caller must release the interface.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
To create the Windows Runtime object, call
Validates the size of a buffer for a video format block.
Pointer to a buffer that contains the format block.
Size of the pBlock buffer, in bytes.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The buffer that contains the format block is large enough. |
| The buffer that contains the format block is too small, or the format block is not valid. |
| This function does not support the specified format type. |
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty media type.
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The media type is created without any attributes.
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Creates an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Converts a Media Foundation audio media type to a
Pointer to the
Receives a reference to the
Receives the size of the
Contains a flag from the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
If the wFormatTag member of the returned structure is
Retrieves the image size for a video format. Given a
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
| The |
Before calling this function, you must set at least the following members of the
Also, if biCompression is BI_BITFIELDS, the
This function fails if the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the image size, in bytes, for an uncompressed video format.
Media subtype for the video format. For a list of subtypes, see Media Type GUIDs.
Width of the image, in pixels.
Height of the image, in pixels.
Receives the size of each frame, in bytes. If the format is compressed or is not recognized, the value is zero.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Converts a video frame rate into a frame duration.
The numerator of the frame rate.
The denominator of the frame rate.
Receives the average duration of a video frame, in 100-nanosecond units.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function is useful for calculating time stamps on a sample, given the frame rate.
Also, average time per frame is used in the older
For certain common frame rates, the function gets the frame duration from a look-up table:
Frames per second (floating point) | Frames per second (fractional) | Average time per frame |
---|---|---|
59.94 | 60000/1001 | 166833 |
29.97 | 30000/1001 | 333667 |
23.976 | 24000/1001 | 417188 |
60 | 60/1 | 166667 |
30 | 30/1 | 333333 |
50 | 50/1 | 200000 |
25 | 25/1 | 400000 |
24 | 24/1 | 416667 |
Most video content uses one of the frame rates listed here. For other frame rates, the function calculates the duration.
Calculates the frame rate, in frames per second, from the average duration of a video frame.
The average duration of a video frame, in 100-nanosecond units.
Receives the numerator of the frame rate.
Receives the denominator of the frame rate.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Average time per frame is used in the older
This function uses a look-up table for certain common durations. The table is listed in the Remarks section for the
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes a media type from an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Initializes a media type from a
Pointer to the
Pointer to a
Size of the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Compares a full media type to a partial media type.
Pointer to the
Pointer to the
If the full media type is compatible with the partial media type, the function returns TRUE. Otherwise, the function returns FALSE.
A pipeline component can return a partial media type to describe a range of possible formats the component might accept. A partial media type has at least a major type
This function returns TRUE if the following conditions are both true:
Otherwise, the function returns FALSE.
Creates a media type that wraps another media type.
A reference to the
A
A
Applications can define custom subtype GUIDs.
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The original media type (pOrig) is stored in the new media type under the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type that was wrapped in another media type by the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Creates a video media type from an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Instead of using the
Creates a partial video media type with a specified subtype.
Pointer to a
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function creates a media type and sets the major type equal to
You can get the same result with the following steps:
Queries whether a FOURCC code or D3DFORMAT value is a YUV format.
FOURCC code or D3DFORMAT value.
The function returns one of the following values.
Return code | Description |
---|---|
| The value specifies a YUV format. |
| The value does not specify a recognized YUV format. |
This function checks whether Format specifies a YUV format. Not every YUV format is recognized by this function. However, if a YUV format is not recognized by this function, it is probably not supported for video rendering or DirectX video acceleration (DXVA).
This function is not implemented.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Returns E_FAIL.
Calculates the minimum surface stride for a video format.
FOURCC code or D3DFORMAT value that specifies the video format. If you have a video subtype
Width of the image, in pixels.
Receives the minimum surface stride, in pixels.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function calculates the minimum stride needed to hold the image in memory. Use this function if you are allocating buffers in system memory. Surfaces allocated in video memory might require a larger stride, depending on the graphics card.
If you are working with a DirectX surface buffer, use the
For planar YUV formats, this function returns the stride for the Y plane. Depending on the format, the chroma planes might have a different stride.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Retrieves the image size, in bytes, for an uncompressed video format.
FOURCC code or D3DFORMAT value that specifies the video format.
Width of the image, in pixels.
Height of the image, in pixels.
Receives the size of one frame, in bytes. If the format is compressed or is not recognized, this value is zero.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
This function is equivalent to the
Creates a video media type from a
If the function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a Media Foundation media type from another format representation.
Description | |
---|---|
AM_MEDIA_TYPE_REPRESENTATION | Convert a DirectShow |
Pointer to a buffer that contains the format representation to convert. The layout of the buffer depends on the value of guidRepresentation.
Receives a reference to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
| The |
If the original format is a DirectShow audio media type, and the format type is not recognized, the function sets the following attributes on the converted media type.
Attribute | Description |
---|---|
| Contains the format type |
| Contains the format block. |
[This API is not supported and may be altered or unavailable in the future.]
Creates an audio media type from a
Pointer to a
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The
Alternatively, you can call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Returns the FOURCC or D3DFORMAT value for an uncompressed video format.
Returns a FOURCC or D3DFORMAT value that identifies the video format. If the video format is compressed or not recognized, the return value is D3DFMT_UNKNOWN.
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function fills in some reasonable default values for the specified RGB format.
Developers are encouraged to use media type attributes instead of using the
In general, you should avoid calling this function. If you know all of the format details, you can fill in the
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Converts the extended color information from an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Sets the extended color information in a
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function sets the following fields in the
Copies an image or image plane from one buffer to another.
Pointer to the start of the first row of pixels in the destination buffer.
Stride of the destination buffer, in bytes.
Pointer to the start of the first row of pixels in the source image.
Stride of the source image, in bytes.
Width of the image, in bytes.
Number of rows of pixels to copy.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function copies a single plane of the image. For planar YUV formats, you must call the function once for each plane. In this case, pDest and pSrc must point to the start of each plane.
This function is optimized if the MMX, SSE, or SSE2 instruction sets are available on the processor. The function performs a non-temporal store (the data is written to memory directly without polluting the cache).
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Converts an array of 16-bit floating-point numbers into an array of 32-bit floating-point numbers.
Pointer to an array of float values. The array must contain at least dwCount elements.
Pointer to an array of 16-bit floating-point values, typed as WORD values. The array must contain at least dwCount elements.
Number of elements in the pSrc array to convert.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The function converts dwCount values in the pSrc array and writes them into the pDest array.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Converts an array of 32-bit floating-point numbers into an array of 16-bit floating-point numbers.
Pointer to an array of 16-bit floating-point values, typed as WORD values. The array must contain at least dwCount elements.
Pointer to an array of float values. The array must contain at least dwCount elements.
Number of elements in the pSrc array to convert.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The function converts the values in the pSrc array and writes them into the pDest array.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Creates a system-memory buffer object to hold 2D image data.
Width of the image, in pixels.
Height of the image, in pixels.
A FOURCC code or D3DFORMAT value that specifies the video format. If you have a video subtype
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
This function can return one of these values.
Return code | Description |
---|---|
| Success. |
| Unrecognized video format. |
The returned buffer object also exposes the
Allocates a system-memory buffer that is optimal for a specified media type.
A reference to the
The sample duration. This value is required for audio formats.
The minimum size of the buffer, in bytes. The actual buffer size might be larger. Specify zero to allocate the default buffer size for the media type.
The minimum memory alignment for the buffer. Specify zero to use the default memory alignment.
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
For video formats, if the format is recognized, the function creates a 2-D buffer that implements the
For audio formats, the function allocates a buffer that is large enough to contain llDuration audio samples, or dwMinLength, whichever is larger.
This function always allocates system memory. For Direct3D surfaces, use the
Creates an empty collection object.
Receives a reference to the collection object's
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
Allocates a block of memory.
Number of bytes to allocate.
Zero or more flags. For a list of valid flags, see HeapAlloc in the Windows SDK documentation.
Reserved. Set to
Reserved. Set to zero.
Reserved. Set to eAllocationTypeIgnore.
If the function succeeds, it returns a reference to the allocated memory block. If the function fails, it returns
In the current version of Media Foundation, this function is equivalent to calling the HeapAlloc function and specifying the heap of the calling process.
To free the allocated memory, call
Frees a block of memory that was allocated by calling the
Calculates ((a * b) + d) / c, where each term is a 64-bit signed value.
A multiplier.
Another multiplier.
The divisor.
The rounding factor.
Returns the result of the calculation. If numeric overflow occurs, the function returns _I64_MAX (positive overflow) or LLONG_MIN (negative overflow). If Mfplat.dll cannot be loaded, the function returns _I64_MAX.
Gets the class identifier for a content protection system.
The
Receives the class identifier to the content protection system.
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The class identifier can be used to create the input trust authority (ITA) for the content protection system. Call CoCreateInstance or
Creates the Media Session in the application's process.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
If your application does not play protected content, you can use this function to create the Media Session in the application's process. To use the Media Session for protected content, you must call
You can use the pConfiguration parameter to specify any of the following attributes:
Creates an instance of the Media Session inside a Protected Media Path (PMP) process.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
You can use the pConfiguration parameter to set any of the following attributes:
If this function cannot create the PMP Media Session because a trusted binary was revoked, the ppEnablerActivate parameter receives an
If the function successfully creates the PMP Media Session, the ppEnablerActivate parameter receives the value
Do not make calls to the PMP Media Session from a thread that is processing a window message sent from another thread. To test whether the current thread falls into this category, call InSendMessage.
Creates the source resolver, which is used to create a media source from a URL or byte stream.
Receives a reference to the source resolver's
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
[This API is not supported and may be altered or unavailable in the future. Instead, applications should use the PSCreateMemoryPropertyStore function to create property stores.]
Creates an empty property store object.
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the URL schemes that are registered for the source resolver.
Pointer to a
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Retrieves the MIME types that are registered for the source resolver.
Pointer to a
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Creates a topology object.
Receives a reference to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Creates a topology node.
The type of node to create, specified as a member of the
Receives a reference to the node's
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the media type for a stream associated with a topology node.
A reference to the
The identifier of the stream to query. This parameter is interpreted as follows:
If TRUE, the function gets an output type. If
Receives a reference to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The stream index is invalid. |
This function gets the actual media type from the object that is associated with the topology node. The pNode parameter should specify a node that belongs to a fully resolved topology. If the node belongs to a partial topology, the function will probably fail.
Tee nodes do not have an associated object to query. For tee nodes, the function gets the node's input type, if available. Otherwise, if no input type is available, the function gets the media type of the node's primary output stream. The primary output stream is identified by the
Queries an object for a specified service interface.
This function is a helper function that wraps the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
| The service requested cannot be found in the object represented by punkObject. |
Returns the system time.
Returns the system time, in 100-nanosecond units.
Creates the presentation clock. The presentation clock is used to schedule the time at which samples are rendered and to synchronize multiple streams.
Receives a reference to the clock's
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The caller must shut down the presentation clock by calling
Typically applications do not create the presentation clock. The Media Session automatically creates the presentation clock. To get a reference to the presentation clock from the Media Session, call
Creates a presentation time source that is based on the system time.
Receives a reference to the object's
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Creates a presentation descriptor.
Number of elements in the apStreamDescriptors array.
Array of
Receives a reference to an
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If you are writing a custom media source, you can use this function to create the source presentation descriptor. The presentation descriptor is created with no streams selected. Generally, a media source should select at least one stream by default. To select a stream, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether a media presentation requires the Protected Media Path (PMP).
Pointer to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| This presentation requires a protected environment. |
| This presentation does not require a protected environment. |
If this function returns
If the function returns S_FALSE, you can use the unprotected pipeline. Call
Internally, this function checks whether any of the stream descriptors in the presentation have the
Serializes a presentation descriptor to a byte array.
Pointer to the
Receives the size of the ppbData array, in bytes.
Receives a reference to an array of bytes containing the serialized presentation descriptor. The caller must free the memory for the array by calling CoTaskMemFree.
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
To deserialize the presentation descriptor, pass the byte array to the
Deserializes a presentation descriptor from a byte array.
Size of the pbData array, in bytes.
Pointer to an array of bytes that contains the serialized presentation descriptor.
Receives a reference to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The function succeeded. |
Creates a stream descriptor.
Stream identifier.
Number of elements in the apMediaTypes array.
Pointer to an array of
Receives a reference to the
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If you are writing a custom media source, you can use this function to create stream descriptors for the source. This function automatically creates the stream descriptor media type handler and initializes it with the list of types given in apMediaTypes. The function does not set the current media type on the handler, however. To set the type, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a media-type handler that supports a single media type at a time.
Receives a reference to the
The function returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
| The method succeeded. |
The media-type handler created by this function supports one media type at a time. Set the media type by calling
Shuts down a Media Foundation object and releases all resources associated with the object.
This function is a helper function that wraps the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function is not related to the
Creates the Streaming Audio Renderer.
If this function succeeds, it returns
To configure the audio renderer, set any of the following attributes on the
Attribute | Description |
---|---|
| The audio endpoint device identifier. |
| The audio endpoint role. |
| Miscellaneous configuration flags. |
| The audio policy class. |
| The audio stream category. |
| Enables low-latency audio streaming. |
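Configuring the renderer through an attribute store can be sketched as below. The function name is elided in this text; the sketch assumes it is MFCreateAudioRenderer and that the endpoint-role attribute is MF_AUDIO_RENDERER_ATTRIBUTE_ENDPOINT_ROLE (mfidl.h):

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mmdeviceapi.h>  // for the ERole enumeration

// Sketch: create the Streaming Audio Renderer for the eMultimedia role.
HRESULT CreateSarForMultimediaRole(IMFMediaSink **ppSink)
{
    IMFAttributes *pAttrs = NULL;
    HRESULT hr = MFCreateAttributes(&pAttrs, 1);
    if (SUCCEEDED(hr))
    {
        hr = pAttrs->SetUINT32(
            MF_AUDIO_RENDERER_ATTRIBUTE_ENDPOINT_ROLE, eMultimedia);
        if (SUCCEEDED(hr))
            hr = MFCreateAudioRenderer(pAttrs, ppSink);
        pAttrs->Release();
    }
    return hr;
}
```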
Creates an activation object for the Streaming Audio Renderer.
If this function succeeds, it returns
To create the audio renderer, call
To configure the audio renderer, set any of the following attributes on the
Attribute | Description |
---|---|
| The audio endpoint device identifier. |
| The audio endpoint role. |
| Miscellaneous configuration flags. |
| The audio policy class. |
| The audio stream category. |
| Enables low-latency audio streaming. |
Creates an activation object for the enhanced video renderer (EVR) media sink.
Handle to the window where the video will be displayed.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
To create the EVR, call
To configure the EVR, set any of the following attributes on the
Attribute | Description |
---|---|
| Activation object for a custom mixer. |
| CLSID for a custom mixer. |
| Flags for creating a custom mixer. |
| Activation object for a custom presenter. |
| CLSID for a custom presenter. |
| Flags for creating a custom presenter. |
When
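A typical use of the returned activation object is to place it in a topology output node. The function name is elided in this text; the sketch assumes it is MFCreateVideoRendererActivate (evr.h):

```cpp
#include <mfidl.h>
#include <evr.h>

// Sketch: create an EVR activation object for a window and bind it into
// a topology output node. The EVR is not created until the activation
// object's ActivateObject method is called (by the pipeline).
HRESULT AddEvrOutputNode(HWND hwndVideo, IMFTopology *pTopology,
                         IMFTopologyNode **ppNode)
{
    IMFActivate *pActivate = NULL;
    HRESULT hr = MFCreateVideoRendererActivate(hwndVideo, &pActivate);
    if (SUCCEEDED(hr))
    {
        hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, ppNode);
        if (SUCCEEDED(hr))
        {
            (*ppNode)->SetObject(pActivate);  // defer creation of the EVR
            hr = pTopology->AddNode(*ppNode);
        }
        pActivate->Release();
    }
    return hr;
}
```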
Creates a media sink for authoring MP4 files.
A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the MP4 media sink's
If this function succeeds, it returns
The MP4 media sink supports a maximum of one video stream and one audio stream. The initial stream formats are given in the pVideoMediaType and pAudioMediaType parameters. To create an MP4 file with one stream, set the other stream type to
The number of streams is fixed when you create the media sink. The sink does not support the
To author 3GP files, use the
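Creating a single-stream sink, as described above, can be sketched as follows. The function name is elided in this text; the sketch assumes it is MFCreateMPEG4MediaSink (mfidl.h):

```cpp
#include <mfidl.h>

// Sketch: create an MP4 media sink that writes audio only. The video
// stream is omitted by passing NULL for its media type, as the remarks
// above describe; the stream count is fixed at creation time.
HRESULT CreateAudioOnlyMp4Sink(IMFByteStream *pByteStream,
                               IMFMediaType *pAudioType,
                               IMFMediaSink **ppSink)
{
    return MFCreateMPEG4MediaSink(
        pByteStream,
        NULL,        // no video stream
        pAudioType,  // one audio stream, format fixed at creation
        ppSink);
}
```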
Creates a media sink for authoring 3GP files.
A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the 3GP media sink's
If this function succeeds, it returns
The 3GP media sink supports a maximum of one video stream and one audio stream. The initial stream formats are given in the pVideoMediaType and pAudioMediaType parameters. To create a 3GP file with one stream, set the other stream type to
The number of streams is fixed when you create the media sink. The sink does not support the
To author MP4 files, use the
Creates the MP3 media sink.
A reference to the
Receives a reference to the
If this function succeeds, it returns
The MP3 media sink takes compressed MP3 audio samples as input, and writes an MP3 file with ID3 headers as output. The MP3 media sink does not perform MP3 audio encoding.
Creates an instance of the AC-3 media sink.
A reference to the
A reference to the
Attribute | Value |
---|---|
| |
|
Receives a reference to the
If this function succeeds, it returns
The AC-3 media sink takes compressed AC-3 audio as input and writes the audio to the byte stream without modification. The primary use for this media sink is to stream AC-3 audio over a network. The media sink does not perform AC-3 audio encoding.
Creates an instance of the audio data transport stream (ADTS) media sink.
A reference to the
A reference to the
Attribute | Value |
---|---|
| |
| |
| 0 (raw AAC) or 1 (ADTS) |
Receives a reference to the
If this function succeeds, it returns
The ADTS media sink converts Advanced Audio Coding (AAC) audio packets into an ADTS stream. The primary use for this media sink is to stream ADTS over a network. The output is not an audio file, but a stream of audio frames with ADTS headers.
The media sink can accept raw AAC frames (
Creates a generic media sink that wraps a multiplexer Microsoft Media Foundation transform (MFT).
The subtype
A list of format attributes for the MFT output type. This parameter is optional and can be
A reference to the
Receives a reference to the
If this function succeeds, it returns
This function attempts to find a multiplexer MFT that supports an output type with the following definition:
To provide a list of additional format attributes:
The multiplexer MFT must be registered in the
Creates a media sink for authoring fragmented MP4 files.
A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the MP4 media sink's
If this function succeeds, it returns
Creates an Audio-Video Interleaved (AVI) Sink.
Pointer to the byte stream that will be used to write the AVI file.
Pointer to the media type of the video input stream.
Pointer to the media type of the audio input stream.
Receives a reference to the
If this function succeeds, it returns
Creates a WAVE archive sink. The WAVE archive sink takes audio and writes it to a .wav file.
Pointer to the byte stream that will be used to write the .wav file.
Pointer to the audio media type.
Receives a reference to the
Creates a new instance of the topology loader.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object for the sample grabber media sink.
Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
To create the sample grabber sink, call
Before calling ActivateObject, you can configure the sample grabber by setting any of the following attributes on the ppIActivate reference:
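Setting up the sample grabber can be sketched as below. The function name is elided in this text; the sketch assumes it is MFCreateSampleGrabberSinkActivate (mfidl.h) and that pCallback implements IMFSampleGrabberSinkCallback:

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: create a sample grabber sink activation object that delivers
// uncompressed 32-bit RGB video frames to an application callback.
HRESULT CreateRgbGrabber(IMFSampleGrabberSinkCallback *pCallback,
                         IMFActivate **ppActivate)
{
    IMFMediaType *pType = NULL;
    HRESULT hr = MFCreateMediaType(&pType);
    if (SUCCEEDED(hr))
        hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    if (SUCCEEDED(hr))
        hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    if (SUCCEEDED(hr))
        hr = MFCreateSampleGrabberSinkActivate(pType, pCallback, ppActivate);
    if (pType)
        pType->Release();
    return hr;
}
```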
Creates the default implementation of the quality manager.
Receives a reference to the quality manager's
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates the sequencer source.
Reserved. Must be
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a
Sequencer element identifier. This value specifies the segment in which to begin playback. The element identifier is returned in the
Starting position within the segment, in 100-nanosecond units.
Pointer to a
If this function succeeds, it returns
The
Creates a media source that aggregates a collection of media sources.
A reference to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pSourceCollection collection does not contain any elements. |
The aggregated media source is useful for combining streams from separate media sources. For example, you can use it to combine a video capture source and an audio capture source.
Creates a credential cache object. An application can use this object to implement a custom credential manager.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a default proxy locator.
The name of the protocol.
Note: In this release of Media Foundation, the default proxy locator does not support RTSP.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the scheme handler for the network source.
Interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface. The scheme handler exposes the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the protected media path (PMP) server object.
A member of the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the remote desktop plug-in object. Use this object if the application is running in a Terminal Services client session.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Remote desktop connections are not allowed by the current session policy. |
[This API is not supported and may be altered or unavailable in the future. Instead, applications should use the PSCreateMemoryPropertyStore function to create named property stores.]
Creates an empty property store to hold name/value pairs.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an instance of the sample copier transform.
Receives a reference to the
If this function succeeds, it returns
The sample copier is a Media Foundation transform (MFT) that copies data from input samples to output samples without modifying the data. The following data is copied from the sample:
This MFT is useful in the following situation:
The following diagram shows this situation with a media source and a media sink.
In order for the media sink to receive data from the media source, the data must be copied into the media samples owned by the media sink. The sample copier can be used for this purpose.
A specific example of such a media sink is the Enhanced Video Renderer (EVR). The EVR allocates samples that contain Direct3D surface buffers, so it cannot receive video samples directly from a media source. Starting in Windows 7, the topology loader automatically handles this case by inserting the sample copier between the media source and the EVR.
Creates an empty transcode profile object.
The transcode profile stores configuration settings for the output file. These configuration settings are specified by the caller, and include audio and video stream properties, encoder settings, and container settings. To set these properties, the caller must call the appropriate
The configured transcode profile is passed to the
If this function succeeds, it returns
The
For example code that uses this function, see the following topics:
Creates a partial transcode topology.
The underlying topology builder creates a partial topology by connecting the required pipeline objects: source, encoder, and sink. The encoder and the sink are configured according to the settings specified by the caller in the transcode profile.
To create the transcode profile object, call the
The configured transcode profile is passed to the
The function returns an
Return code | Description |
---|---|
| The function call succeeded, and ppTranscodeTopo receives a reference to the transcode topology. |
| pwszOutputFilePath contains invalid characters. |
| No streams are selected in the media source. |
| The profile does not contain the |
| For one or more streams, cannot find an encoder that accepts the media type given in the profile. |
| The profile does not specify a media type for any of the selected streams on the media source. |
For example code that uses this function, see the following topics:
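The profile-then-topology flow described above can be sketched as follows. The function names are elided in this text; the sketch assumes they are MFCreateTranscodeTopology (mfidl.h) and that the transcode profile has already been populated with audio, video, and container attributes:

```cpp
#include <mfidl.h>

// Sketch: build a transcode topology from a source, an output path, and a
// configured profile, then hand it to a media session to run the encode.
HRESULT StartTranscode(IMFMediaSource *pSource,
                       IMFTranscodeProfile *pProfile,
                       PCWSTR pwszOutputFile,
                       IMFMediaSession *pSession)
{
    IMFTopology *pTopology = NULL;
    HRESULT hr = MFCreateTranscodeTopology(
        pSource, pwszOutputFile, pProfile, &pTopology);
    if (SUCCEEDED(hr))
    {
        // The session resolves the partial topology and runs the encode.
        hr = pSession->SetTopology(0, pTopology);
        pTopology->Release();
    }
    return hr;
}
```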
Creates a topology for transcoding to a byte stream.
A reference to the
A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
This function creates a partial topology that contains the media source, the encoder, and the media sink.
Gets a list of output formats from an audio encoder.
Specifies the subtype of the output media. The encoder uses this value as a filter when it is enumerating the available output types. For information about the audio subtypes, see Audio Subtype GUIDs.
Bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
A reference to the
Value | Meaning |
---|---|
Set this attribute to unlock an encoder that has field-of-use descriptions. | |
Specifies a device conformance profile for a Windows Media encoder. | |
Sets the tradeoff between encoding quality and encoding speed. |
Receives a reference to the
This function assumes the encoder will be used in its default encoding mode, which is typically constant bit-rate (CBR) encoding. Therefore, the types returned by the function might not work with other modes, such as variable bit-rate (VBR) encoding.
Internally, this function works by calling
Creates the transcode sink activation object.
The transcode sink activation object can be used to create any of the following file sinks:
The transcode sink activation object exposes the
If this function succeeds, it returns
Creates an
Creates a Microsoft Media Foundation byte stream that wraps an
A reference to the
Receives a reference to the
Returns an
This function enables applications to pass an
Returns an
If this function succeeds, it returns
This function enables an application to pass a Media Foundation byte stream to an API that takes an
Creates a Microsoft Media Foundation byte stream that wraps an IRandomAccessStream object.
If this function succeeds, it returns
Creates an IRandomAccessStream object that wraps a Microsoft Media Foundation byte stream.
If this function succeeds, it returns
The returned byte stream object exposes the
Create an
If this function succeeds, it returns
Creates properties from a
If this function succeeds, it returns
Enumerates a list of audio or video capture devices.
Pointer to an attribute store that contains search criteria. To create the attribute store, call
Value | Meaning |
---|---|
Specifies whether to enumerate audio or video devices. (Required.) | |
For audio capture devices, specifies the device role. (Optional.) | |
For video capture devices, specifies the device category. (Optional.) |
Receives an array of
Receives the number of elements in the pppSourceActivate array. If no capture devices match the search criteria, this parameter receives the value 0.
If this function succeeds, it returns
Each returned
Attribute | Description |
---|---|
| The display name of the device. |
| The major type and subtype GUIDs that describe the device's output format. |
| The type of capture device (audio or video). |
| The audio endpoint ID string. (Audio devices only.) |
| The device category. (Video devices only.) |
| Whether a device is a hardware or software device. (Video devices only.) |
| The symbolic link for the device driver. (Video devices only.) |
To create a media source from an
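The enumeration pattern above can be sketched as follows. The function name is elided in this text; the sketch assumes it is MFEnumDeviceSources with the MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE attribute (mfidl.h):

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: enumerate video capture devices and create a media source from
// the first device in the returned array. The caller releases each
// IMFActivate pointer and frees the array itself with CoTaskMemFree.
HRESULT CreateFirstVideoCaptureSource(IMFMediaSource **ppSource)
{
    IMFAttributes *pAttrs = NULL;
    IMFActivate **ppDevices = NULL;
    UINT32 count = 0;

    HRESULT hr = MFCreateAttributes(&pAttrs, 1);
    if (SUCCEEDED(hr))
        hr = pAttrs->SetGUID(
            MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
            MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
    if (SUCCEEDED(hr))
        hr = MFEnumDeviceSources(pAttrs, &ppDevices, &count);
    if (SUCCEEDED(hr))
    {
        hr = (count > 0)
            ? ppDevices[0]->ActivateObject(IID_PPV_ARGS(ppSource))
            : MF_E_NOT_FOUND;  // no device matched the criteria
        for (UINT32 i = 0; i < count; i++)
            ppDevices[i]->Release();
        CoTaskMemFree(ppDevices);
    }
    if (pAttrs)
        pAttrs->Release();
    return hr;
}
```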
Creates a media source for a hardware capture device.
Pointer to the
Receives a reference to the media source's
If this function succeeds, it returns
The pAttributes parameter specifies an attribute store. To create the attribute store, call the
For audio capture devices, optionally set one of the following attributes:
Attribute | Description |
---|---|
| Specifies the audio endpoint ID of the audio capture device. |
| Specifies the device role. If this attribute is set, the function uses the default audio capture device for that device role. Do not combine this attribute with the |
If neither attribute is specified, the function selects the default audio capture device for the eCommunications role.
For video capture devices, you must set the following attribute:
Attribute | Description |
---|---|
| Specifies the symbolic link to the device. |
Creates an activation object that represents a hardware capture device.
Pointer to the
Receives a reference to the
This function creates an activation object that can be used to create a media source for a hardware device. To create the media source itself, call
The pAttributes parameter specifies an attribute store. To create the attribute store, call the
For audio capture devices, optionally set one of the following attributes:
Attribute | Description |
---|---|
| Specifies the audio endpoint ID of the audio capture device. |
| Specifies the device role. If this attribute is set, the function uses the default audio capture device for that device role. Do not combine this attribute with the |
If neither attribute is specified, the function selects the default audio capture device for the eCommunications role.
For video capture devices, you must set the following attribute:
Attribute | Description |
---|---|
| Specifies the symbolic link to the device. |
Creates an
Loads a dynamic link library that is signed for the protected environment.
The name of the dynamic link library to load. This dynamic link library must be signed for the protected environment.
Receives a reference to the
A single module load count is maintained on the dynamic link library (as it is with LoadLibrary). This load count is freed when the final release is called on the
Returns an
Gets the local system ID.
Application-specific verifier value.
Length in bytes of verifier.
Returned ID string. This value must be freed by the caller by calling CoTaskMemFree.
The function returns an
Creates an
Checks whether a hardware security processor is supported for the specified media protection system.
The identifier of the protection system that you want to check.
TRUE if the hardware security processor is supported for the specified protection system; otherwise
Creates an
Locks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
This function obtains a reference to a DXGI Device Manager instance that can be shared between components. The Microsoft Media Foundation platform creates this instance of the DXGI Device Manager as a singleton object. Alternatively, you can create a new DXGI Device Manager by calling
The first time this function is called, the Media Foundation platform creates the shared DXGI Device Manager.
When you are done using the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Creates an instance of the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The supplied |
| The supplied LPCWSTR is null. |
Creates the source reader from a URL.
The URL of a media file to open.
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
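Basic use of the Source Reader created by this function can be sketched as below. The function name is elided in this text; the sketch assumes it is MFCreateSourceReaderFromURL (mfreadwrite.h) and that Media Foundation has already been started with MFStartup:

```cpp
#include <mfreadwrite.h>

// Sketch: open a media file with the Source Reader and read one sample
// from the first video stream.
HRESULT ReadFirstVideoSample(PCWSTR pwszUrl)
{
    IMFSourceReader *pReader = NULL;
    HRESULT hr = MFCreateSourceReaderFromURL(pwszUrl, NULL, &pReader);
    if (SUCCEEDED(hr))
    {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample *pSample = NULL;
        hr = pReader->ReadSample(
            MF_SOURCE_READER_FIRST_VIDEO_STREAM,
            0,                 // no control flags: synchronous read
            &streamIndex, &flags, &timestamp, &pSample);
        if (pSample)
            pSample->Release();
        pReader->Release();
    }
    return hr;
}
```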
Creates the source reader from a byte stream.
A reference to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Creates the source reader from a media source.
A reference to the
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The source contains protected content. |
Call CoInitialize(Ex) and
By default, when the application releases the source reader, the source reader shuts down the media source by calling
To change this default behavior, set the
When using the Source Reader, do not call any of the following methods on the media source:
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Creates the sink writer from a URL or byte stream.
A null-terminated string that contains the URL of the output file. This parameter can be
Pointer to the
If this parameter is a valid reference, the sink writer writes to the provided byte stream. (The byte stream must be writable.) Otherwise, if pByteStream is
Pointer to the
Receives a reference to the
Call CoInitialize(Ex) and
The first three parameters to this function can be
Description | pwszOutputURL | pByteStream | pAttributes |
---|---|---|---|
Specify a byte stream, with no URL. | non- | Required (must not be | |
Specify a URL, with no byte stream. | not | Optional (may be | |
Specify both a URL and a byte stream. | non- | non- | Optional (may be |
The pAttributes parameter is required in the first case and optional in the others.
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
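The "URL only, no byte stream" case from the parameter table above can be sketched as follows. The function name is elided in this text; the sketch assumes it is MFCreateSinkWriterFromURL (mfreadwrite.h), and the output path is hypothetical:

```cpp
#include <mfreadwrite.h>

// Sketch: create a sink writer that writes directly to a file URL.
// The container format is inferred from the file name extension.
HRESULT CreateMp4Writer(IMFSinkWriter **ppWriter)
{
    return MFCreateSinkWriterFromURL(
        L"output.mp4",  // hypothetical output path
        NULL,           // no byte stream: write to the URL
        NULL,           // attributes are optional in this case
        ppWriter);
}
```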
Creates the sink writer from a media sink.
Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
When you are done using the media sink, call the media sink's
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Writes the contents of an attribute store to a stream.
Pointer to the
Bitwise OR of zero or more flags from the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
If dwOptions contains the
If the
Otherwise, the function calls CoMarshalInterface to serialize a proxy for the object.
If dwOptions does not include the
To load the attributes from the stream, call
The main purpose of this function is to marshal attributes across process boundaries.
Loads attributes from a stream into an attribute store.
Pointer to the
Bitwise OR of zero or more flags from the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Use this function to deserialize an attribute store that was serialized with the
If dwOptions contains the
If the
Otherwise, the function calls CoUnmarshalInterface to deserialize a proxy for the object.
This function deletes any attributes that were previously stored in pAttr.
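The serialize/deserialize pair can be combined to copy an attribute store through a stream, which is the marshaling scenario described above. The function names are elided in this text; the sketch assumes they are MFSerializeAttributesToStream and MFDeserializeAttributesFromStream (mfobjects.h):

```cpp
#include <mfobjects.h>
#include <objbase.h>

// Sketch: write an attribute store to a memory stream, rewind, and load
// it into a second store. Any attributes previously in pDest are deleted.
HRESULT CopyAttributesViaStream(IMFAttributes *pSrc, IMFAttributes *pDest)
{
    IStream *pStream = NULL;
    HRESULT hr = CreateStreamOnHGlobal(NULL, TRUE, &pStream);
    if (SUCCEEDED(hr))
    {
        hr = MFSerializeAttributesToStream(pSrc, 0, pStream);
        if (SUCCEEDED(hr))
        {
            LARGE_INTEGER zero = {};
            pStream->Seek(zero, STREAM_SEEK_SET, NULL);  // rewind
            hr = MFDeserializeAttributesFromStream(pDest, 0, pStream);
        }
        pStream->Release();
    }
    return hr;
}
```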
Creates a generic activation object for Media Foundation transforms (MFTs).
Receives a reference to the
If this function succeeds, it returns
Most applications will not use this function; it is used internally by the
An activation object is a helper object that creates another object, somewhat similar to a class factory. The
Attribute | Description |
---|---|
| Required. Contains the CLSID of the MFT. The activation object creates the MFT by passing this CLSID to the CoCreateInstance function. |
| Optional. Specifies the category of the MFT. |
| Contains various flags that describe the MFT. For hardware-based MFTs, set the |
| Optional. Contains the merit value of a hardware codec. If this attribute is set and its value is greater than zero, the activation object calls |
| Required for hardware-based MFTs. Specifies the symbolic link for the hardware device. The device proxy uses this value to configure the MFT. |
| Optional. Contains an If this attribute is set and the |
| Optional. Contains the encoding profile for an encoder. The value of this attribute is an If this attribute is set and the value of the |
| Optional. Specifies the preferred output format for an encoder. If this attribute is set and the value of the |
For more information about activation objects, see Activation Objects.
Enumerates a list of audio or video capture devices.
Pointer to an attribute store that contains search criteria. To create the attribute store, call
Value | Meaning |
---|---|
Specifies whether to enumerate audio or video devices. (Required.) | |
For audio capture devices, specifies the device role. (Optional.) | |
For video capture devices, specifies the device category. (Optional.) |
Receives an array of
Receives the number of elements in the pppSourceActivate array. If no capture devices match the search criteria, this parameter receives the value 0.
If this function succeeds, it returns
Each returned
Attribute | Description |
---|---|
| The display name of the device. |
| The major type and subtype GUIDs that describe the device's output format. |
| The type of capture device (audio or video). |
| The audio endpoint ID string. (Audio devices only.) |
| The device category. (Video devices only.) |
| Whether a device is a hardware or software device. (Video devices only.) |
| The symbolic link for the device driver. (Video devices only.) |
To create a media source from an
Applies to: desktop apps only
Creates an activation object for the sample grabber media sink.
Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
To create the sample grabber sink, call
Before calling ActivateObject, you can configure the sample grabber by setting any of the following attributes on the ppIActivate reference:
Applies to: desktop apps | Metro style apps
Copies an image or image plane from one buffer to another.
Pointer to the start of the first row of pixels in the destination buffer.
Stride of the destination buffer, in bytes.
Pointer to the start of the first row of pixels in the source image.
Stride of the source image, in bytes.
Width of the image, in bytes.
Number of rows of pixels to copy.
If this function succeeds, it returns
This function copies a single plane of the image. For planar YUV formats, you must call the function once for each plane. In this case, pDest and pSrc must point to the start of each plane.
This function is optimized if the MMX, SSE, or SSE2 instruction sets are available on the processor. The function performs a non-temporal store (the data is written to memory directly without polluting the cache).
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
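The single-plane copy described above can be sketched as follows. The function name is elided in this text; the sketch assumes it is MFCopyImage (mfapi.h), and it copies only the luma (Y) plane of an NV12 frame:

```cpp
#include <mfapi.h>

// Sketch: copy the Y plane of an NV12 frame into a tightly packed buffer.
// The source stride may include padding; the destination rows are packed.
// For planar formats, call the function once per plane, as noted above.
HRESULT CopyLumaPlane(BYTE *pDest, const BYTE *pSrc, LONG srcStride,
                      DWORD width, DWORD height)
{
    return MFCopyImage(
        pDest, (LONG)width,  // destination stride: tightly packed rows
        pSrc, srcStride,     // source stride, in bytes
        width,               // width in bytes (1 byte per Y sample)
        height);             // number of rows to copy
}
```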
Uses profile data from a profile object to configure settings in the ContentInfo object.
If there is already information in the ContentInfo object when this method is called, it is replaced by the information from the profile object.
Retrieves an Advanced Systems Format (ASF) profile that describes the ASF content.
The profile is set by calling either
The ASF profile object returned by this method does not include any of the MF_PD_ASF_xxx attributes (see Presentation Descriptor Attributes). To get these attributes, do the following:
Call
(Optional.) Call
An ASF profile is a template for file encoding, and is intended mainly for creating ASF content. If you are reading an existing ASF file, it is recommended that you use the presentation descriptor to get information about the file. One exception is that the profile contains the mutual exclusion and stream prioritization objects, which are not exposed directly from the presentation descriptor.
Retrieves the size of the header section of an Advanced Systems Format (ASF) file.
The
Receives the size, in bytes, of the header section of the content. The value includes the size of the ASF Header Object plus the size of the header section of the Data Object. Therefore, the resulting value is the offset to the start of the data packets in the ASF Data Object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer does not contain valid ASF data. |
| The buffer does not contain enough valid data. |
The header of an ASF file or stream can be passed to the
Parses the information in an ASF header and uses that information to set values in the ContentInfo object. You can pass the entire header in a single buffer or send it in several pieces.
Pointer to the
Offset, in bytes, of the first byte in the buffer relative to the beginning of the header.
The method returns an
Return code | Description |
---|---|
| The header is completely parsed and validated. |
| The input buffer does not contain valid ASF data. |
| The input buffer is too small. |
| The method succeeded, but the header passed was incomplete. This is the successful return code for all calls but the last one when passing the header in pieces. |
If you pass the header in pieces, the ContentInfo object will keep references to the buffer objects until the entire header is parsed. Therefore, do not write over the buffers passed into this method.
The start of the Header object has the following layout in memory:
Field Name | Size in bytes |
---|---|
Object ID | 16 |
Object Size | 8 |
Number of Header Objects | 4 |
Reserved1 | 1 |
Reserved2 | 1 |
The first call to ParseHeader reads everything up to and including Reserved2, so it requires a minimum of 30 bytes. (Note that the
Encodes the data in the MFASFContentInfo object into a binary Advanced Systems Format (ASF) header.
A reference to the
Size of the encoded ASF header in bytes. If pIHeader is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ASF Header Objects do not exist for the media that the ContentInfo object holds reference to. |
| The ASF Header Object size exceeds 10 MB. |
| The buffer passed in pIHeader is not large enough to hold the ASF Header Object information. |
The size received in the pcbHeader parameter includes the padding size. The content information shrinks or expands the padding data depending on the size of the ASF Header Objects.
During this call, the stream properties are set based on the encoding properties of the profile. These properties are available through the
Retrieves an Advanced Systems Format (ASF) profile that describes the ASF content.
Receives an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The profile is set by calling either
The ASF profile object returned by this method does not include any of the MF_PD_ASF_xxx attributes (see Presentation Descriptor Attributes). To get these attributes, do the following:
Call
(Optional.) Call
An ASF profile is a template for file encoding, and is intended mainly for creating ASF content. If you are reading an existing ASF file, it is recommended that you use the presentation descriptor to get information about the file. One exception is that the profile contains the mutual exclusion and stream prioritization objects, which are not exposed directly from the presentation descriptor.
Uses profile data from a profile object to configure settings in the ContentInfo object.
The
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If there is already information in the ContentInfo object when this method is called, it is replaced by the information from the profile object.
Creates a presentation descriptor for ASF content.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a property store that can be used to set encoding properties.
Stream number to configure. Set to zero to configure file-level encoding properties.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the flags that indicate the selected indexer options.
You must call this method before initializing the indexer object with
Sets indexer options.
Bitwise OR of zero or more flags from the MFASF_INDEXER_FLAGS enumeration specifying the indexer options to use.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The indexer object was initialized before setting flags for it. For more information, see Remarks. |
Retrieves the flags that indicate the selected indexer options.
Receives a bitwise OR of zero or more flags from the MFASF_INDEXER_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pdwFlags is |
You must call this method before initializing the indexer object with
Initializes the indexer object. This method reads information in a ContentInfo object about the configuration of the content and the properties of the existing index, if present. Use this method before using the indexer for either writing or reading an index. You must make this call before using any of the other methods of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid ASF data. |
| Unexpected error. |
?
The indexer needs to examine the data in the ContentInfo object to properly write or read the index for the content. The indexer will not make changes to the content information and will not hold any references to the
In the ASF header, the maximum data-packet size must equal the minimum data-packet size. Otherwise, the method returns
Retrieves the offset of the index object from the start of the content.
Pointer to the
Receives the offset of the index relative to the beginning of the content described by the ContentInfo object. This is the position relative to the beginning of the ASF file.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pIContentInfo is |
?
The index continues from the offset retrieved by this method to the end of the file.
You must call
If the index is retrieved by using more than one call to
Adds byte streams to be indexed.
An array of
The number of references in the ppIByteStreams array.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The indexer object has already been initialized and has already indexed packets. |
?
For a reading scenario, only one byte stream should be used by the indexer object. For an index-generation scenario, the number of byte streams depends on how many index objects need to be generated.
Retrieves the number of byte streams that are in use by the indexer object.
Receives the number of byte streams that are in use by the indexer object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pcByteStreams is |
?
Retrieves the index settings for a specified stream and index type.
Pointer to an
Receives a Boolean value specifying whether the index described by pIndexIdentifier has been created.
A buffer that receives the index descriptor. The index descriptor consists of an
On input, specifies the size, in bytes, of the buffer that pbIndexDescriptor points to. The value can be zero if pbIndexDescriptor is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer size specified in pcbIndexDescriptor is too small. |
?
To read an existing ASF index, call
If an index exists for the stream and the value passed into pcbIndexDescriptor is smaller than the required size of the pbIndexDescriptor buffer, the method returns
If there is no index for the specified stream, the method returns
Configures the index for a stream.
The index descriptor to set. The index descriptor is an
The size, in bytes, of the index descriptor.
A Boolean value. Set to TRUE to have the indexer create an index of the type specified for the stream specified in the index descriptor.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| An attempt was made to change the index status in a seek-only scenario. For more information, see Remarks. |
?
You must make all calls to SetIndexStatus before making any calls to
The indexer object is configured to create temporal indexes for each stream by default. Call this method only if you want to override the default settings.
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
Given a desired seek time, gets the offset from which the client should start reading data.
The value of the index entry for which to get the position. The format of this value varies depending on the type of index, which is specified in the index identifier. For time-based indexing, the variant type is VT_I8 and the value is the desired seek time, in 100-nanosecond units.
Pointer to an
Receives the offset within the data segment of the ASF Data Object. The offset is in bytes, and is relative to the start of packet 0. The offset gives the starting location from which the client should begin reading from the stream. This location might not correspond exactly to the requested seek time.
For reverse playback, if no key frame exists after the desired seek position, this parameter receives the value MFASFINDEXER_READ_FOR_REVERSEPLAYBACK_OUTOFDATASEGMENT. In that case, the seek position should be 1 byte past the end of the data segment.
Receives the approximate time stamp of the data that is located at the offset returned in the pcbOffsetWithinData parameter. The accuracy of this value is equal to the indexing interval of the ASF index, typically about 1 second.
If the approximate time stamp cannot be determined, this parameter receives the value MFASFINDEXER_APPROX_SEEK_TIME_UNKNOWN.
Receives the payload number of the payload that contains the information for the specified stream. Packets can contain multiple payloads, each containing data for a different stream. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The requested seek time is out of range. |
| No index exists of the specified type for the specified stream. |
?
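For a time-based index, the seek value passed to this method is a VT_I8 value in 100-nanosecond units, as noted above. The conversion can be sketched with a small helper (the helper name is hypothetical, not part of the Media Foundation API):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: convert a seek time in seconds to the
// 100-nanosecond units used by a time-based ASF index entry.
// One second equals 10,000,000 units of 100 ns.
int64_t SecondsTo100ns(double seconds)
{
    return static_cast<int64_t>(seconds * 10000000.0);
}

// On Windows, the result would be stored in a PROPVARIANT before the
// call, e.g.: var.vt = VT_I8; var.hVal.QuadPart = SecondsTo100ns(5.0);
```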
Accepts an ASF packet for the file and creates index entries for it.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The argument passed in is |
| The indexer is not initialized. |
?
The ASF indexer creates indexes for a file internally. You can get the completed index for all data packets sent to the indexer by committing the index with
When this method creates index entries, they are immediately available for use by
The media sample specified in pIASFPacketSample must hold a buffer that contains a single ASF packet. Get the sample from the ASF multiplexer by calling the
You cannot use this method while reading an index, only when writing an index.
Adds information about a new index to the ContentInfo object associated with ASF content. You must call this method before copying the index to the content so that the index will be readable by the indexer later.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The caller made an invalid request. For more information, see Remarks. |
?
For the index to function properly, you must call this method after all ASF packets in the file have been passed to the indexer by using the
An application must use the CommitIndex method only when writing a new index; otherwise, CommitIndex may return
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
Retrieves the size, in bytes, of the buffer required to store the completed index.
Receives the size of the index, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index has not been committed. For more information, see Remarks. |
?
Use this method to get the size of the index and then allocate a buffer big enough to hold it.
The index must be committed with a call to
Call
You cannot use this method in a reading scenario. You can only use this method when writing indexes.
Retrieves the completed index from the ASF indexer object.
Pointer to the
The offset of the data to be retrieved, in bytes from the start of the index data. Set to 0 for the first call. If subsequent calls are needed (the buffer is not large enough to hold the entire index), set to the byte following the last one retrieved.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index was not committed before attempting to get the completed index. For more information, see Remarks. |
?
This method uses as much of the buffer as possible, and updates the length of the buffer appropriately.
If pIIndexBuffer is large enough to contain the entire index, cbOffsetWithinIndex should be 0, and the call needs to be made only once. Otherwise, there should be no gaps between successive buffers.
The user must write this data to the content at cbOffsetFromIndexStart bytes after the end of the ASF data object. You can call
This call will not succeed unless
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
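The chunked retrieval described above can be sketched with a stand-in data source. This is only an illustration of the offset bookkeeping (the real call is GetCompletedIndex, which fills a media buffer; the function below copies from a plain byte vector instead):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Retrieve an index in fixed-size chunks with no gaps: each call starts
// at the byte after the last one retrieved (cbOffsetWithinIndex).
std::vector<unsigned char> RetrieveInChunks(
    const std::vector<unsigned char> &index, size_t chunkSize)
{
    std::vector<unsigned char> out;
    size_t offset = 0; // cbOffsetWithinIndex: 0 on the first call
    while (offset < index.size())
    {
        size_t n = std::min(chunkSize, index.size() - offset);
        out.insert(out.end(), index.begin() + offset,
                   index.begin() + offset + n);
        offset += n; // next call: byte after the last one retrieved
    }
    return out;
}
```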
Provides methods to create Advanced Systems Format (ASF) data packets. The methods of this interface process input samples into the packets that make up an ASF data section. The ASF multiplexer exposes this interface. To create the ASF multiplexer, call
Sets the maximum time by which samples from various streams can be out of synchronization. The multiplexer will not accept a sample with a time stamp that is out of synchronization with the latest samples from any other stream by an amount that exceeds the synchronization tolerance.
The synchronization tolerance is the maximum difference in presentation times at any given point between samples of different streams that the ASF multiplexer can accommodate. That is, if the synchronization tolerance is 3 seconds, no stream can be more than 3 seconds behind any other stream in the time stamps passed to the multiplexer. The multiplexer determines a default synchronization tolerance to use, but this method overrides it (usually to increase it). More tolerance means the potential for greater latency in the multiplexer. If the time stamps are synchronized among the streams, actual latency will be much lower than msSyncTolerance.
Initializes the multiplexer with the data from an ASF ContentInfo object.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This call must be made once at the beginning of encoding, with pIContentInfo pointing to the ASF ContentInfo object that describes the content to be encoded. This enables the ASF multiplexer to see, among other things, which streams will be present in the encoding session. This call typically does not affect the data in the ASF ContentInfo object.
Sets multiplexer options.
Bitwise OR of zero or more members of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves flags indicating the configured multiplexer options.
Receives a bitwise OR of zero or more values from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Delivers input samples to the multiplexer.
The stream number of the stream to which the sample belongs.
Pointer to the
The adjustment to apply to the time stamp of the sample. This parameter is used if the caller wants to shift the sample time on pISample. This value should be positive if the time stamp should be pushed ahead and negative if the time stamp should be pushed back. This time stamp is added to sample time on pISample, and the resulting time is used by the multiplexer instead of the original sample time. If no adjustment is needed, set this value to 0.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are too many packets waiting to be retrieved from the multiplexer. Call |
| The sample that was processed violates the bandwidth limitations specified for the stream in the ASF ContentInfo object. When this error is generated, the sample is dropped. |
| The value passed in wStreamNumber is invalid. |
| The presentation time of the input media sample is earlier than the send time. |
?
The application passes samples to ProcessSample, and the ASF multiplexer queues them internally until they are ready to be placed into ASF packets. Call
After each call to ProcessSample, call GetNextPacket in a loop to get all of the available data packets. For a code example, see Generating New ASF Data Packets.
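The process-then-drain pattern above can be sketched with a stand-in packet queue. The real multiplexer and samples are COM interfaces; the types and names below are illustrative only, showing the control flow of calling GetNextPacket in a loop until no more packets are waiting:

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Stand-in for the multiplexer's internal packet queue.
struct FakeMux
{
    std::deque<int> queue;

    void ProcessSample(int sample) { queue.push_back(sample); }

    // Mimics GetNextPacket: returns one packet and reports via
    // *moreWaiting whether further packets are still queued (the real
    // API sets an "incomplete" status flag in that case).
    bool GetNextPacket(int *packet, bool *moreWaiting)
    {
        if (queue.empty()) { *moreWaiting = false; return false; }
        *packet = queue.front();
        queue.pop_front();
        *moreWaiting = !queue.empty();
        return true;
    }
};

// Drain loop: keep calling GetNextPacket until nothing is waiting.
std::vector<int> DrainPackets(FakeMux &mux)
{
    std::vector<int> out;
    int packet = 0;
    bool more = true;
    while (mux.GetNextPacket(&packet, &more))
    {
        out.push_back(packet);
        if (!more) break;
    }
    return out;
}
```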
Retrieves the next output ASF packet from the multiplexer.
Receives zero or more status flags. If more than one packet is waiting, the method sets the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The client needs to call this method, ideally after every call to
If no packets are ready, the method returns
Signals the multiplexer to process all queued output media samples. Call this method after passing the last sample to the multiplexer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
You must call Flush after the last sample has been passed into the ASF multiplexer and before you call
Collects data from the multiplexer and updates the ASF ContentInfo object to include that information in the ASF Header Object.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are pending output media samples waiting in the multiplexer. Call |
?
For non-live encoding scenarios (such as encoding to a file), the user should call End to update the specified ContentInfo object, adding data that the multiplexer has collected during the packet generation process. The user should then call
During live encoding, it is usually not possible to rewrite the header, so this call is not required for live encoding. (The header in those cases will simply lack some of the information that was not available until the end of the encoding session.)
Retrieves multiplexer statistics.
The stream number for which to obtain statistics.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sets the maximum time by which samples from various streams can be out of synchronization. The multiplexer will not accept a sample with a time stamp that is out of synchronization with the latest samples from any other stream by an amount that exceeds the synchronization tolerance.
Synchronization tolerance in milliseconds.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The synchronization tolerance is the maximum difference in presentation times at any given point between samples of different streams that the ASF multiplexer can accommodate. That is, if the synchronization tolerance is 3 seconds, no stream can be more than 3 seconds behind any other stream in the time stamps passed to the multiplexer. The multiplexer determines a default synchronization tolerance to use, but this method overrides it (usually to increase it). More tolerance means the potential for greater latency in the multiplexer. If the time stamps are synchronized among the streams, actual latency will be much lower than msSyncTolerance.
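The acceptance rule described above can be sketched as a simple comparison (hypothetical helper, not part of the Media Foundation API): a sample is rejected only when it lags the latest sample on any other stream by more than the tolerance.

```cpp
#include <cassert>
#include <cstdint>

// A new sample is acceptable if its presentation time is not more than
// msSyncTolerance behind the latest sample already seen on any other
// stream. All times are in milliseconds.
bool WithinSyncTolerance(int64_t sampleTimeMs,
                         int64_t latestOtherStreamTimeMs,
                         int64_t msSyncTolerance)
{
    return (latestOtherStreamTimeMs - sampleTimeMs) <= msSyncTolerance;
}
```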
Configures an Advanced Systems Format (ASF) mutual exclusion object, which manages information about a group of streams in an ASF profile that are mutually exclusive. When streams or groups of streams are mutually exclusive, only one of them is read at a time; they are not read concurrently.
A common example of mutual exclusion is a set of streams that each include the same content encoded at a different bit rate. The stream that is used is determined by the available bandwidth to the reader.
An
An ASF profile object can support multiple mutual exclusions. Each must be configured using a separate ASF mutual exclusion object.
Retrieves the type of mutual exclusion represented by the Advanced Systems Format (ASF) mutual exclusion object.
A variable that receives the type identifier. For a list of predefined mutual exclusion type constants, see ASF Mutual Exclusion Type GUIDs.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sometimes, content must be made mutually exclusive in more than one way. For example, a video file might contain audio streams of several bit rates for each of several languages. To handle this type of complex mutual exclusion, you must configure more than one ASF mutual exclusion object. For more information, see
Sets the type of mutual exclusion that is represented by the Advanced Systems Format (ASF) mutual exclusion object.
The type of mutual exclusion that is represented by the ASF mutual exclusion object. For a list of predefined mutual exclusion type constants, see ASF Mutual Exclusion Type GUIDs.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sometimes, content must be made mutually exclusive in more than one way. For example, a video file might contain audio streams in several bit rates for each of several languages. To handle this type of complex mutual exclusion, you must configure more than one ASF mutual exclusion object. For more information, see
Retrieves the number of records in the Advanced Systems Format mutual exclusion object.
Receives the count of records.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Each record includes one or more streams. Every stream in a record is mutually exclusive of streams in every other record.
Use this method in conjunction with
Retrieves the stream numbers contained in a record in the Advanced Systems Format mutual exclusion object.
The number of the record for which to retrieve the stream numbers.
An array that receives the stream numbers. Set to
On input, the number of elements in the array referenced by pwStreamNumArray. On output, the method sets this value to the count of stream numbers in the record. You can call GetStreamsForRecord with pwStreamNumArray set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Adds a stream number to a record in the Advanced Systems Format mutual exclusion object.
The record number to which the stream is added. A record number is set by the
The stream number to add to the record.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified stream number is already associated with the record. |
?
Each record includes one or more streams. Every stream in a record is mutually exclusive of all streams in every other record.
Removes a stream number from a record in the Advanced Systems Format mutual exclusion object.
The record number from which to remove the stream number.
The stream number to remove from the record.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream number is not listed for the specified record. |
?
Removes a record from the Advanced Systems Format (ASF) mutual exclusion object.
The index of the record to remove.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
When a record is removed, the ASF mutual exclusion object re-indexes the remaining records so that they are sequential starting with zero. You should enumerate the records again to ensure that you have the correct index for each record. If the removed record had the highest index, removing it has no effect on the other indexes.
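The re-indexing behavior can be modeled with a plain vector, where a record's index is its position (stand-in only; the real object is a COM interface): removing a record shifts every higher-indexed record down by one, so previously stored indexes become stale.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Records held in a vector; index == position. Erasing one record
// renumbers all records after it, mirroring RemoveRecord's behavior.
void RemoveRecordAt(std::vector<int> &records, size_t index)
{
    records.erase(records.begin() + index);
}
```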
Adds a record to the mutual exclusion object. A record specifies streams that are mutually exclusive with the streams in all other records.
Receives the index assigned to the new record. Record indexes are zero-based and sequential.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
A record can include one or more stream numbers. All of the streams in a record are mutually exclusive with all the streams in all other records in the ASF mutual exclusion object.
You can use records to create complex mutual exclusion scenarios by using multiple ASF mutual exclusion objects.
Creates a copy of the Advanced Systems Format mutual exclusion object.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The cloned object is a new object, completely independent of the object from which it was cloned.
Retrieves the number of streams in the profile.
Adds a stream to the profile or reconfigures an existing stream.
If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.
Retrieves the number of streams in the profile.
Receives the number of streams in the profile.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves a stream from the profile by stream index, and/or retrieves the stream number for a stream index.
The index of the stream to retrieve. Stream indexes are sequential and zero-based. You can get the number of streams that are in the profile by calling the
Receives the stream number of the requested stream. Stream numbers are one-based and are not necessarily sequential. This parameter can be set to
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method does not create a copy of the stream configuration object. The reference that is retrieved points to the object within the profile object. You must not make any changes to the stream configuration object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the stream configuration object in the profile, you must first clone the stream configuration object by calling
Retrieves an Advanced Systems Format (ASF) stream configuration object for a stream in the profile. This method references the stream by stream number instead of stream index.
The stream number for which to obtain the interface reference.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method does not create a copy of the stream configuration object. The reference that is retrieved points to the object within the profile object. You must not make any changes to the stream configuration object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the stream configuration object in the profile, you must first clone the stream configuration object by calling
Adds a stream to the profile or reconfigures an existing stream.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.
Removes a stream from the Advanced Systems Format (ASF) profile object.
Stream number of the stream to remove.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
After a stream is removed, the ASF profile object reassigns stream indexes so that the index values are sequential starting from zero. Any previously stored stream index numbers are no longer valid after deleting a stream.
Creates an Advanced Systems Format (ASF) stream configuration object.
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| ppIStream is |
| stream configuration object could not be created due to insufficient memory. |
?
The ASF stream configuration object created by this method is not included in the profile. To include the stream, you must first configure the stream configuration and then call
Retrieves the number of Advanced Systems Format (ASF) mutual exclusion objects that are associated with the profile.
Receives the number of mutual exclusion objects.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Multiple mutual exclusion objects may be required for streams that are mutually exclusive in more than one way. For more information, see
Retrieves an Advanced Systems Format (ASF) mutual exclusion object from the profile.
Index of the mutual exclusion object in the profile.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method does not create a copy of the mutual exclusion object. The returned reference refers to the mutual exclusion contained in the profile object. You must not make any changes to the mutual exclusion object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the mutual exclusion object in the profile, you must first clone the mutual exclusion object by calling
Adds a configured Advanced Systems Format (ASF) mutual exclusion object to the profile.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
You can create a mutual exclusion object by calling the
Removes an Advanced Systems Format (ASF) mutual exclusion object from the profile.
The index of the mutual exclusion object to remove from the profile.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
When a mutual exclusion object is removed from the profile, the ASF profile object reassigns the mutual exclusion indexes so that they are sequential starting with zero. Any previously stored indexes are no longer valid after calling this method.
Creates a new Advanced Systems Format (ASF) mutual exclusion object. Mutual exclusion objects can be added to a profile by calling the AddMutualExclusion method.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The ASF mutual exclusion object created by this method is not associated with the profile. Call
Reserved.
If this method succeeds, it returns
Reserved.
If this method succeeds, it returns
If this method succeeds, it returns
Reserved.
Returns E_NOTIMPL.
Creates a copy of the Advanced Systems Format profile object.
Receives a reference to the
If this method succeeds, it returns
The cloned object is completely independent of the original.
Retrieves the option flags that are set on the ASF splitter.
Resets the Advanced Systems Format (ASF) splitter and configures it to parse data from an ASF data section.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pIContentInfo parameter is |
?
Sets option flags on the Advanced Systems Format (ASF) splitter.
A bitwise combination of zero or more members of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The splitter is not initialized. |
| The dwFlags parameter does not contain a valid flag. |
| The |
?
This method can only be called after the splitter is initialized.
Retrieves the option flags that are set on the ASF splitter.
Receives the option flags. This value is a bitwise OR of zero or more members of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pdwFlags is |
?
Sets the streams to be parsed by the Advanced Systems Format (ASF) splitter.
An array of variables containing the list of stream numbers to select.
The number of valid elements in the stream number array.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pwStreamNumbers is |
| Invalid stream number was passed in the array. |
?
Calling this method supersedes any previous stream selections; only the streams specified in the pwStreamNumbers array will be selected.
By default, no streams are selected by the splitter.
You can obtain a list of the currently selected streams by calling the
Gets a list of currently selected streams.
The address of an array of WORDs. This array receives the stream numbers of the selected streams. This parameter can be
On input, points to a variable that contains the number of elements in the pwStreamNumbers array. Set the variable to zero if pwStreamNumbers is
On output, receives the number of elements that were copied into pwStreamNumbers. Each element is the identifier of a selected stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The pwStreamNumbers array is smaller than the number of selected streams. See Remarks. |
?
To get the number of selected streams, set pwStreamNumbers to NULL and call the method. The call fails because the array is too small, but it sets *pwNumStreams equal to the number of selected streams. Then allocate an array of that size and call the method again, passing the array in the pwStreamNumbers parameter.
The following code shows these steps:
    HRESULT DisplaySelectedStreams(IMFASFSplitter *pSplitter)
    {
        WORD count = 0;
        HRESULT hr = pSplitter->GetSelectedStreams(NULL, &count);
        if (hr == MF_E_BUFFERTOOSMALL)
        {
            WORD *pStreamIds = new (std::nothrow) WORD[count];
            if (pStreamIds)
            {
                hr = pSplitter->GetSelectedStreams(pStreamIds, &count);
                if (SUCCEEDED(hr))
                {
                    for (WORD i = 0; i < count; i++)
                    {
                        printf("Selected stream ID: %d\n", pStreamIds[i]);
                    }
                }
                delete [] pStreamIds;
            }
            else
            {
                hr = E_OUTOFMEMORY;
            }
        }
        return hr;
    }
Alternatively, you can allocate an array that is equal to the total number of streams and pass that to pwStreamNumbers.
Before calling this method, initialize *pwNumStreams to the number of elements in pwStreamNumbers. If pwStreamNumbers is NULL, set *pwNumStreams to zero.
By default, no streams are selected by the splitter. Select streams by calling the
Sends packetized Advanced Systems Format (ASF) data to the ASF splitter for processing.
Pointer to the
The offset into the data buffer where the splitter should begin parsing. This value is typically set to 0.
The length, in bytes, of the data to parse. This value is measured from the offset specified by cbBufferOffset. Set to 0 to process to the end of the buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pIBuffer parameter is NULL. The specified offset value in cbBufferOffset is greater than the length of the buffer. The total value of cbBufferOffset and cbLength is greater than the length of the buffer. |
| The |
| The splitter cannot process more input at this time. |
?
After using this method to parse data, you must call
If your ASF data contains variable-sized packets, you must set the
If the method returns ME_E_NOTACCEPTING, call GetNextSample to get the output samples, or call
The splitter might hold a reference count on the input buffer. Therefore, do not write over the valid data in the buffer after calling this method.
Retrieves a sample from the Advanced Systems Format (ASF) splitter after the data has been parsed.
Receives one of the following values.
Value | Meaning |
---|---|
| More samples are ready to be retrieved. Call GetNextSample in a loop until the pdwStatusFlags parameter receives the value zero. |
| No additional samples are ready. Call |
?
If the method returns a sample in the ppISample parameter, this parameter receives the number of the stream to which the sample belongs.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ASF data in the buffer is invalid. |
| There is a gap in the ASF data. |
?
Before calling this method, call
The ASF splitter skips samples for unselected streams. To select streams, call
Resets the Advanced Systems Format (ASF) splitter and releases all pending samples.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Any samples waiting to be retrieved when Flush is called are lost.
Retrieves the send time of the last sample received.
Receives the send time of the last sample received.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pdwLastSendTime is |
?
Retrieves information about an existing payload extension.
Retrieves the stream number of the stream.
Retrieves the media type of the stream.
To reduce unnecessary copying, the method returns a reference to the media type that is stored internally by the object. Do not modify the returned media type, as the results are not defined.
Gets the major media type of the stream.
Receives the major media type for the stream. For a list of possible values, see Major Media Types.
If this method succeeds, it returns
Retrieves the stream number of the stream.
The method returns the stream number.
Assigns a stream number to the stream.
The number to assign to the stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Stream numbers start from 1 and do not need to be sequential.
Retrieves the media type of the stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To reduce unnecessary copying, the method returns a reference to the media type that is stored internally by the object. Do not modify the returned media type, as the results are not defined.
Sets the media type for the Advanced Systems Format (ASF) stream configuration object.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Some validation of the media type is performed by this method. However, a media type can be successfully set, but cause an error when the stream is added to the profile.
Retrieves the number of payload extensions that are configured for the stream.
Receives the number of payload extensions.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves information about an existing payload extension.
The payload extension index. Valid indexes range from 0 to one less than the number of extensions obtained by calling
Receives a
Receives the number of bytes added to each sample for the extension.
Pointer to a buffer that receives information about this extension system. This information is the same for all samples and is stored in the content header (not in each sample). This parameter can be
On input, specifies the size of the buffer pointed to by pbExtensionSystemInfo. On output, receives the required size of the pbExtensionSystemInfo buffer in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The buffer specified in pbExtensionSystemInfo is too small. |
| The wPayloadExtensionNumber parameter is out of range. |
Configures a payload extension for the stream.
Pointer to a
Number of bytes added to each sample for the extension.
A reference to a buffer that contains information about this extension system. This information is the same for all samples and is stored in the content header (not with each sample). This parameter can be
Amount of data, in bytes, that describes this extension system. If this value is 0, then pbExtensionSystemInfo can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Removes all payload extensions that are configured for the stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
None.
Creates a copy of the Advanced Systems Format (ASF) stream configuration object.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The cloned object is completely independent of the original.
Note: This interface is not implemented in this version of Media Foundation.
Adds a stream to the stream priority list.
The stream priority list is built by appending entries to the list with each call to AddStream. The list is evaluated in descending order of importance. The most important stream should be added first, and the least important should be added last.
Note: This interface is not implemented in this version of Media Foundation.
Retrieves the number of entries in the stream priority list.
Receives the number of streams in the stream priority list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Note: This interface is not implemented in this version of Media Foundation.
Retrieves the stream number of a stream in the stream priority list.
Zero-based index of the entry to retrieve from the stream priority list. To get the number of entries in the priority list, call
Receives the stream number of the stream priority entry.
Receives a Boolean value. If TRUE, the stream is mandatory.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
Note: This interface is not implemented in this version of Media Foundation.
Adds a stream to the stream priority list.
Stream number of the stream to add.
If TRUE, the stream is mandatory.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. |
The stream priority list is built by appending entries to the list with each call to AddStream. The list is evaluated in descending order of importance. The most important stream should be added first, and the least important should be added last.
Note: This interface is not implemented in this version of Media Foundation.
Removes a stream from the stream priority list.
Index of the entry in the stream priority list to remove. Values range from zero to one less than the stream count retrieved by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When a stream is removed from the stream priority list, the index values of all streams that follow it in the list are decremented.
Note: This interface is not implemented in this version of Media Foundation.
Creates a copy of the ASF stream prioritization object.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The new object is completely independent of the original.
Retrieves the number of bandwidth steps that exist for the content. This method is used for multiple bit rate (MBR) content.
Bandwidth steps are bandwidth levels used for multiple bit rate (MBR) content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
Sets options for the stream selector.
Retrieves the number of streams that are in the Advanced Systems Format (ASF) content.
Receives the number of streams in the content.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the number of outputs for the Advanced Systems Format (ASF) content.
Receives the number of outputs.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Outputs are streams in the ASF data section that will be parsed.
Retrieves the number of streams associated with an output.
The output number for which to retrieve the stream count.
Receives the number of streams associated with the output.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid output number. |
An output is a stream in an ASF data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
Retrieves the stream numbers for all of the streams that are associated with an output.
The output number for which to retrieve stream numbers.
Address of an array that receives the stream numbers associated with the output. The caller allocates the array. The array size must be at least as large as the value returned by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid output number. |
An output is a stream in an ASF data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
Retrieves the output number associated with a stream.
The stream number for which to retrieve an output number.
Receives the output number.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. |
Outputs are streams in the ASF data section that will be parsed.
Retrieves the manual output override selection that is set for a stream.
Stream number for which to retrieve the output override selection.
Receives the output override selection. The value is a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the selection status of an output, overriding other selection criteria.
Output number for which to set selection.
Member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the number of mutual exclusion objects associated with an output.
Output number for which to retrieve the count of mutually exclusive relationships.
Receives the number of mutual exclusions.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a mutual exclusion object for an output.
Output number for which to retrieve a mutual exclusion object.
Mutual exclusion number. This is an index of mutually exclusive relationships associated with the output. Set this parameter to a value from 0 to one less than the number of mutual exclusion objects retrieved by calling
Receives a reference to the mutual exclusion object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Outputs are streams in the ASF data section that will be parsed.
Selects a mutual exclusion record to use for a mutual exclusion object associated with an output.
The output number for which to set a stream.
Index of the mutual exclusion for which to select.
Record of the specified mutual exclusion to select.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
An output is a stream in an Advanced Systems Format (ASF) data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
An ASF file can contain multiple mutually exclusive relationships, such as a file with both language-based and bit-rate-based mutual exclusion. If an output is involved in multiple mutually exclusive relationships, a record from each must be selected.
Retrieves the number of bandwidth steps that exist for the content. This method is used for multiple bit rate (MBR) content.
Receives the number of bandwidth steps.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Bandwidth steps are bandwidth levels used for multiple bit rate (MBR) content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
Retrieves the stream numbers that apply to a bandwidth step. This method is used for multiple bit rate (MBR) content.
Bandwidth step number for which to retrieve information. Set this value to a number from 0 to one less than the number of bandwidth steps returned by
Receives the bit rate associated with the bandwidth step.
Address of an array that receives the stream numbers. The caller allocates the array. The array size must be at least as large as the value returned by the
Address of an array that receives the selection status of each stream, as an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Bandwidth steps are bandwidth levels used for MBR content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
Retrieves the index of a bandwidth step that is appropriate for a specified bit rate. This method is used for multiple bit rate (MBR) content.
The bit rate to find a bandwidth step for.
Receives the step number. Use this number to retrieve information about the step by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
In a streaming multiple bit rate (MBR) scenario, call this method with the current data rate of the network connection to determine the correct step to use. You can also call this method periodically throughout streaming to ensure that the best step is used.
Sets options for the stream selector.
Bitwise OR of zero or more members of the MFASF_STREAMSELECTOR_FLAGS enumeration specifying the options to use.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Represents a description of an audio format.
Windows Server 2008 and Windows Vista: If the major type of a media type is
To convert an audio media type into a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[GetAudioFormat is no longer available for use as of Windows 7. Instead, use the media type attributes to get the properties of the audio format.]
Returns a reference to a
If you need to convert the media type into a
There are no guarantees about how long the returned reference is valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[GetAudioFormat is no longer available for use as of Windows 7. Instead, use the media type attributes to get the properties of the audio format.]
Returns a reference to a
This method returns a reference to a
If you need to convert the media type into a
There are no guarantees about how long the returned reference is valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Configures the audio session that is associated with the streaming audio renderer (SAR). Use this interface to change how the audio session appears in the Windows volume control.
The SAR exposes this interface as a service. To get a reference to the interface, call
Retrieves the group of sessions to which this audio session belongs.
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
Assigns the audio session to a group of sessions.
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
Retrieves the group of sessions to which this audio session belongs.
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
Sets the display name of the audio session. The Windows volume control displays this name.
A null-terminated wide-character string that contains the display name.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application does not set a display name, Windows creates one.
Retrieves the display name of the audio session. The Windows volume control displays this name.
Receives a reference to the display name string. The caller must free the memory allocated for the string by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application does not set a display name, Windows creates one.
Sets the icon resource for the audio session. The Windows volume control displays this icon.
A wide-character string that specifies the icon. See Remarks.
If this method succeeds, it returns
The icon path has the format "path,index" or "path,-id", where path is the fully qualified path to a DLL, executable file, or icon file; index is the zero-based index of the icon within the file; and id is a resource identifier. Note that resource identifiers are preceded by a minus sign (-) to distinguish them from indexes. The path can contain environment variables, such as "%windir%". For more information, see IAudioSessionControl::SetIconPath in the Windows SDK.
Retrieves the icon resource for the audio session. The Windows volume control displays this icon.
Receives a reference to a wide-character string that specifies a shell resource. The format of the string is described in the topic
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application did not set an icon path, the method returns an empty string ("").
For more information, see IAudioSessionControl::GetIconPath in the core audio API documentation.
Controls the volume levels of individual audio channels.
The streaming audio renderer (SAR) exposes this interface as a service. To get a reference to the interface, call
If your application does not require channel-level volume control, you can use the
Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation). For each channel, the attenuation level is the product of:
For example, if the master volume is 0.8 and the channel volume is 0.5, the attenuation for that channel is 0.8 × 0.5 = 0.4. Volume levels can exceed 1.0 (positive gain), but the audio engine clips any audio samples that exceed zero decibels.
Use the following formula to convert the volume level to the decibel (dB) scale:
Attenuation (dB) = 20 * log10(Level)
For example, a volume level of 0.50 represents 6.02 dB of attenuation.
Retrieves the number of channels in the audio stream.
Retrieves the number of channels in the audio stream.
Receives the number of channels in the audio stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the volume level for a specified channel in the audio stream.
Zero-based index of the audio channel. To get the number of channels, call
Volume level for the channel.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the volume level for a specified channel in the audio stream.
Zero-based index of the audio channel. To get the number of channels, call
Receives the volume level for the channel.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the individual volume levels for all of the channels in the audio stream.
Number of elements in the pfVolumes array. The value must equal the number of channels. To get the number of channels, call
Address of an array of size dwCount, allocated by the caller. The array specifies the volume levels for all of the channels. Before calling the method, set each element of the array to the desired volume level for the channel.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the volume levels for all of the channels in the audio stream.
Number of elements in the pfVolumes array. The value must equal the number of channels. To get the number of channels, call
Address of an array of size dwCount, allocated by the caller. The method fills the array with the volume level for each channel in the stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Represents a buffer that contains a two-dimensional surface, such as a video frame.
To get a reference to this interface, call QueryInterface on the media buffer.
To use a 2-D buffer, it is important to know the stride, which is the number of bytes needed to go from one row of pixels to the next. The stride may be larger than the image width, because the surface may contain padding bytes after each row of pixels. Stride can also be negative, if the pixels are oriented bottom-up in memory. For more information, see Image Stride.
Every video format defines a contiguous or packed representation. This representation is compatible with the standard layout of a DirectX surface in system memory, with no additional padding. For RGB video, the contiguous representation has a pitch equal to the image width in bytes, rounded up to the nearest DWORD boundary. For YUV video, the layout of the contiguous representation depends on the YUV format. For planar YUV formats, the Y plane might have a different pitch than the U and V planes.
If a media buffer supports the
Call the Lock2D method to access the 2-D buffer in its native format. The native format might not be contiguous. The buffer's
For uncompressed images, the amount of valid data in the buffer is determined by the width, height, and pixel layout of the image. For this reason, if you call Lock2D to access the buffer, do not rely on the values returned by
Queries whether the buffer is contiguous in its native format.
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Retrieves the number of bytes needed to store the contents of the buffer in contiguous format.
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Gives the caller access to the memory in the buffer.
Receives a reference to the first byte of the top row of pixels in the image. The top row is defined as the top row when the image is presented to the viewer, and might not be the first row in memory.
Receives the surface stride, in bytes. The stride might be negative, indicating that the image is oriented from the bottom up in memory.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Cannot lock the Direct3D surface. |
| The buffer cannot be locked at this time. |
If p is a reference to the first byte in a row of pixels, p + (*plPitch) points to the first byte in the next row of pixels. A buffer might contain padding after each row of pixels, so the stride might be wider than the width of the image in bytes. Do not access the memory that is reserved for padding bytes, because it might not be read-accessible or write-accessible. For more information, see Image Stride.
The reference returned in pbScanline0 remains valid as long as the caller holds the lock. When you are done accessing the memory, call
The values returned by the
The
When the underlying buffer is a Direct3D surface, the method fails if the surface is not lockable.
Unlocks a buffer that was previously locked. Call this method once for each call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a reference to the buffer memory and the surface stride.
Receives a reference to the first byte of the top row of pixels in the image.
Receives the stride, in bytes. For more information, see Image Stride.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| You must lock the buffer before calling this method. |
Before calling this method, you must lock the buffer by calling
Queries whether the buffer is contiguous in its native format.
Receives a Boolean value. The value is TRUE if the buffer is contiguous, and
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Retrieves the number of bytes needed to store the contents of the buffer in contiguous format.
Receives the number of bytes needed to store the contents of the buffer in contiguous format.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Copies this buffer into the caller's buffer, converting the data to contiguous format.
Pointer to the destination buffer where the data will be copied. The caller allocates the buffer.
Size of the destination buffer, in bytes. To get the required size, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid size specified in pbDestBuffer. |
If the original buffer is not contiguous, this method converts the contents into contiguous format during the copy. For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Copies data to this buffer from a buffer that has a contiguous format.
Pointer to the source buffer. The caller allocates the buffer.
Size of the source buffer, in bytes. To get the maximum size of the buffer, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method copies the contents of the source buffer into the buffer that is managed by this
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in the
Represents a buffer that contains a two-dimensional surface, such as a video frame.
This interface extends the
Gives the caller access to the memory in the buffer.
A member of the
Receives a reference to the first byte of the top row of pixels in the image. The top row is defined as the top row when the image is presented to the viewer, and might not be the first row in memory.
Receives the surface stride, in bytes. The stride might be negative, indicating that the image is oriented from the bottom up in memory.
Receives a reference to the start of the accessible buffer in memory.
Receives the length of the buffer, in bytes.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. The buffer might already be locked with an incompatible locking flag. See Remarks. |
| There is insufficient memory to complete the operation. |
When you are done accessing the memory, call
This method is equivalent to the
The ppbBufferStart and pcbBufferLength parameters receive the bounds of the buffer memory. Use these values to guard against buffer overruns. Use the values of ppbScanline0 and plPitch to access the image data. If the image is bottom-up in memory, ppbScanline0 will point to the last scan line in memory and plPitch will be negative. For more information, see Image Stride.
The lockFlags parameter specifies whether the buffer is locked for read-only access, write-only access, or read/write access.
When possible, use a read-only or write-only lock, and avoid locking the buffer for read/write access. If the buffer represents a DirectX Graphics Infrastructure (DXGI) surface, a read/write lock can cause an extra copy between CPU memory and GPU memory.
Copies the buffer to another 2D buffer object.
A reference to the
If this method succeeds, it returns
The destination buffer must be at least as large as the source buffer.
Enables
Indicates that a
Indicates that a
Controls how a byte stream buffers data from a network.
To get a reference to this interface, call QueryInterface on the byte stream object.
If a byte stream implements this interface, a media source can use it to control how the byte stream buffers data. This interface is designed for byte streams that read data from a network.
A byte stream that implements this interface should also implement the
The byte stream must send a matching
After the byte stream sends an
The byte stream should not send any more buffering events after it reaches the end of the file.
If buffering is disabled, the byte stream does not send any buffering events. Internally, however, it might still buffer data while it waits for I/O requests to complete. Therefore,
If the byte stream is buffering data internally and the media source calls EnableBuffering with the value TRUE, the byte stream can send
After the presentation has started, the media source should forward and
Sets the buffering parameters.
Sets the buffering parameters.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Enables or disables buffering.
Specifies whether the byte stream buffers data. If TRUE, buffering is enabled. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, call
Stops any buffering that is in progress.
The method returns an
Return code | Description |
---|---|
| The byte stream successfully stopped buffering. |
| No buffering was in progress. |
If the byte stream is currently buffering data, it stops and sends an
Controls how a network byte stream transfers data to a local cache. Optionally, this interface is exposed by byte streams that read data from a network, for example, through HTTP.
To get a reference to this interface, call QueryInterface on the byte stream object.
Stops the background transfer of data to the local cache.
If this method succeeds, it returns
The byte stream resumes transferring data to the cache if the application does one of the following:
Controls how a network byte stream transfers data to a local cache. This interface extends the
Byte stream objects in Microsoft Media Foundation can optionally implement this interface. To get a reference to this interface, call QueryInterface on the byte stream object.
Limits the cache size.
Queries whether background transfer is active.
Background transfer might stop because the cache limit was reached (see
Gets the ranges of bytes that are currently stored in the cache.
Receives the number of ranges returned in the ppRanges array.
Receives an array of
If this method succeeds, it returns
Limits the cache size.
The maximum number of bytes to store in the cache, or ULONGLONG_MAX for no limit. The default value is no limit.
If this method succeeds, it returns
Queries whether background transfer is active.
Receives the value TRUE if background transfer is currently active, or
If this method succeeds, it returns
Background transfer might stop because the cache limit was reached (see
Creates a media source from a byte stream.
Applications do not use this interface directly. This interface is exposed by byte-stream handlers, which are used by the source resolver. When the byte-stream handler is given a byte stream, it parses the stream and creates a media source. Byte-stream handlers are registered by file name extension or MIME type.
Retrieves the maximum number of bytes needed to create the media source or determine that the byte stream handler cannot parse this stream.
Begins an asynchronous request to create a media source from a byte stream.
Pointer to the byte stream's
String that contains the original URL of the byte stream. This parameter can be
Bitwise OR of zero or more flags. See Source Resolver Flags.
Pointer to the
Receives an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Unable to parse the byte stream. |
The dwFlags parameter must contain the
The byte-stream handler is responsible for parsing the stream and validating the contents. If the stream is not valid or the byte stream handler cannot parse the stream, the handler should return a failure code. The byte stream is not guaranteed to match the type of stream that the byte handler is designed to parse.
If the pwszURL parameter is not
When the operation completes, the byte-stream handler calls the
Completes an asynchronous request to create a media source.
Pointer to the
Receives a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. See |
| Unable to parse the byte stream. |
Call this method from inside the
Cancels the current request to create a media source.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You can use this method to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
Retrieves the maximum number of bytes needed to create the media source or determine that the byte stream handler cannot parse this stream.
Receives the maximum number of bytes that are required.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a proxy to a byte stream. The proxy enables a media source to read from a byte stream in another process.
Creates a proxy to a byte stream. The proxy enables a media source to read from a byte stream in another process.
A reference to the
Reserved. Set to
The interface identifier (IID) of the interface being requested.
Receives a reference to the interface. The caller must release the interface.
If this method succeeds, it returns
Seeks a byte stream by time position.
A byte stream can implement this interface if it supports time-based seeking. For example, a byte stream that reads data from a server might implement the interface. Typically, a local file-based byte stream would not implement it.
To get a reference to this interface, call QueryInterface on the byte stream object.
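As an illustration, the steps above can be sketched in C++ (a minimal sketch with abbreviated error handling; it assumes the standard IMFByteStreamTimeSeek declarations from mfidl.h, available on Windows 8 and later):

```cpp
#include <mfidl.h>   // IMFByteStreamTimeSeek, IMFByteStream

// Query a byte stream for time-based seeking support and, if it is
// supported, seek to the 5-second mark (100-nanosecond units).
HRESULT SeekByTime(IMFByteStream *pByteStream)
{
    IMFByteStreamTimeSeek *pTimeSeek = NULL;
    HRESULT hr = pByteStream->QueryInterface(IID_PPV_ARGS(&pTimeSeek));
    if (SUCCEEDED(hr))
    {
        BOOL fSupported = FALSE;
        hr = pTimeSeek->IsTimeSeekSupported(&fSupported);
        if (SUCCEEDED(hr) && fSupported)
        {
            // 5 seconds expressed in 100-ns units.
            hr = pTimeSeek->TimeSeek(50000000);
        }
        pTimeSeek->Release();
    }
    return hr;
}
```

Note that, as described below, a network-backed byte stream might cache the seek request until the next read, so the server round trip is not necessarily immediate.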
Queries whether the byte stream supports time-based seeking.
Queries whether the byte stream supports time-based seeking.
Receives the value TRUE if the byte stream supports time-based seeking, or
If this method succeeds, it returns
Seeks to a new position in the byte stream.
The new position, in 100-nanosecond units.
If this method succeeds, it returns
If the byte stream reads from a server, it might cache the seek request until the next read request. Therefore, the byte stream might not send a request to the server immediately.
Gets the result of a time-based seek.
Receives the new position after the seek, in 100-nanosecond units.
Receives the stop time, in 100-nanosecond units. If the stop time is unknown, the value is zero.
Receives the total duration of the file, in 100-nanosecond units. If the duration is unknown, the value is -1.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The byte stream does not support time-based seeking, or no data is available. |
?
This method returns the server response from a previous time-based seek.
Note: This method normally cannot be invoked until some data is read from the byte stream, because the
Extends the
Dynamically sets the output media type of the record sink or preview sink.
The stream index to change the output media type on.
The new output media type.
The new encoder attributes. This can be null.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sink does not support the media type. |
?
This is an asynchronous call. Listen to the MF_CAPTURE_ENGINE_OUTPUT_MEDIA_TYPE_SET event to be notified when the output media type has been set.
Controls the capture source object. The capture source manages the audio and video capture devices.
To get a reference to the capture source, call
Gets the number of device streams.
Gets the current capture device's
If this method succeeds, it returns
Gets the current capture device's
If this method succeeds, it returns
Gets a reference to the underlying Source Reader object.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid argument. |
| The capture source was not initialized. Possibly there is no capture device on the system. |
?
Adds an effect to a capture stream.
The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
?
A reference to one of the following:
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| No compatible media type could be found. |
| The dwSourceStreamIndex parameter is invalid. |
?
The effect must be implemented as a Media Foundation Transform (MFT). The pUnknown parameter can point to an instance of the MFT, or to an activation object for the MFT. For more information, see Activation Objects.
The effect is applied to the stream before the data reaches the capture sinks.
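A minimal sketch of adding an effect (assuming the IMFCaptureSource declarations and the MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_PREVIEW stream constant from mfcaptureengine.h; pEffect is assumed to be an already-created MFT instance or an IMFActivate for one):

```cpp
#include <mfcaptureengine.h>  // IMFCaptureSource (Windows 8+)

// Attach an effect MFT (or its activation object) to the preview
// video stream. The capture engine inserts the effect before the
// data reaches the capture sinks.
HRESULT AddVideoEffect(IMFCaptureSource *pSource, IUnknown *pEffect)
{
    return pSource->AddEffect(
        (DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_PREVIEW,
        pEffect);
}
```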
Removes an effect from a capture stream.
The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
?
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. Possibly the specified effect could not be found. |
| The dwSourceStreamIndex parameter is invalid. |
?
This method removes an effect that was previously added using the
Removes all effects from a capture stream.
The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
?
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
?
Gets a format that is supported by one of the capture streams.
The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
?
The zero-based index of the media type to retrieve.
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
| The dwMediaTypeIndex parameter is out of range. |
?
To enumerate all of the available formats on a stream, call this method in a loop while incrementing dwMediaTypeIndex, until the method returns
Some cameras might support a range of frame rates. The minimum and maximum frame rates are stored in the
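The enumeration loop described above can be sketched as follows (abbreviated error handling; assumes the IMFCaptureSource declarations and the MF_CAPTURE_ENGINE_FIRST_SOURCE_VIDEO_STREAM constant from mfcaptureengine.h):

```cpp
#include <mfcaptureengine.h>  // IMFCaptureSource (Windows 8+)

// Enumerate every native format on the first video stream by
// incrementing the media-type index until the method reports that
// no more types are available (MF_E_NO_MORE_TYPES).
HRESULT EnumerateFormats(IMFCaptureSource *pSource)
{
    HRESULT hr = S_OK;
    for (DWORD i = 0; SUCCEEDED(hr); i++)
    {
        IMFMediaType *pType = NULL;
        hr = pSource->GetAvailableDeviceMediaType(
            (DWORD)MF_CAPTURE_ENGINE_FIRST_SOURCE_VIDEO_STREAM, i, &pType);
        if (SUCCEEDED(hr))
        {
            // Inspect pType here (e.g., MF_MT_FRAME_SIZE, MF_MT_FRAME_RATE).
            pType->Release();
        }
    }
    return (hr == MF_E_NO_MORE_TYPES) ? S_OK : hr;
}
```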
Sets the output format for a capture stream.
The capture stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
?
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
?
This method sets the native output type on the capture device. The device must support the specified format. To get the list of available formats, call
Gets the current media type for a capture stream.
Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
?
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
?
Gets the number of device streams.
Receives the number of device streams.
If this method succeeds, it returns
Gets the stream category for the specified source stream index.
The index of the source stream.
Receives the
If this method succeeds, it returns
Gets the current mirroring state of the video preview stream.
The zero-based index of the stream.
Receives the value TRUE if mirroring is enabled, or
If this method succeeds, it returns
Enables or disables mirroring of the video preview stream.
The zero-based index of the stream.
If TRUE, mirroring is enabled; if
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The device stream does not have mirroring capability. |
| The source is not initialized. |
?
Gets the actual device stream index translated from a friendly stream name.
The friendly name. Can be one of the following:
Receives the value of the stream index that corresponds to the friendly name.
If this method succeeds, it returns
Enables the client to notify the Content Decryption Module (CDM) when global resources should be brought into a consistent state prior to suspending.
Indicates that the suspend process is starting and resources should be brought into a consistent state.
If this method succeeds, it returns
The actual suspend is about to occur and no more calls will be made into the Content Decryption Module (CDM).
If this method succeeds, it returns
Provides timing information from a clock in Microsoft Media Foundation.
Clocks and some media sinks expose this interface through QueryInterface.
The
Retrieves the characteristics of the clock.
Retrieves the clock's continuity key. (Not supported.)
Continuity keys are currently not supported in Media Foundation. Clocks must return the value zero in the pdwContinuityKey parameter.
Retrieves the properties of the clock.
Retrieves the characteristics of the clock.
Receives a bitwise OR of values from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the last clock time that was correlated with system time.
Reserved, must be zero.
Receives the last known clock time, in units of the clock's frequency.
Receives the system time that corresponds to the clock time returned in pllClockTime, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The clock does not have a time source. |
?
At some fixed interval, a clock correlates its internal clock ticks with the system time. (The system time is the time returned by the high-resolution performance counter.) This method returns:
The clock time is returned in the pllClockTime parameter and is expressed in units of the clock's frequency. If the clock's
The system time is returned in the phnsSystemTime parameter, and is always expressed in 100-nanosecond units.
To find out how often the clock correlates its clock time with the system time, call GetProperties. The correlation interval is given in the qwCorrelationRate member of the
Some clocks support rate changes through the
For the presentation clock, the clock time is the presentation time, and is always relative to the starting time specified in
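The correlated-time query described above can be sketched as follows (a minimal sketch assuming the IMFClock declaration from mfidl.h):

```cpp
#include <mfidl.h>  // IMFClock

// Read the clock's last correlated clock-time/system-time pair.
// phnsSystemTime is always in 100-ns units; pllClockTime is in units
// of the clock's frequency, which for the presentation clock is also
// 100-ns units relative to the presentation start time.
HRESULT GetClockTimes(IMFClock *pClock,
                      LONGLONG *pllClockTime, MFTIME *phnsSystemTime)
{
    // First parameter is reserved and must be zero.
    return pClock->GetCorrelatedTime(0, pllClockTime, phnsSystemTime);
}
```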
Retrieves the clock's continuity key. (Not supported.)
Receives the continuity key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Continuity keys are currently not supported in Media Foundation. Clocks must return the value zero in the pdwContinuityKey parameter.
Retrieves the current state of the clock.
Reserved, must be zero.
Receives the clock state, as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the properties of the clock.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Creates a media source or a byte stream from a URL.
Applications do not use this interface. This interface is exposed by scheme handlers, which are used by the source resolver. A scheme handler is designed to parse one type of URL scheme. When the scheme handler is given a URL, it parses the resource that is located at that URL and creates either a media source or a byte stream.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called by the media pipeline to provide the app with an instance of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The ICollection interface is the base interface for classes in the System.Collections namespace.
The ICollection interface extends IEnumerable; IDictionary and IList are more specialized interfaces that extend ICollection. An IDictionary implementation is a collection of key/value pairs, like the Hashtable class. An IList implementation is a collection of values and its members can be accessed by index, like the ArrayList class.
Some collections that limit access to their elements, such as the Queue class and the Stack class, directly implement the ICollection interface.
If neither the IDictionary interface nor the IList interface meet the requirements of the required collection, derive the new collection class from the ICollection interface instead for more flexibility.
For the generic version of this interface, see System.Collections.Generic.ICollection.
Windows 98, Windows Server 2000 SP4, Windows CE, Windows Millennium Edition, Windows Mobile for Pocket PC, Windows Mobile for Smartphone, Windows Server 2003, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP SP2, Windows XP Starter Edition
The Microsoft .NET Framework 3.0 is supported on Windows Vista, Microsoft Windows XP SP2, and Windows Server 2003 SP1.
.NET Framework: Supported in 3.0, 2.0, 1.1, 1.0. .NET Compact Framework: Supported in 2.0, 1.0. XNA Framework: Supported in 1.0.
Reference: ICollection Members, System.Collections Namespace, IDictionary, IList, System.Collections.Generic.ICollection.
Retrieves the number of objects in the collection.
Retrieves the number of objects in the collection.
Receives the number of objects in the collection.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves an object in the collection.
Zero-based index of the object to retrieve. Objects are indexed in the order in which they were added to the collection.
Receives a reference to the object's
This method does not remove the object from the collection. To remove an object, call
Adds an object to the collection.
Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If pUnkElement is
Removes an object from the collection.
Zero-based index of the object to remove. Objects are indexed in the order in which they were added to the collection.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Adds an object at the specified index in the collection.
The zero-based index where the object will be added to the collection.
The object to insert.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Removes all items from the collection.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Allows a decryptor to manage hardware keys and decrypt hardware samples.
Allows the display driver to return IHV-specific information used when initializing a new hardware key.
The number of bytes in the buffer that InputPrivateData specifies.
The contents of this parameter are defined by the implementation of the protection system that runs in the security processor. The contents may contain data about license or stream properties.
The return data is also defined by the implementation of the protection system that runs in the security processor. The contents may contain data associated with the underlying hardware key.
If this method succeeds, it returns
Implements one step that must be performed for the user to access media content. For example, the steps might be individualization followed by license acquisition. Each of these steps would be encapsulated by a content enabler object that exposes the
Retrieves the type of operation that this content enabler performs.
The following GUIDs are defined for the pType parameter.
Value | Description |
---|---|
MFENABLETYPE_MF_RebootRequired | The user must reboot his or her computer. |
MFENABLETYPE_MF_UpdateRevocationInformation | Update revocation information. |
MFENABLETYPE_MF_UpdateUntrustedComponent | Update untrusted components. |
MFENABLETYPE_WMDRMV1_LicenseAcquisition | License acquisition for Windows Media Digital Rights Management (DRM) version 1. |
MFENABLETYPE_WMDRMV7_Individualization | Individualization. |
MFENABLETYPE_WMDRMV7_LicenseAcquisition | License acquisition for Windows Media DRM version 7 or later. |
?
Queries whether the content enabler can perform all of its actions automatically.
If this method returns TRUE in the pfAutomatic parameter, call the
If this method returns
Retrieves the type of operation that this content enabler performs.
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The following GUIDs are defined for the pType parameter.
Value | Description |
---|---|
MFENABLETYPE_MF_RebootRequired | The user must reboot his or her computer. |
MFENABLETYPE_MF_UpdateRevocationInformation | Update revocation information. |
MFENABLETYPE_MF_UpdateUntrustedComponent | Update untrusted components. |
MFENABLETYPE_WMDRMV1_LicenseAcquisition | License acquisition for Windows Media Digital Rights Management (DRM) version 1. |
MFENABLETYPE_WMDRMV7_Individualization | Individualization. |
MFENABLETYPE_WMDRMV7_LicenseAcquisition | License acquisition for Windows Media DRM version 7 or later. |
?
Retrieves a URL for performing a manual content enabling action.
Receives a reference to a buffer that contains the URL. The caller must release the memory for the buffer by calling CoTaskMemFree.
Receives the number of characters returned in ppwszURL, including the terminating
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No URL is available. |
?
If the enabling action can be performed by navigating to a URL, this method returns the URL. If no such URL exists, the method returns a failure code.
The purpose of the URL depends on the content enabler type, which is obtained by calling
Enable type | Purpose of URL |
---|---|
Individualization | Not applicable. |
License acquisition | URL to obtain the license. Call |
Revocation | URL to a webpage where the user can download and install an updated component. |
?
Retrieves the data for a manual content enabling action.
Receives a reference to a buffer that contains the data. The caller must free the buffer by calling CoTaskMemFree.
Receives the size of the ppbData buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No data is available. |
?
The purpose of the data depends on the content enabler type, which is obtained by calling
Enable type | Purpose of data |
---|---|
Individualization | Not applicable. |
License acquisition | HTTP POST data. |
Revocation | |
?
Queries whether the content enabler can perform all of its actions automatically.
Receives a Boolean value. If TRUE, the content enabler can perform the enabling action automatically.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If this method returns TRUE in the pfAutomatic parameter, call the
If this method returns
Performs a content enabling action without any user interaction.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method is asynchronous. When the operation is complete, the content enabler sends an
To find out whether the content enabler supports this method, call
Requests notification when the enabling action is completed.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The method succeeded and no action was required. |
?
If you use a manual enabling action, call this method to be notified when the operation completes. If this method returns
You do not have to call MonitorEnable when you use automatic enabling by calling
Cancels a pending content enabling action.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The content enabler sends an
Gets the required number of bytes that must be prepended to the input and output buffers when you call the security processor through the InvokeFunction method. When you specify this number of bytes, the Media Foundation transform (MFT) decryptor can allocate the total number of bytes up front and avoid copying the data when it moves the data to the security processor.
Calls into the implementation of the protection system in the security processor.
The identifier of the function that you want to run. This identifier is defined by the implementation of the protection system.
The number of bytes in the buffer that InputBuffer specifies, including private data.
A reference to the data that you want to provide as input.
Pointer to a value that specifies the length in bytes of the data that the function wrote to the buffer that OutputBuffer specifies, including the private data.
Pointer to the buffer where you want the function to write its output.
If this method succeeds, it returns
Gets the required number of bytes that must be prepended to the input and output buffers when you call the security processor through the InvokeFunction method. When you specify this number of bytes, the Media Foundation transform (MFT) decryptor can allocate the total number of bytes up front and avoid copying the data when it moves the data to the security processor.
If this method succeeds, it returns
Enables playback of protected content by providing the application with a reference to a content enabler object.
Applications that play protected content should implement this interface.
A content enabler is an object that performs some action that is required to play a piece of protected content. For example, the action might be obtaining a DRM license. Content enablers expose the
To use this interface, do the following:
Implement the interface in your application.
Create an attribute store by calling
Set the
Call
If the content requires a content enabler, the application's BeginEnableContent method is called. Usually this method is called during the
Many content enablers send machine-specific data to the network, which can have privacy implications. One of the purposes of the
Begins an asynchronous request to perform a content enabling action.
This method requests the application to perform a specific step needed to acquire rights to the content, using a content enabler object.
Pointer to the
Pointer to the
Pointer to the
Reserved. Currently this parameter is always
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Do not block within this callback method. Instead, perform the content enabling action asynchronously on another thread. When the operation is finished, notify the protected media path (PMP) through the pCallback parameter.
If you return a success code from this method, you must call Invoke on the callback. Conversely, if you return an error code from this method, you must not call Invoke. If the operation fails after the method returns a success code, use the status code on the
After the callback is invoked, the PMP will call the application's
This method is not necessarily called every time the application plays protected content. Generally, the method will not be called if the user has a valid, up-to-date license for the content. Internally, the input trust authority (ITA) determines whether BeginEnableContent is called, based on the content provider's DRM policy. For more information, see Protected Media Path.
Ends an asynchronous request to perform a content enabling action. This method is called by the protected media path (PMP) to complete an asynchronous call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
When the BeginEnableContent method completes asynchronously, the application notifies the PMP by invoking the asynchronous callback. The PMP calls EndEnableContent on the application to get the result code. This method is called on the application's thread from inside the callback method. Therefore, it must not block the thread that invoked the callback.
The application must return the success or failure code of the asynchronous processing that followed the call to BeginEnableContent.
Enables the presenter for the enhanced video renderer (EVR) to request a specific frame from the video mixer.
The sample objects created by the
Called by the mixer to get the time and duration of the sample requested by the presenter.
Receives the desired sample time that should be mixed.
Receives the sample duration that should be mixed.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time stamp was set for this sample. See |
?
Called by the presenter to set the time and duration of the sample that it requests from the mixer.
The time of the requested sample.
The duration of the requested sample.
This value should be set prior to passing the buffer to the mixer for a Mix operation. The mixer sets the actual start and duration times on the sample before sending it back.
Clears the time stamps previously set by a call to
After this method is called, the
This method also clears the time stamp and duration and removes all attributes from the sample.
The SetInputStreamState method sets the Device MFT input stream state and media type.
Stream ID of the input stream where the state and media type needs to be changed.
Preferred media type for the input stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState which the input stream should transition to.
When
The method returns an
Return code | Description |
---|---|
| Initialization succeeded |
| Device MFT could not support the request at this time. |
| An invalid stream ID was passed. |
| The requested stream transition is not possible. |
?
This interface method helps to transition the input stream to a specified state with a specified media type set on the input stream. It is used by the device transform manager (DTM) when the Device MFT requests that a specific input stream's state and media type be changed. Device MFT would need to request such a change when one of the Device MFT's outputs changes.
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, if Output 2's media type changes to 1080p, Device MFT has to change Input 1's media type to 1080p. To achieve this, Device MFT should request that DTM call this method, using the
The SetOutputStreamState method sets the Device MFT output stream state and media type.
Stream ID of the output stream where the state and media type need to be changed.
Preferred media type for the output stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState which the output stream should transition to.
Must be zero.
The method returns an
Return code | Description |
---|---|
| Transitioning the stream state succeeded. |
| Device MFT could not support the request at this time. |
| An invalid stream ID was passed. |
| The requested stream transition is not possible. |
?
This interface method helps to transition the output stream to a specified state with a specified media type set on the output stream. It is used by the DTM when the Device Source requests that a specific output stream's state and media type be changed. Device MFT should change the specified output stream's media type and state to the requested values.
If the incoming media type and stream state are the same as the current media type and stream state, the method returns
If the incoming media type and the current media type of the stream are the same, Device MFT must change the stream's state to the requested value and return the appropriate
When a change in the output stream's media type requires a corresponding change in the input, Device MFT must post the
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, suppose Output 2's media type changes to 1080p. To satisfy this request, Device MFT must change the Input 1 media type to 1080p, by posting
Initializes the Digital Living Network Alliance (DLNA) media sink.
The DLNA media sink exposes this interface. To get a reference to this interface, call CoCreateInstance. The CLSID is CLSID_MPEG2DLNASink.
Initializes the Digital Living Network Alliance (DLNA) media sink.
Pointer to a byte stream. The DLNA media sink writes data to this byte stream. The byte stream must be writable.
If TRUE, the DLNA media sink accepts PAL video formats. Otherwise, it accepts NTSC video formats.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The method was already called. |
| The media sink's |
?
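Creating and initializing the DLNA media sink can be sketched as follows (abbreviated error handling; assumes the CLSID_MPEG2DLNASink and IMFDLNASinkInit declarations from mfidl.h, available on Windows 7 and later):

```cpp
#include <mfidl.h>  // IMFDLNASinkInit, CLSID_MPEG2DLNASink (Windows 7+)

// Create the DLNA media sink and initialize it with a writable byte
// stream. Passing FALSE selects NTSC video formats; pass TRUE for PAL.
HRESULT CreateDlnaSink(IMFByteStream *pOutputStream, IMFMediaSink **ppSink)
{
    IMFDLNASinkInit *pInit = NULL;
    HRESULT hr = CoCreateInstance(CLSID_MPEG2DLNASink, NULL,
        CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInit));
    if (SUCCEEDED(hr))
    {
        hr = pInit->Initialize(pOutputStream, FALSE);
        if (SUCCEEDED(hr))
        {
            hr = pInit->QueryInterface(IID_PPV_ARGS(ppSink));
        }
        pInit->Release();
    }
    return hr;
}
```

Initialize can be called only once; a second call returns the "already called" failure listed in the table above.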
Configures Windows Media Digital Rights Management (DRM) for Network Devices on a network sink.
The Advanced Systems Format (ASF) streaming media sink exposes this interface. To get a reference to the
For more information, see Remarks.
To stream protected content over a network, the ASF streaming media sink provides an output trust authority (OTA) that supports Windows Media DRM for Network Devices and implements the
The application gets a reference to
To stream the content, the application does the following:
To stream DRM-protected content over a network from a server to a client, an application must use the Microsoft Media Foundation Protected Media Path (PMP). The media sink and the application-provided HTTP byte stream exist in mfpmp.exe. Therefore, the byte stream must expose the
When the clock starts for the first time or restarts, the encrypter that is used for encrypting samples is retrieved, and the license response is cached.
Gets the license response for the specified request.
Pointer to a byte array that contains the license request.
Size, in bytes, of the license request.
Receives a reference to a byte array that contains the license response. The caller must free the array by calling CoTaskMemFree.
Receives the size, in bytes, of the license response.
Receives the key identifier. The caller must release the string by calling SysFreeString.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink was shut down. |
?
Not implemented in this release.
Receives a reference to a byte array that contains the license response. The caller must free the array by calling CoTaskMemFree.
Receives the size, in bytes, of the license response.
The method returns E_NOTIMPL.
Represents a buffer that contains a Microsoft DirectX Graphics Infrastructure (DXGI) surface.
To create a DXGI media buffer, first create the DXGI surface. Then call
Gets the index of the subresource that is associated with this media buffer.
The subresource index is specified when you create the media buffer object. See
For more information about texture subresources, see
Queries the Microsoft DirectX Graphics Infrastructure (DXGI) surface for an interface.
The interface identifier (IID) of the interface being requested.
Receives a reference to the interface. The caller must release the interface.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The object does not support the specified interface. |
| Invalid request. |
?
You can use this method to get a reference to the
Gets the index of the subresource that is associated with this media buffer.
Receives the zero-based index of the subresource.
If this method succeeds, it returns
The subresource index is specified when you create the media buffer object. See
For more information about texture subresources, see
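Retrieving the underlying texture and subresource index can be sketched as follows (abbreviated error handling; assumes the IMFDXGIBuffer declaration from mfobjects.h, available on Windows 8 and later):

```cpp
#include <mfobjects.h>  // IMFDXGIBuffer (Windows 8+)
#include <d3d11.h>      // ID3D11Texture2D

// Given a media buffer that wraps a DXGI surface, get the Direct3D 11
// texture and the subresource index within that texture.
HRESULT GetTextureFromBuffer(IMFMediaBuffer *pBuffer,
                             ID3D11Texture2D **ppTexture, UINT *puIndex)
{
    IMFDXGIBuffer *pDxgiBuffer = NULL;
    HRESULT hr = pBuffer->QueryInterface(IID_PPV_ARGS(&pDxgiBuffer));
    if (SUCCEEDED(hr))
    {
        hr = pDxgiBuffer->GetResource(IID_PPV_ARGS(ppTexture));
        if (SUCCEEDED(hr))
        {
            hr = pDxgiBuffer->GetSubresourceIndex(puIndex);
        }
        pDxgiBuffer->Release();
    }
    return hr;
}
```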
Gets an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The object does not support the specified interface. |
| The specified key was not found. |
?
Stores an arbitrary
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| An item already exists with this key. |
?
To retrieve the reference from the object, call
Provides functionality for getting the
Gets the
Gets the
If this method succeeds, it returns
Enables an application to use a Media Foundation transform (MFT) that has restrictions on its use.
If you register an MFT that requires unlocking, include the
Unlocks a Media Foundation transform (MFT) so that the application can use it.
A reference to the
If this method succeeds, it returns
This method authenticates the caller, using a private communication channel between the MFT and the object that implements the
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
Sets the number of input pins on the EVR filter.
Specifies the total number of input pins on the EVR filter. This value includes the input pin for the reference stream, which is created by default. For example, to mix one substream plus the reference stream, set this parameter to 2.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid number of streams. The minimum is one, and the maximum is 16. |
| This method has already been called, or at least one pin is already connected. |
After this method has been called, it cannot be called a second time on the same instance of the EVR filter. Also, the method fails if any input pins are connected.
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
Receives the number of streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Configures the DirectShow Enhanced Video Renderer (EVR) filter. To get a reference to this interface, call QueryInterface on the EVR filter.
Gets or sets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
Sets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
Gets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
If this method succeeds, it returns
Optionally supported by media sinks to perform required tasks before shutdown. This interface is typically exposed by archive sinks; that is, media sinks that write to a file. It is used to perform tasks such as flushing data to disk or updating a file header.
To get a reference to this interface, call QueryInterface on the media sink.
If a media sink exposes this interface, the Media Session will call BeginFinalize on the sink before the session closes.
Notifies the media sink to asynchronously take any steps it needs to finish its tasks.
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Many archive media sinks have steps they need to do at the end of archiving to complete their file operations, such as updating the header (for some formats) or flushing all pending writes to disk. In some cases, this may include expensive operations such as indexing the content. BeginFinalize is an asynchronous way to initiate final tasks.
When the finalize operation is complete, the callback object's
Completes an asynchronous finalize operation.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method after the
Implemented by the Microsoft Media Foundation sink writer object.
To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Enables a media source in the application process to create objects in the protected media path (PMP) process.
This interface is used when a media source resides in the application process but the Media Session resides in a PMP process. The media source can use this interface to create objects in the PMP process. For example, to play DRM-protected content, the media source typically must create an input trust authority (ITA) in the PMP process.
To use this interface, the media source implements the
You can also get a reference to this interface by calling
Applications implement this interface in order to provide a custom HTTP or HTTPS download implementation. Use the
Applications implement this interface in order to provide a custom HTTP or HTTPS download implementation. Use the
Callback interface to notify the application when an asynchronous method completes.
For more information about asynchronous methods in Microsoft Media Foundation, see Asynchronous Callback Methods.
This interface is also used to perform a work item in a Media Foundation work-queue. For more information, see Work Queues.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
If the work queue is not compatible with the value returned in pdwFlags, the Media Foundation platform returns
Applies to: desktop apps | Metro style apps
Called when an asynchronous operation is completed.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Within your implementation of Invoke, call the corresponding End... method.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides logging information about the parent object the async callback is associated with.
Media sources are objects that generate media data in the Media Foundation pipeline. This section describes the media source APIs in detail. Read this section if you are implementing a custom media source, or using a media source outside of the Media Foundation pipeline.
If your application uses the control layer, it needs to use only a limited subset of the media source APIs. For information, see the topic Using Media Sources with the Media Session.
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the characteristics of the byte stream.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the length of the stream.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the current read or write position in the stream.
The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Queries whether the current position has reached the end of the stream.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Reads data from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
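The read-until-end pattern this implies can be sketched as follows. Because the real interface (IMFByteStream::Read) is Windows-only, std::istream stands in for the byte stream here; the control flow, not the API, is the point. This is an illustrative sketch, and ReadChunk/ReadAll are hypothetical helper names.

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Reads at most `cb` bytes and returns the number actually read,
// mirroring the pcbRead out-parameter. A count of 0 signals end of
// stream; there is no error code for reaching the end.
std::size_t ReadChunk(std::istream& stream, char* pb, std::size_t cb) {
    stream.read(pb, static_cast<std::streamsize>(cb));
    return static_cast<std::size_t>(stream.gcount());
}

// Drains the stream in fixed-size chunks, checking the bytes-read count
// after every call instead of relying on a failure return at EOF.
std::string ReadAll(std::istream& stream, std::size_t chunkSize = 4) {
    std::string out;
    std::vector<char> buf(chunkSize);
    for (;;) {
        std::size_t got = ReadChunk(stream, buf.data(), buf.size());
        if (got == 0) break;               // end of stream reached
        out.append(buf.data(), got);
    }
    return out;
}
```

The same check-the-count loop applies to the asynchronous BeginRead/EndRead pair, where the count is returned by the End call.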
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Begins an asynchronous read operation from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous read operation.
Pointer to the
Call this method after the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Writes data to the stream.
Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
If this method succeeds, it returns
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Begins an asynchronous write operation to the stream.
Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous write operation.
Pointer to the
Call this method when the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Moves the current position in the stream by a specified offset.
Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
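The resulting position can be computed as follows, assuming the two usual seek origins (the beginning of the stream, or the current position). The enum and function names below are illustrative, not the real API.

```cpp
#include <cstdint>

enum class SeekOrigin { Begin, Current };

// Returns the new stream position: either an absolute byte offset from
// the start of the stream, or a signed offset from the current position.
std::uint64_t ComputeSeekPosition(SeekOrigin origin,
                                  std::int64_t offset,
                                  std::uint64_t currentPosition) {
    if (origin == SeekOrigin::Begin)
        return static_cast<std::uint64_t>(offset);
    return static_cast<std::uint64_t>(
        static_cast<std::int64_t>(currentPosition) + offset);
}
```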
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The GetCurrentOperationMode
method retrieves the optimization features in effect.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives the current features. The returned value is a bitwise combination of zero or more flags from the DMO_VIDEO_OUTPUT_STREAM_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
The GetCurrentSampleRequirements
method retrieves the optimization features required to process the next sample, given the features already agreed to by the application.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives the required features. The returned value is a bitwise combination of zero or more flags from the DMO_VIDEO_OUTPUT_STREAM_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
After an application calls the
Before processing a sample, the application can call this method. If the DMO does not require a given feature in order to process the next sample, it omits the corresponding flag from the pdwRequestedFeatures parameter. For the next sample only, the application can ignore the feature. The results of this method are valid only for the next call to the
The DMO will return only the flags that were agreed to in the SetOperationMode method. In other words, you cannot dynamically enable new features with this method.
The Next
method retrieves a specified number of items in the enumeration sequence.
Number of items to retrieve.
Array of size cItemsToFetch that is filled with the CLSIDs of the enumerated DMOs.
Array of size cItemsToFetch that is filled with the friendly names of the enumerated DMOs.
Pointer to a variable that receives the actual number of items retrieved. Can be
Returns an
Return code | Description |
---|---|
| Invalid argument. |
| Insufficient memory. |
| |
| Retrieved fewer items than requested. |
| Retrieved the requested number of items. |
If the method succeeds, the arrays given by the pCLSID and Names parameters are filled with CLSIDs and wide-character strings. The value of *pcItemsFetched specifies the number of items returned in these arrays.
The method returns
The caller must free the memory allocated for each string returned in the Names parameter, using the CoTaskMemFree function.
The Reset
method resets the enumeration sequence to the beginning.
Returns
The
interface provides methods for manipulating a data buffer. Buffers passed to the
The
interface provides methods for manipulating a Microsoft DirectX Media Object (DMO).
The GetOutputStreamInfo
method retrieves information about an output stream; for example, whether the stream is discardable, and whether it uses a fixed sample size. This information never changes.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives a bitwise combination of zero or more DMO_OUTPUT_STREAM_INFO_FLAGS flags.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
The GetInputType
method retrieves a preferred media type for a specified input stream.
Zero-based index of an input stream on the DMO.
Zero-based index on the set of acceptable media types.
Pointer to a
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Type index is out of range. |
| Insufficient memory. |
| |
| Success. |
Call this method to enumerate an input stream's preferred media types. The DMO assigns each media type an index value in order of preference. The most preferred type has an index of zero. To enumerate all the types, make successive calls while incrementing the type index until the method returns DMO_E_NO_MORE_ITEMS. The DMO is not guaranteed to enumerate every media type that it supports.
The format block in the returned type might be
If the method succeeds, call MoFreeMediaType to free the format block. (This function is also safe to call when the format block is
To set the media type, call the
To test whether a particular media type is acceptable, call SetInputType with the
To test whether the dwTypeIndex parameter is in range, set pmt to
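The enumeration loop described above can be sketched like this. The DMO is mocked with a toy type so the sketch is self-contained; the real call is IMediaObject::GetInputType, and the error-code value below is a stand-in, not the constant from dmo.h.

```cpp
#include <string>
#include <vector>

using HRESULT = long;
constexpr HRESULT S_OK = 0;
constexpr HRESULT DMO_E_NO_MORE_ITEMS = -1; // stand-in for the real value

// Toy DMO exposing an ordered list of preferred types, most preferred
// at index 0, mirroring the index-of-preference scheme described above.
struct MockDmo {
    std::vector<std::string> preferredTypes;
    HRESULT GetInputType(unsigned long /*stream*/, unsigned long typeIndex,
                         std::string* pmt) const {
        if (typeIndex >= preferredTypes.size()) return DMO_E_NO_MORE_ITEMS;
        if (pmt) *pmt = preferredTypes[typeIndex]; // pmt may be null to range-check
        return S_OK;
    }
};

// Makes successive calls, incrementing the type index until the DMO
// reports that there are no more items.
std::vector<std::string> EnumerateInputTypes(const MockDmo& dmo) {
    std::vector<std::string> types;
    for (unsigned long i = 0;; ++i) {
        std::string mt;
        if (dmo.GetInputType(0, i, &mt) != S_OK) break;
        types.push_back(mt);
    }
    return types;
}
```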
The SetInputType
method sets the media type on an input stream, or tests whether a media type is acceptable.
Zero-based index of an input stream on the DMO.
Pointer to a
Bitwise combination of zero or more flags from the DMO_SET_TYPE_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| Media type was not accepted |
| Media type is not acceptable |
| Media type was set successfully, or is acceptable |
Call this method to test, set, or clear the media type on an input stream:
The media types that are currently set on other streams can affect whether the media type is acceptable.
The GetInputCurrentType
method retrieves the media type that was set for an input stream, if any.
Zero-based index of an input stream on the DMO.
Pointer to a
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Media type was not set. |
| Insufficient memory. |
| Success. |
The caller must set the media type for the stream before calling this method. To set the media type, call the
If the method succeeds, call MoFreeMediaType to free the format block.
The GetInputSizeInfo
method retrieves the buffer requirements for a specified input stream.
Zero-based index of an input stream on the DMO.
Pointer to a variable that receives the minimum size of an input buffer for this stream, in bytes.
Pointer to a variable that receives the maximum amount of data that the DMO will hold for lookahead, in bytes. If the DMO does not perform lookahead on the stream, the value is zero.
Pointer to a variable that receives the required buffer alignment, in bytes. If the input stream has no alignment requirement, the value is 1.
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Media type was not set. |
| Success. |
The buffer requirements may depend on the media types of the various streams. Before calling this method, set the media type of each stream by calling the
If the DMO performs lookahead on the input stream, it returns the
A buffer is aligned if the buffer's start address is a multiple of *pcbAlignment. The alignment must be a power of two. Depending on the microprocessor, reads and writes to an aligned buffer might be faster than to an unaligned buffer. Also, some microprocessors do not support unaligned reads and writes.
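The alignment rule above reduces to a little pointer arithmetic. The helper names here are illustrative; AlignUp is a common companion step when allocating a buffer for a stream with an alignment requirement.

```cpp
#include <cstdint>

// The alignment reported by the DMO must be a power of two.
bool IsPowerOfTwo(std::uint32_t alignment) {
    return alignment != 0 && (alignment & (alignment - 1)) == 0;
}

// A buffer is aligned if its start address is a multiple of *pcbAlignment.
bool IsAligned(const void* buffer, std::uint32_t alignment) {
    return reinterpret_cast<std::uintptr_t>(buffer) % alignment == 0;
}

// Rounds a requested size up to the next multiple of the alignment
// (valid only for power-of-two alignments).
std::uint32_t AlignUp(std::uint32_t size, std::uint32_t alignment) {
    return (size + alignment - 1) & ~(alignment - 1);
}
```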
The Flush
method flushes all internally buffered data.
Returns
The DMO performs the following actions when this method is called:
Media types, maximum latency, and locked state do not change.
When the method returns, every input stream accepts data. Output streams cannot produce any data until the application calls the
The Discontinuity
method signals a discontinuity on the specified input stream.
Zero-based index of an input stream on the DMO.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| The DMO is not accepting input. |
| The input and output types have not been set. |
| Success |
A discontinuity represents a break in the input. A discontinuity might occur because no more data is expected, the format is changing, or there is a gap in the data. After a discontinuity, the DMO does not accept further input on that stream until all pending data has been processed. The application should call the
This method might fail if it is called before the client sets the input and output types on the DMO.
The ProcessInput
method delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
Pointer to the buffer's
Bitwise combination of zero or more flags from the DMO_INPUT_DATA_BUFFER_FLAGS enumeration.
Time stamp that specifies the start time of the data in the buffer. If the buffer has a valid time stamp, set the
Reference time specifying the duration of the data in the buffer. If this value is valid, set the
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Data cannot be accepted. |
| No output to process. |
| Success. |
The input buffer specified in the pBuffer parameter is read-only. The DMO will not modify the data in this buffer. All write operations occur on the output buffers, which are given in a separate call to the
If the DMO does not process all the data in the buffer, it keeps a reference count on the buffer. It releases the buffer once it has generated all the output, unless it needs to perform lookahead on the data. (To determine whether a DMO performs lookahead, call the
If this method returns DMO_E_NOTACCEPTING, call ProcessOutput until the input stream can accept more data. To determine whether the stream can accept more data, call the
If the method returns S_FALSE, no output was generated from this input and the application does not need to call ProcessOutput. However, a DMO is not required to return S_FALSE in this situation; it might return
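The "drain output until input is accepted" control flow can be sketched as below. Both calls are mocked with a toy transform that holds one pending buffer, so the sketch is self-contained; the real interface is IMediaObject, and the error-code value is a stand-in for DMO_E_NOTACCEPTING.

```cpp
#include <queue>
#include <string>

using HRESULT = long;
constexpr HRESULT S_OK = 0;
constexpr HRESULT DMO_E_NOTACCEPTING = -2; // stand-in for the real value

// Toy transform that can hold at most one pending input buffer.
struct MockDmo {
    std::queue<std::string> pending;
    HRESULT ProcessInput(const std::string& buffer) {
        if (!pending.empty()) return DMO_E_NOTACCEPTING;
        pending.push(buffer);
        return S_OK;
    }
    HRESULT ProcessOutput(std::string* out) {
        if (pending.empty()) return S_OK; // nothing produced
        *out = pending.front();
        pending.pop();
        return S_OK;
    }
};

// Delivers two buffers, draining output whenever input is refused.
std::string DeliverAndDrain(MockDmo& dmo,
                            const std::string& a, const std::string& b) {
    std::string result, out;
    dmo.ProcessInput(a);
    if (dmo.ProcessInput(b) == DMO_E_NOTACCEPTING) {
        dmo.ProcessOutput(&out);   // make room and collect the output
        result += out;
        dmo.ProcessInput(b);       // retry the refused buffer
    }
    dmo.ProcessOutput(&out);
    result += out;
    return result;
}
```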
The ProcessOutput
method generates output from the current input data.
Bitwise combination of zero or more flags from the DMO_PROCESS_OUTPUT_FLAGS enumeration.
Number of output buffers.
Pointer to an array of
Pointer to a variable that receives a reserved value (zero). The application should ignore this value.
Returns an
Return code | Description |
---|---|
| Failure |
| Invalid argument |
| |
| No output was generated |
| Success |
The pOutputBuffers parameter points to an array of
Each
When the application calls ProcessOutput
, the DMO processes as much input data as possible. It writes the output data to the output buffers, starting from the end of the data in each buffer. (To find the end of the data, call the
If the DMO fills an entire output buffer and still has input data to process, the DMO returns the
If the method returns S_FALSE, no output was generated. However, a DMO is not required to return S_FALSE in this situation; it might return
Discarding data:
You can discard data from a stream by setting the
For each stream in which pBuffer is
To check whether a stream is discardable or optional, call the
The Lock
method acquires or releases a lock on the DMO. Call this method to keep the DMO serialized when performing multiple operations.
Value that specifies whether to acquire or release the lock. If the value is non-zero, a lock is acquired. If the value is zero, the lock is released.
Returns an
Return code | Description |
---|---|
| Failure |
| Success |
This method prevents other threads from calling methods on the DMO. If another thread calls a method on the DMO, the thread blocks until the lock is released.
If you are using the Active Template Library (ATL) to implement a DMO, the name of the Lock method conflicts with the CComObjectRootEx::Lock method. To work around this problem, define the preprocessor symbol FIX_LOCK_NAME before including the header file Dmo.h:
#define FIX_LOCK_NAME
#include <dmo.h>
This directive causes the preprocessor to rename the
The GetLatency
method retrieves the latency introduced by this DMO.
This method returns the average time required to process each buffer. This value usually depends on factors in the run-time environment, such as the processor speed and the CPU load. One possible way to implement this method is for the DMO to keep a running average based on historical data.
The Clone
method creates a copy of the DMO in its current state.
Address of a reference to receive the new DMO's
Returns
If the method succeeds, the
The GetLatency
method retrieves the latency introduced by this DMO.
Pointer to a variable that receives the latency, in 100-nanosecond units.
Returns
This method returns the average time required to process each buffer. This value usually depends on factors in the run-time environment, such as the processor speed and the CPU load. One possible way to implement this method is for the DMO to keep a running average based on historical data.
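The running-average strategy suggested above can be sketched as a small accumulator. This is one possible implementation approach, not part of the DMO API; the class and method names are illustrative.

```cpp
#include <cstdint>

// Keeps a running average of observed per-buffer processing times,
// in 100-nanosecond units, from historical data.
class LatencyEstimator {
    std::uint64_t total_ = 0;   // sum of observed processing times
    std::uint64_t count_ = 0;   // number of buffers observed
public:
    void RecordBufferTime(std::uint64_t hundredNs) {
        total_ += hundredNs;
        ++count_;
    }
    // Average processing time per buffer; 0 until a sample is recorded.
    std::uint64_t GetLatency() const {
        return count_ ? total_ / count_ : 0;
    }
};
```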
Enables other components in the protected media path (PMP) to use the input protection system provided by an input trust authority (ITA). An ITA is a component that implements an input protection system for media content. ITAs expose the
An ITA translates policy from the content's native format into a common format that is used by other PMP components. It also provides a decrypter, if one is needed to decrypt the stream.
The topology contains one ITA instance for every protected stream in the media source. The ITA is obtained from the media source by calling
Retrieves a decrypter transform.
Interface identifier (IID) of the interface being requested. Currently this value must be IID_IMFTransform, which requests the
Receives a reference to the interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The decrypter does not support the requested interface. |
| This input trust authority (ITA) does not provide a decrypter. |
The decrypter should be created in a disabled state, where any calls to
An ITA is not required to provide a decrypter. If the source content is not encrypted, the method should return
The ITA must create a new instance of its decrypter for each call to GetDecrypter. Do not return multiple references to the same decrypter. They must be separate instances because the Media Session might place them in two different branches of the topology.
Requests permission to perform a specified action on the stream.
The requested action, specified as a member of the
Receives the value
The method returns an
Return code | Description |
---|---|
| The user has permission to perform this action. |
| The user must individualize the application. |
| The user must obtain a license. |
This method verifies whether the user has permission to perform a specified action on the stream. The ITA does any work needed to verify the user's right to perform the action, such as checking licenses.
To verify the user's rights, the ITA might need to perform additional steps that require interaction with the user or consent from the user. For example, it might need to acquire a new license or individualize a DRM component. In that case, the ITA creates an activation object for a content enabler and returns the activation object's
The Media Session returns the
The application calls
The application calls
The Media Session calls RequestAccess again.
The return value signals whether the user has permission to perform the action:
If the user already has permission to perform the action, the method returns
If the user does not have permission, the method returns a failure code and sets *ppContentEnablerActivate to
If the ITA must perform additional steps that require interaction with the user, the method returns a failure code and returns the content enabler's
The Media Session will not allow the action unless this method returns
A stream can go to multiple outputs, so this method might be called multiple times with different actions, once for every output.
Retrieves the policy that defines which output protection systems are allowed for this stream, and the configuration data for each protection system.
The action that will be performed on this stream, specified as a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Notifies the input trust authority (ITA) that a requested action is about to be performed.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, the Media Session calls
Notifies the input trust authority (ITA) when the number of output trust authorities (OTAs) that will perform a specified action has changed.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The ITA can update its internal state if needed. If the method returns a failure code, the Media Session cancels the action.
Resets the input trust authority (ITA) to its initial state.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When this method is called, the ITA should disable any decrypter that was returned in the
Registers Media Foundation transforms (MFTs) in the caller's process.
The Media Session exposes this interface as a service. To obtain a reference to this interface, call the
This interface requires the Media Session. If you are not using the Media Session for playback, call one of the following functions instead:
Registers one or more Media Foundation transforms (MFTs) in the caller's process.
A reference to an array of
The number of elements in the pMFTs array.
If this method succeeds, it returns
This method is similar to the
Unlike
Provides a generic way to store key/value pairs on an object. The keys are
For a list of predefined attribute
To create an empty attribute store, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of attributes that are set on this object.
To enumerate all of the attributes, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the value associated with a key.
A
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified key was not found. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the data type of the value associated with a key.
Receives a member of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether a stored attribute value equals a specified
Receives a Boolean value indicating whether the attribute matches the value given in Value. See Remarks. This parameter must not be
The method sets pbResult to FALSE if any of the following are true:
No attribute is found whose key matches the one given in guidKey.
The attribute's
The attribute value does not match the value given in Value.
The method fails.
Otherwise, the method sets pbResult to TRUE.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Compares the attributes on this object with the attributes on another object.
Pointer to the
Member of the
Receives a Boolean value. The value is TRUE if the two sets of attributes match in the way specified by the MatchType parameter. Otherwise, the value is
If pThis is the object whose Compare method is called, and pTheirs is the object passed in as the pTheirs parameter, the following comparisons are defined by MatchType.
Match type | Returns TRUE if and only if |
---|---|
For every attribute in pThis, an attribute with the same key and value exists in pTheirs. | |
For every attribute in pTheirs, an attribute with the same key and value exists in pThis. | |
The key/value pairs are identical in both objects. | |
Take the intersection of the keys in pThis and the keys in pTheirs. The values associated with those keys are identical in both pThis and pTheirs. | |
Take the object with the smallest number of attributes. For every attribute in that object, an attribute with the same key and value exists in the other object. |
The pTheirs and pbResult parameters must not be
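The match semantics in the table above can be modeled over a plain key/value container. std::map stands in for the attribute store, and the match-type names mirror the table rows but are illustrative, not the real enumeration values.

```cpp
#include <map>
#include <string>

using Attrs = std::map<std::string, int>;

// True if every attribute in `subset` exists with the same value in `superset`.
bool ContainsAll(const Attrs& superset, const Attrs& subset) {
    for (const auto& kv : subset) {
        auto it = superset.find(kv.first);
        if (it == superset.end() || it->second != kv.second) return false;
    }
    return true;
}

enum class MatchType { OurItems, TheirItems, AllItems, Intersection, Smaller };

bool Compare(const Attrs& ours, const Attrs& theirs, MatchType type) {
    switch (type) {
    case MatchType::OurItems:   return ContainsAll(theirs, ours);
    case MatchType::TheirItems: return ContainsAll(ours, theirs);
    case MatchType::AllItems:   return ours == theirs;
    case MatchType::Intersection: {
        // Keys present in both stores must map to identical values.
        for (const auto& kv : ours) {
            auto it = theirs.find(kv.first);
            if (it != theirs.end() && it->second != kv.second) return false;
        }
        return true;
    }
    case MatchType::Smaller:
        // Every attribute of the smaller store must exist in the other.
        return ours.size() <= theirs.size() ? ContainsAll(theirs, ours)
                                            : ContainsAll(ours, theirs);
    }
    return false;
}
```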
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a UINT32 value associated with a key.
Receives a UINT32 value. If the key is found and the data type is UINT32, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a UINT64 value associated with a key.
Receives a UINT64 value. If the key is found and the data type is UINT64, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a double value associated with a key.
Receives a double value. If the key is found and the data type is double, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a
Receives a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of a string value associated with a key.
If the key is found and the value is a string type, this parameter receives the number of characters in the string, not including the terminating
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a wide-character string associated with a key.
Pointer to a wide-character array allocated by the caller. The array must be large enough to hold the string, including the terminating
The size of the pwszValue array, in characters. This value includes the terminating
Receives the number of characters in the string, excluding the terminating
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The length of the string is too large to fit in a UINT32 value. |
| The buffer is not large enough to hold the string. |
| The specified key was not found. |
| The attribute value is not a string. |
You can also use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
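The length-then-fetch pattern described above (query the string length, allocate a buffer with room for the terminator, then retrieve the string) can be sketched portably. The store class and error codes below are illustrative stand-ins for the real interface, not its actual signatures; the same two-call pattern applies to byte arrays via the blob-size and blob-retrieval methods.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative return codes standing in for HRESULTs.
constexpr int OK = 0, E_NOT_FOUND = 1, E_BUFFER_TOO_SMALL = 2;

struct AttrStore {
    std::map<int, std::wstring> strings;

    // Length excludes the terminating null, as documented above.
    int GetStringLength(int key, uint32_t* len) const {
        auto it = strings.find(key);
        if (it == strings.end()) return E_NOT_FOUND;
        *len = static_cast<uint32_t>(it->second.size());
        return OK;
    }

    // The caller's buffer must have room for the terminator.
    int GetString(int key, wchar_t* buf, uint32_t bufSize) const {
        auto it = strings.find(key);
        if (it == strings.end()) return E_NOT_FOUND;
        if (bufSize < it->second.size() + 1) return E_BUFFER_TOO_SMALL;
        std::copy(it->second.begin(), it->second.end(), buf);
        buf[it->second.size()] = L'\0';
        return OK;
    }
};

// Caller side: size the buffer from GetStringLength, plus one for the null.
std::wstring ReadString(const AttrStore& store, int key) {
    uint32_t len = 0;
    if (store.GetStringLength(key, &len) != OK) return L"";
    std::vector<wchar_t> buf(len + 1);
    if (store.GetString(key, buf.data(), len + 1) != OK) return L"";
    return std::wstring(buf.data());
}
```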
Gets a wide-character string associated with a key. This method allocates the memory for the string.
A
If the key is found and the value is a string type, this parameter receives a copy of the string. The caller must free the memory for the string by calling CoTaskMemFree.
Receives the number of characters in the string, excluding the terminating
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified key was not found. |
| The attribute value is not a string. |
To copy a string value into a caller-allocated buffer, use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of a byte array associated with a key.
If the key is found and the value is a byte array, this parameter receives the size of the array, in bytes.
To get the byte array, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a byte array associated with a key. This method copies the array into a caller-allocated buffer.
Pointer to a buffer allocated by the caller. If the key is found and the value is a byte array, the method copies the array into this buffer. To find the required size of the buffer, call
The size of the pBuf buffer, in bytes.
Receives the size of the byte array. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer is not large enough to hold the array. |
| The specified key was not found. |
| The attribute value is not a byte array. |
You can also use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides a generic way to store key/value pairs on an object. The keys are
For a list of predefined attribute
To create an empty attribute store, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an interface reference associated with a key.
Interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The attribute value is an |
| The specified key was not found. |
| The attribute value is not an |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Adds an attribute value with a specified key.
A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes a key/value pair from the object's attribute list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the specified key does not exist, the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes all key/value pairs from the object's attribute list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a UINT32 value with a key.
New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the UINT32 value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a UINT64 value with a key.
New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the UINT64 value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a double value with a key.
New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the double value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a
New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
To retrieve the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a wide-character string with a key.
Null-terminated wide-character string to associate with this key. The method stores a copy of the string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the string, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a byte array with a key.
Pointer to a byte array to associate with this key. The method stores a copy of the array.
Size of the array, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the byte array, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To retrieve the
It is not an error to call SetUnknown with pUnknown equal to
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Locks the attribute store so that no other thread can access it. If the attribute store is already locked by another thread, this method blocks until the other thread unlocks the object. After calling this method, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method can cause a deadlock if a thread that calls LockStore waits on a thread that calls any other
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks the attribute store after a call to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of attributes that are set on this object.
Receives the number of attributes. This parameter must not be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To enumerate all of the attributes, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an attribute at the specified index.
Index of the attribute to retrieve. To get the number of attributes, call
Receives the
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid index. |
To enumerate all of an object's attributes in a thread-safe way, do the following:
Call
Call
Call GetItemByIndex to get each attribute by index.
Call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
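The four enumeration steps above can be sketched with a mutex-protected store. This is a portable model of the lock-store / count / item-by-index / unlock-store pattern, not the real attribute-store interface; a recursive mutex stands in for the store lock because the counting and indexing calls must still succeed while the lock is held.

```cpp
#include <cstddef>
#include <iterator>
#include <map>
#include <mutex>
#include <utility>
#include <vector>

// Illustrative stand-in for an attribute store with a store-wide lock.
class LockableStore {
    std::map<int, int> items_;
    std::recursive_mutex lock_; // held across other calls while "locked"

public:
    void Set(int key, int value) {
        std::lock_guard<std::recursive_mutex> g(lock_);
        items_[key] = value;
    }
    void LockStore()   { lock_.lock(); }
    void UnlockStore() { lock_.unlock(); }

    size_t GetCount() {
        std::lock_guard<std::recursive_mutex> g(lock_);
        return items_.size();
    }
    std::pair<int, int> GetItemByIndex(size_t i) {
        std::lock_guard<std::recursive_mutex> g(lock_);
        auto it = items_.begin();
        std::advance(it, i);
        return *it;
    }
};

// Thread-safe enumeration: no other thread can modify the store between
// the GetCount call and the last GetItemByIndex call.
std::vector<std::pair<int, int>> Enumerate(LockableStore& store) {
    std::vector<std::pair<int, int>> out;
    store.LockStore();
    for (size_t i = 0, n = store.GetCount(); i < n; ++i)
        out.push_back(store.GetItemByIndex(i));
    store.UnlockStore();
    return out;
}
```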
Copies all of the attributes from this object into another attribute store.
A reference to the
If this method succeeds, it returns
This method deletes all of the attributes originally stored in pDest.
Note: When you call CopyAllItems on an
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Attributes are used throughout Microsoft Media Foundation to configure objects, describe media formats, query object properties, and other purposes. For more information, see Attributes in Media Foundation.
For a complete list of all the defined attribute GUIDs in Media Foundation, see Media Foundation Attributes.
Applies to: desktop apps | Metro style apps
Retrieves an attribute at the specified index.
Index of the attribute to retrieve. To get the number of attributes, call
Receives the
To enumerate all of an object's attributes in a thread-safe way, do the following:
Call
Call
Call GetItemByIndex to get each attribute by index.
Call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Adds an attribute value with a specified key.
A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Adds an attribute value with a specified key.
A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents a block of memory that contains media data. Use this interface to access the data in the buffer.
If the buffer contains 2-D image data (such as an uncompressed video frame), you should query the buffer for the
To get a buffer from a media sample, call one of the following
To create a new buffer object, use one of the following functions.
Function | Description |
---|---|
| Creates a buffer and allocates system memory. |
| Creates a media buffer that wraps an existing media buffer. |
| Creates a buffer that manages a DirectX surface. |
| Creates a buffer and allocates system memory with a specified alignment. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the valid data in the buffer.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the allocated size of the buffer.
The buffer might or might not contain any valid data, and if there is valid data in the buffer, it might be smaller than the buffer's allocated size. To get the length of the valid data, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gives the caller access to the memory in the buffer, for reading or writing.
Receives the maximum amount of data that can be written to the buffer. This parameter can be
Receives the length of the valid data in the buffer, in bytes. This parameter can be
Receives a reference to the start of the buffer.
This method gives the caller access to the entire buffer, up to the maximum size returned in the pcbMaxLength parameter. The value returned in pcbCurrentLength is the size of any valid data already in the buffer, which might be less than the total buffer size.
The reference returned in ppbBuffer is guaranteed to be valid, and can safely be accessed across the entire buffer for as long as the lock is held. When you are done accessing the buffer, call
Locking the buffer does not prevent other threads from calling Lock, so you should not rely on this method to synchronize threads.
This method does not allocate any memory, or transfer ownership of the memory to the caller. Do not release or free the memory; the media buffer will free the memory when the media buffer is destroyed.
If you modify the contents of the buffer, update the current length by calling
If the buffer supports the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks a buffer that was previously locked. Call this method once for every call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| For Direct3D surface buffers, an error occurred when unlocking the surface. |
It is an error to call Unlock if you did not call Lock previously.
After calling this method, do not use the reference returned by the Lock method. It is no longer guaranteed to be valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the valid data in the buffer.
Receives the length of the valid data, in bytes. If the buffer does not contain any valid data, the value is zero.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the length of the valid data in the buffer.
Length of the valid data, in bytes. This value cannot be greater than the allocated size of the buffer, which is returned by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified length is greater than the maximum size of the buffer. |
Call this method if you write data into the buffer.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
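The buffer contract described above (a fixed allocated size, a current length that tracks valid data, and a lock/unlock pair around each access) can be sketched portably. The class below is an illustrative model of that contract, not the real media-buffer interface.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative model: allocated size is fixed; current length is the
// amount of valid data, updated by the writer via SetCurrentLength.
class MediaBuffer {
    std::vector<uint8_t> storage_;
    uint32_t currentLength_ = 0;
    int lockCount_ = 0;

public:
    explicit MediaBuffer(uint32_t maxLength) : storage_(maxLength) {}

    bool Lock(uint8_t** ppb, uint32_t* pcbMax, uint32_t* pcbCurrent) {
        ++lockCount_;
        *ppb = storage_.data();
        if (pcbMax)     *pcbMax = static_cast<uint32_t>(storage_.size());
        if (pcbCurrent) *pcbCurrent = currentLength_;
        return true;
    }
    bool Unlock() {
        if (lockCount_ == 0) return false; // error: Lock was not called
        --lockCount_;
        return true;
    }
    bool SetCurrentLength(uint32_t len) {
        if (len > storage_.size()) return false; // exceeds allocated size
        currentLength_ = len;
        return true;
    }
    uint32_t GetCurrentLength() const { return currentLength_; }
};

// Typical producer: lock, copy data in, record the valid length, unlock.
bool WriteToBuffer(MediaBuffer& buf, const uint8_t* data, uint32_t len) {
    uint8_t* p = nullptr;
    uint32_t maxLen = 0;
    if (!buf.Lock(&p, &maxLen, nullptr)) return false;
    bool ok = len <= maxLen;
    if (ok) {
        std::memcpy(p, data, len);
        ok = buf.SetCurrentLength(len);
    }
    buf.Unlock();
    return ok;
}
```

Note that, as in the documented contract, every Lock is balanced by exactly one Unlock, and calling Unlock without a prior Lock is an error.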
Retrieves the allocated size of the buffer.
Receives the allocated size of the buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The buffer might or might not contain any valid data, and if there is valid data in the buffer, it might be smaller than the buffer's allocated size. To get the length of the valid data, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Enables an application to play audio or video files.
The Media Engine implements this interface. To create an instance of the Media Engine, call
This interface is extended with
Gets the most recent error status.
This method returns the last error status, if any, that resulted from loading the media source. If there has not been an error, ppError receives the value
This method corresponds to the error attribute of the HTMLMediaElement interface in HTML5.
Sets the current error code.
Sets a list of media sources.
This method corresponds to adding a list of source elements to a media element in HTML5.
The Media Engine tries to load each item in the pSrcElements list, until it finds one that loads successfully. After this method is called, the application can use the
This method completes asynchronously. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load a URL, it sends an
For more information about event handling in the Media Engine, see
If the application also calls
Gets the current network state of the media engine.
This method corresponds to the networkState attribute of the HTMLMediaElement interface in HTML5.
Gets or sets the preload flag.
This method corresponds to the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
Queries how much resource data the media engine has buffered.
This method corresponds to the buffered attribute of the HTMLMediaElement interface in HTML5.
The returned
Gets the ready state, which indicates whether the current media resource can be rendered.
This method corresponds to the readyState attribute of the HTMLMediaElement interface in HTML5.
Queries whether the Media Engine is currently seeking to a new playback position.
This method corresponds to the seeking attribute of the HTMLMediaElement interface in HTML5.
Gets or sets the current playback position.
This method corresponds to the currentTime attribute of the HTMLMediaElement interface in HTML5.
Gets the initial playback position.
This method corresponds to the initialTime attribute of the HTMLMediaElement interface in HTML5.
Gets the duration of the media resource.
This method corresponds to the duration attribute of the HTMLMediaElement interface in HTML5.
If the duration changes, the Media Engine sends an
Queries whether playback is currently paused.
This method corresponds to the paused attribute of the HTMLMediaElement interface in HTML5.
Gets or sets the default playback rate.
This method corresponds to getting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
The default playback rate is used for the next call to the
Gets or sets the current playback rate.
This method corresponds to getting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
Gets the time ranges that have been rendered.
This method corresponds to the played attribute of the HTMLMediaElement interface in HTML5.
Gets the time ranges to which the Media Engine can currently seek.
This method corresponds to the seekable attribute of the HTMLMediaElement interface in HTML5.
To find out whether the media source supports seeking, call
Queries whether playback has ended.
This method corresponds to the ended attribute of the HTMLMediaElement interface in HTML5.
Queries whether the Media Engine automatically begins playback.
This method corresponds to the autoplay attribute of the HTMLMediaElement interface in HTML5.
If this method returns TRUE, playback begins automatically after the
Queries whether the Media Engine will loop playback.
This method corresponds to getting the loop attribute of the HTMLMediaElement interface in HTML5.
If looping is enabled, the Media Engine seeks to the start of the content when playback reaches the end.
Queries whether the audio is muted.
Gets or sets the audio volume level.
Gets the most recent error status.
Receives either a reference to the
If this method succeeds, it returns
This method returns the last error status, if any, that resulted from loading the media source. If there has not been an error, ppError receives the value
This method corresponds to the error attribute of the HTMLMediaElement interface in HTML5.
Sets the current error code.
The error code, as an
If this method succeeds, it returns
Sets a list of media sources.
A reference to the
If this method succeeds, it returns
This method corresponds to adding a list of source elements to a media element in HTML5.
The Media Engine tries to load each item in the pSrcElements list, until it finds one that loads successfully. After this method is called, the application can use the
This method completes asynchronously. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load a URL, it sends an
For more information about event handling in the Media Engine, see
If the application also calls
Sets the URL of a media resource.
The URL of the media resource.
If this method succeeds, it returns
This method corresponds to setting the src attribute of the HTMLMediaElement interface in HTML5.
The URL specified by this method takes precedence over media resources specified in the
This method asynchronously loads the URL. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load the URL, the Media Engine sends an
For more information about event handling in the Media Engine, see
Gets the URL of the current media resource, or an empty string if no media resource is present.
Receives a BSTR that contains the URL of the current media resource. If there is no media resource, ppUrl receives an empty string. The caller must free the BSTR by calling SysFreeString.
If this method succeeds, it returns
This method corresponds to the currentSrc attribute of the HTMLMediaElement interface in HTML5.
Initially, the current media resource is empty. It is updated when the Media Engine performs the resource selection algorithm.
Gets the current network state of the media engine.
Returns an
This method corresponds to the networkState attribute of the HTMLMediaElement interface in HTML5.
Gets the preload flag.
Returns an
This method corresponds to the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
Sets the preload flag.
An
If this method succeeds, it returns
This method corresponds to setting the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
Queries how much resource data the media engine has buffered.
Receives a reference to the
If this method succeeds, it returns
This method corresponds to the buffered attribute of the HTMLMediaElement interface in HTML5.
The returned
Loads the current media source.
If this method succeeds, it returns
The main purpose of this method is to reload a list of source elements after updating the list. For more information, see SetSourceElements. Otherwise, calling this method is generally not required. To load a new media source, call
The Load method explicitly invokes the Media Engine's media resource loading algorithm. Before calling this method, you must set the media resource by calling
This method completes asynchronously. When the Load operation starts, the Media Engine sends an
If the Media Engine is unable to load the file, the Media Engine sends an
For more information about event handling in the Media Engine, see
This method corresponds to the load method of the HTMLMediaElement interface in HTML5.
Queries how likely it is that the Media Engine can play a specified type of media resource.
A string that contains a MIME type with an optional codecs parameter, as defined in RFC 4281.
Receives an
If this method succeeds, it returns
This method corresponds to the canPlayType attribute of the HTMLMediaElement interface in HTML5.
The canPlayType attribute defines the following values.
Value | Description |
---|---|
"" (empty string) | The user-agent cannot play the resource, or the resource type is "application/octet-stream". |
"probably" | The user-agent probably can play the resource. |
"maybe" | Neither of the previous values applies. |
The value "probably" is used because a MIME type for a media resource is generally not a complete description of the resource. For example, "video/mp4" specifies an MP4 file with video, but does not describe the codec. Even with the optional codecs parameter, the MIME type omits some information, such as the actual coded bit rate. Therefore, it is usually impossible to be certain that playback is possible until the actual media resource is opened.
Gets the ready state, which indicates whether the current media resource can be rendered.
Returns an
This method corresponds to the readyState attribute of the HTMLMediaElement interface in HTML5.
Queries whether the Media Engine is currently seeking to a new playback position.
Returns TRUE if the Media Engine is seeking, or
This method corresponds to the seeking attribute of the HTMLMediaElement interface in HTML5.
Gets the current playback position.
Returns the playback position, in seconds.
This method corresponds to the currentTime attribute of the HTMLMediaElement interface in HTML5.
Seeks to a new playback position.
The new playback position, in seconds.
If this method succeeds, it returns
This method corresponds to setting the currentTime attribute of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the seek operation starts, the Media Engine sends an
Gets the initial playback position.
Returns the initial playback position, in seconds.
This method corresponds to the initialTime attribute of the HTMLMediaElement interface in HTML5.
Gets the duration of the media resource.
Returns the duration, in seconds. If no media data is available, the method returns not-a-number (NaN). If the duration is unbounded, the method returns an infinite value.
This method corresponds to the duration attribute of the HTMLMediaElement interface in HTML5.
If the duration changes, the Media Engine sends an
Queries whether playback is currently paused.
Returns TRUE if playback is paused, or
This method corresponds to the paused attribute of the HTMLMediaElement interface in HTML5.
Gets the default playback rate.
Returns the default playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
This method corresponds to getting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
The default playback rate is used for the next call to the
Sets the default playback rate.
The default playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
If this method succeeds, it returns
This method corresponds to setting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
Gets the current playback rate.
Returns the playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
This method corresponds to getting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
Sets the current playback rate.
The playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
If this method succeeds, it returns
This method corresponds to setting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
Gets the time ranges that have been rendered.
Receives a reference to the
If this method succeeds, it returns
This method corresponds to the played attribute of the HTMLMediaElement interface in HTML5.
Gets the time ranges to which the Media Engine can currently seek.
Receives a reference to the
If this method succeeds, it returns
This method corresponds to the seekable attribute of the HTMLMediaElement interface in HTML5.
To find out whether the media source supports seeking, call
Queries whether playback has ended.
Returns TRUE if the direction of playback is forward and playback has reached the end of the media resource. Returns
This method corresponds to the ended attribute of the HTMLMediaElement interface in HTML5.
Queries whether the Media Engine automatically begins playback.
Returns TRUE if the Media Engine automatically begins playback, or
This method corresponds to the autoplay attribute of the HTMLMediaElement interface in HTML5.
If this method returns TRUE, playback begins automatically after the
Specifies whether the Media Engine automatically begins playback.
If TRUE, the Media Engine automatically begins playback after it loads a media source. Otherwise, playback does not begin until the application calls
If this method succeeds, it returns
This method corresponds to setting the autoplay attribute of the HTMLMediaElement interface in HTML5.
Queries whether the Media Engine will loop playback.
Returns TRUE if looping is enabled, or
This method corresponds to getting the loop attribute of the HTMLMediaElement interface in HTML5.
If looping is enabled, the Media Engine seeks to the start of the content when playback reaches the end.
Specifies whether the Media Engine loops playback.
Specify TRUE to enable looping, or
If this method succeeds, it returns
If Loop is TRUE, playback loops back to the beginning when it reaches the end of the source.
This method corresponds to setting the loop attribute of the HTMLMediaElement interface in HTML5.
Starts playback.
If this method succeeds, it returns
This method corresponds to the play method of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the operation starts, the Media Engine sends an
Pauses playback.
If this method succeeds, it returns
This method corresponds to the pause method of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the transition to paused is complete, the Media Engine sends an
Queries whether the audio is muted.
Returns TRUE if the audio is muted, or
Mutes or unmutes the audio.
Specify TRUE to mute the audio, or
If this method succeeds, it returns
Gets the audio volume level.
Returns the volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
Sets the audio volume level.
The volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
If this method succeeds, it returns
When the audio balance changes, the Media Engine sends an
Queries whether the current media resource contains a video stream.
Returns TRUE if the current media resource contains a video stream. Returns
Queries whether the current media resource contains an audio stream.
Returns TRUE if the current media resource contains an audio stream. Returns
Gets the size of the video frame, adjusted for aspect ratio.
Receives the width in pixels.
Receives the height in pixels.
If this method succeeds, it returns
This method adjusts for the correct picture aspect ratio. For example, if the encoded frame is 720 × 480 and the picture aspect ratio is 4:3, the method will return a size equal to 640 × 480 pixels.
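The adjustment can be sketched as follows: keep the frame height and recompute the width so that the width-to-height ratio equals the picture aspect ratio (a 720 × 480 frame at 4:3 yields 640 × 480). This is an illustration of the arithmetic, assuming the height is held fixed; it is not the engine's specified algorithm.

```cpp
#include <cstdint>
#include <utility>

// Compute a display size whose width/height ratio equals the picture
// aspect ratio arX:arY, holding the encoded frame height fixed.
std::pair<uint32_t, uint32_t> AdjustForAspectRatio(
    uint32_t encodedW, uint32_t encodedH, uint32_t arX, uint32_t arY) {
    (void)encodedW; // width is recomputed from the height and aspect ratio
    uint32_t displayH = encodedH;
    uint32_t displayW = (encodedH * arX) / arY;
    return {displayW, displayH};
}
```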
Gets the picture aspect ratio of the video stream.
Receives the x component of the aspect ratio.
Receives the y component of the aspect ratio.
If this method succeeds, it returns
The Media Engine automatically converts the pixel aspect ratio to 1:1 (square pixels).
Shuts down the Media Engine and releases the resources it is using.
If this method succeeds, it returns
Copies the current video frame to a DXGI surface or WIC bitmap.
A reference to the
A reference to an
A reference to a
A reference to an
If this method succeeds, it returns
In frame-server mode, call this method to blit the video frame to a DXGI or WIC surface. The application can call this method at any time after the Media Engine loads a video resource. Typically, however, the application calls
The Media Engine scales and letterboxes the video to fit the destination rectangle. It fills the letterbox area with the border color.
For protected content, call the
Queries the Media Engine to find out whether a new video frame is ready.
If a new frame is ready, receives the presentation time of the frame.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded, but the Media Engine does not have a new frame. |
| A new video frame is ready for display. |
In frame-server mode, the application should call this method whenever a vertical blank occurs in the display device. If the method returns
Do not call this method in rendering mode or audio-only mode.
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Sets the URL of a media resource.
The URL of the media resource.
If this method succeeds, it returns
This method corresponds to setting the src attribute of the HTMLMediaElement interface in HTML5.
The URL specified by this method takes precedence over media resources specified in the
This method asynchronously loads the URL. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load the URL, the Media Engine sends an
For more information about event handling in the Media Engine, see
Creates a new instance of the Media Engine.
Before calling this method, call
The Media Engine supports three distinct modes:
Mode | Description |
---|---|
Frame-server mode | In this mode, the Media Engine delivers uncompressed video frames to the application. The application is responsible for displaying each frame, using Microsoft Direct3D or any other rendering technique. The Media Engine renders the audio; the application is not responsible for audio rendering. Frame-server mode is the default mode. |
Rendering mode | In this mode, the Media Engine renders both audio and video. The video is rendered to a window or Microsoft DirectComposition visual provided by the application. To enable rendering mode, set either the |
Audio mode | In this mode, the Media Engine renders audio only, with no video. To enable audio mode, set the |
Creates a new instance of the Media Engine.
A bitwise OR of zero or more flags from the
A reference to the
This parameter specifies configuration attributes for the Media Engine. Call
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| A required attribute was missing from pAttr, or an invalid combination of attributes was used. |
Before calling this method, call
The Media Engine supports three distinct modes:
Mode | Description |
---|---|
Frame-server mode | In this mode, the Media Engine delivers uncompressed video frames to the application. The application is responsible for displaying each frame, using Microsoft Direct3D or any other rendering technique. The Media Engine renders the audio; the application is not responsible for audio rendering. Frame-server mode is the default mode. |
Rendering mode | In this mode, the Media Engine renders both audio and video. The video is rendered to a window or Microsoft DirectComposition visual provided by the application. To enable rendering mode, set either the |
Audio mode | In this mode, the Media Engine renders audio only, with no video. To enable audio mode, set the |
Creates a time range object.
Receives a reference to the
If this method succeeds, it returns
Creates a media error object.
Receives a reference to the
If this method succeeds, it returns
Creates an instance of the
Creates a media keys object based on the specified key system.
The media key system.
Points to the default file location for the store Content Decryption Module (CDM) data.
Points to the inprivate location for the store Content Decryption Module (CDM) data. Specifying this path allows the CDM to comply with the application's privacy policy by putting personal information in the file location indicated by this path.
Receives the media keys.
If this method succeeds, it returns
Gets a value that indicates if the specified key system supports the specified media type.
Creates an instance of
If this method succeeds, it returns
Creates a media keys object based on the specified key system.
The media keys system.
Points to a location to store Content Decryption Module (CDM) data, which might be locked by multiple processes and so might be incompatible with store app suspension.
The media keys.
If this method succeeds, it returns
Checks if keySystem is a supported key system and creates the related Content Decryption Module (CDM).
Gets a value that indicates if the specified key system supports the specified media type.
The MIME type to check support for.
The key system to check support for.
true if type is supported by keySystem; otherwise, false.
If this method succeeds, it returns
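Type strings passed to this kind of support query follow the MIME-plus-codecs form from RFC 4281, for example `video/mp4; codecs="avc1.42E01E"`. A minimal sketch of separating the base type from the codecs parameter (our helper, not the API's parser):

```cpp
#include <cassert>
#include <string>

// Return the base MIME type, dropping any ";codecs=..." parameter.
// Real RFC 4281 handling is more involved; this only illustrates the
// shape of the strings the method accepts.
std::string BaseMimeType(const std::string& type)
{
    std::string base = type.substr(0, type.find(';'));
    while (!base.empty() && base.back() == ' ')
        base.pop_back(); // trim trailing spaces before the parameter
    return base;
}
```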
Implemented by the media engine to add encrypted media extensions methods.
Gets the media keys object associated with the media engine, or null if there is no media keys object.
Sets the media keys object to use with the media engine.
Gets the media keys object associated with the media engine, or null if there is no media keys object.
The media keys object associated with the media engine, or null if there is no media keys object.
If this method succeeds, it returns
Sets the media keys object to use with the media engine.
The media keys.
If this method succeeds, it returns
Extends the
The
Gets or sets the audio balance.
Gets various flags that describe the media resource.
Gets the number of streams in the media resource.
Queries whether the media resource contains protected content.
Gets or sets the time of the next timeline marker, if any.
Queries whether the media resource contains stereoscopic 3D video.
For stereoscopic 3D video, gets the layout of the two views within a video frame.
For stereoscopic 3D video, queries how the Media Engine renders the 3D video content.
Gets a handle to the windowless swap chain.
To enable windowless swap-chain mode, call
Gets or sets the audio stream category used for the next call to SetSource or Load.
For information on audio stream categories, see
Gets or sets the audio device endpoint role used for the next call to SetSource or Load.
For information on audio endpoint roles, see ERole enumeration.
Gets or sets the real time mode used for the next call to SetSource or Load.
Opens a media resource from a byte stream.
A reference to the
The URL of the byte stream.
If this method succeeds, it returns
Gets a playback statistic from the Media Engine.
A member of the
A reference to a
If this method succeeds, it returns
Updates the source rectangle, destination rectangle, and border color for the video.
A reference to an
A reference to a
A reference to an
If this method succeeds, it returns
In rendering mode, call this method to reposition the video, update the border color, or repaint the video frame. If all of the parameters are
In frame-server mode, this method has no effect.
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
Gets the audio balance.
Returns the balance. The value can be any number in the following range (inclusive).
Return value | Description |
---|---|
| The left channel is at full volume; the right channel is silent. |
| The right channel is at full volume; the left channel is silent. |
If the value is zero, the left and right channels are at equal volumes. The default value is zero.
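The endpoints in the table map naturally onto per-channel gains. The mapping below reproduces only the documented endpoints (-1.0, 0.0, +1.0); the curve in between is our assumption, not documented Media Engine behavior:

```cpp
#include <cassert>
#include <utility>

// Illustrative mapping from the documented balance range to {left, right}
// channel gains: -1.0 mutes the right channel, +1.0 mutes the left,
// and 0.0 leaves both channels at full volume.
std::pair<double, double> ChannelGains(double balance)
{
    double left  = balance <= 0.0 ? 1.0 : 1.0 - balance;
    double right = balance >= 0.0 ? 1.0 : 1.0 + balance;
    return { left, right };
}
```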
Sets the audio balance.
The audio balance. The value can be any number in the following range (inclusive).
Value | Meaning |
---|---|
| The left channel is at full volume; the right channel is silent. |
| The right channel is at full volume; the left channel is silent. |
If the value is zero, the left and right channels are at equal volumes. The default value is zero.
If this method succeeds, it returns
When the audio balance changes, the Media Engine sends an
Queries whether the Media Engine can play at a specified playback rate.
The requested playback rate.
Returns TRUE if the playback rate is supported, or
Playback rates are expressed as a ratio of the current rate to the normal rate. For example, 1.0 is normal playback speed, 0.5 is half speed, and 2.0 is 2× speed. Positive values mean forward playback, and negative values mean reverse playback.
The results of this method can vary depending on the media resource that is currently loaded. Some media formats might support faster playback rates than others. Also, some formats might not support reverse play.
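The sign/magnitude convention above can be made explicit with two small helpers (ours, for illustration only):

```cpp
#include <cassert>
#include <cmath>

// Negative rates play in reverse; the magnitude is the speed relative
// to normal playback (1.0 = normal, 0.5 = half, 2.0 = double).
bool IsReversePlayback(double rate) { return rate < 0.0; }
double PlaybackSpeed(double rate)   { return std::fabs(rate); }
```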
Steps forward or backward one frame.
Specify TRUE to step forward or
If this method succeeds, it returns
The frame-step direction is independent of the current playback direction.
This method completes asynchronously. When the operation completes, the Media Engine sends an
Gets various flags that describe the media resource.
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets a presentation attribute from the media resource.
The attribute to query. For a list of presentation attributes, see Presentation Descriptor Attributes.
A reference to a
If this method succeeds, it returns
Gets the number of streams in the media resource.
Receives the number of streams.
If this method succeeds, it returns
Gets a stream-level attribute from the media resource.
The zero-based index of the stream. To get the number of streams, call
The attribute to query. Possible values are listed in the following topics:
A reference to a
If this method succeeds, it returns
Queries whether a stream is selected to play.
The zero-based index of the stream. To get the number of streams, call
Receives a Boolean value.
Value | Meaning |
---|---|
| The stream is selected. During playback, this stream will play. |
| The stream is not selected. During playback, this stream will not play. |
If this method succeeds, it returns
Selects or deselects a stream for playback.
The zero-based index of the stream. To get the number of streams, call
Specifies whether to select or deselect the stream.
Value | Meaning |
---|---|
| The stream is selected. During playback, this stream will play. |
| The stream is not selected. During playback, this stream will not play. |
If this method succeeds, it returns
Applies the stream selections from previous calls to SetStreamSelection.
If this method succeeds, it returns
Queries whether the media resource contains protected content.
Receives the value TRUE if the media resource contains protected content, or
If this method succeeds, it returns
Inserts a video effect.
One of the following:
Specifies whether the effect is optional.
Value | Meaning |
---|---|
| The effect is optional. If the Media Engine cannot add the effect, it ignores the effect and continues playback. |
| The effect is required. If the Media Engine object cannot add the effect, a playback error occurs. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The maximum number of video effects was reached. |
The effect is applied when the next media resource is loaded.
Inserts an audio effect.
One of the following:
Specifies whether the effect is optional.
Value | Meaning |
---|---|
| The effect is optional. If the Media Engine cannot add the effect, it ignores the effect and continues playback. |
| The effect is required. If the Media Engine object cannot add the effect, a playback error occurs. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The maximum number of audio effects was reached. |
The effect is applied when the next media resource is loaded.
Removes all audio and video effects.
If this method succeeds, it returns
Call this method to remove all of the effects that were added with the InsertAudioEffect and InsertVideoEffect methods.
Specifies a presentation time when the Media Engine will send a marker event.
The presentation time for the marker event, in seconds.
If this method succeeds, it returns
When playback reaches the time specified by timeToFire, the Media Engine sends an
If the application seeks past the marker point, the Media Engine cancels the marker and does not send the event.
During forward playback, set timeToFire to a value greater than the current playback position. During reverse playback, set timeToFire to a value less than the playback position.
To cancel a marker, call
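The direction rule above can be captured in a small predicate (our illustration): a marker only fires if it lies in the direction playback is moving.

```cpp
#include <cassert>

// A marker set behind the play head (relative to the playback
// direction) is never reached, so it would never fire.
bool MarkerCanFire(double timeToFire, double currentPos, bool forward)
{
    return forward ? timeToFire > currentPos : timeToFire < currentPos;
}
```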
Gets the time of the next timeline marker, if any.
Receives the marker time, in seconds. If no marker is set, this parameter receives the value NaN.
If this method succeeds, it returns
Cancels the next pending timeline marker.
If this method succeeds, it returns
Call this method to cancel the
Queries whether the media resource contains stereoscopic 3D video.
Returns TRUE if the media resource contains 3D video, or
For stereoscopic 3D video, gets the layout of the two views within a video frame.
Receives a member of the
If this method succeeds, it returns
For stereoscopic 3D video, sets the layout of the two views within a video frame.
A member of the
If this method succeeds, it returns
For stereoscopic 3D video, queries how the Media Engine renders the 3D video content.
Receives a member of the
If this method succeeds, it returns
For stereoscopic 3D video, specifies how the Media Engine renders the 3D video content.
A member of the
If this method succeeds, it returns
Enables or disables windowless swap-chain mode.
If TRUE, windowless swap-chain mode is enabled.
If this method succeeds, it returns
In windowless swap-chain mode, the Media Engine creates a windowless swap chain and presents video frames to the swap chain. To render the video, call
Gets a handle to the windowless swap chain.
Receives a handle to the swap chain.
If this method succeeds, it returns
To enable windowless swap-chain mode, call
Enables or disables mirroring of the video.
If TRUE, the video is mirrored horizontally. Otherwise, the video is displayed normally.
If this method succeeds, it returns
Gets the audio stream category used for the next call to SetSource or Load.
If this method succeeds, it returns
For information on audio stream categories, see
Sets the audio stream category for the next call to SetSource or Load.
If this method succeeds, it returns
For information on audio stream categories, see
Gets the audio device endpoint role used for the next call to SetSource or Load.
If this method succeeds, it returns
For information on audio endpoint roles, see ERole enumeration.
Sets the audio device endpoint used for the next call to SetSource or Load.
If this method succeeds, it returns
For information on audio endpoint roles, see ERole enumeration.
Gets the real time mode used for the next call to SetSource or Load.
If this method succeeds, it returns
Sets the real time mode used for the next call to SetSource or Load.
If this method succeeds, it returns
Seeks to a new playback position using the specified
If this method succeeds, it returns
Enables or disables the time update timer.
If TRUE, the update timer is enabled. Otherwise, the timer is disabled.
If this method succeeds, it returns
Enables an application to load media resources in the Media Engine.
To use this interface, set the
Queries whether the object can load a specified type of media resource.
If TRUE, the Media Engine is set to audio-only mode. Otherwise, the Media Engine is set to audio-video mode.
A string that contains a MIME type with an optional codecs parameter, as defined in RFC 4281.
Receives a member of the
If this method succeeds, it returns
Implement this method if your Media Engine extension supports one or more MIME types.
Begins an asynchronous request to create either a byte stream or a media source.
The URL of the media resource.
A reference to the
If the type parameter equals
If type equals
A member of the
Value | Meaning |
---|---|
| Create a byte stream. The byte stream must support the |
| Create a media source. The media source must support the |
Receives a reference to the
The caller must release the interface. This parameter can be
A reference to the
A reference to the
If this method succeeds, it returns
This method requests the object to create either a byte stream or a media source, depending on the value of the type parameter:
The method is performed asynchronously. The Media Engine calls the
Cancels the current request to create an object.
The reference that was returned in the ppIUnknownCancelCookie parameter of the
If this method succeeds, it returns
This method attempts to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might complete before the operation can be canceled.
Completes an asynchronous request to create a byte stream or media source.
A reference to the
Receives a reference to the
If this method succeeds, it returns
The Media Engine calls this method to complete the
Represents a callback to the media engine to notify key request data.
Notifies the application that a key or keys are needed along with any initialization data.
The initialization data.
The count in bytes of initData.
Callback interface for the
To set the callback reference on the Media Engine, set the
Notifies the application when a playback event occurs.
A member of the
The first event parameter. The meaning of this parameter depends on the event code.
The second event parameter. The meaning of this parameter depends on the event code.
If this method succeeds, it returns
Provides methods for getting information about the Output Protection Manager (OPM).
To get a reference to this interface, call QueryInterface on the Media Engine.
The
Gets status information about the Output Protection Manager (OPM).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| If any of the parameters are |
Copies a protected video frame to a DXGI surface.
For protected content, call this method instead of the
Gets the content protections that must be applied in frame-server mode.
Specifies the window that should receive output link protections.
In frame-server mode, call this method to specify the destination window for protected video content. The Media Engine uses this window to set link protections, using the Output Protection Manager (OPM).
Sets the content protection manager (CPM).
The Media Engine uses the CPM to handle events related to protected content, such as license acquisition.
Enables the Media Engine to access protected content while in frame-server mode.
A reference to the Direct3D 11 device content. The Media Engine queries this reference for the
If this method succeeds, it returns
In frame-server mode, this method enables the Media Engine to share protected content with the Direct3D 11 device.
Gets the content protections that must be applied in frame-server mode.
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Specifies the window that should receive output link protections.
A handle to the window.
If this method succeeds, it returns
In frame-server mode, call this method to specify the destination window for protected video content. The Media Engine uses this window to set link protections, using the Output Protection Manager (OPM).
Copies a protected video frame to a DXGI surface.
A reference to the
A reference to an
A reference to a
A reference to an
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
For protected content, call this method instead of the
Sets the content protection manager (CPM).
A reference to the
If this method succeeds, it returns
The Media Engine uses the CPM to handle events related to protected content, such as license acquisition.
Sets the application's certificate.
A reference to a buffer that contains the certificate in X.509 format, followed by the application identifier signed with a SHA-256 signature using the private key from the certificate.
The size of the pbBlob buffer, in bytes.
If this method succeeds, it returns
Call this method to access protected video content in frame-server mode.
Provides the Media Engine with a list of media resources.
The
This interface enables the application to provide the same audio/video content in several different encoding formats, such as H.264 and Windows Media Video. If a particular codec is not present on the user's computer, the Media Engine will try the next URL in the list. To use this interface, do the following:
Gets the number of source elements in the list.
Gets the number of source elements in the list.
Returns the number of source elements.
Gets the URL of an element in the list.
The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains the URL of the source element. The caller must free the BSTR by calling SysFreeString. If no URL is set, this parameter receives the value
If this method succeeds, it returns
Gets the MIME type of an element in the list.
The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains the MIME type. The caller must free the BSTR by calling SysFreeString. If no MIME type is set, this parameter receives the value
If this method succeeds, it returns
Gets the intended media type of an element in the list.
The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains a media-query string. The caller must free the BSTR by calling SysFreeString. If no media type is set, this parameter receives the value
If this method succeeds, it returns
The string returned in pMedia should be a media-query string that conforms to the W3C Media Queries specification.
Adds a source element to the end of the list.
The URL of the source element, or
The MIME type of the source element, or
A media-query string that specifies the intended media type, or
If this method succeeds, it returns
Any of the parameters to this method can be
This method allocates copies of the BSTRs that are passed in.
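The first-match fallback that this interface enables can be sketched as follows. The helper and its inputs are illustrative; the Media Engine performs the equivalent walk over the registered source elements:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Walk the source elements in order and pick the first one whose MIME
// type the supplied predicate claims to support. This mirrors the
// documented behavior: if a codec is missing, try the next URL.
std::string PickSource(
    const std::vector<std::pair<std::string, std::string>>& elements, // {url, mime}
    const std::function<bool(const std::string&)>& isSupported)
{
    for (const auto& [url, mime] : elements)
        if (isSupported(mime))
            return url;
    return {}; // no playable source
}
```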
Removes all of the source elements from the list.
If this method succeeds, it returns
Extends the
Provides an enhanced version of
If this method succeeds, it returns
Gets the key system for the given source element index.
The source element index.
The MIME type of the source element.
If this method succeeds, it returns
Enables the media source to be transferred between the media engine and the sharing engine for Play To.
Specifies whether the source should be transferred.
true if the source should be transferred; otherwise, false.
If this method succeeds, it returns
Detaches the media source.
Receives the byte stream.
Receives the media source.
Receives the media source extension.
If this method succeeds, it returns
Attaches the media source.
Specifies the byte stream.
Specifies the media source.
Specifies the media source extension.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables playback of web audio.
Gets a value indicating whether connecting to Web audio should delay the page's load event.
True if connection to Web audio should delay the page's load event; otherwise, false.
Connects web audio to Media Engine using the specified sample rate.
The sample rate of the web audio.
The sample rate of the web audio.
Returns
Disconnects web audio from the Media Engine.
Returns
Provides the current error status for the Media Engine.
The
To get a reference to this interface, call
Gets or sets the extended error code.
Gets the error code.
Returns a value from the
Gets the extended error code.
Returns an
Sets the error code.
The error code, specified as an
If this method succeeds, it returns
Sets the extended error code.
An
If this method succeeds, it returns
Represents an event generated by a Media Foundation object. Use this interface to get information about the event.
To get a reference to this interface, call
If you are implementing an object that generates events, call the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the event type. The event type indicates what happened to trigger the event. It also defines the meaning of the event value.
Retrieves the extended type of the event.
To define a custom event, create a new extended-type
Some standard Media Foundation events also use the extended type to differentiate between types of event data.
Retrieves an
Retrieves the value associated with the event, if any. The value is retrieved as a
Before calling this method, call PropVariantInit to initialize the
Retrieves the event type. The event type indicates what happened to trigger the event. It also defines the meaning of the event value.
Receives the event type. For a list of event types, see Media Foundation Events.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the extended type of the event.
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To define a custom event, create a new extended-type
Some standard Media Foundation events also use the extended type to differentiate between types of event data.
Retrieves an
Receives the event status. If the operation that generated the event was successful, the value is a success code. A failure code means that an error condition triggered the event.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the value associated with the event, if any. The value is retrieved as a
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, call PropVariantInit to initialize the
Retrieves events from any Media Foundation object that generates events.
An object that supports this interface maintains a queue of events. The client of the object can retrieve the events either synchronously or asynchronously. The synchronous method is GetEvent. The asynchronous methods are BeginGetEvent and EndGetEvent.
Retrieves the next event in the queue. This method is synchronous.
Specifies one of the following values.
Value | Meaning |
---|---|
| The method blocks until the event generator queues an event. |
| The method returns immediately. |
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| There is a pending request. |
| There are no events in the queue. |
| The object was shut down. |
This method executes synchronously.
If the queue already contains an event, the method returns
If dwFlags is 0, the method blocks indefinitely until a new event is queued, or until the event generator is shut down.
If dwFlags is MF_EVENT_FLAG_NO_WAIT, the method fails immediately with the return code
This method returns
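The blocking versus no-wait behavior described above can be modeled portably. This is an illustrative stand-in (std::optional plays the role of the no-events failure code), not the Media Foundation implementation:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

// Portable model of an event generator's queue: GetEvent either blocks
// until an event arrives, or (noWait == true, the analogue of
// MF_EVENT_FLAG_NO_WAIT) fails immediately when the queue is empty.
class EventQueue {
    std::queue<int> events_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void Queue(int e) {
        { std::lock_guard<std::mutex> g(m_); events_.push(e); }
        cv_.notify_one();
    }
    std::optional<int> GetEvent(bool noWait) {
        std::unique_lock<std::mutex> lk(m_);
        if (events_.empty()) {
            if (noWait) return std::nullopt;   // no events available
            cv_.wait(lk, [this] { return !events_.empty(); });
        }
        int e = events_.front();
        events_.pop();
        return e;
    }
};

bool DemoNoWaitFailsWhenEmpty()
{
    EventQueue q;
    return !q.GetEvent(/*noWait=*/true).has_value();
}

bool DemoGetReturnsQueued()
{
    EventQueue q;
    q.Queue(7);
    return q.GetEvent(/*noWait=*/true) == 7;
}
```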
Begins an asynchronous request for the next event in the queue.
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| There is a pending request with the same callback reference and a different state object. |
| There is a pending request with a different callback reference. |
| The object was shut down. |
| There is a pending request with the same callback reference and state object. |
When a new event is available, the event generator calls the
Do not call BeginGetEvent a second time before calling EndGetEvent. While the first call is still pending, additional calls to the same object will fail. Also, the
Completes an asynchronous request for the next event in the queue.
Pointer to the
Receives a reference to the
Call this method from inside your application's
Puts a new event in the object's queue.
Specifies the event type. The event type is returned by the event's
The extended type. If the event does not have an extended type, use the value GUID_NULL. The extended type is returned by the event's
A success or failure code indicating the status of the event. This value is returned by the event's
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object was shut down. |
Provides an event queue for applications that need to implement the
This interface is exposed by a helper object that implements an event queue. If you are writing a component that implements the
Retrieves the next event in the queue. This method is synchronous.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous request for the next event in the queue.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous request for the next event in the queue.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Puts an event in the queue.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
?
Call this method when your component needs to raise an event that contains attributes. To create the event object, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an event, sets a
Call this method inside your implementation of
You can also call this method when your component needs to raise an event that does not contain attributes. If the event data is an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an event, sets an
Specifies the event type of the event to be added to the queue. The event type is returned by the event's
The extended type of the event. If the event does not have an extended type, use the value GUID_NULL. The extended type is returned by the event's
A success or failure code indicating the status of the event. This value is returned by the event's
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
?
Call this method when your component needs to raise an event that contains an
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
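As a sketch, a component that uses the event-queue helper might raise such an event as follows; `m_pQueue` is a hypothetical member of type `IMFMediaEventQueue*`, and the event type and payload are illustrative:

```cpp
// Sketch: queue an event whose data is a PROPVARIANT value.
PROPVARIANT var;
PropVariantInit(&var);
var.vt = VT_I8;
var.hVal.QuadPart = 100000;  // example payload, in 100-ns units

HRESULT hr = m_pQueue->QueueEventParamVar(
    MESessionStarted,  // event type (illustrative)
    GUID_NULL,         // no extended type
    S_OK,              // event status
    &var);
PropVariantClear(&var);
```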
Shuts down the event queue.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Call this method when your component shuts down. After this method is called, all
This method removes all of the events from the queue.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents the media keys used for decrypting media data using a Digital Rights Management (DRM) key system.
Gets the suspend notify interface of the Content Decryption Module (CDM).
Creates a media key session object using the specified initialization data and custom data.
The MIME type of the media container used for the content.
The initialization data for the key system.
The count in bytes of initData.
Custom data sent to the key system.
The count in bytes of cbCustomData.
notify
The media key session.
If this method succeeds, it returns
Gets the key system string the
If this method succeeds, it returns
If this method succeeds, it returns
Shutdown should be called by the application before final release. The Content Decryption Module (CDM) reference and any other resources are released at this point. However, related sessions are not freed or closed.
Gets the suspend notify interface of the Content Decryption Module (CDM).
The suspend notify interface of the Content Decryption Module (CDM).
If this method succeeds, it returns
Represents a session with the Digital Rights Management (DRM) key system.
Gets the error state associated with the media key session.
The error code.
Platform specific error information.
If this method succeeds, it returns
Gets the name of the key system that the media keys object was created with.
The name of the key system.
If this method succeeds, it returns
Gets a unique session id created for this session.
The media key session id.
If this method succeeds, it returns
Passes in a key value with any associated data required by the Content Decryption Module for the given key system.
The count in bytes of key.
If this method succeeds, it returns
Closes the media key session and must be called before the key session is released.
If this method succeeds, it returns
Provides a mechanism for notifying the app about information regarding the media key session.
Passes information to the application so it can initiate a key acquisition.
The URL to send the message to.
The message to send to the application.
The length in bytes of message.
Notifies the application that the key has been added.
KeyAdded can also be called if the keys requested for the session have already been acquired.
Notifies the application that an error occurred while processing the key.
Provides playback controls for protected and unprotected content. The Media Session and the protected media path (PMP) session objects expose this interface. This interface is the primary interface that applications use to control the Media Foundation pipeline.
To obtain a reference to this interface, call
Retrieves the Media Session's presentation clock.
The application can query the returned
Retrieves the capabilities of the Media Session, based on the current presentation.
Sets a topology on the Media Session.
Bitwise OR of zero or more flags from the
Pointer to the topology object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
| The topology has invalid values for one or more of the following attributes: |
| Protected content cannot be played while debugging. |
?
If pTopology is a full topology, set the
If the Media Session is currently paused or stopped, the SetTopology method does not take effect until the next call to
If the Media Session is currently running, or on the next call to Start, the SetTopology method does the following:
This method is asynchronous. If the method returns
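A minimal sketch of queuing a topology, assuming `pSession` and `pTopology` are valid `IMFMediaSession*` and `IMFTopology*` pointers obtained elsewhere:

```cpp
// Sketch: queue a topology on the Media Session with default flags.
HRESULT hr = pSession->SetTopology(0, pTopology);
// Completion is signaled asynchronously through the session's
// event generator; listen for the topology-set event before
// assuming the topology is in effect.
```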
Clears all of the presentations that are queued for playback in the Media Session.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
?
This method is asynchronous. When the operation completes, the Media Session sends an
This method does not clear the current topology; it only removes topologies that are placed in the queue, waiting for playback. To remove the current topology, call
Starts the Media Session.
Pointer to a
The following time format GUIDs are defined:
Value | Meaning |
---|---|
| Presentation time. The pvarStartPosition parameter must have one of the following
All media sources support this time format. |
| Segment offset. This time format is supported by the Sequencer Source. The starting time is an offset within a segment. Call the |
| Note: Requires Windows 7 or later. Skip to a playlist entry. The pvarStartPosition parameter specifies the index of the playlist entry, relative to the current entry. For example, the value 2 skips forward two entries. To skip backward, pass a negative value. The If a media source supports this time format, the |
?
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
?
When this method is called, the Media Session starts the presentation clock and begins to process media samples.
This method is asynchronous. When the method completes, the Media Session sends an
Pauses the Media Session.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
| The Media Session cannot pause while stopped. |
?
This method pauses the presentation clock.
This method is asynchronous. When the operation completes, the Media Session sends an
This method fails if the Media Session is stopped.
Stops the Media Session.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
?
This method is asynchronous. When the operation completes, the Media Session sends an
Closes the Media Session and releases all of the resources it is using.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session has been shut down. |
?
This method is asynchronous. When the operation completes, the Media Session sends an
After the Close method is called, the only valid methods on the Media Session are the following:
All other methods return
Shuts down the Media Session and releases all the resources used by the Media Session.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Call this method when you are done using the Media Session, before the final call to IUnknown::Release. Otherwise, your application will leak memory.
After this method is called, other
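The recommended teardown order can be sketched as follows, assuming `pSession` is a valid `IMFMediaSession*`; a real application should wait for the session-closed event between Close and Shutdown rather than proceeding immediately:

```cpp
// Sketch: close, shut down, then release the Media Session.
HRESULT hr = pSession->Close();
// ... wait for the session-closed event ...
hr = pSession->Shutdown();   // releases the session's resources
pSession->Release();         // final release; no leak
pSession = NULL;
```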
Retrieves the Media Session's presentation clock.
Receives a reference to the presentation clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session does not have a presentation clock. |
| The Media Session has been shut down. |
?
The application can query the returned
Retrieves the capabilities of the Media Session, based on the current presentation.
Receives a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| The Media Session can be paused. |
| The Media Session supports forward playback at rates faster than 1.0. |
| The Media Session supports reverse playback. |
| The Media Session can be seeked. |
| The Media Session can be started. |
?
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| The Media Session has been shut down. |
?
Gets a topology from the Media Session.
This method can get the current topology or a queued topology.
Bitwise OR of zero or more flags from the
The identifier of the topology. This parameter is ignored if the dwGetFullTopologyFlags parameter contains the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session has been shut down. |
?
If the
This method can be used to retrieve the topology for the current presentation or any pending presentations. It cannot be used to retrieve a topology that has already ended.
The topology returned in ppFullTopo is a full topology, not a partial topology.
Implemented by media sink objects. This interface is the base interface for all Media Foundation media sinks. Stream sinks handle the actual processing of data on each stream.
Gets the characteristics of the media sink.
The characteristics of a media sink are fixed throughout the lifetime of the sink.
Gets the number of stream sinks on this media sink.
Gets the presentation clock that was set on the media sink.
Gets the characteristics of the media sink.
Receives a bitwise OR of zero or more flags. The following flags are defined:
Value | Meaning |
---|---|
| The media sink has a fixed number of streams. It does not support the |
| The media sink cannot match rates with an external clock. For best results, this media sink should be used as the time source for the presentation clock. If any other time source is used, the media sink cannot match rates with the clock, with poor results (for example, glitching). This flag should be used sparingly, because it limits how the pipeline can be configured. For more information about the presentation clock, see Presentation Clock. |
| The media sink is rateless. It consumes samples as quickly as possible, and does not synchronize itself to a presentation clock. Most archiving sinks are rateless. |
| The media sink requires a presentation clock. The presentation clock is set by calling the media sink's This flag is obsolete, because all media sinks must support the SetPresentationClock method, even if the media sink ignores the clock (as in a rateless media sink). |
| The media sink can accept preroll samples before the presentation clock starts. The media sink exposes the |
| The first stream sink (index 0) is a reference stream. The reference stream must have a media type before the media types can be set on the other stream sinks. |
?
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
?
The characteristics of a media sink are fixed throughout the lifetime of the sink.
Adds a new stream sink to the media sink.
Identifier for the new stream. The value is arbitrary but must be unique.
Pointer to the
Receives a reference to the new stream sink's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified stream identifier is not valid. |
| The media sink's Shutdown method has been called. |
| There is already a stream sink with the same stream identifier. |
| This media sink has a fixed set of stream sinks. New stream sinks cannot be added. |
?
Not all media sinks support this method. If the media sink does not support this method, the
If pMediaType is
Removes a stream sink from the media sink.
Identifier of the stream to remove. The stream identifier is defined when you call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This particular stream sink cannot be removed. |
| The stream number is not valid. |
| The media sink has not been initialized. |
| The media sink's Shutdown method has been called. |
| This media sink has a fixed set of stream sinks. Stream sinks cannot be removed. |
?
After this method is called, the corresponding stream sink object is no longer valid. The
Not all media sinks support this method. If the media sink does not support this method, the
In some cases, the media sink supports this method but does not allow every stream sink to be removed. (For example, it might not allow stream 0 to be removed.)
Gets the number of stream sinks on this media sink.
Receives the number of stream sinks.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
?
Gets a stream sink, specified by index.
Zero-based index of the stream. To get the number of streams, call
Receives a reference to the stream's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid index. |
| The media sink's Shutdown method has been called. |
?
Enumerating stream sinks is not a thread-safe operation, because stream sinks can be added or removed between calls to this method.
Gets a stream sink, specified by stream identifier.
Stream identifier of the stream sink.
Receives a reference to the stream's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream identifier is not valid. |
| The media sink's Shutdown method has been called. |
?
If you add a stream sink by calling the
To enumerate the streams by index number instead of stream identifier, call
Sets the presentation clock on the media sink.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The presentation clock does not have a time source. Call SetTimeSource on the presentation clock. |
| The media sink's Shutdown method has been called. |
?
During streaming, the media sink attempts to match rates with the presentation clock. Ideally, the media sink presents samples at the correct time according to the presentation clock and does not fall behind. Rateless media sinks are an exception to this rule, as they consume samples as quickly as possible and ignore the clock. If the sink is rateless, the
The presentation clock must have a time source. Before calling this method, call
If pPresentationClock is non-
All media sinks must support this method.
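The required setup order (time source first, then clock on the sink) can be sketched as follows; `pSink` is a hypothetical `IMFMediaSink*`, and release/error handling is abbreviated:

```cpp
// Sketch: give the presentation clock a time source before
// handing it to the media sink.
IMFPresentationClock *pClock = NULL;
IMFPresentationTimeSource *pTimeSource = NULL;

HRESULT hr = MFCreatePresentationClock(&pClock);
if (SUCCEEDED(hr))
    hr = MFCreateSystemTimeSource(&pTimeSource);
if (SUCCEEDED(hr))
    hr = pClock->SetTimeSource(pTimeSource);   // required first
if (SUCCEEDED(hr))
    hr = pSink->SetPresentationClock(pClock);  // would otherwise fail
```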
Gets the presentation clock that was set on the media sink.
Receives a reference to the presentation clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No clock has been set. To set the presentation clock, call |
| The media sink's Shutdown method has been called. |
?
Shuts down the media sink and releases the resources it is using.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink was shut down. |
?
If the application creates the media sink, it is responsible for calling Shutdown to avoid memory or resource leaks. In most applications, however, the application creates an activation object for the media sink, and the Media Session uses that object to create the media sink. In that case, the Media Session, not the application, shuts down the media sink. (For more information, see Activation Objects.)
After this method returns, all methods on the media sink return
Enables a media sink to receive samples before the presentation clock is started.
To get a reference to this interface, call QueryInterface on the media sink.
Media sinks can implement this interface to support seamless playback and transitions. If a media sink exposes this interface, it can receive samples before the presentation clock starts. It can then pre-process the samples, so that rendering can begin immediately when the clock starts. Prerolling helps to avoid glitches during playback.
If a media sink supports preroll, the media sink's
Notifies the media sink that the presentation clock is about to start.
The upcoming start time for the presentation clock, in 100-nanosecond units. This time is the same value that will be given to the
If this method succeeds, it returns
After this method is called, the media sink sends any number of
During preroll, the media sink can prepare the samples that it receives, so that they are ready to be rendered. It does not actually render any samples until the clock starts.
Implemented by media source objects.
Media sources are objects that generate media data. For example, the data might come from a video file, a network stream, or a hardware device, such as a camera. Each media source contains one or more streams, and each stream delivers data of one type, such as audio or video.
In Windows 8, this interface is extended with
Retrieves the characteristics of the media source.
The characteristics of a media source can change at any time. If this happens, the source sends an
Retrieves the characteristics of the media source.
Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
?
The characteristics of a media source can change at any time. If this happens, the source sends an
Retrieves a copy of the media source's presentation descriptor. Applications use the presentation descriptor to select streams and to get information about the source content.
Receives a reference to the presentation descriptor's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
?
The presentation descriptor contains the media source's default settings for the presentation. The application can change these settings by selecting or deselecting streams, or by changing the media type on a stream. Do not modify the presentation descriptor unless the source is stopped. The changes take effect when the source's
Starts, seeks, or restarts the media source by specifying where to start playback.
Pointer to the
Pointer to a
Specifies where to start playback. The units of this parameter are indicated by the time format given in pguidTimeFormat. If the time format is GUID_NULL, the variant type must be VT_I8 or VT_EMPTY. Use VT_I8 to specify a new starting position, in 100-nanosecond units. Use VT_EMPTY to start from the current position. Other time formats might use other
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The start position is past the end of the presentation (ASF media source). |
| A hardware device was unable to start streaming. This error code can be returned by a media source that represents a hardware device, such as a camera. For example, if the camera is already being used by another application, the method might return this error code. |
| The start request is not valid. For example, the start position is past the end of the presentation. |
| The media source's Shutdown method has been called. |
| The media source does not support the time format specified in pguidTimeFormat. |
?
This method is asynchronous. If the operation succeeds, the media source sends the following events:
If the start operation fails asynchronously (after the method returns
A call to Start results in a seek if the previous state was started or paused, and the new starting position is not VT_EMPTY. Not every media source can seek. If a media source can seek, the
Events from the media source are not synchronized with events from the media streams. If you seek a media source, therefore, you can still receive samples from the earlier position after getting the
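A sketch of starting a source from the beginning of the presentation with the default (GUID_NULL) time format; `pSource` and `pPD` are assumed to be a valid media source and its presentation descriptor:

```cpp
// Sketch: start at time 0 using a VT_I8 PROPVARIANT.
PROPVARIANT varStart;
PropVariantInit(&varStart);
varStart.vt = VT_I8;
varStart.hVal.QuadPart = 0;  // 100-nanosecond units; 0 = beginning

HRESULT hr = pSource->Start(pPD, &GUID_NULL, &varStart);
PropVariantClear(&varStart);
// Passing VT_EMPTY instead would resume from the current position.
```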
Stops all active streams in the media source.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
?
This method is asynchronous. When the operation completes, the media source sends an
When a media source is stopped, its current position reverts to zero. After that, if the Start method is called with VT_EMPTY for the starting position, playback starts from the beginning of the presentation.
While the source is stopped, no streams produce data.
Pauses all active streams in the media source.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid state transition. The media source must be in the started state. |
| The media source's Shutdown method has been called. |
?
This method is asynchronous. When the operation completes, the media source sends an
The media source must be in the started state. The method fails if the media source is paused or stopped.
While the source is paused, calls to
Not every media source can pause. If a media source can pause, the
Shuts down the media source and releases the resources it is using.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the application creates the media source, either directly or through the source resolver, the application is responsible for calling Shutdown to avoid memory or resource leaks.
After this method is called, methods on the media source and all of its media streams return
Extends the
To get a reference to this interface, call QueryInterface on the media source.
Implementations of this interface can return E_NOTIMPL for any methods that are not required by the media source.
Gets an attribute store for the media source.
Use the
Sets a reference to the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager on the media source.
Gets an attribute store for the media source.
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support source-level attributes. |
?
Use the
Gets an attribute store for a stream on the media source.
The identifier of the stream. To get the identifier, call
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support stream-level attributes. |
| Invalid stream identifier. |
?
Use the
Sets a reference to the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager on the media source.
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support source-level attributes. |
?
Provides functionality for the Media Source Extension (MSE).
Media Source Extensions (MSE) is a World Wide Web Consortium (W3C) standard that extends the HTML5 media elements to enable dynamically changing the media stream without the use of plug-ins. The
The MSE media source keeps track of the ready state of the source as well as a list of
Gets the collection of source buffers associated with this media source.
Gets the source buffers that are actively supplying media data to the media source.
Gets the ready state of the media source.
Gets or sets the duration of the media source in 100-nanosecond units.
Indicates that the end of the media stream has been reached.
Gets the collection of source buffers associated with this media source.
The collection of source buffers.
Gets the source buffers that are actively supplying media data to the media source.
The list of active source buffers.
Gets the ready state of the media source.
The ready state of the media source.
Gets the duration of the media source in 100-nanosecond units.
The duration of the media source in 100-nanosecond units.
Sets the duration of the media source in 100-nanosecond units.
The duration of the media source in 100-nanosecond units.
If this method succeeds, it returns
Adds a
If this method succeeds, it returns
Removes the specified source buffer from the collection of source buffers managed by the
If this method succeeds, it returns
Indicates that the end of the media stream has been reached.
Used to pass error information.
If this method succeeds, it returns
Gets a value that indicates if the specified MIME type is supported by the media source.
The media type to check support for.
true if the media type is supported; otherwise, false.
Gets the
The source buffer.
Provides functionality for raising events associated with
Used to indicate that the media source has opened.
Used to indicate that the media source has ended.
Used to indicate that the media source has closed.
Notifies the source when playback has reached the end of a segment. For timelines, this corresponds to reaching a mark-out point.
Notifies the source when playback has reached the end of a segment. For timelines, this corresponds to reaching a mark-out point.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Enables an application to get a topology from the sequencer source. This interface is exposed by the sequencer source object.
Returns a topology for a media source that builds an internal topology.
A reference to the
Receives a reference to the topology's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. For example, a |
?
Represents one stream in a media source.
Streams are created when a media source is started. For each stream, the media source sends an
Retrieves a reference to the media source that created this media stream.
Retrieves a stream descriptor for this media stream.
Do not modify the stream descriptor. To change the presentation, call
Retrieves a reference to the media source that created this media stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
?
Retrieves a stream descriptor for this media stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
?
Do not modify the stream descriptor. To change the presentation, call
Requests a sample from the media source.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The end of the stream was reached. |
| The media source is stopped. |
| The source's Shutdown method has been called. |
?
If pToken is not
When the next sample is available, the media stream does the following:
If the media stream cannot fulfill the caller's request for a sample, it simply releases the token object and skips steps 2 and 3.
The caller should monitor the reference count on the request token. If the media stream sends an
Because the Media Foundation pipeline is multithreaded, the source's RequestSample method might get called after the source has stopped. If the media source is stopped, the method should return
If the media source is paused, the method succeeds, but the stream does not deliver the sample until the source is started again.
If a media source encounters an error asynchronously while processing data, it should signal the error in one of the following ways (but not both):
Represents a request for a sample from a MediaStreamSource.
MFMediaStreamSourceSampleRequest is implemented by the Windows.Media.Core.MediaStreamSourceSampleRequest runtime class.
Sets the sample for the media stream source.
Sets the sample for the media stream source.
The sample for the media stream source.
If this method succeeds, it returns
Represents a list of time ranges, where each range is defined by a start and end time.
The
Several
Gets the number of time ranges contained in the object.
This method corresponds to the TimeRanges.length attribute in HTML5.
Gets the number of time ranges contained in the object.
Returns the number of time ranges.
This method corresponds to the TimeRanges.length attribute in HTML5.
Gets the start time for a specified time range.
The zero-based index of the time range to query. To get the number of time ranges, call
Receives the start time, in seconds.
If this method succeeds, it returns
This method corresponds to the TimeRanges.start method in HTML5.
Gets the end time for a specified time range.
The zero-based index of the time range to query. To get the number of time ranges, call
Receives the end time, in seconds.
If this method succeeds, it returns
This method corresponds to the TimeRanges.end method in HTML5.
Queries whether a specified time falls within any of the time ranges.
The time, in seconds.
Returns TRUE if any time range contained in this object spans the value of the time parameter. Otherwise, returns
This method returns TRUE if the following condition holds for any time range in the list:
Adds a new range to the list of time ranges.
The start time, in seconds.
The end time, in seconds.
If this method succeeds, it returns
If the new range intersects a range already in the list, the two ranges are combined. Otherwise, the new range is added to the list.
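The combine-on-intersection rule can be modeled with ordinary intervals. This portable sketch is a model of the behavior, not the COM interface itself:

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

struct Range { double start, end; };

// Model of the AddRange merge rule: a new range that intersects an
// existing range is combined with it; otherwise it is appended.
void AddRange(std::vector<Range> &ranges, double start, double end)
{
    for (auto &r : ranges)
    {
        if (start <= r.end && end >= r.start)  // intervals intersect
        {
            r.start = std::min(r.start, start);
            r.end   = std::max(r.end, end);
            return;
        }
    }
    ranges.push_back({start, end});
}
```

For example, adding (0, 5), (10, 15), and then (4, 6) leaves two ranges, because (4, 6) is folded into (0, 5) to give (0, 6).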
Clears the list of time ranges.
If this method succeeds, it returns
Represents a description of a media format.
To create a new media type, call
All of the information in a media type is stored as attributes. To clone a media type, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
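Because all of a media type's information lives in its attribute store, cloning reduces to copying attributes. A sketch, with release/error handling abbreviated:

```cpp
// Sketch: create a media type, set an attribute, and clone it
// by copying all attributes into a second media type.
IMFMediaType *pType = NULL, *pClone = NULL;

HRESULT hr = MFCreateMediaType(&pType);
if (SUCCEEDED(hr))
    hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
if (SUCCEEDED(hr))
    hr = MFCreateMediaType(&pClone);
if (SUCCEEDED(hr))
    hr = pType->CopyAllItems(pClone);  // pClone is now a copy
```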
Gets the major type of the format.
This method is equivalent to getting the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the media type is a temporally compressed format. Temporal compression uses information from previously decoded samples when decompressing the current sample.
This method returns
If the method returns TRUE in pfCompressed, it is a hint that the format has temporal compression applied to it. If the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major type of the format.
Receives the major type
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The major type is not set. |
?
This method is equivalent to getting the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the media type is a temporally compressed format. Temporal compression uses information from previously decoded samples when decompressing the current sample.
Receives a Boolean value. The value is TRUE if the format uses temporal compression, or
If this method succeeds, it returns
This method returns
If the method returns TRUE in pfCompressed, it is a hint that the format has temporal compression applied to it. If the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Compares two media types and determines whether they are identical. If they are not identical, the method indicates how the two formats differ.
Pointer to the
Receives a bitwise OR of zero or more flags, indicating the degree of similarity between the two media types. The following flags are defined.
Value | Meaning |
---|---|
| The major types are the same. The major type is specified by the |
| The subtypes are the same, or neither media type has a subtype. The subtype is specified by the |
| The attributes in one of the media types are a subset of the attributes in the other, and the values of these attributes match, excluding the value of the Specifically, the method takes the media type with the smaller number of attributes and checks whether each attribute from that type is present in the other media type and has the same value (not including To perform other comparisons, use the |
| The user data is identical, or neither media type contains user data. User data is specified by the |
?
The method returns an
Return code | Description |
---|---|
| The types are not equal. Examine the pdwFlags parameter to determine how the types differ. |
| The types are equal. |
| One or both media types are invalid. |
?
Both of the media types must have a major type, or the method returns E_INVALIDARG.
If the method succeeds and all of the comparison flags are set in pdwFlags, the return value is
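In code, the comparison result is interpreted with bit tests. The flag values below are restated from mfapi.h so the sketch is self-contained:

```cpp
#include <cstdint>

// Flag values as defined in mfapi.h, restated here for a
// self-contained sketch.
constexpr uint32_t MF_MEDIATYPE_EQUAL_MAJOR_TYPES      = 0x00000001;
constexpr uint32_t MF_MEDIATYPE_EQUAL_FORMAT_TYPES     = 0x00000002;
constexpr uint32_t MF_MEDIATYPE_EQUAL_FORMAT_DATA      = 0x00000004;
constexpr uint32_t MF_MEDIATYPE_EQUAL_FORMAT_USER_DATA = 0x00000008;

// A common negotiation check: accept a partial match as long as the
// major type and subtype both agree, even when the method returned
// a partial-match result.
bool SameMajorAndSubtype(uint32_t dwFlags) {
    const uint32_t needed =
        MF_MEDIATYPE_EQUAL_MAJOR_TYPES | MF_MEDIATYPE_EQUAL_FORMAT_TYPES;
    return (dwFlags & needed) == needed;
}
```

A partial match with both of these bits set is often sufficient during format negotiation, since the remaining differences are typically attribute details rather than a different format family.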
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an alternative representation of the media type. Currently only the DirectShow
Value | Meaning |
---|---|
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
?
Receives a reference to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The details of the media type do not match the requested representation. |
| The media type is not valid. |
| The media type does not support the requested representation. |
?
If you request a specific format structure in the guidRepresentation parameter, such as
You can also use the MFInitAMMediaTypeFromMFMediaType function to convert a Media Foundation media type into a DirectShow media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an alternative representation of the media type. Currently only the DirectShow
Value | Meaning |
---|---|
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
?
Receives a reference to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The details of the media type do not match the requested representation. |
| The media type is not valid. |
| The media type does not support the requested representation. |
?
If you request a specific format structure in the guidRepresentation parameter, such as
You can also use the MFInitAMMediaTypeFromMFMediaType function to convert a Media Foundation media type into a DirectShow media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The media type is created without any attributes.
Applies to: desktop apps | Metro style apps
Converts a Media Foundation audio media type to a
Receives the size of the
Contains a flag from the
If the wFormatTag member of the returned structure is
Gets and sets media types on an object, such as a media source or media sink.
This interface is exposed by media-type handlers.
If you are implementing a custom media source or media sink, you can create a simple media-type handler by calling
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of media types in the object's list of supported media types.
To get the supported media types, call
For a media source, the media type handler for each stream must contain at least one supported media type. For media sinks, the media type handler for each stream might contain zero media types. In that case, the application must provide the media type. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current media type of the object.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major media type of the object.
The major type identifies what kind of data is in the stream, such as audio or video. To get the specific details of the format, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the object supports a specified media type.
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support this media type. |
?
If the object supports the media type given in pMediaType, the method returns
The ppMediaType parameter is optional. If the method fails, the object might use ppMediaType to return a media type that the object does support, and which closely matches the one given in pMediaType. The method is not guaranteed to return a media type in ppMediaType. If no type is returned, this parameter receives a
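The optional-suggestion behavior can be sketched as follows. The handler type is hypothetical (the real method returns an HRESULT and an interface pointer), but the shape of the contract matches the description above: on failure the handler MAY propose a close match, and the caller must tolerate no suggestion at all.

```cpp
#include <optional>
#include <string>

// Hypothetical handler; string media types stand in for the real objects.
struct MockHandler {
    std::string supported = "video/raw";

    // Returns true (S_OK) if the requested type is supported. On failure,
    // optionally fills *suggestion with a type the handler does support.
    bool IsMediaTypeSupported(const std::string& requested,
                              std::optional<std::string>* suggestion) const {
        if (requested == supported) return true;
        if (suggestion) *suggestion = supported; // optional close match
        return false;                            // type not supported
    }
};
```

A caller that receives a suggestion can retry negotiation with the suggested type instead of failing outright.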
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of media types in the object's list of supported media types.
Receives the number of media types in the list.
If this method succeeds, it returns
To get the supported media types, call
For a media source, the media type handler for each stream must contain at least one supported media type. For media sinks, the media type handler for each stream might contain zero media types. In that case, the application must provide the media type. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type from the object's list of supported media types.
Zero-based index of the media type to retrieve. To get the number of media types in the list, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwIndex parameter is out of range. |
?
Media types are returned in the approximate order of preference. The list of supported types is not guaranteed to be complete. To test whether a particular media type is supported, call
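The enumeration pattern can be sketched against a hypothetical stand-in for the handler; the two methods mirror the count/by-index pair described above, with HRESULTs modeled as bool.

```cpp
#include <string>
#include <vector>

// Minimal stand-in for a media-type handler, so the enumeration
// pattern can be shown without the Windows SDK.
struct MockTypeHandler {
    std::vector<std::string> types; // stands in for the supported types

    bool GetMediaTypeCount(unsigned* count) const {
        *count = static_cast<unsigned>(types.size());
        return true;
    }
    bool GetMediaTypeByIndex(unsigned i, std::string* out) const {
        if (i >= types.size()) return false; // index out of range
        *out = types[i];
        return true;
    }
};

// Walk the list in preference order, stopping at the first type the
// caller accepts; returns an empty string if none is acceptable.
std::string PickFirstAcceptable(const MockTypeHandler& h,
                                bool (*accept)(const std::string&)) {
    unsigned count = 0;
    h.GetMediaTypeCount(&count);
    for (unsigned i = 0; i < count; ++i) {
        std::string t;
        if (h.GetMediaTypeByIndex(i, &t) && accept(t)) return t;
    }
    return {};
}
```

Because the list is ordered by approximate preference, scanning from index zero yields the handler's most-preferred acceptable type first.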
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the object's media type.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid request. |
?
For media sources, setting the media type means the source will generate data that conforms to that media type. For media sinks, setting the media type means the sink can receive data that conforms to that media type.
Any implementation of this method should check whether pMediaType differs from the object's current media type. If the types are identical, the method should return
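That guidance can be sketched as follows, with a hypothetical sink and string media types standing in for the real objects:

```cpp
#include <string>

// Hypothetical sink. SetCurrentMediaType compares against the current
// type first and succeeds immediately, doing no work, when they match.
struct MockSink {
    std::string currentType = "audio/pcm";
    bool typeChanged = false;

    bool SetCurrentMediaType(const std::string& newType) {
        if (newType == currentType) return true; // identical: succeed early
        if (newType != "audio/pcm" && newType != "audio/float")
            return false;                        // type not acceptable
        currentType = newType;
        typeChanged = true;                      // reconfigure only on change
        return true;
    }
};
```

Short-circuiting on an identical type avoids needlessly tearing down and rebuilding the object's internal state during renegotiation.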
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current media type of the object.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No media type is set. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major media type of the object.
Receives a
If this method succeeds, it returns
The major type identifies what kind of data is in the stream, such as audio or video. To get the specific details of the format, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type from the object's list of supported media types.
Zero-based index of the media type to retrieve. To get the number of media types in the list, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwIndex parameter is out of range. |
?
Media types are returned in the approximate order of preference. The list of supported types is not guaranteed to be complete. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Manages metadata for an object. Metadata is information that describes a media file, stream, or other content. Metadata consists of individual properties, where each property contains a descriptive name and a value. A property may be associated with a particular language.
To get this interface from a media source, use the
Gets a list of the languages in which metadata is available.
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
To set the current language, call
Gets a list of all the metadata property names on this object.
Sets the language for setting and retrieving metadata.
Pointer to a null-terminated string containing an RFC 1766-compliant language tag.
If this method succeeds, it returns
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
Gets the current language setting.
Receives a reference to a null-terminated string containing an RFC 1766-compliant language tag. The caller must release the string by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The metadata provider does not support multiple languages. |
| No language was set. |
?
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages."
The
Gets a list of the languages in which metadata is available.
A reference to a
The returned
If this method succeeds, it returns
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
To set the current language, call
Sets the value of a metadata property.
Pointer to a null-terminated string containing the name of the property.
Pointer to a
If this method succeeds, it returns
Gets the value of a metadata property.
A reference to a null-terminated string that contains the name of the property. To get the list of property names, call

Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The requested property was not found. |
?
Deletes a metadata property.
Pointer to a null-terminated string containing the name of the property.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The property was not found. |
?
For a media source, deleting a property from the metadata collection does not change the original content.
Gets a list of all the metadata property names on this object.
Pointer to a
If this method succeeds, it returns
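The metadata model above, named properties whose values can vary by language, can be sketched like this. Names and value types are illustrative; the real interface uses PROPVARIANT values and RFC 1766 language tags.

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical metadata store: property name -> (language tag -> value).
struct MockMetadata {
    std::string language = "en-us"; // current language (SetLanguage)
    std::map<std::string, std::map<std::string, std::string>> props;

    void SetProperty(const std::string& name, const std::string& value) {
        props[name][language] = value; // stored under the current language
    }
    std::optional<std::string> GetProperty(const std::string& name) const {
        auto it = props.find(name);
        if (it == props.end()) return std::nullopt;   // property not found
        auto jt = it->second.find(language);
        if (jt == it->second.end()) return std::nullopt;
        return jt->second;
    }
    std::vector<std::string> GetAllPropertyNames() const {
        std::vector<std::string> names;
        for (const auto& kv : props) names.push_back(kv.first);
        return names;
    }
};
```

Switching the current language changes which value a subsequent get or set refers to, without disturbing values stored under other languages.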
Gets metadata from a media source or other object.
If a media source supports this interface, it must expose the interface as a service. To get a reference to this interface from a media source, call
Use this interface to get a reference to the
Gets a collection of metadata, either for an entire presentation, or for one stream in the presentation.
Pointer to the
If this parameter is zero, the method retrieves metadata that applies to the entire presentation. Otherwise, this parameter specifies a stream identifier, and the method retrieves metadata for that stream. To get the stream identifier for a stream, call
Reserved. Must be zero.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No metadata is available for the requested stream or presentation. |
?
Contains data that is needed to implement the
Any custom implementation of the
Receives state-change notifications from the presentation clock.
To receive state-change notifications from the presentation clock, implement this interface and call
This interface must be implemented by:
Presentation time sources. The presentation clock uses this interface to request state changes from the time source.
Media sinks. Media sinks use this interface to get notifications when the presentation clock changes.
Other objects that need to be notified can implement this interface.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The specified substream index is invalid. Call GetStreamCount to get the number of substreams managed by the multiplexed media source. |
?
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
Retrieves the user name.
If the user name is not available, the method might succeed and set *pcbData to zero.
Sets the user name.
Pointer to a buffer that contains the user name. If fDataIsEncrypted is
Size of pbData, in bytes. If fDataIsEncrypted is
If TRUE, the user name is encrypted. Otherwise, the user name is not encrypted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sets the password.
Pointer to a buffer that contains the password. If fDataIsEncrypted is
Size of pbData, in bytes. If fDataIsEncrypted is
If TRUE, the password is encrypted. Otherwise, the password is not encrypted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the user name.
Pointer to a buffer that receives the user name. To find the required buffer size, set this parameter to
On input, specifies the size of the pbData buffer, in bytes. On output, receives the required buffer size. If fEncryptData is
If TRUE, the method returns an encrypted string. Otherwise, the method returns an unencrypted string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the user name is not available, the method might succeed and set *pcbData to zero.
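Retrieval follows the common two-call buffer pattern: query the required size with a null buffer, allocate, then call again to fill it. A portable sketch with a mock in place of the real method; the user-name value is invented for illustration, and HRESULTs are modeled as bool.

```cpp
#include <cstring>
#include <string>
#include <vector>

// Mock GetUser-style method: with a null buffer it reports the required
// size in *pcbData; otherwise it copies the data into the buffer.
bool MockGetUser(unsigned char* pbData, unsigned* pcbData) {
    static const char user[] = "contoso\\alice"; // illustrative value
    const unsigned needed = sizeof(user);        // includes the terminator
    if (pbData == nullptr || *pcbData < needed) {
        *pcbData = needed;        // report the required buffer size
        return pbData == nullptr; // size query succeeds; short buffer fails
    }
    std::memcpy(pbData, user, needed);
    *pcbData = needed;
    return true;
}

// The two-call pattern: query the size, allocate, then retrieve.
std::string FetchUserName() {
    unsigned cb = 0;
    MockGetUser(nullptr, &cb);    // first call: size only
    if (cb == 0) return {};       // name not available
    std::vector<unsigned char> buf(cb);
    MockGetUser(buf.data(), &cb); // second call: fill the buffer
    return std::string(reinterpret_cast<char*>(buf.data()));
}
```

The same pattern applies to the password and to other size-then-fetch methods in this section.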
Retrieves the password.
Pointer to a buffer that receives the password. To find the required buffer size, set this parameter to
On input, specifies the size of the pbData buffer, in bytes. On output, receives the required buffer size. If fEncryptData is
If TRUE, the method returns an encrypted string. Otherwise, the method returns an unencrypted string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the password is not available, the method might succeed and set *pcbData to zero.
Queries whether logged-on credentials should be used.
Receives a Boolean value. If logged-on credentials should be used, the value is TRUE. Otherwise, the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Gets credentials from the credential cache.
This interface is implemented by the credential cache object. Applications that implement the
Retrieves the credential object for the specified URL.
A null-terminated wide-character string containing the URL for which the credential is needed.
A null-terminated wide-character string containing the realm for the authentication.
Bitwise OR of zero or more flags from the
Receives a reference to the
Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Reports whether the credential object provided successfully passed the authentication challenge.
Pointer to the
TRUE if the credential object succeeded in the authentication challenge; otherwise,
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method is called by the network source into the credential manager.
Specifies how user credentials are stored.
Pointer to the
Bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If no flags are specified, the credentials are cached in memory. This method can be implemented by the credential manager and called by the network source.
Implemented by applications to provide user credentials for a network source.
To use this interface, implement it in your application. Then create a property store object and set the MFNETSOURCE_CREDENTIAL_MANAGER property. The value of the property is a reference to your application's
Media Foundation does not provide a default implementation of this interface. Applications that support authentication must implement this interface.
Begins an asynchronous request to retrieve the user's credentials.
Pointer to an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Completes an asynchronous request to retrieve the user's credentials.
Pointer to an
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Specifies whether the user's credentials succeeded in the authentication challenge. The network source calls this method to inform the application whether the user's credentials were authenticated.
Pointer to the
Boolean value. The value is TRUE if the credentials succeeded in the authentication challenge. Otherwise, the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Determines the proxy to use when connecting to a server. The network source uses this interface.
Applications can supply their own proxy locator by implementing the
To create the default proxy locator, call
Initializes the proxy locator object.
Null-terminated wide-character string containing the hostname of the destination server.
Null-terminated wide-character string containing the destination URL.
Reserved. Set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Determines the next proxy to use.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are no more proxy objects. |
?
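A caller typically loops, trying each candidate until the method reports that no more proxies are available. Sketched with a hypothetical locator (HRESULTs modeled as bool, proxy values invented for illustration):

```cpp
#include <string>
#include <vector>

// Hypothetical proxy locator: FindNextProxy walks a fixed list and
// fails once the list is exhausted ("no more proxy objects").
struct MockProxyLocator {
    std::vector<std::string> proxies{"proxy1:8080", "proxy2:8080"};
    size_t next = 0;

    bool FindNextProxy(std::string* out) {
        if (next >= proxies.size()) return false; // no more proxies
        *out = proxies[next++];
        return true;
    }
};

// Try each proxy in turn until one connects; empty string means none did.
std::string FirstWorkingProxy(MockProxyLocator& loc,
                              bool (*connect)(const std::string&)) {
    std::string proxy;
    while (loc.FindNextProxy(&proxy)) {
        if (connect(proxy)) return proxy;
    }
    return {};
}
```

In the real interface the caller would also report each success or failure back to the locator so it can keep a record of which proxy worked.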
Keeps a record of the success or failure of using the current proxy.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves the current proxy information including hostname and port.
Pointer to a buffer that receives a null-terminated string containing the proxy hostname and port. This parameter can be
On input, specifies the number of elements in the pszStr array. On output, receives the required size of the buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer specified in pszStr is too small. |
?
Creates a new instance of the default proxy locator.
Receives a reference to the new proxy locator object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Creates an
Creates an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Notifies the application when a byte stream requests a URL, and enables the application to block URL redirection.
To set the callback interface:
Called when the byte stream redirects to a URL.
The URL to which the connection has been redirected.
To cancel the redirection, set this parameter to VARIANT_TRUE. To allow the redirection, set this parameter to VARIANT_FALSE.
If this method succeeds, it returns
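A minimal sketch of such a redirect callback, modeling VARIANT_TRUE/VARIANT_FALSE as bool; the blocking policy (cancel any redirect that leaves HTTPS) is invented for illustration.

```cpp
#include <string>

// Hypothetical OnRedirect handler: inspect the redirect target and set
// *pfCancel to block the redirection, or leave it false to allow it.
bool OnRedirect(const std::string& url, bool* pfCancel) {
    // Cancel (block) any redirect whose target is not an https:// URL.
    *pfCancel = url.rfind("https://", 0) != 0;
    return true; // success
}
```

An application might use such a policy to prevent a secure stream from being silently redirected to an insecure endpoint.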
Called when the byte stream requests a URL.
The URL that the byte stream is requesting.
If this method succeeds, it returns
Retrieves the number of protocols supported by the network scheme plug-in.
Retrieves the number of protocols supported by the network scheme plug-in.
Retrieves the number of protocols supported by the network scheme plug-in.
Receives the number of protocols.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves a supported protocol by index.
Zero-based index of the protocol to retrieve. To get the number of supported protocols, call
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The value passed in the nProtocolIndex parameter was greater than the total number of supported protocols, returned by GetNumberOfSupportedProtocols. |
?
Not implemented in this release.
This method returns
Marshals an interface reference to and from a stream.
Stream objects that support
Stores the data needed to marshal an interface across a process boundary.
Interface identifier of the interface to marshal.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Marshals an interface from data stored in the stream.
Interface identifier of the interface to marshal.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Encapsulates a usage policy from an input trust authority (ITA). Output trust authorities (OTAs) use this interface to query which protection systems they are required to enforce by the ITA.
Retrieves a
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
Retrieves the minimum version of the global revocation list (GRL) that must be enforced by the protected environment for this policy.
Retrieves a list of the output protection systems that the output trust authority (OTA) must enforce, along with configuration data for each protection system.
Describes the output that is represented by the OTA calling this method. This value is a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| Hardware bus. |
| The output sends compressed data. If this flag is absent, the output sends uncompressed data. |
| Reserved. Do not use. |
| The output sends a digital signal. If this flag is absent, the output sends an analog signal. |
| Reserved. Do not use. |
| Reserved. Do not use. |
| The output sends video data. If this flag is absent, the output sends audio data. |
?
Indicates a specific family of output connectors that is represented by the OTA calling this method. Possible values include the following.
Value | Meaning |
---|---|
| AGP bus. |
| Component video. |
| Composite video. |
| Japanese D connector. (Connector conforming to the EIAJ RC-5237 standard.) |
| Embedded DisplayPort connector. |
| External DisplayPort connector. |
| Digital video interface (DVI) connector. |
| High-definition multimedia interface (HDMI) connector. |
| Low voltage differential signaling (LVDS) connector. A connector using the LVDS interface to connect internally to a display device. The connection between the graphics adapter and the display device is permanent and not accessible to the user. Applications should not enable High-Bandwidth Digital Content Protection (HDCP) for this connector. |
| PCI bus. |
| PCI Express bus. |
| PCI-X bus. |
| Audio data sent over a connector via S/PDIF. |
| Serial digital interface connector. |
| S-Video connector. |
| Embedded Unified Display Interface (UDI). |
| External UDI. |
| Unknown connector type. See Remarks. |
| VGA connector. |
| Miracast wireless connector. Supported in Windows 8.1 and later. |
?
Pointer to an array of
Number of elements in the rgGuidProtectionSchemasSupported array.
Receives a reference to the
If this method succeeds, it returns
The video OTA returns the MFCONNECTOR_UNKNOWN connector type unless the Direct3D device is in full-screen mode. (Direct3D windowed mode is not generally a secure video mode.) You can override this behavior by implementing a custom EVR presenter that implements the
Retrieves a
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
Retrieves the minimum version of the global revocation list (GRL) that must be enforced by the protected environment for this policy.
Receives the minimum GRL version.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Encapsulates information about an output protection system and its corresponding configuration data.
If the configuration information for the output protection system does not require more than a DWORD of space, the configuration information is retrieved in the GetConfigurationData method. If more than a DWORD of configuration information is needed, it is stored using the
Retrieves the output protection system that is represented by this object. Output protection systems are identified by
Returns configuration data for the output protection system. The configuration data is used to enable or disable the protection system, and to set the protection levels.
Retrieves a
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
Retrieves the output protection system that is represented by this object. Output protection systems are identified by
Receives the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Returns configuration data for the output protection system. The configuration data is used to enable or disable the protection system, and to set the protection levels.
Receives the configuration data. The meaning of this data depends on the output protection system.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Retrieves a
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
Encapsulates the functionality of one or more output protection systems that a trusted output supports. This interface is exposed by output trust authority (OTA) objects. Each OTA represents a single action that the trusted output can perform, such as play, copy, or transcode. An OTA can represent more than one physical output if each output performs the same action.
Retrieves the action that is performed by this output trust authority (OTA).
Retrieves the action that is performed by this output trust authority (OTA).
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Sets one or more policy objects on the output trust authority (OTA).
The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
?
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Sets one or more policy objects on the output trust authority (OTA).
The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
?
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Sets one or more policy objects on the output trust authority (OTA).
The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Controls how media sources and transforms are enumerated in Microsoft Media Foundation.
To get a reference to this interface, call
Media Foundation provides a set of built-in media sources and decoders. Applications can enumerate them as follows:
Applications might also enumerate these objects indirectly. For example, if an application uses the topology loader to resolve a partial topology, the topology loader calls
Third parties can implement their own custom media sources and decoders, and register them for enumeration so that other applications can use them.
To control the enumeration order, Media Foundation maintains two process-wide lists of CLSIDs: a preferred list and a blocked list. An object whose CLSID appears in the preferred list appears first in the enumeration order. An object whose CLSID appears on the blocked list is not enumerated.
The lists are initially populated from the registry. Applications can use the
The preferred list contains a set of key/value pairs, where the keys are strings and the values are CLSIDs. These key/value pairs are defined as follows:
The following examples show the various types of key:
To search the preferred list by key name, call the
The blocked list contains a list of CLSIDs. To enumerate the entire list, call the
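As a rough sketch (assuming the standard mfapi.h and mfobjects.h declarations), an application might block a specific decoder for its own process as follows; the CLSID passed in stands for whatever decoder the application wants to suppress:

```cpp
#include <windows.h>
#include <mfapi.h>
#include <mfobjects.h>
#pragma comment(lib, "mfplat.lib")

// Hypothetical helper: add a decoder MFT's CLSID to this
// process's blocked list so it is never enumerated.
HRESULT BlockDecoder(REFCLSID clsidDecoder)
{
    IMFPluginControl *pPluginControl = NULL;

    // Get the process-wide plug-in control object.
    HRESULT hr = MFGetPluginControl(&pPluginControl);
    if (SUCCEEDED(hr))
    {
        // TRUE adds the CLSID to the blocked list; FALSE would remove it.
        hr = pPluginControl->SetDisabled(MF_Plugin_Type_MFT,
                                         clsidDecoder, TRUE);
        pPluginControl->Release();
    }
    return hr;
}
```

Because the lists are per-process, this call has no effect on other applications; only registry entries change the machine-wide defaults.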
Searches the preferred list for a class identifier (CLSID) that matches a specified key name.
Member of the
The key name to match. For more information about the format of key names, see the Remarks section of
Receives a CLSID from the preferred list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| No CLSID matching this key was found. |
Gets a class identifier (CLSID) from the preferred list, specified by index value.
Member of the
The zero-based index of the CLSID to retrieve.
Receives the key name associated with the CLSID. The caller must free the memory for the returned string by calling the CoTaskMemFree function. For more information about the format of key names, see the Remarks section of
Receives the CLSID at the specified index.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The index parameter is out of range. |
Adds a class identifier (CLSID) to the preferred list or removes a CLSID from the list.
Member of the
The key name for the CLSID. For more information about the format of key names, see the Remarks section of
The CLSID to add to the list. If this parameter is
If this method succeeds, it returns
The preferred list is global to the caller's process. Calling this method does not affect the list in other processes.
Queries whether a class identifier (CLSID) appears in the blocked list.
Member of the
The CLSID to search for.
The method returns an
Return code | Description |
---|---|
| The specified CLSID appears in the blocked list. |
| Invalid argument. |
| The specified CLSID is not in the blocked list. |
Gets a class identifier (CLSID) from the blocked list.
Member of the
The zero-based index of the CLSID to retrieve.
Receives the CLSID at the specified index.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The index parameter is out of range. |
Adds a class identifier (CLSID) to the blocked list, or removes a CLSID from the list.
Member of the
The CLSID to add or remove.
Specifies whether to add or remove the CLSID. If the value is TRUE, the method adds the CLSID to the blocked list. Otherwise, the method removes it from the list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
The blocked list is global to the caller's process. Calling this method does not affect the list in other processes.
Controls how media sources and transforms are enumerated in Microsoft Media Foundation.
This interface extends the
To get a reference to this interface, call
Sets the policy for which media sources and transforms are enumerated.
Sets the policy for which media sources and transforms are enumerated.
A value from the
If this method succeeds, it returns
Note: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Represents a media item. A media item is an abstraction for a source of media data, such as a video file. Use this interface to get information about the source, or to change certain playback settings, such as the start and stop times. To get a reference to this interface, call one of the following methods:
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the MFPlay player object that created the media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the object that was used to create the media item.
The object reference is set if the application uses
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the application-defined value stored in the media item.
You can assign this value when you first create the media item, by specifying it in the dwUserData parameter of the
This method can be called after the player object is shut down.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains protected content.
Note: Currently, MFPlay does not support protected content.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the number of streams (audio, video, and other) in the media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets various flags that describe the media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a property store that contains metadata for the source, such as author or title.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the MFPlay player object that created the media item.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the URL that was used to create the media item.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No URL is associated with this media item. |
| The |
This method applies when the application calls
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the object that was used to create the media item.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media item was created from a URL, not from an object. |
| The |
The object reference is set if the application uses
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the application-defined value stored in the media item.
If this method succeeds, it returns
You can assign this value when you first create the media item, by specifying it in the dwUserData parameter of the
This method can be called after the player object is shut down.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Stores an application-defined value in the media item.
This method can return one of these values.
This method can be called after the player object is shut down.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the start and stop times for the media item.
If this method succeeds, it returns
The pguidStartPositionType and pguidStopPositionType parameters receive the units of time that are used. Currently, the only supported value is MFP_POSITIONTYPE_100NS.
Value | Description |
---|---|
MFP_POSITIONTYPE_100NS | 100-nanosecond units. The time parameter (pvStartValue or pvStopValue) is a PROPVARIANT of type VT_I8. |
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the start and stop time for the media item.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid start or stop time. |
By default, a media item plays from the beginning to the end of the file. This method adjusts the start time and/or the stop time:
The pguidStartPositionType and pguidStopPositionType parameters give the units of time that are used. Currently, the only supported value is MFP_POSITIONTYPE_100NS.
Value | Description |
---|---|
MFP_POSITIONTYPE_100NS | 100-nanosecond units. The time parameter (pvStartValue or pvStopValue) is a PROPVARIANT of type VT_I8. To clear a previously set time, use an empty PROPVARIANT (VT_EMPTY). |
The adjusted start and stop times are used the next time that
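A minimal sketch of trimming a media item's playback range, assuming the item was already created through MFPlay; SetPlaybackRange is a hypothetical helper name, and the 2-second and 10-second values are arbitrary:

```cpp
#include <mfplay.h>
#include <propvarutil.h>
#pragma comment(lib, "propsys.lib")

// Hypothetical helper: restrict playback to the range [2 s, 10 s].
HRESULT SetPlaybackRange(IMFPMediaItem *pItem)
{
    PROPVARIANT varStart, varStop;

    // With MFP_POSITIONTYPE_100NS, times are VT_I8 values in
    // 100-nanosecond units (1 second = 10,000,000 units).
    InitPropVariantFromInt64(20000000, &varStart);    // 2 seconds
    InitPropVariantFromInt64(100000000, &varStop);    // 10 seconds

    HRESULT hr = pItem->SetStartStopPosition(
        &MFP_POSITIONTYPE_100NS, &varStart,
        &MFP_POSITIONTYPE_100NS, &varStop);

    PropVariantClear(&varStart);
    PropVariantClear(&varStop);
    return hr;
}
```

The trimmed range takes effect the next time the item is set on the player, not during playback of the current item.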
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains a video stream.
If this method succeeds, it returns
To select or deselect streams before playback starts, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains an audio stream.
If this method succeeds, it returns
To select or deselect streams before playback starts, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains protected content.
Note: Currently, MFPlay does not support protected content.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the duration of the media item.
If this method succeeds, it returns
The method returns the total duration of the content, regardless of any values set through
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the number of streams (audio, video, and other) in the media item.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether a stream is selected to play.
If this method succeeds, it returns
To select or deselect a stream, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Selects or deselects a stream.
If this method succeeds, it returns
You can use this method to change which streams are selected. The change goes into effect the next time that
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries the media item for a stream attribute.
If this method succeeds, it returns
Stream attributes describe an individual stream (audio, video, or other) within the presentation. To get an attribute that applies to the entire presentation, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries the media item for a presentation attribute.
If this method succeeds, it returns
Presentation attributes describe the presentation as a whole. To get an attribute that applies to an individual stream within the presentation, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets various flags that describe the media item.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets a media sink for the media item. A media sink is an object that consumes the data from one or more streams.
If this method succeeds, it returns
By default, the MFPlay player object renders audio streams to the Streaming Audio Renderer (SAR) and video streams to the Enhanced Video Renderer (EVR). You can use the SetStreamSink method to provide a different media sink for an audio or video stream, or to support other stream types besides audio and video. You can also use it to configure the SAR or EVR before they are used.
Call this method before calling
To reset the media item to use the default media sink, set pMediaSink to
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a property store that contains metadata for the source, such as author or title.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains methods to play media files.
The MFPlay player object exposes this interface. To get a reference to this interface, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback rate.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback state of the MFPlay player object.
This method can be called after the player object has been shut down.
Many of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the current media item.
The
The previous remark also applies to setting the media item in the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio volume.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio balance.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the audio is muted.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the video source rectangle.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current aspect-ratio correction mode. This mode controls whether the aspect ratio of the video is preserved during playback.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the window where the video is displayed.
The video window is specified when you first call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current color of the video border. The border color is used to letterbox the video.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Starts playback.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Pauses playback. While playback is paused, the most recent video frame is displayed, and audio is silent.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Stops playback.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's
The current media item is still valid. After playback stops, the playback position resets to the beginning of the current media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Steps forward one video frame.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Cannot frame step. |
| The object's Shutdown method was called. |
| The media source does not support frame stepping, or the current playback rate is negative. |
This method completes asynchronously. When the operation completes, the application's
The player object does not support frame stepping during reverse playback (that is, while the playback rate is negative).
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the playback position.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The value of pvPositionValue is not valid. |
| No media item has been queued. |
| The object's Shutdown method was called. |
If you call this method while playback is stopped, the new position takes effect after playback resumes.
This method completes asynchronously. When the operation completes, the application's
If playback was started before SetPosition is called, playback resumes at the new position. If playback was paused, the video is refreshed to display the current frame at the new position.
If you make two consecutive calls to SetPosition with guidPositionType equal to MFP_POSITIONTYPE_100NS, and the second call is made before the first call has completed, the second call supersedes the first. The status code for the superseded call is set to S_FALSE in the event data for that call. This behavior prevents excessive latency from repeated calls to SetPosition, as each call may force the media source to perform a relatively lengthy seek operation.
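The seek pattern described above can be sketched as follows; SeekTo is a hypothetical helper, and the conversion assumes the MFP_POSITIONTYPE_100NS unit of 100 nanoseconds:

```cpp
#include <mfplay.h>
#include <propvarutil.h>
#pragma comment(lib, "propsys.lib")

// Hypothetical helper: seek to a position given in seconds.
HRESULT SeekTo(IMFPMediaPlayer *pPlayer, double seconds)
{
    PROPVARIANT var;

    // MFP_POSITIONTYPE_100NS expects a VT_I8 value in 100-ns units.
    InitPropVariantFromInt64((LONGLONG)(seconds * 10000000.0), &var);

    // Completes asynchronously; the result is delivered to the
    // application's MFPlay callback when the seek finishes.
    HRESULT hr = pPlayer->SetPosition(MFP_POSITIONTYPE_100NS, &var);

    PropVariantClear(&var);
    return hr;
}
```

Because a rapid second call supersedes a pending first one, a scrub bar can call this helper on every mouse move without queuing a backlog of seeks.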
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback position.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| No media item has been queued. |
| The object's Shutdown method was called. |
The playback position is calculated relative to the start time of the media item, which can be specified by calling
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the playback duration of the current media item.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The media source does not have a duration. This error can occur with a live source, such as a video camera. |
| There is no current media item. |
This method calculates the playback duration, taking into account the start and stop times for the media item. To set the start and stop times, call
For example, suppose that you load a 30-second audio file and set the start time equal to 2 seconds and stop time equal to 10 seconds. The
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the playback rate.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The flRate parameter is zero. |
| The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's
The method sets the nearest supported rate, which will depend on the underlying media source. For example, if flRate is 50 and the source's maximum rate is 8× normal rate, the method will set the rate to 8.0. The actual rate is indicated in the event data for the
To find the range of supported rates, call
This method does not support playback rates of zero, although Media Foundation defines a meaning for zero rates in some other contexts.
The new rate applies only to the current media item. Setting a new media item resets the playback rate to 1.0.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback rate.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the range of supported playback rates.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not support playback in the requested direction (either forward or reverse). |
Playback rates are expressed as a ratio of the current rate to the normal rate. For example, 1.0 indicates normal playback speed, 0.5 indicates half speed, and 2.0 indicates twice normal speed. Positive values indicate forward playback, and negative values indicate reverse playback.
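Combining GetSupportedRates with SetRate, an application might clamp a requested rate to the supported forward range; SetClampedRate is a hypothetical helper name:

```cpp
#include <mfplay.h>

// Hypothetical helper: request a playback rate, clamped to the
// range the current media item supports in the forward direction.
HRESULT SetClampedRate(IMFPMediaPlayer *pPlayer, float flRate)
{
    float flSlowest = 0.0f, flFastest = 0.0f;

    // TRUE queries the forward-playback range.
    HRESULT hr = pPlayer->GetSupportedRates(TRUE, &flSlowest, &flFastest);
    if (SUCCEEDED(hr))
    {
        if (flRate < flSlowest) flRate = flSlowest;
        if (flRate > flFastest) flRate = flFastest;

        // Completes asynchronously; the actual rate that was set is
        // reported in the event data delivered to the callback.
        hr = pPlayer->SetRate(flRate);
    }
    return hr;
}
```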
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback state of the MFPlay player object.
If this method succeeds, it returns
This method can be called after the player object has been shut down.
Many of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a media item from a URL.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid request. This error can occur when fSync is |
| The object's Shutdown method was called. |
| Unsupported protocol. |
This method does not queue the media item for playback. To queue the item for playback, call
The CreateMediaItemFromURL method can be called either synchronously or asynchronously:
The callback interface is set when you first call
If you make multiple asynchronous calls to CreateMediaItemFromURL, they are not guaranteed to complete in the same order. Use the dwUserData parameter to match created media items with pending requests.
Currently, this method returns
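The synchronous path can be sketched as follows; QueueFromUrl is a hypothetical helper, and it assumes the player was created earlier with MFPCreateMediaPlayer and an application callback:

```cpp
#include <mfplay.h>

// Hypothetical helper: create a media item from a URL and queue it.
HRESULT QueueFromUrl(IMFPMediaPlayer *pPlayer, PCWSTR pwszUrl)
{
    IMFPMediaItem *pItem = NULL;

    // fSync = TRUE: the call blocks until the item is created.
    // The dwUserData value (0 here) matters mainly for async calls.
    HRESULT hr = pPlayer->CreateMediaItemFromURL(pwszUrl, TRUE, 0, &pItem);
    if (SUCCEEDED(hr))
    {
        // Creating the item does not queue it; SetMediaItem does.
        hr = pPlayer->SetMediaItem(pItem);
        pItem->Release();
    }
    return hr;
}
```

For the asynchronous variant, pass FALSE for fSync and receive the created item in the callback, using dwUserData to match each item to its originating request.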
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a media item from an object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid request. This error can occur when fSync is |
| The object's Shutdown method was called. |
The pIUnknownObj parameter must specify one of the following:
This method does not queue the media item for playback. To queue the item for playback, call
The CreateMediaItemFromObject method can be called either synchronously or asynchronously:
The callback interface is set when you first call
If you make multiple asynchronous calls to CreateMediaItemFromObject, they are not guaranteed to complete in the same order. Use the dwUserData parameter to match created media items with pending requests.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queues a media item for playback.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The media item contains protected content. MFPlay currently does not support protected content. |
| No audio playback device was found. This error can occur if the media source contains audio, but no audio playback devices are available on the system. |
| The object's Shutdown method was called. |
This method completes asynchronously. When the operation completes, the application's
To create a media item, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Clears the current media item.
Note: This method is currently not implemented.
If this method succeeds, it returns
This method stops playback and releases the player object's references to the current media item.
This method completes asynchronously. When the operation completes, the application's
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the current media item.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no current media item. |
| There is no current media item. |
| The object's Shutdown method was called. |
The
The previous remark also applies to setting the media item in the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio volume.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the audio volume.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The flVolume parameter is invalid. |
If you call this method before playback starts, the setting is applied after playback starts.
This method does not change the master volume level for the player's audio session. Instead, it adjusts the per-channel volume levels for the audio streams that belong to the current media item. Other streams in the audio session are not affected. For more information, see Managing the Audio Session.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio balance.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the audio balance.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The flBalance parameter is invalid. |
If you call this method before playback starts, the setting is applied when playback starts.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the audio is muted.
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Mutes or unmutes the audio.
If this method succeeds, it returns
If you call this method before playback starts, the setting is applied after playback starts.
This method does not mute the entire audio session to which the player belongs. It mutes only the streams from the current media item. Other streams in the audio session are not affected. For more information, see Managing the Audio Session.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the size and aspect ratio of the video. These values are computed before any scaling is done to fit the video into the destination window.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
At least one parameter must be non-
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the range of video sizes that can be displayed without significantly degrading performance or image quality.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
At least one parameter must be non-
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the video source rectangle.
MFPlay clips the video to this rectangle and stretches the rectangle to fill the video window.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
MFPlay stretches the source rectangle to fill the entire video window. By default, MFPlay maintains the source's correct aspect ratio, letterboxing if needed. The letterbox color is controlled by the
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the video position before playback starts, call this method inside your event handler for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the video source rectangle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies whether the aspect ratio of the video is preserved during playback.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the aspect-ratio mode before playback starts, call this method inside your event handler for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current aspect-ratio correction mode. This mode controls whether the aspect ratio of the video is preserved during playback.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the window where the video is displayed.
If this method succeeds, it returns
The video window is specified when you first call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Updates the video frame.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
Call this method when your application's video playback window receives either a WM_PAINT or WM_SIZE message. This method performs two functions:
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the color for the video border. The border color is used to letterbox the video.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the border color before playback starts, call this method inside your event handler for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current color of the video border. The border color is used to letterbox the video.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The current media item does not contain video. |
| The object's Shutdown method was called. |
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Applies an audio or video effect to playback.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This effect was already added. |
The object specified in the pEffect parameter can implement either a video effect or an audio effect. The effect is applied to any media items set after the method is called. It is not applied to the current media item.
For each media item, the effect is applied to the first selected stream of the matching type (audio or video). If a media item has two selected streams of the same type, the second stream does not receive the effect. The effect is ignored if the media item does not contain a stream that matches the effect type. For example, if you set a video effect and play a file that contains just audio, the video effect is ignored, although no error is raised.
The effect is applied to all subsequent media items, until the application removes the effect. To remove an effect, call
If you set multiple effects of the same type (audio or video), they are applied in the same order in which you call InsertEffect.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Removes an effect that was added with the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The effect was not found. |
The change applies to the next media item that is set on the player. The effect is not removed from the current media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Removes all effects that were added with the
If this method succeeds, it returns
The change applies to the next media item that is set on the player. The effects are not removed from the current media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Shuts down the MFPlay player object and releases any resources the object is using.
If this method succeeds, it returns
After this method is called, most
The player object automatically shuts itself down when its reference count reaches zero. You can use the Shutdown method to shut down the player before all of the references have been released.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Callback interface for the
To set the callback, pass an
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Called by the MFPlay player object to notify the application of a playback event.
The specific type of playback event is given in the eEventType member of the
It is safe to call
Enables a media source to receive a reference to the
If a media source exposes this interface, the Protected Media Path (PMP) Media Session calls SetPMPHost with a reference to the
Provides a reference to the
The
Provides a reference to the
If this method succeeds, it returns
The
Provides a mechanism for a media source to implement content protection functionality in Windows Store apps.
When to implement: A media source implements
Sets a reference to the
Sets a reference to the
If this method succeeds, it returns
Enables a media source in the application process to create objects in the protected media path (PMP) process.
This interface is used when a media source resides in the application process but the Media Session resides in a PMP process. The media source can use this interface to create objects in the PMP process. For example, to play DRM-protected content, the media source typically must create an input trust authority (ITA) in the PMP process.
To use this interface, the media source implements the
You can also get a reference to this interface by calling
Blocks the protected media path (PMP) process from ending.
If this method succeeds, it returns
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to
If this method succeeds, it returns
Creates an object in the protected media path (PMP) process, from a CLSID.
The CLSID of the object to create.
A reference to the
The interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
You can use the pStream parameter to initialize the object after it is created.
Allows a media source to create a Windows Runtime object in the Protected Media Path (PMP) process.
Blocks the protected media path (PMP) process from ending.
If this method succeeds, it returns
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to
If this method succeeds, it returns
Creates a Windows Runtime object in the protected media path (PMP) process.
Id of object to create.
Data to be passed to the object by way of a IPersistStream.
The interface identifier (IID) of the interface to retrieve.
Receives a reference to the created object.
If this method succeeds, it returns
Enables two instances of the Media Session to share the same protected media path (PMP) process.
If your application creates more than one instance of the Media Session, you can use this interface to share the same PMP process among several instances. This can be more efficient than re-creating the PMP process each time.
Use this interface as follows:
Blocks the protected media path (PMP) process from ending.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates an object in the protected media path (PMP) process.
CLSID of the object to create.
Interface identifier of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Represents a presentation clock, which is used to schedule when samples are rendered and to synchronize multiple streams.
To create a new instance of the presentation clock, call the
To get the presentation clock from the Media Session, call
Retrieves the clock's presentation time source.
Retrieves the latest clock time.
This method does not attempt to smooth out jitter or otherwise account for any inaccuracies in the clock time.
Sets the time source for the presentation clock. The time source is the object that drives the clock by providing the current time.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The time source does not have a frequency of 10 MHz. |
| The time source has not been initialized. |
The presentation clock cannot start until it has a time source.
The time source is automatically registered to receive state change notifications from the clock, through the time source's
The time source must have a frequency of 10 MHz. See
Retrieves the clock's presentation time source.
Receives a reference to the time source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
Retrieves the latest clock time.
Receives the latest clock time, in 100-nanosecond units. The time is relative to when the clock was last started.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The clock does not have a presentation time source. Call |
This method does not attempt to smooth out jitter or otherwise account for any inaccuracies in the clock time.
Registers an object to be notified whenever the clock starts, stops, pauses, or changes rate.
Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before releasing the object, call
Unregisters an object that is receiving state-change notifications from the clock.
Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Starts the presentation clock.
Initial starting time, in 100-nanosecond units. At the time the Start method is called, the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
This method is valid in all states (stopped, paused, or running).
If the clock is paused and restarted from the same position (llClockStartOffset is PRESENTATION_CURRENT_POSITION), the presentation clock sends an
The presentation clock initiates the state change by calling OnClockStart or OnClockRestart on the clock's time source. This call is made synchronously. If it fails, the state change does not occur. If the call succeeds, the state changes, and the clock notifies the other state-change subscribers by calling their OnClockStart or OnClockRestart methods. These calls are made asynchronously.
If the clock is already running, calling Start again has the effect of seeking the clock to the new StartOffset position.
Stops the presentation clock. While the clock is stopped, the clock time does not advance, and the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
| The clock is already stopped. |
This method is valid when the clock is running or paused.
The presentation clock initiates the state change by calling
Pauses the presentation clock. While the clock is paused, the clock time does not advance, and the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time source was set on this clock. |
| The clock is already paused. |
| The clock is stopped. This request is not valid when the clock is stopped. |
This method is valid when the clock is running. It is not valid when the clock is paused or stopped.
The presentation clock initiates the state change by calling
Describes the details of a presentation. A presentation is a set of related media streams that share a common presentation time.
Presentation descriptors are used to configure media sources and some media sinks. To get the presentation descriptor from a media source, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of stream descriptors in the presentation. Each stream descriptor contains information about one stream in the media source. To retrieve a stream descriptor, call the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of stream descriptors in the presentation. Each stream descriptor contains information about one stream in the media source. To retrieve a stream descriptor, call the
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a stream descriptor for a stream in the presentation. The stream descriptor contains information about the stream.
Zero-based index of the stream. To find the number of streams in the presentation, call the
Receives a Boolean value. The value is TRUE if the stream is currently selected, or FALSE otherwise.
Receives a reference to the stream descriptor's
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Selects a stream in the presentation.
The stream number to select, indexed from zero. To find the number of streams in the presentation, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| dwDescriptorIndex is out of range. |
If a stream is selected, the media source will generate data for that stream. The media source will not generate data for deselected streams. To deselect a stream, call
To query whether a stream is selected, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Deselects a stream in the presentation.
The stream number to deselect, indexed from zero. To find the number of streams in the presentation, call the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| dwDescriptorIndex is out of range. |
If a stream is deselected, no data is generated for that stream. To select the stream again, call
To query whether a stream is selected, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a copy of this presentation descriptor.
Receives a reference to the
If this method succeeds, it returns
This method performs a shallow copy of the presentation descriptor. The stream descriptors are not cloned. Therefore, use caution when modifying the presentation descriptor or its stream descriptors.
If the original presentation descriptor is from a media source, do not modify the presentation descriptor unless the source is stopped. If you use the presentation descriptor to configure a media sink, do not modify the presentation descriptor after the sink is configured.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a stream descriptor for a stream in the presentation. The stream descriptor contains information about the stream.
Zero-based index of the stream. To find the number of streams in the presentation, call the
Receives a Boolean value. The value is TRUE if the stream is currently selected, or FALSE otherwise.
Receives a reference to the stream descriptor's
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides the clock times for the presentation clock.
This interface is implemented by presentation time sources. A presentation time source is an object that provides the clock time for the presentation clock. For example, the audio renderer is a presentation time source. The rate at which the audio renderer consumes audio samples determines the clock time. If the audio format is 44100 samples per second, the audio renderer will report that one second has passed for every 44100 audio samples it plays. In this case, the timing is provided by the sound card.
To set the presentation time source on the presentation clock, call
A presentation time source must also implement the
Media Foundation provides a presentation time source that is based on the system clock. To create this object, call the
Retrieves the underlying clock that the presentation time source uses to generate its clock times.
A presentation time source must support stopping, starting, pausing, and rate changes. However, in many cases the time source derives its clock times from a hardware clock or other device. The underlying clock is always running, and might not support rate changes.
Optionally, a time source can expose the underlying clock by implementing this method. The underlying clock is always running, even when the presentation time source is paused or stopped. (Therefore, the underlying clock returns the
The underlying clock is useful if you want to make decisions based on the clock times while the presentation clock is stopped or paused.
If the time source does not expose an underlying clock, the method returns
Retrieves the underlying clock that the presentation time source uses to generate its clock times.
Receives a reference to the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This time source does not expose an underlying clock. |
A presentation time source must support stopping, starting, pausing, and rate changes. However, in many cases the time source derives its clock times from a hardware clock or other device. The underlying clock is always running, and might not support rate changes.
Optionally, a time source can expose the underlying clock by implementing this method. The underlying clock is always running, even when the presentation time source is paused or stopped. (Therefore, the underlying clock returns the
The underlying clock is useful if you want to make decisions based on the clock times while the presentation clock is stopped or paused.
If the time source does not expose an underlying clock, the method returns
Provides a method that allows content protection systems to perform a handshake with the protected environment. This is needed because the CreateFile and DeviceIoControl APIs are not available to Windows Store apps.
See
Allows content protection systems to access the protected environment.
The length in bytes of the input data.
A reference to the input data.
The length in bytes of the output data.
A reference to the output data.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
See
Gets the Global Revocation List (GRL).
The length of the data returned in output.
Receives the contents of the global revocation list file.
If this method succeeds, it returns
Allows reading of the system Global Revocation List (GRL).
Enables the quality manager to adjust the audio or video quality of a component in the pipeline.
This interface is exposed by pipeline components that can adjust their quality. Typically it is exposed by decoders and stream sinks. For example, the enhanced video renderer (EVR) implements this interface. However, media sources can also implement this interface.
To get a reference to this interface from a media source, call
The quality manager typically obtains this interface when the quality manager's
Retrieves the current drop mode.
Retrieves the current quality level.
Sets the drop mode. In drop mode, a component drops samples, more or less aggressively depending on the level of the drop mode.
Requested drop mode, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The component does not support the specified mode or any higher modes. |
If this method is called on a media source, the media source might switch between thinned and non-thinned output. If that occurs, the affected streams will send an
Sets the quality level. The quality level determines how the component consumes or produces samples.
Requested quality level, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The component does not support the specified quality level or any levels below it. |
Retrieves the current drop mode.
Receives the drop mode, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the current quality level.
Receives the quality level, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Drops samples over a specified interval of time.
Amount of time to drop, in 100-nanosecond units. This value is always absolute. If the method is called multiple times, do not add the times from previous calls.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support this method. |
Ideally the quality manager can prevent a renderer from falling behind. But if this does occur, then simply lowering quality does not guarantee the renderer will ever catch up. As a result, audio and video might fall out of sync. To correct this problem, the quality manager can call DropTime to request that the renderer drop samples quickly over a specified time interval. After that period, the renderer stops dropping samples.
This method is primarily intended for the video renderer. Dropped audio samples cause audio glitching, which is not desirable.
If a component does not support this method, it should return
Enables a pipeline object to adjust its own audio or video quality, in response to quality messages.
This interface enables a pipeline object to respond to quality messages from the media sink. Currently, it is supported only for video decoders.
If a video decoder exposes
If the decoder exposes
The preceding remarks apply to the default implementation of the quality manager; custom quality managers can implement other behaviors.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Forwards an
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Queries an object for the number of quality modes it supports. Quality modes are used to adjust the trade-off between quality and speed when rendering audio or video.
The default presenter for the enhanced video renderer (EVR) implements this interface. The EVR uses the interface to respond to quality messages from the quality manager.
Gets the maximum drop mode. A higher drop mode means that the object will, if needed, drop samples more aggressively to match the presentation clock.
To get the current drop mode, call the
Gets the minimum quality level that is supported by the component.
To get the current quality level, call the
Gets the maximum drop mode. A higher drop mode means that the object will, if needed, drop samples more aggressively to match the presentation clock.
Receives the maximum drop mode, specified as a member of the
If this method succeeds, it returns
To get the current drop mode, call the
Gets the minimum quality level that is supported by the component.
Receives the minimum quality level, specified as a member of the
If this method succeeds, it returns
To get the current quality level, call the
Adjusts playback quality. This interface is exposed by the quality manager.
Media Foundation provides a default quality manager that is tuned for playback. Applications can provide a custom quality manager to the Media Session by setting the
Called when the Media Session is about to start playing a new topology.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
In a typical quality manager this method does the following:
Enumerates the nodes in the topology.
Calls
Queries for the
The quality manager can then use the
Called when the Media Session selects a presentation clock.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Called when the media processor is about to deliver an input sample to a pipeline component.
Pointer to the
Index of the input stream on the topology node.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is called for every sample passing through every pipeline component. Therefore, the method must return quickly to avoid introducing too much latency into the pipeline.
Called after the media processor gets an output sample from a pipeline component.
Pointer to the
Index of the output stream on the topology node.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is called for every sample passing through every pipeline component. Therefore, the method must return quickly to avoid introducing too much latency into the pipeline.
Called when a pipeline component sends an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Called when the Media Session is shutting down.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The quality manager should release all references to the Media Session when this method is called.
Gets or sets the playback rate.
Objects can expose this interface as a service. To obtain a reference to the interface, call
For more information, see About Rate Control.
To discover the playback rates that an object supports, use the
Sets the playback rate.
If TRUE, the media streams are thinned. Otherwise, the stream is not thinned. For media sources and demultiplexers, the object must thin the streams when this parameter is TRUE. For downstream transforms, such as decoders and multiplexers, this parameter is informative; it notifies the object that the input streams are thinned. For information, see About Rate Control.
The requested playback rate. Positive values indicate forward playback, negative values indicate reverse playback, and zero indicates scrubbing (the source delivers a single frame).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
| The object does not support the requested playback rate. |
| The object cannot change to the new rate while in the running state. |
The Media Session prevents some transitions between rate boundaries, depending on the current playback state:
Playback State | Forward/Reverse | Forward/Zero | Reverse/Zero |
---|---|---|---|
Running | No | No | No |
Paused | No | Yes | No |
Stopped | Yes | Yes | Yes |
If the transition is not supported, the method returns
When a media source completes a call to SetRate, it sends the
If a media source switches between thinned and non-thinned playback, the streams send an
When the Media Session completes a call to SetRate, it sends the
Gets the current playback rate.
Receives the current playback rate.
Receives the value TRUE if the stream is currently being thinned. If the object does not support thinning, this parameter always receives the value FALSE.
Queries the range of playback rates that are supported, including reverse playback.
To get a reference to this interface, call
Applications can use this interface to discover the fastest and slowest playback rates that are possible, and to query whether a given playback rate is supported. Applications obtain this interface from the Media Session. Internally, the Media Session queries the objects in the pipeline. For more information, see How to Determine Supported Rates.
To get the current playback rate and to change the playback rate, use the
Playback rates are expressed as a ratio of the normal playback rate. Reverse playback is expressed as a negative rate. Playback is either thinned or non-thinned. In thinned playback, some of the source data is skipped (typically delta frames). In non-thinned playback, all of the source data is rendered.
You might need to implement this interface if you are writing a pipeline object (media source, transform, or media sink). For more information, see Implementing Rate Control.
Retrieves the slowest playback rate supported by the object.
Specifies whether to query the slowest forward playback rate or the slowest reverse playback rate. The value is a member of the
If TRUE, the method retrieves the slowest thinned playback rate. Otherwise, the method retrieves the slowest non-thinned playback rate. For information about thinning, see About Rate Control.
Receives the slowest playback rate that the object supports.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
The value returned in plfRate represents a lower bound. Playback at this rate is not guaranteed. Call
If eDirection is
Gets the fastest playback rate supported by the object.
Specifies whether to query the fastest forward playback rate or the fastest reverse playback rate. The value is a member of the
If TRUE, the method retrieves the fastest thinned playback rate. Otherwise, the method retrieves the fastest non-thinned playback rate. For information about thinning, see About Rate Control.
Receives the fastest playback rate that the object supports.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
For some formats (such as ASF), thinning means dropping all frames that are not I-frames. If a component produces stream data, such as a media source or a demultiplexer, it should pay attention to the fThin parameter and return
If the component processes or receives a stream (most transforms or media sinks), it may ignore this parameter if it does not care whether the stream is thinned. In the Media Session's implementation of rate support, if the transforms do not explicitly support reverse playback, the Media Session will attempt to play back in reverse with thinning, but not without thinning. Therefore, most applications will set fThin to TRUE when using the Media Session for reverse playback.
If eDirection is
Queries whether the object supports a specified playback rate.
If TRUE, the method queries whether the object supports the playback rate with thinning. Otherwise, the method queries whether the object supports the playback rate without thinning. For information about thinning, see About Rate Control.
The playback rate to query.
If the object does not support the playback rate given in flRate, this parameter receives the closest supported playback rate. If the method returns
The method returns an
Return code | Description |
---|---|
| The object supports the specified rate. |
| The object does not support reverse playback. |
| The object does not support thinning. |
| The object does not support the specified rate. |
Creates an instance of either the sink writer or the source reader.
To get a reference to this interface, call the CoCreateInstance function. The CLSID is CLSID_MFReadWriteClassFactory. Call the
As an alternative to using this interface, you can call any of the following functions:
Internally, these functions use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
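As a hedged sketch (not verbatim SDK sample code), creating a source reader through this factory might look like the following. The file name is illustrative and error handling is abbreviated; the CLSID and interface names come from the Media Foundation headers.

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Sketch: create a source reader via IMFReadWriteClassFactory.
// CoInitializeEx and MFStartup are assumed to have been called already.
HRESULT CreateReaderFromFactory(IMFSourceReader** ppReader)
{
    IMFReadWriteClassFactory* pFactory = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_MFReadWriteClassFactory, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pFactory));
    if (SUCCEEDED(hr))
    {
        // The attributes parameter is optional and may be NULL.
        hr = pFactory->CreateInstanceFromURL(CLSID_MFSourceReader,
                                             L"input.mp4",   // illustrative URL
                                             nullptr,
                                             IID_PPV_ARGS(ppReader));
        pFactory->Release();
    }
    return hr;
}
```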
Creates an instance of the sink writer or source reader, given a URL.
The CLSID of the object to create.
Value | Meaning |
---|---|
| Create the sink writer. The ppvObject parameter receives an |
| Create the source reader. The ppvObject parameter receives an |
A null-terminated string that contains a URL. If clsid is CLSID_MFSinkWriter, the URL specifies the name of the output file. The sink writer creates a new file with this name. If clsid is CLSID_MFSourceReader, the URL specifies the input file for the source reader.
A reference to the
This parameter can be
The IID of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Creates an instance of the sink writer or source reader, given an
The CLSID of the object to create.
Value | Meaning |
---|---|
| Create the sink writer. The ppvObject parameter receives an |
| Create the source reader. The ppvObject parameter receives an |
A reference to the
Value | Meaning |
---|---|
Pointer to a byte stream. If clsid is CLSID_MFSinkWriter, the sink writer writes data to this byte stream. If clsid is CLSID_MFSourceReader, this byte stream provides the source data for the source reader. | |
Pointer to a media sink. Applies only when clsid is CLSID_MFSinkWriter. | |
Pointer to a media source. Applies only when clsid is CLSID_MFSourceReader. |
A reference to the
This parameter can be
The IID of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Notifies a pipeline object to register itself with the Multimedia Class Scheduler Service (MMCSS).
Any pipeline object that creates worker threads should implement this interface.
Media Foundation provides a mechanism for applications to associate branches in the topology with MMCSS tasks. A topology branch is defined by a source node in the topology and all of the nodes downstream from it. An application registers a topology branch with MMCSS by setting the
When the application registers a topology branch with MMCSS, the Media Session queries every pipeline object in that branch for the
When the application unregisters the topology branch, the Media Session calls UnregisterThreads.
If a pipeline object creates its own worker threads but does not implement this interface, it can cause priority inversions in the Media Foundation pipeline, because high-priority processing threads might be blocked while waiting for the component to process data on a thread with lower priority.
Pipeline objects that do not create worker threads do not need to implement this interface.
In Windows 8, this interface is extended with
Specifies the work queue for the topology branch that contains this object.
An application can register a branch of the topology to use a private work queue. The Media Session notifies any pipeline object that supports
When the application unregisters the topology branch, the Media Session calls SetWorkQueue again with the value
Notifies the object to register its worker threads with the Multimedia Class Scheduler Service (MMCSS).
The MMCSS task identifier.
The name of the MMCSS task.
If this method succeeds, it returns
The object's worker threads should register themselves with MMCSS by calling AvSetMmThreadCharacteristics, using the task name and identifier specified in this method.
Notifies the object to unregister its worker threads from the Multimedia Class Scheduler Service (MMCSS).
If this method succeeds, it returns
The object's worker threads should unregister themselves from MMCSS by calling AvRevertMmThreadCharacteristics.
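A worker thread's side of this registration might be sketched as follows. This is an assumed implementation pattern, not SDK sample code; the task name and identifier are the values the pipeline object received in RegisterThreads, held here in illustrative globals, and error handling is omitted.

```cpp
#include <windows.h>
#include <avrt.h>
#pragma comment(lib, "avrt.lib")

// Values received in IMFRealTimeClient::RegisterThreads (illustrative storage).
DWORD        g_mmcssTaskId   = 0;
const WCHAR* g_mmcssTaskName = L"Playback";   // example MMCSS task name

DWORD WINAPI WorkerThreadProc(void* /*pContext*/)
{
    // Register this thread with MMCSS using the task name and identifier.
    DWORD taskIndex = g_mmcssTaskId;
    HANDLE hTask = AvSetMmThreadCharacteristicsW(g_mmcssTaskName, &taskIndex);

    // ... perform time-sensitive processing ...

    // On UnregisterThreads, revert the thread's MMCSS registration.
    if (hTask)
        AvRevertMmThreadCharacteristics(hTask);
    return 0;
}
```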
Specifies the work queue for the topology branch that contains this object.
The identifier of the work queue, or the value
If this method succeeds, it returns
An application can register a branch of the topology to use a private work queue. The Media Session notifies any pipeline object that supports
When the application unregisters the topology branch, the Media Session calls SetWorkQueue again with the value
Notifies a pipeline object to register itself with the Multimedia Class Scheduler Service (MMCSS).
This interface is a replacement for the
Notifies the object to register its worker threads with the Multimedia Class Scheduler Service (MMCSS).
The MMCSS task identifier. If the value is zero on input, the object should create a new MMCSS task group. See Remarks.
The name of the MMCSS task.
The base priority of the thread.
If this method succeeds, it returns
If the object does not create worker threads, the method should simply return
Otherwise, if the value of *pdwTaskIndex is zero on input, the object should create a new MMCSS task group and set *pdwTaskIndex equal to the task identifier. If the value of *pdwTaskIndex is nonzero on input, the parameter contains an existing MMCSS task identifier. In that case, all worker threads of the object should register themselves for that task by calling AvSetMmThreadCharacteristics.
Notifies the object to unregister its worker threads from the Multimedia Class Scheduler Service (MMCSS).
If this method succeeds, it returns
Specifies the work queue that this object should use for asynchronous work items.
The work queue identifier.
The base priority for work items.
If this method succeeds, it returns
The object should use the values of dwMultithreadedWorkQueueId and lWorkItemBasePriority when it queues new work items. Use the
Used by the Microsoft Media Foundation proxy/stub DLL to marshal certain asynchronous method calls across process boundaries.
Applications do not use or implement this interface.
Modifies a topology for use in a Terminal Services environment.
To use this interface, do the following:
The application must call UpdateTopology before calling
Modifies a topology for use in a Terminal Services environment.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application is running in a Terminal Services client session, call this method before calling
Retrieves a reference to the remote object for which this object is a proxy.
Retrieves a reference to the remote object for which this object is a proxy.
Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a reference to the object that is hosting this proxy.
Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets and retrieves Synchronized Accessible Media Interchange (SAMI) styles on the SAMI Media Source.
To get a reference to this interface, call
Gets the number of styles defined in the SAMI file.
Gets a list of the style names defined in the SAMI file.
Gets the number of styles defined in the SAMI file.
Receives the number of SAMI styles in the file.
If this method succeeds, it returns
Gets a list of the style names defined in the SAMI file.
Pointer to a
If this method succeeds, it returns
Sets the current style on the SAMI media source.
Pointer to a null-terminated string containing the name of the style. To clear the current style, pass an empty string (""). To get the list of style names, call
If this method succeeds, it returns
Gets the current style from the SAMI media source.
Receives a reference to a null-terminated string that contains the name of the style. If no style is currently set, the method returns an empty string. The caller must free the memory for the string by calling CoTaskMemFree.
If this method succeeds, it returns
Represents a media sample, which is a container object for media data. For video, a sample typically contains one video frame. For audio data, a sample typically contains multiple audio samples, rather than a single sample of audio.
A media sample contains zero or more buffers. Each buffer manages a block of memory, and is represented by the
To create a new media sample, call
When you call CopyAllItems, inherited from the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To get attributes from a sample, use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the presentation time of the sample.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the duration of the sample.
If the sample contains more than one buffer, the duration includes the data from all of the buffers.
If the retrieved duration is zero, or if the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of buffers in the sample.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To get attributes from a sample, use the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To set attributes on a sample, use the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the presentation time of the sample.
Receives the presentation time, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sample does not have a presentation time. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the presentation time of the sample.
The presentation time, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Some pipeline components require samples that have time stamps. Generally the component that generates the data for the sample also sets the time stamp. The Media Session might modify the time stamps.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the duration of the sample.
Receives the duration, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sample does not have a specified duration. |
If the sample contains more than one buffer, the duration includes the data from all of the buffers.
If the retrieved duration is zero, or if the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the duration of the sample.
Duration of the sample, in 100-nanosecond units.
If this method succeeds, it returns
This method succeeds if the duration is negative, although negative durations are probably not valid for most types of data. It is the responsibility of the object that consumes the sample to validate the duration.
The duration can also be zero. This might be valid for some types of data. For example, the sample might contain stream metadata with no buffers.
Until this method is called, the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
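Sample times and durations throughout this interface are expressed in 100-nanosecond units. A small portable helper for the conversion is sketched below; the constant and function names are illustrative, not part of the API.

```cpp
#include <cstdint>

// Media Foundation expresses time in 100-nanosecond units (often called "hns").
constexpr int64_t HNS_PER_MILLISECOND = 10000;     // 10,000 * 100 ns = 1 ms
constexpr int64_t HNS_PER_SECOND      = 10000000;  // 10,000,000 * 100 ns = 1 s

constexpr int64_t MillisecondsToHns(int64_t ms) { return ms * HNS_PER_MILLISECOND; }
constexpr double  HnsToSeconds(int64_t hns)     { return hns / double(HNS_PER_SECOND); }

// Example: the duration of one frame of 25-fps video, suitable for SetSampleDuration.
constexpr int64_t FRAME_DURATION_25FPS = HNS_PER_SECOND / 25;  // 400,000 hns = 40 ms
```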
Retrieves the number of buffers in the sample.
Receives the number of buffers in the sample. A sample might contain zero buffers.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets a buffer from the sample, by index.
Note: In most cases, it is safer to use the
A sample might contain more than one buffer. Use the GetBufferByIndex method to enumerate the individual buffers.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Converts a sample with multiple buffers into a sample with a single buffer.
Receives a reference to the
If the sample contains more than one buffer, this method copies the data from the original buffers into a new buffer, and replaces the original buffer list with the new buffer. The new buffer is returned in the ppBuffer parameter.
If the sample contains a single buffer, this method returns a reference to the original buffer. In typical use, most samples do not contain multiple buffers.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Adds a buffer to the end of the list of buffers in the sample.
Pointer to the buffer's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
For uncompressed video data, each buffer should contain a single video frame, and samples should not contain multiple frames. In general, storing multiple buffers in a sample is discouraged.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
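Putting the sample and buffer methods together, creating a sample with a single memory buffer might be sketched as follows. This is a hedged sketch under the usual Media Foundation conventions; the buffer size and timing values are illustrative, and error handling is abbreviated.

```cpp
#include <mfapi.h>
#include <mfobjects.h>

// Sketch: create a sample, attach one memory buffer, and set its timing.
HRESULT CreateOneBufferSample(IMFSample** ppSample)
{
    IMFSample*      pSample = nullptr;
    IMFMediaBuffer* pBuffer = nullptr;

    HRESULT hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr))
        hr = MFCreateMemoryBuffer(4096, &pBuffer);   // illustrative 4096-byte buffer
    if (SUCCEEDED(hr))
        hr = pSample->AddBuffer(pBuffer);            // appended to the sample's buffer list
    if (SUCCEEDED(hr))
        hr = pSample->SetSampleTime(0);              // presentation time, 100-ns units
    if (SUCCEEDED(hr))
        hr = pSample->SetSampleDuration(400000);     // 40 ms, e.g. one 25-fps frame

    if (pBuffer)
        pBuffer->Release();
    if (FAILED(hr) && pSample) { pSample->Release(); pSample = nullptr; }
    *ppSample = pSample;
    return hr;
}
```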
Removes a buffer at a specified index from the sample.
Index of the buffer. To find the number of buffers in the sample, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes all of the buffers from the sample.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Copies the sample data to a buffer. This method concatenates the valid data from all of the buffers of the sample, in order.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| The buffer is not large enough to contain the data. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Callback interface to get media data from the sample-grabber sink.
The sample-grabber sink enables an application to get data from the Media Foundation pipeline without implementing a custom media sink. To use the sample-grabber sink, the application must perform the following steps:
Implement the
Call
Create a topology that includes an output node with the sink's
Pass this topology to the Media Session.
During playback, the sample-grabber sink calls methods on the application's callback.
You cannot use the sample-grabber sink to get protected content.
Extends the
This callback interface is used with the sample-grabber sink. It extends the
The OnProcessSampleEx method adds a parameter that contains the attributes for the media sample. You can use the attributes to get information about the sample, such as field dominance and telecine flags.
To use this interface, do the following:
Begins an asynchronous request to write a media sample to the stream.
When the sample has been written to the stream, the callback object's
Begins an asynchronous request to write a media sample to the stream.
A reference to the
A reference to the
A reference to the
If this method succeeds, it returns
When the sample has been written to the stream, the callback object's
Completes an asynchronous request to write a media sample to the stream.
A reference to the
If this method succeeds, it returns
Call this method when the
Provides encryption for media data inside the protected media path (PMP).
Retrieves the version of sample protection that the component implements on input.
Retrieves the version of sample protection that the component implements on output.
Retrieves the version of sample protection that the component implements on input.
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the version of sample protection that the component implements on output.
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the sample protection certificate.
Specifies the version number of the sample protection scheme for which to receive a certificate. The version number is specified as a
Receives a reference to a buffer containing the certificate. The caller must free the memory for the buffer by calling CoTaskMemFree.
Receives the size of the ppCert buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
For certain version numbers of sample protection, the downstream component must provide a certificate. Components that do not support these version numbers can return E_NOTIMPL.
Retrieves initialization information for sample protection from the upstream component.
Specifies the version number of the sample protection scheme. The version number is specified as a
Identifier of the output stream. The identifier corresponds to the output stream identifier returned by the
Pointer to a certificate provided by the downstream component.
Size of the certificate, in bytes.
Receives a reference to a buffer that contains the initialization information for downstream component. The caller must free the memory for the buffer by calling CoTaskMemFree.
Receives the size of the ppbSeed buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
This method must be implemented by the upstream component. The method fails if the component does not support the requested sample protection version. Downstream components do not implement this method and should return E_NOTIMPL.
Initializes sample protection on the downstream component.
Specifies the version number of the sample protection scheme. The version number is specified as a
Identifier of the input stream. The identifier corresponds to the output stream identifier returned by the
Pointer to a buffer that contains the initialization data provided by the upstream component. To retrieve this buffer, call
Size of the pbSeed buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Persists media data from a source byte stream to an application-provided byte stream.
The byte stream used for HTTP download implements this interface. To get a reference to this interface, call
Retrieves the percentage of content saved to the provided byte stream.
Begins saving a Windows Media file to the application's byte stream.
Pointer to the
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When the operation completes, the callback object's
Completes the operation started by
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Cancels the operation started by
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the percentage of content saved to the provided byte stream.
Receives the percentage of completion.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Begins an asynchronous request to create an object from a URL.
When the Source Resolver creates a media source from a URL, it passes the request to a scheme handler. The scheme handler might create a media source directly from the URL, or it might return a byte stream. If it returns a byte stream, the source resolver uses a byte-stream handler to create the media source from the byte stream.
The dwFlags parameter must contain the
If the
The following table summarizes the behavior of these two flags when passed to this method:
Flag | Object created |
---|---|
Media source or byte stream | |
Byte stream |
The
When the operation completes, the scheme handler calls the
Begins an asynchronous request to create an object from a URL.
When the Source Resolver creates a media source from a URL, it passes the request to a scheme handler. The scheme handler might create a media source directly from the URL, or it might return a byte stream. If it returns a byte stream, the source resolver uses a byte-stream handler to create the media source from the byte stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Cannot open the URL with the requested access (read or write). |
| Unsupported byte stream type. |
The dwFlags parameter must contain the
If the
The following table summarizes the behavior of these two flags when passed to this method:
Flag | Object created |
---|---|
Media source or byte stream | |
Byte stream |
The
When the operation completes, the scheme handler calls the
Completes an asynchronous request to create an object from a URL.
Pointer to the
Receives a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. |
Call this method from inside the
Cancels the current request to create an object from a URL.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You can use this method to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
The operation cannot be canceled if BeginCreateObject returns
Establishes a one-way secure channel between two objects.
Retrieves the client's certificate.
Receives a reference to a buffer allocated by the object. The buffer contains the client's certificate. The caller must release the buffer by calling CoTaskMemFree.
Receives the size of the ppCert buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Passes the encrypted session key to the client.
Pointer to a buffer that contains the encrypted session key. This parameter can be
Size of the pbEncryptedSessionKey buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a particular seek position, gets the two nearest key frames.
If an application seeks to a non-key frame, the decoder must start decoding from the previous key frame. This can increase latency, because several frames might get decoded before the requested frame is reached. To reduce latency, an application can call this method to find the two key frames that are closest to the desired time, and then seek to one of those key frames.
For a particular seek position, gets the two nearest key frames.
A reference to a
The seek position. The units for this parameter are specified by pguidTimeFormat.
Receives the position of the nearest key frame that appears earlier than pvarStartPosition. The units for this parameter are specified by pguidTimeFormat.
Receives the position of the nearest key frame that appears later than pvarStartPosition. The units for this parameter are specified by pguidTimeFormat.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The time format specified in pguidTimeFormat is not supported. |
If an application seeks to a non-key frame, the decoder must start decoding from the previous key frame. This can increase latency, because several frames might get decoded before the requested frame is reached. To reduce latency, an application can call this method to find the two key frames that are closest to the desired time, and then seek to one of those key frames.
Implemented by the Microsoft Media Foundation sink writer object.
To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Implemented by the Microsoft Media Foundation sink writer object.
To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Implemented by the Microsoft Media Foundation sink writer object.
To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Called by the media pipeline to get information about a transform provided by the sensor transform.
The index of the transform for which information is being requested. In the current release, this value will always be 0.
Gets the identifier for the transform.
The attribute store to be populated.
A collection of
If this method succeeds, it returns
Implemented by the Sequencer Source. The sequencer source enables an application to create a sequence of topologies. To create the sequencer source, call
Adds a topology to the end of the queue.
Pointer to the
A combination of flags from the
Receives the sequencer element identifier for this topology.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The source topology node is missing one of the following attributes: |
The sequencer plays topologies in the order they are queued. You can queue as many topologies as you want to preroll.
The application must indicate to the sequencer when it has queued the last topology on the Media Session. To specify the last topology, set the SequencerTopologyFlags_Last flag in the dwFlags parameter when you append the topology. The sequencer uses this information to end playback with the pipeline. Otherwise, the sequencer waits indefinitely for a new topology to be queued.
Deletes a topology from the queue.
The sequencer element identifier of the topology to delete.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Maps a presentation descriptor to its associated sequencer element identifier and the topology it represents.
Pointer to the
Receives the sequencer element identifier. This value is assigned by the sequencer source when the application calls
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The presentation descriptor is not valid. |
| This segment was canceled. |
The topology returned in ppTopology is the original topology that the application specified in AppendTopology. The source nodes in this topology contain references to the native sources. Do not queue this topology on the Media Session. Instead, call
Updates a topology in the queue.
Sequencer element identifier of the topology to update.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sequencer source has been shut down. |
This method is asynchronous. When the operation is completed, the sequencer source sends an
Updates the flags for a topology in the queue.
Sequencer element identifier of the topology to update.
Bitwise OR of flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Queries an object for a specified service interface.
A service is an interface that is exposed by one object but might be implemented by another object. The GetService method is equivalent to QueryInterface, with the following difference: when QueryInterface retrieves a reference to an interface, it is guaranteed that you can query the returned interface and get back the original interface. The GetService method does not make this guarantee, because the retrieved interface might be implemented by a separate object.
The
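As an illustration of the service model, retrieving the audio volume service from the Media Session might be sketched as follows. MFGetService is the documented helper that wraps this interface; error handling is omitted.

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: query the audio session's volume control through the GetService
// mechanism, via the MFGetService helper function.
HRESULT GetSessionVolume(IMFMediaSession* pSession,
                         IMFSimpleAudioVolume** ppVolume)
{
    return MFGetService(pSession,
                        MR_POLICY_VOLUME_SERVICE,   // service identifier (SID)
                        IID_PPV_ARGS(ppVolume));    // requested interface (IID)
}
```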
Retrieves a service interface.
The service identifier (SID) of the service. For a list of service identifiers, see Service Interfaces.
The interface identifier (IID) of the interface being requested.
Receives the interface reference. The caller must release the interface.
Applies to: desktop apps | Metro style apps
Retrieves a service interface.
The service identifier (SID) of the service. For a list of service identifiers, see Service Interfaces.
Exposed by some Media Foundation objects that must be explicitly shut down.
The following types of object expose
Any component that creates one of these objects is responsible for calling Shutdown on the object before releasing the object. Typically, applications do not create any of these objects directly, so it is not usually necessary to use this interface in an application.
To obtain a reference to this interface, call QueryInterface on the object.
If you are implementing a custom object, your object can expose this interface, but only if you can guarantee that your application will call Shutdown.
Media sources, media sinks, and synchronous MFTs should not implement this interface, because the Media Foundation pipeline will not call Shutdown on these objects. Asynchronous MFTs must implement this interface.
This interface is not related to the
Some Media Foundation interfaces define a Shutdown method, which serves the same purpose as
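A component that created such an object would follow the pattern sketched below before releasing its last reference. This is a hedged sketch of the documented contract, with error handling abbreviated.

```cpp
#include <mfidl.h>

// Sketch: shut down a Media Foundation object through IMFShutdown, if it
// exposes the interface, before releasing the reference.
void SafeShutdownAndRelease(IUnknown* pUnk)
{
    if (!pUnk)
        return;

    IMFShutdown* pShutdown = nullptr;
    if (SUCCEEDED(pUnk->QueryInterface(IID_PPV_ARGS(&pShutdown))))
    {
        pShutdown->Shutdown();   // releases the object's internal resources
        pShutdown->Release();
    }
    pUnk->Release();
}
```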
Queries the status of an earlier call to the
Until Shutdown is called, the GetShutdownStatus method returns
If an object's Shutdown method is asynchronous, pStatus might receive the value
Shuts down a Media Foundation object and releases all resources associated with the object.
If this method succeeds, it returns
The
Queries the status of an earlier call to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The Shutdown method has not been called on this object. |
Until Shutdown is called, the GetShutdownStatus method returns
If an object's Shutdown method is asynchronous, pStatus might receive the value
Provides a method that allows content protection systems to get the procedure address of a function in the signed library. This method provides the same functionality as GetProcAddress, which is not available to Windows Store apps.
See
Gets the procedure address of the specified function in the signed library.
The entry point name in the DLL that specifies the function.
Receives the address of the entry point.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
See
Controls the master volume level of the audio session associated with the streaming audio renderer (SAR) and the audio capture source.
The SAR and the audio capture source expose this interface as a service. To get a reference to the interface, call
To control the volume levels of individual channels, use the
Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation). For each channel, the attenuation level is the product of:
The master volume level of the audio session.
The volume level of the channel.
For example, if the master volume is 0.8 and the channel volume is 0.5, the attenuation for that channel is 0.8 × 0.5 = 0.4. Volume levels can exceed 1.0 (positive gain), but the audio engine clips any audio samples that exceed zero decibels. To change the volume level of individual channels, use the
Use the following formula to convert the volume level to the decibel (dB) scale:
Attenuation (dB) = 20 * log10(Level)
For example, a volume level of 0.50 represents 6.02 dB of attenuation.
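The formula above and the per-channel attenuation product can be sketched as two small helpers. This is an illustrative sketch; the function names are not part of the Media Foundation API.

```cpp
#include <cmath>

// Convert a linear volume level (0.0 = silence, 1.0 = full volume)
// to the decibel scale: attenuation (dB) = 20 * log10(level).
double LevelToDecibels(double level) {
    return 20.0 * std::log10(level);
}

// The attenuation applied to a channel is the product of the session's
// master volume and that channel's volume level.
double EffectiveChannelLevel(double masterVolume, double channelVolume) {
    return masterVolume * channelVolume;
}
```

For instance, `LevelToDecibels(0.5)` evaluates to roughly −6.02, matching the 6.02 dB of attenuation described above.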
Retrieves the master volume level.
If an external event changes the master volume, the audio renderer sends an
Queries whether the audio is muted.
Calling
Sets the master volume level.
Volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_NOT_INITIALIZED | The audio renderer is not initialized. |
MF_E_STREAMSINK_REMOVED | The audio renderer was removed from the pipeline. |
Events outside of the application can change the master volume level. For example, the user can change the volume from the system volume-control program (SndVol). If an external event changes the master volume, the audio renderer sends an
Retrieves the master volume level.
Receives the volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_NOT_INITIALIZED | The audio renderer is not initialized. |
MF_E_STREAMSINK_REMOVED | The audio renderer was removed from the pipeline. |
If an external event changes the master volume, the audio renderer sends an
Mutes or unmutes the audio.
Specify TRUE to mute the audio, or FALSE to unmute the audio.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_NOT_INITIALIZED | The audio renderer is not initialized. |
MF_E_STREAMSINK_REMOVED | The audio renderer was removed from the pipeline. |
This method does not change the volume level returned by the
Queries whether the audio is muted.
Receives a Boolean value. If TRUE, the audio is muted; otherwise, the audio is not muted.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_NOT_INITIALIZED | The audio renderer is not initialized. |
MF_E_STREAMSINK_REMOVED | The audio renderer was removed from the pipeline. |
Calling
Implemented by the Microsoft Media Foundation sink writer object.
To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Adds a stream to the sink writer.
A reference to the
Receives the zero-based index of the new stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Sets the input format for a stream on the sink writer.
The zero-based index of the stream. The index is received by the pdwStreamIndex parameter of the
A reference to the
A reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDMEDIATYPE | The underlying media sink does not support the format, no conversion is possible, or a dynamic format change is not possible. |
MF_E_INVALIDSTREAMNUMBER | The dwStreamIndex parameter is invalid. |
MF_E_TOPO_CODEC_NOT_FOUND | Could not find an encoder for the encoded format. |
The input format does not have to match the target format that is written to the media sink. If the formats do not match, the method attempts to load an encoder that can encode from the input format to the target format.
After streaming begins, that is, after the first call to
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Initializes the sink writer for writing.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The request is invalid. |
Call this method after you configure the input streams and before you send any data to the sink writer.
You must call BeginWriting before calling any of the following methods:
The underlying media sink must have at least one input stream. Otherwise, BeginWriting returns MF_E_INVALIDREQUEST.
If BeginWriting succeeds, any further calls to BeginWriting return MF_E_INVALIDREQUEST.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Delivers a sample to the sink writer.
The zero-based index of the stream for this sample.
A reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The request is invalid. |
You must call BeginWriting before calling this method.
By default, the sink writer limits the rate of incoming data by blocking the calling thread inside the WriteSample method. This prevents the application from delivering samples too quickly. To disable this behavior, set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Indicates a gap in an input stream.
The zero-based index of the stream.
The position in the stream where the gap in the data occurs. The value is given in 100-nanosecond units, relative to the start of the stream.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
For video, call this method once for each missing frame. For audio, call this method at least once per second during a gap in the audio. Set the
Internally, this method calls
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
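The stream-tick position above, like all timestamps in these interfaces, is expressed in 100-nanosecond units. A minimal sketch of the conversion; the helper names are illustrative, not part of the API:

```cpp
#include <cstdint>

// Media Foundation positions and timestamps use 100-nanosecond units,
// so one second is 10,000,000 ticks. Helper names are illustrative.
constexpr int64_t kTicksPerSecond = 10000000LL;

// 1 millisecond = 10,000 ticks of 100 ns each.
constexpr int64_t MillisecondsToMFTime(int64_t milliseconds) {
    return milliseconds * 10000LL;
}

constexpr double MFTimeToSeconds(int64_t mfTime) {
    return static_cast<double>(mfTime) / kTicksPerSecond;
}
```

For example, a one-second audio gap would span 10,000,000 ticks from the position where the gap begins.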
Places a marker in the specified stream.
The zero-based index of the stream.
Pointer to an application-defined value. The value of this parameter is returned to the caller in the pvContext parameter of the caller's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The request is invalid. |
To use this method, you must provide an asynchronous callback when you create the sink writer. Otherwise, the method returns MF_E_INVALIDREQUEST.
Markers provide a way to be notified when the media sink consumes all of the samples in a stream up to a certain point. The media sink does not process the marker until it has processed all of the samples that came before the marker. When the media sink processes the marker, the sink writer calls the application's OnMarker method. When the callback is invoked, you know that the sink has consumed all of the previous samples for that stream.
For example, to change the format midstream, call PlaceMarker at the point where the format changes. When OnMarker is called, it is safe to call
Internally, this method calls
Note: The pvContext parameter of the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Notifies the media sink that a stream has reached the end of a segment.
The zero-based index of a stream, or
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The request is invalid. |
You must call BeginWriting before calling this method.
This method sends an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Flushes one or more streams.
The zero-based index of the stream to flush, or
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | The request is invalid. |
You must call BeginWriting before calling this method.
For each stream that is flushed, the sink writer drops all pending samples, flushes the encoder, and sends an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Completes all writing operations on the sink writer.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Call this method after you send all of the input samples to the sink writer. The method performs any operations needed to create the final output from the media sink.
If you provide a callback interface when you create the sink writer, this method completes asynchronously. When the operation completes, the
Internally, this method calls
After this method is called, the following methods will fail:
If you do not call Finalize, the output from the media sink might be incomplete or invalid. For example, required file headers might be missing from the output file.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Queries the underlying media sink or encoder for an interface.
The zero-based index of a stream to query, or
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If the dwStreamIndex parameter equals
If the input and output types of the sink are identical and compressed, it's possible that no encoding is required and the video encoder will not be instantiated. In that case, GetServiceForStream will return
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Gets statistics about the performance of the sink writer.
The zero-based index of a stream to query, or
A reference to an
This method can return one of these values.
Return code | Description |
---|---|
S_OK | Success. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream number. |
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Callback interface for the Microsoft Media Foundation sink writer.
Set the callback reference by setting the
The callback methods can be called from any thread, so an object that implements this interface must be thread-safe.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Extends the
This interface provides a mechanism for apps that use
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when the transform chain in the
Returns an
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when an asynchronous error occurs with the
Returns an
Provides additional functionality on the sink writer for dynamically changing the media type and encoder configuration.
The Sink Writer implements this interface in Windows 8.1. To get a reference to this interface, call QueryInterface on the
Dynamically changes the target media type that Sink Writer is encoding to.
Specifies the stream index.
The new media format to encode to.
The new set of encoding parameters to configure the encoder with. If not specified, previously provided parameters will be used.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The new media type must be supported by the media sink being used and by the encoder MFTs installed on the system.
Dynamically updates the encoder configuration with a collection of new encoder settings.
Specifies the stream index.
A set of encoding parameters to configure the encoder with.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The encoder will be configured with these settings after all previously queued input media samples have been sent to it through
Extends the
The Sink Writer implements this interface in Windows 8. To get a reference to this interface, call QueryInterface on the Sink Writer.
Gets a reference to a Media Foundation transform (MFT) for a specified stream.
The zero-based index of a stream.
The zero-based index of the MFT to retrieve.
Receives a reference to a
Receives a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Represents a buffer which contains media data for a
Gets a value that indicates if Append, AppendByteStream, or Remove is in process.
Gets the buffered time range.
Gets or sets the timestamp offset for media segments appended to the
Gets or sets the timestamp for the start of the append window.
Gets or sets the timestamp for the end of the append window.
Gets a value that indicates if Append, AppendByteStream, or Remove is in process.
true if Append, AppendByteStream, or Remove is in process; otherwise, false.
Gets the buffered time range.
The buffered time range.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the timestamp offset for media segments appended to the
The timestamp offset.
Sets the timestamp offset for media segments appended to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the timestamp for the start of the append window.
The timestamp for the start of the append window.
Sets the timestamp for the start of the append window.
The timestamp for the start of the append window.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the timestamp for the end of the append window.
The timestamp for the end of the append window.
Sets the timestamp for the end of the append window.
The timestamp for the end of the append window.
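The append window described above acts as a filter on appended media segments: in the Media Source Extensions model, coded frames with timestamps outside the window are dropped. A rough sketch of that rule, using hypothetical types rather than the actual API:

```cpp
#include <vector>

// Illustrative sketch of append-window filtering: samples whose
// timestamps fall outside [windowStart, windowEnd) are dropped.
// The Sample type and function name are hypothetical.
struct Sample { double timestamp; };

std::vector<Sample> FilterByAppendWindow(const std::vector<Sample>& samples,
                                         double windowStart,
                                         double windowEnd) {
    std::vector<Sample> kept;
    for (const Sample& s : samples) {
        if (s.timestamp >= windowStart && s.timestamp < windowEnd)
            kept.push_back(s);  // inside the append window: keep it
    }
    return kept;
}
```

With a window of [1.0, 2.0), only samples stamped at or after 1.0 and before 2.0 survive the append.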
Appends the specified media segment to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Appends the media segment from the specified byte stream to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Aborts the processing of the current media segment.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Removes the media segments defined by the specified time range from the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Represents a collection of
Gets the number of
Gets the number of
The number of source buffers in the list.
Gets the
The source buffer.
Provides functionality for raising events associated with
Used to indicate that the source buffer has started updating.
Used to indicate that the source buffer has been aborted.
Used to indicate that an error has occurred with the source buffer.
Used to indicate that the source buffer is updating.
Used to indicate that the source buffer has finished updating.
Callback interface to receive notifications from a network source on the progress of an asynchronous open operation.
Called by the network source when the open operation begins or ends.
Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
The network source calls this method with the following event types.
For more information, see How to Get Events from the Network Source.
Implemented by the Microsoft Media Foundation source reader object.
To create the source reader, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Queries whether a stream is selected.
The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
Receives TRUE if the stream is selected and will generate data. Receives FALSE if the stream is not selected and will not generate data.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Selects or deselects one or more streams.
The stream to set. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_ALL_STREAMS | All streams. |
Specify TRUE to select streams, or FALSE to deselect streams.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
There are two common uses for this method:
For an example of deselecting a stream, see Tutorial: Decoding Audio.
If a stream is deselected, the
Stream selection does not affect how the source reader loads or unloads decoders in memory. In particular, deselecting a stream does not force the source reader to unload the decoder for that stream.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Gets a format that is supported natively by the media source.
Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method queries the underlying media source for its native output format. Potentially, each source stream can produce more than one output format. Use the dwMediaTypeIndex parameter to loop through the available formats. Generally, file sources offer just one format per stream, but capture devices might offer several formats.
The method returns a copy of the media type, so it is safe to modify the object received in the ppMediaType parameter.
To set the output type for a stream, call the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Gets the current media type for a stream.
The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
Receives a reference to the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDMEDIATYPE | At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
MF_E_INVALIDREQUEST | One or more sample requests are still pending. |
MF_E_INVALIDSTREAMNUMBER | The dwStreamIndex parameter is invalid. |
MF_E_TOPO_CODEC_NOT_FOUND | Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
Audio resampling support was added to the source reader with Windows 8. In versions of Windows prior to Windows 8, the source reader does not support audio resampling. If you need to resample the audio in versions of Windows earlier than Windows 8, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Seeks to a new position in the media source.
A GUID that specifies the time format. The time format defines the units for the varPosition parameter. The following value is defined for all media sources:
Value | Meaning |
---|---|
GUID_NULL | 100-nanosecond units. |
Some media sources might support additional values.
The position from which playback will be started. The units are specified by the guidTimeFormat parameter. If the guidTimeFormat parameter is GUID_NULL, set the variant type to VT_I8.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDREQUEST | One or more sample requests are still pending. |
The SetCurrentPosition method does not guarantee exact seeking. The accuracy of the seek depends on the media content. If the media content contains a video stream, the SetCurrentPosition method typically seeks to the nearest key frame before the desired position. The distance between key frames depends on several factors, including the encoder implementation, the video content, and the particular encoding settings used to encode the content. The distance between key frames can vary within a single video file (for example, depending on scene complexity).
After seeking, the application should call
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
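The key-frame behavior described above can be pictured as a search for the nearest key frame at or before the seek target. An illustrative sketch under assumed data; this is not the actual implementation:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrates why seeking is not sample-accurate: a seek typically lands
// on the nearest key frame at or before the target position.
// Positions are in 100-nanosecond units; names are illustrative.
// keyFrames must be non-empty and sorted in ascending order.
int64_t NearestKeyFrameBefore(const std::vector<int64_t>& keyFrames,
                              int64_t target) {
    // First key frame strictly after the target...
    auto it = std::upper_bound(keyFrames.begin(), keyFrames.end(), target);
    // ...so the one just before it is the landing point.
    if (it == keyFrames.begin()) return keyFrames.front();
    return *(it - 1);
}
```

With key frames every two seconds, a seek to the 3-second mark would land on the key frame at 2 seconds, which is why the text recommends reading and discarding samples until the desired position is reached.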
Reads the next sample from the media source.
The stream to pull data from. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_ANY_STREAM | Get the next available sample, regardless of which stream. |
A bitwise OR of zero or more flags from the
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the
Receives the time stamp of the sample, or the time of the stream event indicated in pdwStreamFlags. The time is given in 100-nanosecond units.
Receives a reference to the
If the requested stream is not selected, the return code is MF_E_INVALIDREQUEST.
This method can complete synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
Flushes one or more streams.
The stream to flush. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_ALL_STREAMS | All streams. |
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The Flush method discards all queued samples and cancels all pending sample requests.
This method can complete either synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In synchronous mode, the method blocks until the operation is complete.
In asynchronous mode, the application's
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Queries the underlying media source or decoder for an interface.
The stream or object to query. If the value is
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_MEDIASOURCE | The media source. |
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Gets an attribute from the underlying media source.
The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_MEDIASOURCE | The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
A reference to a
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Pointer to the
Call CoInitialize(Ex) and
By default, when the application releases the source reader, the source reader shuts down the media source by calling
To change this default behavior, set the
When using the Source Reader, do not call any of the following methods on the media source:
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Windows Phone 8.1: This API is supported.
A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Gets a format that is supported natively by the media source.
Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method queries the underlying media source for its native output format. Potentially, each source stream can produce more than one output format. Use the dwMediaTypeIndex parameter to loop through the available formats. Generally, file sources offer just one format per stream, but capture devices might offer several formats.
The method returns a copy of the media type, so it is safe to modify the object received in the ppMediaType parameter.
To set the output type for a stream, call the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Selects or deselects one or more streams.
The stream to set. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_ALL_STREAMS | All streams. |
Specify TRUE to select streams, or FALSE to deselect streams.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
There are two common uses for this method:
For an example of deselecting a stream, see Tutorial: Decoding Audio.
If a stream is deselected, the
Stream selection does not affect how the source reader loads or unloads decoders in memory. In particular, deselecting a stream does not force the source reader to unload the decoder for that stream.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDMEDIATYPE | At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
MF_E_INVALIDREQUEST | One or more sample requests are still pending. |
MF_E_INVALIDSTREAMNUMBER | The dwStreamIndex parameter is invalid. |
MF_E_TOPO_CODEC_NOT_FOUND | Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
The source reader does not support audio resampling. If you need to resample the audio, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Seeks to a new position in the media source.
The SetCurrentPosition method does not guarantee exact seeking. The accuracy of the seek depends on the media content. If the media content contains a video stream, the SetCurrentPosition method typically seeks to the nearest key frame before the desired position. The distance between key frames depends on several factors, including the encoder implementation, the video content, and the particular encoding settings used to encode the content. The distance between key frames can vary within a single video file (for example, depending on scene complexity).
After seeking, the application should call
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Gets the current media type for a stream.
The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
Receives a reference to the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Reads the next sample from the media source.
The stream to pull data from. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_ANY_STREAM | Get the next available sample, regardless of which stream. |
A bitwise OR of zero or more flags from the
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the
Receives the time stamp of the sample, or the time of the stream event indicated in pdwStreamFlags. The time is given in 100-nanosecond units.
Receives a reference to the
If the requested stream is not selected, the return code is MF_E_INVALIDREQUEST. See
This method can complete synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In asynchronous mode, all of the [out] parameters must be NULL; otherwise, the method returns E_INVALIDARG. In synchronous mode, the pdwActualStreamIndex and pdwStreamFlags parameters cannot be NULL.
In synchronous mode, if the dwStreamIndex parameter is
This method can return flags in the pdwStreamFlags parameter without returning a media sample in ppSample. Therefore, the ppSample parameter can receive a
If there is a gap in the stream, pdwStreamFlags receives the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Flushes one or more streams.
The stream to flush. The value can be any of the following.
Value | Meaning |
---|---|
0–0xFFFFFFFB | The zero-based index of a stream. |
MF_SOURCE_READER_FIRST_VIDEO_STREAM | The first video stream. |
MF_SOURCE_READER_FIRST_AUDIO_STREAM | The first audio stream. |
MF_SOURCE_READER_ALL_STREAMS | All streams. |
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The Flush method discards all queued samples and cancels all pending sample requests.
This method can complete either synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In synchronous mode, the method blocks until the operation is complete.
In asynchronous mode, the application's
Note: In Windows 7, there was a bug in the implementation of this method, which caused OnFlush to be called before the flush operation completed. A hotfix is available that fixes this bug. For more information, see http://support.microsoft.com/kb/979567.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Applies to: desktop apps | Metro style apps
Queries the underlying media source or decoder for an interface.
The stream or object to query. If the value is
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Applies to: desktop apps | Metro style apps
Gets an attribute from the underlying media source.
The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Applies to: desktop apps | Metro style apps
Gets an attribute from the underlying media source.
The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Callback interface for the Microsoft Media Foundation source reader.
Use the
The callback methods can be called from any thread, so an object that implements this interface must be thread-safe.
If you do not specify a callback reference, the source reader operates synchronously.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Called when the
Returns an
The pSample parameter might be
If there is a gap in the stream, dwStreamFlags contains the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Called when the source reader receives certain events from the media source.
For stream events, the value is the zero-based index of the stream that sent the event. For source events, the value is
A reference to the
Returns an
In the current implementation, the source reader uses this method to forward the following events to the application:
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Extends the
This interface provides a mechanism for apps that use
Called when the transform chain in the
Returns an
Called when an asynchronous error occurs with the
Returns an
Extends the
The Source Reader implements this interface in Windows 8. To get a reference to this interface, call QueryInterface on the Source Reader.
Sets the native media type for a stream on the media source.
A reference to the
Receives a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| All effects were removed from the stream. |
| The current output type changed. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
This method sets the output type that is produced by the media source. Unlike the
In asynchronous mode, this method fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
This method can trigger a change in the output format for the stream. If so, the
This method is useful with audio and video capture devices, because a device might support several output formats. This method enables the application to choose the device format before decoders and other transforms are added.
Adds a transform, such as an audio or video effect, to a stream.
The stream to configure. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
A reference to one of the following:
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The transform does not support the current stream format, and no conversion was possible. See Remarks for more information. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
This method attempts to add the transform at the end of the current processing chain.
To use this method, make the following sequence of calls:
The AddTransformForStream method will not insert a decoder into the processing chain. If the native stream format is encoded, and the transform requires an uncompressed format, call SetCurrentMediaType to set the uncompressed format (step 1 in the previous list). However, the method will insert a video processor to convert between RGB and YUV formats, if required.
The method fails if the source reader was configured with the
In asynchronous mode, the method also fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
You can add a transform at any time during streaming. However, the method does not flush or drain the pipeline before inserting the transform. Therefore, if data is already in the pipeline, the next sample is not guaranteed to have the transform applied.
Removes all of the Media Foundation transforms (MFTs) for a specified stream, with the exception of the decoder.
The stream for which to remove the MFTs. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
Calling this method can reset the current output type for the stream. To get the new output type, call
In asynchronous mode, this method fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
Gets a reference to a Media Foundation transform (MFT) for a specified stream.
The stream to query for the MFT. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the MFT to retrieve.
Receives a
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwTransformIndex parameter is out of range. |
| The dwStreamIndex parameter is invalid. |
You can use this method to configure an MFT after it is inserted into the processing chain. Do not use the reference returned in ppTransform to set media types on the MFT or to process data. In particular, calling any of the following
If a decoder is present, it appears at index position zero.
To avoid losing any data, you should drain the source reader before calling this method. For more information, see Draining the Data Pipeline.
Creates a media source from a URL or a byte stream. The Source Resolver implements this interface. To create the source resolver, call
Creates a media source or a byte stream from a URL. This method is synchronous.
Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags. See remarks below.
Pointer to the
Receives a member of the
Receives a reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| The URL scheme is not supported. |
The dwFlags parameter must contain either the
It is recommended that you do not set
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Creates a media source from a byte stream. This method is synchronous.
Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| This byte stream is not supported. |
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Begins an asynchronous request to create a media source or a byte stream from a URL.
Null-terminated string that contains the URL to resolve.
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| The URL scheme is not supported. |
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
When the operation completes, the source resolver calls the
The usage of the pProps parameter depends on the implementation of the media source.
Completes an asynchronous request to create an object from a URL.
Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. |
Call this method from inside your application's
Begins an asynchronous request to create a media source from a byte stream.
A reference to the byte stream's
A null-terminated string that contains the original URL of the byte stream. This parameter can be
A bitwise OR of one or more flags. See Source Resolver Flags.
A reference to the
Receives an
A reference to the
A pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| The byte stream is not supported. |
| The byte stream does not support seeking. |
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
When the operation completes, the source resolver calls the
Completes an asynchronous request to create a media source from a byte stream.
Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The application canceled the operation. |
Call this method from inside your application's
Cancels an asynchronous request to create an object.
Pointer to the
If this method succeeds, it returns
You can use this method to cancel a previous call to BeginCreateObjectFromByteStream or BeginCreateObjectFromURL. Because these methods are asynchronous, however, they might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
Note: This method cannot be called remotely.
Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Note: This method cannot be called remotely.
Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
Receives a member of the
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Note: This method cannot be called remotely.
Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Note: This method cannot be called remotely.
Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Receives a member of the
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
Implemented by a client and called by Microsoft Media Foundation to get the client Secure Sockets Layer (SSL) certificate requested by the server.
In most HTTPS connections, the server provides a certificate so that the client can verify the identity of the server. However, in certain cases the server might want to verify the identity of the client by requesting that the client send a certificate. For this scenario, a client application must provide a mechanism for Media Foundation to retrieve the client-side certificate while opening an HTTPS URL with the source resolver or the scheme handler. The application must implement
If the
Gets the client SSL certificate synchronously.
Pointer to a string that contains the URL for which a client-side SSL certificate is required. Media Foundation can resolve the scheme and send the request to the server.
Pointer to the buffer that stores the certificate. The caller must free the buffer by calling CoTaskMemFree.
Pointer to a DWORD variable that receives the number of bytes required to hold the certificate data in the buffer pointed by *ppbData.
If this method succeeds, it returns
Starts an asynchronous call to get the client SSL certificate.
A null-terminated string that contains the URL for which a client-side SSL certificate is required. Media Foundation can resolve the scheme and send the request to the server.
A reference to the
A reference to the
If this method succeeds, it returns
When the operation completes, the callback object's
Completes an asynchronous request to get the client SSL certificate.
A reference to the
Receives a reference to the buffer that stores the certificate. The caller must free the buffer by calling CoTaskMemFree.
Receives the size of the ppbData buffer, in bytes.
If this method succeeds, it returns
Call this method after the
Indicates whether the server SSL certificate must be verified by the caller, Media Foundation, or the
Pointer to a string that contains the URL that is sent to the server.
Pointer to a
Pointer to a
If this method succeeds, it returns
Called by Media Foundation when the server SSL certificate has been received; indicates whether the server certificate is accepted.
Pointer to a string that contains the URL used to send the request to the server, and for which a server-side SSL certificate has been received.
Pointer to a buffer that contains the server SSL certificate.
Pointer to a DWORD variable that indicates the size of pbData in bytes.
Pointer to a
If this method succeeds, it returns
Gets information about one stream in a media source.
A presentation descriptor contains one or more stream descriptors. To get the stream descriptors from a presentation descriptor, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an identifier for the stream.
The stream identifier uniquely identifies a stream within a presentation. It does not change throughout the lifetime of the stream. For example, if the presentation changes while the source is running, the index number of the stream may change, but the stream identifier does not.
In general, stream identifiers do not have a specific meaning, other than to identify the stream. Some media sources may assign stream identifiers based on meaningful values, such as packet identifiers, but this depends on the implementation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type handler for the stream. The media type handler can be used to enumerate supported media types for the stream, get the current media type, and set the media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an identifier for the stream.
Receives the stream identifier.
If this method succeeds, it returns
The stream identifier uniquely identifies a stream within a presentation. It does not change throughout the lifetime of the stream. For example, if the presentation changes while the source is running, the index number of the stream may change, but the stream identifier does not.
In general, stream identifiers do not have a specific meaning, other than to identify the stream. Some media sources may assign stream identifiers based on meaningful values, such as packet identifiers, but this depends on the implementation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type handler for the stream. The media type handler can be used to enumerate supported media types for the stream, get the current media type, and set the media type.
Receives a reference to the
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Passes configuration information to the media sinks that are used for streaming the content. Optionally, this interface is supported by media sinks. The built-in ASF streaming media sink and the MP3 media sink implement this interface.
Called by the streaming media client before the Media Session starts streaming to specify the byte offset or the time offset.
A Boolean value that specifies whether qwSeekOffset gives a byte offset or a time offset.
Value | Meaning |
---|---|
| The qwSeekOffset parameter specifies a byte offset. |
| The qwSeekOffset parameter specifies the time position in 100-nanosecond units. |
A byte offset or a time offset, depending on the value passed in fSeekOffsetIsByteOffset. Time offsets are specified in 100-nanosecond units.
If this method succeeds, it returns
Represents a stream on a media sink object.
Retrieves the media sink that owns this stream sink.
Retrieves the stream identifier for this stream sink.
Retrieves the media sink that owns this stream sink.
Receives a reference to the media sink's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
Retrieves the stream identifier for this stream sink.
Receives the stream identifier. If this stream sink was added by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
Delivers a sample to the stream. The media sink processes the sample.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink is in the wrong state to receive a sample. For example, preroll is complete but the presentation clock has not started yet. |
| The sample has an invalid time stamp. See Remarks. |
| The media sink is paused or stopped and cannot process the sample. |
| The presentation clock was not set. Call |
| The sample does not have a time stamp. |
| The stream sink has not been initialized. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
Call this method when the stream sink sends an
This method can return
Negative time stamps.
Time stamps that jump backward (within the same stream).
The time stamps for one stream have drifted too far from the time stamps on another stream within the same media sink (for example, an archive sink that multiplexes the streams).
Not every media sink returns an error code in these situations.
Places a marker in the stream.
Specifies the marker type, as a member of the
Optional reference to a
Optional reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
This method causes the stream sink to send an
Causes the stream sink to drop any samples that it has received and has not rendered yet.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream sink has not been initialized yet. You might need to set a media type. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
If any samples are still queued from previous calls to the
Any pending marker events from the
This method is synchronous. It does not return until the sink has discarded all pending samples.
Provides a method that retrieves system ID data.
Retrieves system ID data.
The size in bytes of the returned data.
Receives the returned data. The caller must free this buffer by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets up the
If this method succeeds, it returns
Converts between Society of Motion Picture and Television Engineers (SMPTE) time codes and 100-nanosecond time units.
If an object supports this interface, it must expose the interface as a service. To get a reference to the interface, call
The Advanced Streaming Format (ASF) media source exposes this interface.
Starts an asynchronous call to convert Society of Motion Picture and Television Engineers (SMPTE) time code to 100-nanosecond units.
Time in SMPTE time code to convert. The vt member of the
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| pPropVarTimecode is not VT_I8. |
| The object's Shutdown method was called. |
| The byte stream is not seekable. The time code cannot be read from the end of the byte stream. |
When the asynchronous method completes, the callback object's
The value of pPropVarTimecode is a 64-bit unsigned value typed as a LONGLONG. The upper DWORD contains the range. (A range is a continuous series of time codes.) The lower DWORD contains the time code in the form of a hexadecimal number 0xhhmmssff, where each 2-byte sequence is read as a decimal value.
```cpp
void CreateTimeCode(
    DWORD dwFrames,
    DWORD dwSeconds,
    DWORD dwMinutes,
    DWORD dwHours,
    DWORD dwRange,
    PROPVARIANT *pvar
    )
{
    // Range goes in the upper DWORD.
    ULONGLONG ullTimecode = ((ULONGLONG)dwRange) << 32;

    // Pack each decimal digit of hh:mm:ss:ff into its own nibble.
    ullTimecode += dwFrames % 10;
    ullTimecode += (( (ULONGLONG)dwFrames )  / 10) << 4;
    ullTimecode += (( (ULONGLONG)dwSeconds ) % 10) << 8;
    ullTimecode += (( (ULONGLONG)dwSeconds ) / 10) << 12;
    ullTimecode += (( (ULONGLONG)dwMinutes ) % 10) << 16;
    ullTimecode += (( (ULONGLONG)dwMinutes ) / 10) << 20;
    ullTimecode += (( (ULONGLONG)dwHours )   % 10) << 24;
    ullTimecode += (( (ULONGLONG)dwHours )   / 10) << 28;

    pvar->vt = VT_I8;
    pvar->hVal.QuadPart = (LONGLONG)ullTimecode;
}
```
Completes an asynchronous request to convert time in Society of Motion Picture and Television Engineers (SMPTE) time code to 100-nanosecond units.
Pointer to the
Receives the converted time.
If this method succeeds, it returns
Call this method after the
Starts an asynchronous call to convert time in 100-nanosecond units to Society of Motion Picture and Television Engineers (SMPTE) time code.
The time to convert, in 100-nanosecond units.
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The object's Shutdown method was called. |
| The byte stream is not seekable. The time code cannot be read from the end of the byte stream. |
When the asynchronous method completes, the callback object's
Completes an asynchronous request to convert time in 100-nanosecond units to Society of Motion Picture and Television Engineers (SMPTE) time code.
A reference to the
A reference to a
If this method succeeds, it returns
Call this method after the
The value of pPropVarTimecode is a 64-bit unsigned value typed as a LONGLONG. The upper DWORD contains the range. (A range is a continuous series of time codes.) The lower DWORD contains the time code in the form of a hexadecimal number 0xhhmmssff, where each 2-byte sequence is read as a decimal value.
```cpp
HRESULT ParseTimeCode(
    const PROPVARIANT& var,
    DWORD *pdwRange,
    DWORD *pdwFrames,
    DWORD *pdwSeconds,
    DWORD *pdwMinutes,
    DWORD *pdwHours
    )
{
    if (var.vt != VT_I8)
    {
        return E_INVALIDARG;
    }

    ULONGLONG ullTimeCode = (ULONGLONG)var.hVal.QuadPart;

    // The lower DWORD holds the time code; the upper DWORD holds the range.
    DWORD dwTimecode = (DWORD)(ullTimeCode & 0xFFFFFFFF);
    *pdwRange = (DWORD)(ullTimeCode >> 32);

    *pdwFrames  =    dwTimecode & 0x0000000F;
    *pdwFrames  += (( dwTimecode & 0x000000F0) >> 4 )  * 10;
    *pdwSeconds =  (  dwTimecode & 0x00000F00) >> 8;
    *pdwSeconds += (( dwTimecode & 0x0000F000) >> 12 ) * 10;
    *pdwMinutes =  (  dwTimecode & 0x000F0000) >> 16;
    *pdwMinutes += (( dwTimecode & 0x00F00000) >> 20 ) * 10;
    *pdwHours   =  (  dwTimecode & 0x0F000000) >> 24;
    *pdwHours   += (( dwTimecode & 0xF0000000) >> 28 ) * 10;

    return S_OK;
}
```
A timed-text object represents a component of timed text.
Gets the offset to the cue time.
Retrieves a list of all timed-text tracks registered with the
Gets the list of active timed-text tracks in the timed-text component.
Gets the list of all the timed-text tracks in the timed-text component.
Gets the list of the timed-metadata tracks in the timed-text component.
Enables or disables inband mode.
Determines whether inband mode is enabled.
Registers a timed-text notify object.
A reference to the
If this method succeeds, it returns
Selects or deselects a track of text in the timed-text component.
The identifier of the track to select.
Specifies whether to select or deselect a track of text. Specify TRUE to select the track or
If this method succeeds, it returns
Adds a timed-text data source.
A reference to the IMFByteStream interface of the data source.
Null-terminated wide-character string that contains the label of the data source.
Null-terminated wide-character string that contains the language of the data source.
A MF_TIMED_TEXT_TRACK_KIND value that specifies the kind of data source.
Specifies whether to add the data source as the default. Specify TRUE to add the default data source or FALSE otherwise.
Receives a reference to the unique identifier for the added track.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Adds a timed-text data source from the specified URL.
The URL of the timed-text data source.
Null-terminated wide-character string that contains the label of the data source.
Null-terminated wide-character string that contains the language of the data source.
A MF_TIMED_TEXT_TRACK_KIND value that specifies the kind of data source.
Specifies whether to add the data source as the default. Specify TRUE to add the default data source or FALSE otherwise.
Receives a reference to the unique identifier for the added track.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
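As a sketch of how the URL-based overload might be used to add an external subtitle track, the URL, label, and language strings below are placeholder values, and `timedText` is assumed to be a valid IMFTimedText pointer:

```cpp
// Minimal sketch: add an out-of-band subtitle track from a URL and remember
// its identifier. The URL and strings are placeholder values.
#include <mfmediaengine.h>

HRESULT AddSubtitleTrack(IMFTimedText *timedText, DWORD *addedTrackId)
{
    return timedText->AddDataSourceFromUrl(
        L"https://example.com/captions.vtt",   // placeholder URL
        L"English captions",                   // label
        L"en-us",                              // language
        MF_TIMED_TEXT_TRACK_KIND_SUBTITLES,    // kind of data source
        TRUE,                                  // make it the default track
        addedTrackId);                         // receives the track identifier
}
```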
Removes the timed-text track with the specified identifier.
The identifier of the track to remove.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Get the identifier for a track by calling GetId.
When a track is removed, all buffered data from the track is also removed.
Gets the offset to the cue time.
A reference to a variable that receives the offset to the cue time.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Sets the offset to the cue time.
The offset to the cue time.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Retrieves a list of all timed-text tracks registered with the timed-text component.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the list of active timed-text tracks in the timed-text component.
A reference to a memory block that receives a reference to the IMFTimedTextTrackList interface for the list of active timed-text tracks.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the list of all the timed-text tracks in the timed-text component.
A reference to a memory block that receives a reference to the IMFTimedTextTrackList interface for the list of timed-text tracks.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the list of the timed-metadata tracks in the timed-text component.
A reference to a memory block that receives a reference to the IMFTimedTextTrackList interface for the list of timed-metadata tracks.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables or disables inband mode.
Specifies whether inband mode is enabled. If TRUE, inband mode is enabled. If FALSE, inband mode is disabled.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether inband mode is enabled.
Returns whether inband mode is enabled. If TRUE, inband mode is enabled. If FALSE, inband mode is disabled.
Represents the data content of a timed-text object.
Gets the data content of the timed-text object.
A reference to a memory block that receives a reference to the data content of the timed-text object.
A reference to a variable that receives the length in bytes of the data content.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the data content of the timed-text cue.
Gets the identifier of a timed-text cue.
The identifier retrieved by this method is dynamically generated by the system and is guaranteed to uniquely identify a cue within a single timed-text track. It is not guaranteed to be unique across tracks. If a cue already has an identifier that is provided in the text-track data format, that identifier can be retrieved by calling GetOriginalId.
Gets the kind of timed-text cue.
Gets the start time of the cue in the track.
Gets the duration time of the cue in the track.
Gets the identifier of the timed-text cue.
Gets the data content of the timed-text cue.
Gets info about the display region of the timed-text cue.
Gets info about the style of the timed-text cue.
Gets the number of lines of text in the timed-text cue.
Gets the identifier of a timed-text cue.
The identifier of a timed-text cue.
The identifier retrieved by this method is dynamically generated by the system and is guaranteed to uniquely identify a cue within a single timed-text track. It is not guaranteed to be unique across tracks. If a cue already has an identifier that is provided in the text-track data format, that identifier can be retrieved by calling GetOriginalId.
Gets the cue identifier that is provided in the text-track data format, if available.
The cue identifier that is provided in the text-track data format.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method retrieves an identifier for the cue that is included in the source data, if one was specified. The system dynamically generates identifiers for cues that are guaranteed to be unique within a single timed-text track. To obtain this system-generated identifier, call GetId.
Gets the kind of timed-text cue.
Returns a MF_TIMED_TEXT_TRACK_KIND value that specifies the kind of timed-text cue.
Gets the start time of the cue in the track.
Returns the start time of the cue in the track.
Gets the duration time of the cue in the track.
Returns the duration time of the cue in the track.
Gets the identifier of the timed-text cue.
Returns the identifier of the timed-text cue.
Gets the data content of the timed-text cue.
A reference to a memory block that receives a reference to the IMFTimedTextBinary interface that contains the data content of the timed-text cue.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets info about the display region of the timed-text cue.
A reference to a memory block that receives a reference to the IMFTimedTextRegion interface that contains info about the display region of the timed-text cue.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets info about the style of the timed-text cue.
A reference to a memory block that receives a reference to the IMFTimedTextStyle interface that contains info about the style of the timed-text cue.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the number of lines of text in the timed-text cue.
Returns the number of lines of text.
Gets a line of text in the cue from the index of the line.
The index of the line of text in the cue to retrieve.
A reference to a memory block that receives a reference to the IMFTimedTextFormattedText interface for the line of text in the cue.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Represents a block of formatted timed-text.
Gets the number of subformats in the formatted timed-text object.
Gets the text in the formatted timed-text object.
A reference to a variable that receives the null-terminated wide-character string that contains the text.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the number of subformats in the formatted timed-text object.
Returns the number of subformats.
Gets a subformat in the formatted timed-text object.
The index of the subformat in the formatted timed-text object.
A reference to a variable that receives the first character of the subformat.
A reference to a variable that receives the length, in characters, of the subformat.
A reference to a memory block that receives a reference to the IMFTimedTextStyle interface that describes the style of the subformat.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Interface that defines callbacks for Microsoft Media Foundation timed-text notifications.
Called when a text track is added.
The identifier of the track that was added.
Called when a text track is removed.
The identifier of the track that was removed.
Called when a track is selected or deselected.
The identifier of the track that was selected or deselected.
TRUE if the track was selected; FALSE if the track was deselected.
Called when an error occurs in a text track.
An MF_TIMED_TEXT_ERROR_CODE value representing the last error.
The extended error code for the last error.
The identifier of the track on which the error occurred.
Called when a cue event occurs in a text track.
A value specifying the type of event that has occurred.
The current time when the cue event occurred.
The IMFTimedTextCue object for the cue event.
Resets the timed-text-notify object.
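The callbacks above are implemented by the application on a class deriving from IMFTimedTextNotify and registered with RegisterNotifications. A minimal sketch that logs a few events follows; the IUnknown bookkeeping is compressed, and production code would typically use a helper such as Microsoft::WRL::RuntimeClass instead.

```cpp
// Minimal sketch of an IMFTimedTextNotify implementation that logs events.
// IUnknown reference counting is shown in compressed form for brevity.
#include <mfmediaengine.h>
#include <cstdio>

class TimedTextNotify : public IMFTimedTextNotify
{
    long m_refCount = 1;
public:
    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_IUnknown || riid == __uuidof(IMFTimedTextNotify))
        {
            *ppv = static_cast<IMFTimedTextNotify *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release() override
    {
        ULONG count = InterlockedDecrement(&m_refCount);
        if (count == 0) delete this;
        return count;
    }

    // IMFTimedTextNotify
    void STDMETHODCALLTYPE TrackAdded(DWORD trackId) override
    {
        wprintf(L"track %u added\n", trackId);
    }
    void STDMETHODCALLTYPE TrackRemoved(DWORD trackId) override
    {
        wprintf(L"track %u removed\n", trackId);
    }
    void STDMETHODCALLTYPE TrackSelected(DWORD trackId, BOOL selected) override
    {
        wprintf(L"track %u %s\n", trackId, selected ? L"selected" : L"deselected");
    }
    void STDMETHODCALLTYPE TrackReadyStateChanged(DWORD trackId) override { (void)trackId; }
    void STDMETHODCALLTYPE Error(MF_TIMED_TEXT_ERROR_CODE errorCode,
                                 HRESULT extendedErrorCode, DWORD sourceTrackId) override
    {
        wprintf(L"error %d (hr=0x%08X) on track %u\n",
                errorCode, extendedErrorCode, sourceTrackId);
    }
    void STDMETHODCALLTYPE Cue(MF_TIMED_TEXT_CUE_EVENT cueEvent, double currentTime,
                               IMFTimedTextCue *cue) override
    {
        // The cue event type indicates whether the cue became active or inactive.
        wprintf(L"cue event %d at %.3f s\n", cueEvent, currentTime);
        (void)cue;
    }
    void STDMETHODCALLTYPE Reset() override {}
};
```

After creating an instance, pass it to IMFTimedText::RegisterNotifications to begin receiving callbacks.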
Represents the display region of a timed-text object.
Gets the background color of the region.
Gets the writing mode of the region.
Gets the display alignment of the region.
Determines whether a clip of text overflowed the region.
Determines whether the word wrap feature is enabled in the region.
Gets the Z-index (depth) of the region.
Gets the scroll mode of the region.
Gets the name of the region.
A reference to a variable that receives the null-terminated wide-character string that contains the name of the region.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the position of the region.
A reference to a variable that receives the X-coordinate of the position.
A reference to a variable that receives the Y-coordinate of the position.
A reference to a variable that receives a MF_TIMED_TEXT_UNIT_TYPE value that specifies the units in which the position is measured.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the extent of the region.
A reference to a variable that receives the width of the region.
A reference to a variable that receives the height of the region.
A reference to a variable that receives a MF_TIMED_TEXT_UNIT_TYPE value that specifies the units in which the extent is measured.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the background color of the region.
A reference to a variable that receives a MFARGB structure that describes the background color of the region.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the writing mode of the region.
A reference to a variable that receives a MF_TIMED_TEXT_WRITING_MODE value that specifies the writing mode of the region.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the display alignment of the region.
A reference to a variable that receives a MF_TIMED_TEXT_DISPLAY_ALIGNMENT value that specifies the display alignment of the region.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the height of each line of text in the region.
A reference to a variable that receives the height of each line of text in the region.
A reference to a variable that receives a MF_TIMED_TEXT_UNIT_TYPE value that specifies the units in which the line height is measured.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether a clip of text overflowed the region.
A reference to a variable that receives a value that specifies whether a clip of text overflowed the region. The variable specifies TRUE if the clip overflowed; otherwise, FALSE.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the padding that surrounds the region.
A reference to a variable that receives the padding before the start of the region.
A reference to a variable that receives the padding at the start of the region.
A reference to a variable that receives the padding after the end of the region.
A reference to a variable that receives the padding at the end of the region.
A reference to a variable that receives a MF_TIMED_TEXT_UNIT_TYPE value that specifies the units in which the padding is measured.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether the word wrap feature is enabled in the region.
A reference to a variable that receives a value that specifies whether the word wrap feature is enabled in the region. The variable specifies TRUE if word wrap is enabled; otherwise, FALSE.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the Z-index (depth) of the region.
A reference to a variable that receives the Z-index (depth) of the region.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the scroll mode of the region.
A reference to a variable that receives a MF_TIMED_TEXT_SCROLL_MODE value that specifies the scroll mode of the region.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the color of the timed-text style.
Determines whether the timed-text style is external.
Gets the color of the timed-text style.
Gets the background color of the timed-text style.
Determines whether the style of timed text always shows the background.
Gets the font style of the timed-text style.
Determines whether the style of timed text is bold.
Determines whether the right-to-left writing mode of the timed-text style is enabled.
Gets the text alignment of the timed-text style.
Gets how text is decorated for the timed-text style.
Gets the name of the timed-text style.
A reference to a variable that receives the null-terminated wide-character string that contains the name of the style.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether the timed-text style is external.
Returns whether the timed-text style is external. If TRUE, the timed-text style is external; otherwise, FALSE.
Gets the font family of the timed-text style.
A reference to a variable that receives the null-terminated wide-character string that contains the font family of the style.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the font size of the timed-text style.
A reference to a variable that receives the font size of the timed-text style.
A reference to a variable that receives a MF_TIMED_TEXT_UNIT_TYPE value that specifies the units in which the font size is measured.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the color of the timed-text style.
A reference to a variable that receives a MFARGB structure that describes the color of the timed-text style.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the background color of the timed-text style.
A reference to a variable that receives a MFARGB structure that describes the background color of the timed-text style.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether the style of timed text always shows the background.
A reference to a variable that receives a value that specifies whether the style of timed text always shows the background. The variable specifies TRUE if the background is always shown; otherwise, FALSE.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the font style of the timed-text style.
A reference to a variable that receives a MF_TIMED_TEXT_FONT_STYLE value that specifies the font style.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether the style of timed text is bold.
A reference to a variable that receives a value that specifies whether the style of timed text is bold. The variable specifies TRUE if the style is bold; otherwise, FALSE.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Determines whether the right-to-left writing mode of the timed-text style is enabled.
A reference to a variable that receives a value that specifies whether the right-to-left writing mode is enabled. The variable specifies TRUE if the right-to-left writing mode is enabled; otherwise, FALSE.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the text alignment of the timed-text style.
A reference to a variable that receives a MF_TIMED_TEXT_ALIGNMENT value that specifies the text alignment.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets how text is decorated for the timed-text style.
A reference to a variable that receives a combination of MF_TIMED_TEXT_DECORATION values that specify how the text is decorated.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the text outline for the timed-text style.
A reference to a variable that receives a MFARGB structure that describes the color of the outline.
A reference to a variable that receives the thickness of the outline.
A reference to a variable that receives the blur radius of the outline.
A reference to a variable that receives a MF_TIMED_TEXT_UNIT_TYPE value that specifies the units in which the thickness and blur radius are measured.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Represents a track of timed text.
Gets the identifier of the track of timed text.
Sets the label of a timed-text track.
Gets the kind of timed-text track.
Determines whether the timed-text track is inband.
Determines whether the timed-text track is active.
Gets a value indicating the error type of the latest error associated with the track.
Gets the extended error code for the latest error associated with the track.
If the most recent error was associated with a track, this value will be the same
Gets a
Gets the identifier of the track of timed text.
Returns the identifier of the track.
Gets the label of the track.
A reference to a variable that receives the null-terminated wide-character string that contains the label of the track.
If this method succeeds, it returns
Sets the label of a timed-text track.
A reference to a null-terminated wide-character string that contains the label of the track.
If this method succeeds, it returns
Gets the language of the track.
A reference to a variable that receives the null-terminated wide-character string that contains the language of the track.
If this method succeeds, it returns
Gets the kind of timed-text track.
Returns a
Determines whether the timed-text track is inband.
Returns whether the timed-text track is inband. If TRUE, the timed-text track is inband; otherwise,
Gets the in-band metadata of the track.
A reference to a variable that receives the null-terminated wide-character string that contains the in-band metadata of the track.
If this method succeeds, it returns
Determines whether the timed-text track is active.
Returns whether the timed-text track is active. If TRUE, the timed-text track is active; otherwise,
Gets a value indicating the error type of the latest error associated with the track.
A value indicating the error type of the latest error associated with the track.
Gets the extended error code for the latest error associated with the track.
The extended error code for the latest error associated with the track.
If the most recent error was associated with a track, this value will be the same
Gets a
A
If this method succeeds, it returns
Represents a list of timed-text tracks.
Gets the length, in tracks, of the timed-text-track list.
Gets the length, in tracks, of the timed-text-track list.
Returns the length, in tracks, of the timed-text-track list.
Gets a text track in the list from the index of the track.
The index of the track in the list to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Gets a text track in the list from the identifier of the track.
The identifier of the track in the list to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
Provides a timer that invokes a callback at a specified time.
The presentation clock exposes this interface. To get a reference to the interface, call QueryInterface.
Sets a timer that invokes a callback at the specified time.
Bitwise OR of zero or more flags from the
The time at which the timer should fire, in units of the clock's frequency. The time is either absolute or relative to the current time, depending on the value of dwFlags.
Pointer to the
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The clock was shut down. |
| The method succeeded, but the clock is stopped. |
If the clock is stopped, the method returns MF_S_CLOCK_STOPPED. The callback will not be invoked until the clock is started.
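The stopped-clock behavior described above can be modeled in a short standalone sketch. Everything here (the `ClockTimerModel` type and the `kTimerOk`/`kClockStopped` codes) is illustrative, standing in for the presentation clock, `S_OK`, and `MF_S_CLOCK_STOPPED`; it is not the Media Foundation API.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Illustrative status codes; stand-ins for S_OK and MF_S_CLOCK_STOPPED.
constexpr int kTimerOk = 0;
constexpr int kClockStopped = 1;

struct ClockTimerModel {
    bool running = false;
    std::vector<std::function<void()>> pending;

    // SetTimer analog: if the clock is stopped, the call still succeeds,
    // but reports the stopped state and defers the callback until the
    // clock starts.
    int setTimer(std::function<void()> callback) {
        if (!running) {
            pending.push_back(std::move(callback));
            return kClockStopped;
        }
        callback(); // a real clock would dispatch at the requested time
        return kTimerOk;
    }

    // Starting the clock fires any timers that were set while stopped.
    void start() {
        running = true;
        for (auto& cb : pending) cb();
        pending.clear();
    }
};
```

The point of the model is the contract, not the scheduling: a caller must treat the stopped-clock return as success, and must not expect the callback before the clock starts.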
Cancels a timer that was set using the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Because the timer is dispatched asynchronously, the application's timer callback might get invoked even if this method succeeds.
Creates a fully loaded topology from the input partial topology.
This method creates any intermediate transforms that are needed to complete the topology. It also sets the input and output media types on all of the objects in the topology. If the method succeeds, the full topology is returned in the ppOutputTopo parameter.
You can use the pCurrentTopo parameter to provide a full topology that was previously loaded. If this topology contains objects that are needed in the new topology, the topology loader can re-use them without creating them again. This caching can potentially make the process faster. The objects from pCurrentTopo will not be reconfigured, so you can specify a topology that is actively streaming data. For example, while a topology is still running, you can pre-load the next topology.
Before calling this method, you must ensure that the output nodes in the partial topology have valid
Creates a fully loaded topology from the input partial topology.
A reference to the
Receives a reference to the
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more output nodes contain |
This method creates any intermediate transforms that are needed to complete the topology. It also sets the input and output media types on all of the objects in the topology. If the method succeeds, the full topology is returned in the ppOutputTopo parameter.
You can use the pCurrentTopo parameter to provide a full topology that was previously loaded. If this topology contains objects that are needed in the new topology, the topology loader can re-use them without creating them again. This caching can potentially make the process faster. The objects from pCurrentTopo will not be reconfigured, so you can specify a topology that is actively streaming data. For example, while a topology is still running, you can pre-load the next topology.
Before calling this method, you must ensure that the output nodes in the partial topology have valid
Represents a topology. A topology describes a collection of media sources, sinks, and transforms that are connected in a certain order. These objects are represented within the topology by topology nodes, which expose the
To create a topology, call
Gets the identifier of the topology.
Gets the number of nodes in the topology.
Gets the source nodes in the topology.
Gets the output nodes in the topology.
Gets the identifier of the topology.
Receives the identifier, as a TOPOID value.
If this method succeeds, it returns
Adds a node to the topology.
Pointer to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pNode is invalid, possibly because the node already exists in the topology. |
Removes a node from the topology.
Pointer to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified node is not a member of this topology. |
This method does not destroy the node, so the
The method breaks any connections between the specified node and other nodes.
Gets the number of nodes in the topology.
Receives the number of nodes.
If this method succeeds, it returns
Gets a node in the topology, specified by index.
The zero-based index of the node. To get the number of nodes in the topology, call
Receives a reference to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index is less than zero. |
| No node can be found at the index wIndex. |
Removes all nodes from the topology.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You do not need to clear a topology before disposing of it. The Clear method is called automatically when the topology is destroyed.
Converts this topology into a copy of another topology.
A reference to the
If this method succeeds, it returns
This method does the following:
Gets a node in the topology, specified by node identifier.
The identifier of the node to retrieve. To get a node's identifier, call
Receives a reference to the node's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The topology does not contain a node with this identifier. |
Gets the source nodes in the topology.
Receives a reference to the
If this method succeeds, it returns
Gets the output nodes in the topology.
Receives a reference to the
If this method succeeds, it returns
Represents a node in a topology. The following node types are supported:
To create a new node, call the
Sets the object associated with this node.
All node types support this method, but the object reference is not used by every node type.
Node type | Object reference |
---|---|
Source node. | Not used. |
Transform node. | |
Output node | |
Tee node. | Not used. |
If the object supports
Gets the object associated with this node.
Retrieves the node type.
Retrieves or sets the identifier of the node.
When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To find a node in a topology by its identifier, call
Retrieves the number of input streams that currently exist on this node.
The input streams may or may not be connected to output streams on other nodes. To get the node that is connected to a specified input stream, call
The
Retrieves the number of output streams that currently exist on this node.
The output streams may or may not be connected to input streams on other nodes. To get the node that is connected to a specific output stream on this node, call
The
Sets the object associated with this node.
A reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
All node types support this method, but the object reference is not used by every node type.
Node type | Object reference |
---|---|
Source node. | Not used. |
Transform node. | |
Output node | |
Tee node. | Not used. |
If the object supports
Gets the object associated with this node.
Receives a reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no object associated with this node. |
Retrieves the node type.
Receives the node type, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the identifier of the node.
Receives the identifier.
If this method succeeds, it returns
When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To find a node in a topology by its identifier, call
Sets the identifier for the node.
The identifier for the node.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The TOPOID has already been set for this object. |
When a node is first created, it is assigned an identifier. Typically there is no reason for an application to override the identifier. Within a topology, each node identifier should be unique.
Retrieves the number of input streams that currently exist on this node.
Receives the number of input streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The input streams may or may not be connected to output streams on other nodes. To get the node that is connected to a specified input stream, call
The
Retrieves the number of output streams that currently exist on this node.
Receives the number of output streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The output streams may or may not be connected to input streams on other nodes. To get the node that is connected to a specific output stream on this node, call
The
Connects an output stream from this node to the input stream of another node.
Zero-based index of the output stream on this node.
Pointer to the
Zero-based index of the input stream on the other node.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The method failed. |
| Invalid parameter. |
Node connections represent data flow from one node to the next. The streams are logical, and are specified by index.
If the node is already connected at the specified output, the method breaks the existing connection. If dwOutputIndex or dwInputIndexOnDownstreamNode specify streams that do not exist yet, the method adds as many streams as needed.
This method checks for certain invalid conditions:
An output node cannot have any output connections. If you call this method on an output node, the method returns E_FAIL.
A node cannot be connected to itself. If pDownstreamNode specifies the same node as the method call, the method returns E_INVALIDARG.
However, if the method succeeds, it does not guarantee that the node connection is valid. It is possible to create a partial topology that the topology loader cannot resolve. If so, the
To break an existing node connection, call
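The connection rules above (streams created on demand up to the requested index, an existing connection broken on reconnect, output nodes rejected, self-connections rejected) can be sketched as a standalone model. The types and return codes here are illustrative stand-ins, not the `IMFTopologyNode` interface or real `HRESULT` values.

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-ins for HRESULT codes; not the real values.
constexpr int kOk = 0;
constexpr int kFail = -1;        // E_FAIL analog
constexpr int kInvalidArg = -2;  // E_INVALIDARG analog

enum class NodeKind { Source, Transform, Output, Tee };

struct NodeModel {
    NodeKind kind;
    // outputs[i] holds the downstream node connected at output stream i
    // (nullptr when unconnected).
    std::vector<NodeModel*> outputs;
};

// Models the documented ConnectOutput behavior.
int connectOutput(NodeModel& node, std::size_t outputIndex, NodeModel* downstream) {
    if (node.kind == NodeKind::Output) return kFail;    // output nodes have no outputs
    if (downstream == &node) return kInvalidArg;        // no self-connections
    if (outputIndex >= node.outputs.size())
        node.outputs.resize(outputIndex + 1, nullptr);  // create streams up to the index
    node.outputs[outputIndex] = downstream;             // replaces any prior connection
    return kOk;
}
```

Note that, as the remarks say, success here does not validate the topology as a whole; the model only enforces the local checks the method itself performs.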
Disconnects an output stream on this node.
Zero-based index of the output stream to disconnect.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwOutputIndex parameter is out of range. |
| The specified output stream is not connected to another node. |
If the specified output stream is connected to another node, this method breaks the connection.
Retrieves the node that is connected to a specified input stream on this node.
Zero-based index of an input stream on this node.
Receives a reference to the
Receives the index of the output stream that is connected to this node's input stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index is out of range. |
| The specified input stream is not connected to another node. |
Retrieves the node that is connected to a specified output stream on this node.
Zero-based index of an output stream on this node.
Receives a reference to the
Receives the index of the input stream that is connected to this node's output stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index is out of range. |
| The specified input stream is not connected to another node. |
Sets the preferred media type for an output stream on this node.
Zero-based index of the output stream.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node is an output node. |
The preferred type is a hint for the topology loader.
Do not call this method after loading a topology or setting a topology on the Media Session. Changing the preferred type on a running topology can cause connection errors.
If no output stream exists at the specified index, the method creates new streams up to and including the specified index number.
Output nodes cannot have outputs. If this method is called on an output node, it returns E_NOTIMPL.
Retrieves the preferred media type for an output stream on this node.
Zero-based index of the output stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node does not have a preferred output type. |
| Invalid stream index. |
| This node is an output node. |
Output nodes cannot have outputs. If this method is called on an output node, it returns E_NOTIMPL.
The preferred output type provides a hint to the topology loader. In a fully resolved topology, there is no guarantee that every topology node will have a preferred output type. To get the actual media type for a node, you must get a reference to the node's underlying object. (For more information, see
Sets the preferred media type for an input stream on this node.
Zero-based index of the input stream.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node is a source node. |
The preferred type is a hint for the topology loader.
Do not call this method after loading a topology or setting a topology on the Media Session. Changing the preferred type on a running topology can cause connection errors.
If no input stream exists at the specified index, the method creates new streams up to and including the specified index number.
Source nodes cannot have inputs. If this method is called on a source node, it returns E_NOTIMPL.
Retrieves the preferred media type for an input stream on this node.
Zero-based index of the input stream.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This node does not have a preferred input type. |
| Invalid stream index. |
| This node is a source node. |
Source nodes cannot have inputs. If this method is called on a source node, it returns E_NOTIMPL.
The preferred input type provides a hint to the topology loader. In a fully resolved topology, there is no guarantee that every topology node will have a preferred input type. To get the actual media type for a node, you must get a reference to the node's underlying object. (For more information, see
Copies the data from another topology node into this node.
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The node types do not match. |
The two nodes must have the same node type. To get the node type, call
This method copies the object reference, preferred types, and attributes from pNode to this node. It also copies the TOPOID that uniquely identifies each node in a topology. It does not duplicate any of the connections from pNode to other nodes.
The purpose of this method is to copy nodes from one topology to another. Do not use duplicate nodes within the same topology.
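The copy semantics described above (object reference, preferred types, and attributes copied, along with the TOPOID, but never the connections) can be sketched in a small model. The `TopoNodeModel` type and return codes are illustrative stand-ins, not the `IMFTopologyNode` API.

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative return codes; not real HRESULT values.
constexpr int kCloneOk = 0;
constexpr int kTypeMismatch = -1; // analog of the node-type mismatch error

struct TopoNodeModel {
    int type;                               // node types must match to clone
    unsigned long long topoId = 0;          // TOPOID analog
    std::map<std::string, int> attributes;  // attribute store analog
    std::vector<TopoNodeModel*> connections;
};

// Models the documented CloneFrom behavior.
int cloneFrom(TopoNodeModel& dst, const TopoNodeModel& src) {
    if (dst.type != src.type) return kTypeMismatch;
    dst.topoId = src.topoId;         // the TOPOID is copied
    dst.attributes = src.attributes; // attributes (and, in MF, the object
                                     // reference and preferred types) are copied
    // Connections are deliberately NOT copied, per the documented behavior.
    return kCloneOk;
}
```

Because the TOPOID is copied too, the clone of a node identifies as the same node, which is why duplicating nodes within one topology is disallowed.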
Updates the attributes of one or more nodes in the Media Session's current topology.
The Media Session exposes this interface as a service. To get a reference to the interface, call
Currently the only attribute that can be updated is the
Updates the attributes of one or more nodes in the current topology.
Reserved.
The number of elements in the pUpdates array.
Pointer to an array of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Currently the only attribute that can be updated is the
Enables a custom video mixer or video presenter to get interface references from the Enhanced Video Renderer (EVR). The mixer can also use this interface to get interface references from the presenter, and the presenter can use it to get interface references from the mixer.
To use this interface, implement the
Retrieves an interface from the enhanced video renderer (EVR), or from the video mixer or video presenter.
Specifies the scope of the search. Currently this parameter is ignored. Use the value
Reserved, must be zero.
Service
Interface identifier of the requested interface.
Array of interface references. If the method succeeds, each member of the array contains either a valid interface reference or
Pointer to a value that specifies the size of the ppvObjects array. The value must be at least 1. In the current implementation, there is no reason to specify an array size larger than one element. The value is not changed on output.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The requested interface is not available. |
| The method was not called from inside the |
| The object does not support the specified service |
This method can be called only from inside the
The presenter can use this method to query the EVR and the mixer. The mixer can use it to query the EVR and the presenter. Which objects are queried depends on the caller and the service
Caller | Service | Objects queried |
---|---|---|
Presenter | MR_VIDEO_RENDER_SERVICE | EVR |
Presenter | MR_VIDEO_MIXER_SERVICE | Mixer |
Mixer | MR_VIDEO_RENDER_SERVICE | Presenter and EVR |
The following interfaces are available from the EVR:
IMediaEventSink. This interface is documented in the DirectShow SDK documentation.
The following interfaces are available from the mixer:
Initializes a video mixer or presenter. This interface is implemented by mixers and presenters, and enables them to query the enhanced video renderer (EVR) for interface references.
When the EVR loads the video mixer and the video presenter, the EVR queries the object for this interface and calls InitServicePointers. Inside the InitServicePointers method, the object can query the EVR for interface references.
Signals the mixer or presenter to query the enhanced video renderer (EVR) for interface references.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The
When the EVR calls
Signals the object to release the interface references obtained from the enhanced video renderer (EVR).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
After this method is called, any interface references obtained during the previous call to
Tracks the reference counts on a video media sample. Video samples created by the
Use this interface to determine whether it is safe to delete or re-use the buffer contained in a sample. One object assigns itself as the owner of the video sample by calling SetAllocator. When all objects release their reference counts on the sample, the owner's callback method is invoked.
Sets the owner for the sample.
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The owner was already set. This method cannot be called twice on the sample. |
When this method is called, the sample holds an additional reference count on itself. When every other object releases its reference counts on the sample, the sample invokes the pSampleAllocator callback method. To get a reference to the sample, call
After the callback is invoked, the sample clears the callback. To reinstate the callback, you must call SetAllocator again.
It is safe to pass in the sample's
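The ownership scheme above can be modeled without COM: after SetAllocator, the sample holds one extra reference on itself, the owner's callback fires exactly once when every other reference is released, and the callback is then cleared until it is reinstated. The `TrackedSampleModel` type below is an illustrative stand-in, not the real interfaces.

```cpp
#include <functional>
#include <utility>

// Simplified model of the tracked-sample allocator callback semantics.
struct TrackedSampleModel {
    int refCount = 1;   // the creator's initial reference
    bool ownerSet = false;
    std::function<void()> onAllRefsReleased; // allocator callback analog

    // SetAllocator analog: may only be called once; the sample takes an
    // extra reference on itself that is dropped when the callback fires.
    bool setAllocator(std::function<void()> cb) {
        if (ownerSet) return false;          // cannot be called twice
        ownerSet = true;
        onAllRefsReleased = std::move(cb);
        ++refCount;                          // the sample's self-reference
        return true;
    }

    void addRef() { ++refCount; }

    void release() {
        // When only the sample's own self-reference remains, notify the
        // owner and clear the callback (it must be reinstated to fire again).
        if (--refCount == 1 && ownerSet) {
            ownerSet = false;
            --refCount;                      // drop the self-reference
            auto cb = std::move(onAllRefsReleased);
            onAllRefsReleased = nullptr;
            if (cb) cb();
        }
    }
};
```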
Implemented by the transcode profile object.
The transcode profile stores configuration settings that the topology builder uses to generate the transcode topology for the output file. These caller-specified settings include audio and video stream properties, encoder settings, and container settings.
To create the transcode profile object, call
Gets or sets the audio stream settings that are currently set in the transcode profile.
If there are no audio attributes set in the transcode profile, the call to GetAudioAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Gets or sets the video stream settings that are currently set in the transcode profile.
If there are no container attributes set in the transcode profile, the GetVideoAttributes method succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Gets or sets the container settings that are currently set in the transcode profile.
If there are no container attributes set in the transcode profile, the call to GetContainerAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets audio stream configuration settings in the transcode profile.
To get a list of compatible audio media types supported by the Media Foundation transform (MFT) encoder, call
If this method succeeds, it returns
Gets the audio stream settings that are currently set in the transcode profile.
Receives a reference to the
If this method succeeds, it returns
If there are no audio attributes set in the transcode profile, the call to GetAudioAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets video stream configuration settings in the transcode profile.
For example code, see
If this method succeeds, it returns
Gets the video stream settings that are currently set in the transcode profile.
Receives a reference to the
If this method succeeds, it returns
If there are no container attributes set in the transcode profile, the GetVideoAttributes method succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets container configuration settings in the transcode profile.
For example code, see
If this method succeeds, it returns
Gets the container settings that are currently set in the transcode profile.
Receives a reference to the
If this method succeeds, it returns
If there are no container attributes set in the transcode profile, the call to GetContainerAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets the name of the encoded output file.
The media sink will create a local file with the specified file name.
Alternatively, you can call
Sets the name of the encoded output file.
The media sink will create a local file with the specified file name.
Alternatively, you can call
Sets an output byte stream for the transcode media sink.
Call this method to provide a writeable byte stream that will receive the transcoded data.
Alternatively, you can provide the name of an output file, by calling
The pByteStreamActivate parameter must specify an activation object that creates a writeable byte stream. Internally, the transcode media sink calls
IMFByteStream *pByteStream = NULL;
hr = pByteStreamActivate->ActivateObject(IID_IMFByteStream, (void**)&pByteStream);
Currently, Microsoft Media Foundation does not provide any byte-stream activation objects. To use this method, an application must provide a custom implementation of
Sets the transcoding profile on the transcode sink activation object.
Before calling this method, initialize the profile object as follows:
Gets the media types for the audio and video streams specified in the transcode profile.
Before calling this method, call
Sets the name of the encoded output file.
Pointer to a null-terminated string that contains the name of the output file.
If this method succeeds, it returns
The media sink will create a local file with the specified file name.
Alternatively, you can call
Sets an output byte stream for the transcode media sink.
A reference to the
If this method succeeds, it returns
Call this method to provide a writeable byte stream that will receive the transcoded data.
Alternatively, you can provide the name of an output file by calling
The pByteStreamActivate parameter must specify an activation object that creates a writeable byte stream. Internally, the transcode media sink calls
IMFByteStream *pByteStream = NULL; HRESULT hr = pByteStreamActivate->ActivateObject(IID_IMFByteStream, (void**)&pByteStream);
Currently, Microsoft Media Foundation does not provide any byte-stream activation objects. To use this method, an application must provide a custom implementation of
Sets the transcoding profile on the transcode sink activation object.
A reference to the
If this method succeeds, it returns
Before calling this method, initialize the profile object as follows:
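A typical initialization can be sketched as follows; the AAC audio subtype and MP4 container chosen here are illustrative examples, not requirements, and error handling is abbreviated:

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: build a transcode profile to pass to the sink activation
// object. The format choices below are examples only.
IMFTranscodeProfile *pProfile = NULL;
HRESULT hr = MFCreateTranscodeProfile(&pProfile);

IMFAttributes *pAudioAttrs = NULL;
if (SUCCEEDED(hr))
    hr = MFCreateAttributes(&pAudioAttrs, 1);
if (SUCCEEDED(hr))
{
    pAudioAttrs->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC); // example subtype
    hr = pProfile->SetAudioAttributes(pAudioAttrs);
}

IMFAttributes *pContainerAttrs = NULL;
if (SUCCEEDED(hr))
    hr = MFCreateAttributes(&pContainerAttrs, 1);
if (SUCCEEDED(hr))
{
    pContainerAttrs->SetGUID(MF_TRANSCODE_CONTAINERTYPE,
                             MFTranscodeContainerType_MPEG4); // example container
    hr = pProfile->SetContainerAttributes(pContainerAttrs);
}
// Pass pProfile to the sink activation object, then release the
// local references.
```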
Gets the media types for the audio and video streams specified in the transcode profile.
A reference to an
If the method succeeds, the method assigns
If this method succeeds, it returns
Before calling this method, call
Implemented by all Media Foundation Transforms (MFTs).
Gets the global attribute store for this Media Foundation transform (MFT).
Use the
Implementation of this method is optional unless the MFT needs to support a particular set of attributes. Exception: Hardware-based MFTs must implement this method. See Hardware MFTs.
Queries whether the Media Foundation transform (MFT) is ready to produce output data.
If the method returns the
MFTs are not required to implement this method. If the method returns E_NOTIMPL, you must call ProcessOutput to determine whether the transform has output data.
If the MFT has more than one output stream, but it does not produce samples at the same time for each stream, it can set the
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStatus. See Creating Hybrid DMO/MFT Objects.
Gets the minimum and maximum number of input and output streams for this Media Foundation transform (MFT).
Receives the minimum number of input streams.
Receives the maximum number of input streams. If there is no maximum, receives the value MFT_STREAMS_UNLIMITED.
Receives the minimum number of output streams.
Receives the maximum number of output streams. If there is no maximum, receives the value MFT_STREAMS_UNLIMITED.
If this method succeeds, it returns
If the MFT has a fixed number of streams, the minimum and maximum values are the same.
It is not recommended to create an MFT that supports zero inputs or zero outputs. An MFT with no inputs or no outputs may not be compatible with the rest of the Media Foundation pipeline. You should create a Media Foundation sink or source for this purpose instead.
When an MFT is first created, it is not guaranteed to have the minimum number of streams. To find the actual number of streams, call
This method should not be called with
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamLimits. See Creating Hybrid DMO/MFT Objects.
Gets the current number of input and output streams on this Media Foundation transform (MFT).
Receives the number of input streams.
Receives the number of output streams.
If this method succeeds, it returns
The number of streams includes unselected streams; that is, streams with no media type or a
This method should not be called with
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamCount. See Creating Hybrid DMO/MFT Objects.
Gets the stream identifiers for the input and output streams on this Media Foundation transform (MFT).
Number of elements in the pdwInputIDs array.
Pointer to an array allocated by the caller. The method fills the array with the input stream identifiers. The array size must be at least equal to the number of input streams. To get the number of input streams, call
If the caller passes an array that is larger than the number of input streams, the MFT must not write values into the extra array entries.
Number of elements in the pdwOutputIDs array.
Pointer to an array allocated by the caller. The method fills the array with the output stream identifiers. The array size must be at least equal to the number of output streams. To get the number of output streams, call GetStreamCount.
If the caller passes an array that is larger than the number of output streams, the MFT must not write values into the extra array entries.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. See Remarks. |
| One or both of the arrays is too small. |
Stream identifiers are necessary because some MFTs can add or remove streams, so the index of a stream may not be unique. Therefore,
This method can return E_NOTIMPL if both of the following conditions are true:
This method must be implemented if any of the following conditions is true:
All input stream identifiers must be unique within an MFT, and all output stream identifiers must be unique. However, an input stream and an output stream can share the same identifier.
If the client adds an input stream, the client assigns the identifier, so the MFT must allow arbitrary identifiers, as long as they are unique. If the MFT creates an output stream, the MFT assigns the identifier.
By convention, if an MFT has exactly one fixed input stream and one fixed output stream, it should assign the identifier 0 to both streams.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamIDs. See Creating Hybrid DMO/MFT Objects.
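Putting GetStreamCount and GetStreamIDs together, a client might retrieve the identifiers as sketched below; the E_NOTIMPL fallback assumes that, for fixed-stream MFTs that do not implement GetStreamIDs, identifiers equal the stream indexes (pMFT is assumed to exist):

```cpp
#include <vector>
#include <mftransform.h>

// Sketch: get the stream counts, then the stream identifiers,
// falling back to 0-based indexes when GetStreamIDs returns E_NOTIMPL.
DWORD cInputs = 0, cOutputs = 0;
HRESULT hr = pMFT->GetStreamCount(&cInputs, &cOutputs);
if (SUCCEEDED(hr))
{
    std::vector<DWORD> inputIDs(cInputs), outputIDs(cOutputs);
    hr = pMFT->GetStreamIDs(cInputs, inputIDs.data(),
                            cOutputs, outputIDs.data());
    if (hr == E_NOTIMPL)
    {
        // Assumption: identifiers are the same as the stream indexes.
        for (DWORD i = 0; i < cInputs; ++i)  inputIDs[i] = i;
        for (DWORD i = 0; i < cOutputs; ++i) outputIDs[i] = i;
        hr = S_OK;
    }
}
```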
Gets the buffer requirements and other information for an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
It is valid to call this method before setting the media types.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputStreamInfo. See Creating Hybrid DMO/MFT Objects.
Gets the buffer requirements and other information for an output stream on this Media Foundation transform (MFT).
Output stream identifier. To get the list of stream identifiers, call
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. |
It is valid to call this method before setting the media types.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStreamInfo. See Creating Hybrid DMO/MFT Objects.
Gets the global attribute store for this Media Foundation transform (MFT).
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not support attributes. |
Use the
Implementation of this method is optional unless the MFT needs to support a particular set of attributes. Exception: Hardware-based MFTs must implement this method. See Hardware MFTs.
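As a sketch, a client might query the global attribute store for a single attribute; MF_TRANSFORM_ASYNC is used here only as an example, and pMFT is assumed to exist:

```cpp
#include <mfapi.h>
#include <mftransform.h>

// Sketch: read one attribute from the MFT's global attribute store.
// GetAttributes may return E_NOTIMPL if the MFT has no attributes.
IMFAttributes *pAttributes = NULL;
HRESULT hr = pMFT->GetAttributes(&pAttributes);
if (SUCCEEDED(hr))
{
    UINT32 fAsync = 0;
    // Returns MF_E_ATTRIBUTENOTFOUND if the attribute is not set.
    pAttributes->GetUINT32(MF_TRANSFORM_ASYNC, &fAsync);
    pAttributes->Release();
}
```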
Gets the attribute store for an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not support input stream attributes. |
| Invalid stream identifier. |
Implementation of this method is optional unless the MFT needs to support a particular set of attributes.
To get the attribute store for the entire MFT, call
Gets the attribute store for an output stream on this Media Foundation transform (MFT).
Output stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not support output stream attributes. |
| Invalid stream identifier. |
Implementation of this method is optional unless the MFT needs to support a particular set of attributes.
To get the attribute store for the entire MFT, call
Removes an input stream from this Media Foundation transform (MFT).
Identifier of the input stream to remove.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The transform has a fixed number of input streams. |
| The stream is not removable, or the transform currently has the minimum number of input streams it can support. |
| Invalid stream identifier. |
| The transform has unprocessed input buffers for the specified stream. |
If the transform has a fixed number of input streams, the method returns E_NOTIMPL.
An MFT might support this method but not allow certain input streams to be removed. If an input stream can be removed, the
If the transform still has unprocessed input for that stream, the method might succeed or it might return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTDeleteInputStream. See Creating Hybrid DMO/MFT Objects.
Adds one or more new input streams to this Media Foundation transform (MFT).
Number of streams to add.
Array of stream identifiers. The new stream identifiers must not match any existing input streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The MFT has a fixed number of input streams. |
If the new streams exceed the maximum number of input streams for this transform, the method returns E_INVALIDARG. To find the maximum number of input streams, call
If any of the new stream identifiers conflicts with an existing input stream, the method returns E_INVALIDARG.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTAddInputStreams. See Creating Hybrid DMO/MFT Objects.
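A minimal sketch; the identifier values are arbitrary examples chosen by the caller, and pMFT is assumed to exist:

```cpp
#include <mftransform.h>

// Sketch: add two input streams. The identifiers must not collide
// with any existing input stream ID; 100 and 101 are arbitrary.
DWORD newStreamIDs[2] = { 100, 101 };
HRESULT hr = pMFT->AddInputStreams(2, newStreamIDs);
```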
Gets an available media type for an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Index of the media type to retrieve. Media types are indexed from zero and returned in approximate order of preference.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not have a list of available input types. |
| Invalid stream identifier. |
| The dwTypeIndex parameter is out of range. |
| You must set the output types before setting the input types. |
The MFT defines a list of available media types for each input stream and orders them by preference. This method enumerates the available media types for an input stream. To enumerate the available types, increment dwTypeIndex until the method returns MF_E_NO_MORE_TYPES.
Setting the media type on one stream might change the available types for another stream, or change the preference order. However, an MFT is not required to update the list of available types dynamically. The only guaranteed way to test whether you can set a particular input type is to call
In some cases, an MFT cannot return a list of input types until one or more output types are set. If so, the method returns
An MFT is not required to implement this method. However, most MFTs should implement this method, unless the supported types are simple and can be discovered through the
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputAvailableType. See Creating Hybrid DMO/MFT Objects.
For encoders, after the output type is set, GetInputAvailableType must return a list of input types that are compatible with the current output type. This means that all types returned by GetInputAvailableType after the output type is set must be valid types for SetInputType.
Encoders should reject input types if the attributes of the input media type and output media type do not match, such as resolution setting with
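The enumeration pattern described above can be sketched as follows; pMFT and dwStreamID are assumed to exist, and error handling is abbreviated:

```cpp
#include <mftransform.h>

// Sketch: enumerate the available input types for one stream,
// incrementing the index until the MFT reports no more types.
IMFMediaType *pType = NULL;
DWORD dwTypeIndex = 0;
HRESULT hr = S_OK;
for (;;)
{
    hr = pMFT->GetInputAvailableType(dwStreamID, dwTypeIndex, &pType);
    if (FAILED(hr))  // MF_E_NO_MORE_TYPES ends the enumeration
        break;
    // Examine pType here; call SetInputType if it is acceptable.
    pType->Release();
    pType = NULL;
    ++dwTypeIndex;
}
```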
Gets an available media type for an output stream on this Media Foundation transform (MFT).
Output stream identifier. To get the list of stream identifiers, call
Index of the media type to retrieve. Media types are indexed from zero and returned in approximate order of preference.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT does not have a list of available output types. |
| Invalid stream identifier. |
| The dwTypeIndex parameter is out of range. |
| You must set the input types before setting the output types. |
The MFT defines a list of available media types for each output stream and orders them by preference. This method enumerates the available media types for an output stream. To enumerate the available types, increment dwTypeIndex until the method returns MF_E_NO_MORE_TYPES.
Setting the media type on one stream can change the available types for another stream (or change the preference order). However, an MFT is not required to update the list of available types dynamically. The only guaranteed way to test whether you can set a particular input type is to call
In some cases, an MFT cannot return a list of output types until one or more input types are set. If so, the method returns
An MFT is not required to implement this method. However, most MFTs should implement this method, unless the supported types are simple and can be discovered through the
This method can return a partial media type. A partial media type contains an incomplete description of a format, and is used to provide a hint to the caller. For example, a partial type might include just the major type and subtype GUIDs. However, after the client sets the input types on the MFT, the MFT should generally return at least one complete output type, which can be used without further modification. For more information, see Complete and Partial Media Types.
Some MFTs cannot provide an accurate list of output types until the MFT receives the first input sample. For example, the MFT might need to read the first packet header to deduce the format. An MFT should handle this situation as follows:
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputAvailableType. See Creating Hybrid DMO/MFT Objects.
Sets, tests, or clears the media type for an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Pointer to the
Zero or more flags from the _MFT_SET_TYPE_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT cannot use the proposed media type. |
| Invalid stream identifier. |
| The proposed type is not valid. This error code indicates that the media type itself is not configured correctly; for example, it might contain mutually contradictory attributes. |
| The MFT cannot switch types while processing data. Try draining or flushing the MFT. |
| You must set the output types before setting the input types. |
| The MFT could not find a suitable DirectX Video Acceleration (DXVA) configuration. |
This method can be used to set, test without setting, or clear the media type:
Setting the media type on one stream may change the acceptable types on another stream.
An MFT may require the caller to set one or more output types before setting the input type. If so, the method returns
If the MFT supports DirectX Video Acceleration (DXVA) but is unable to find a suitable DXVA configuration (for example, if the graphics driver does not have the right capabilities), the method should return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetInputType. See Creating Hybrid DMO/MFT Objects.
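A sketch of the test-then-set pattern, using the MFT_SET_TYPE_TEST_ONLY flag from _MFT_SET_TYPE_FLAGS; pMFT, dwStreamID, and pProposedType are assumed to exist:

```cpp
#include <mftransform.h>

// Sketch: test a proposed input type without committing to it,
// then set it for real if the MFT accepts it.
HRESULT hr = pMFT->SetInputType(dwStreamID, pProposedType,
                                MFT_SET_TYPE_TEST_ONLY);
if (SUCCEEDED(hr))
    hr = pMFT->SetInputType(dwStreamID, pProposedType, 0); // commit

// To clear the media type, pass NULL:
// pMFT->SetInputType(dwStreamID, NULL, 0);
```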
Sets, tests, or clears the media type for an output stream on this Media Foundation transform (MFT).
Output stream identifier. To get the list of stream identifiers, call
Pointer to the
Zero or more flags from the _MFT_SET_TYPE_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The transform cannot use the proposed media type. |
| Invalid stream identifier. |
| The proposed type is not valid. This error code indicates that the media type itself is not configured correctly; for example, it might contain mutually contradictory flags. |
| The MFT cannot switch types while processing data. Try draining or flushing the MFT. |
| You must set the input types before setting the output types. |
| The MFT could not find a suitable DirectX Video Acceleration (DXVA) configuration. |
This method can be used to set, test without setting, or clear the media type:
Setting the media type on one stream may change the acceptable types on another stream.
An MFT may require the caller to set one or more input types before setting the output type. If so, the method returns
If the MFT supports DirectX Video Acceleration (DXVA) but is unable to find a suitable DXVA configuration (for example, if the graphics driver does not have the right capabilities), the method should return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetOutputType. See Creating Hybrid DMO/MFT Objects.
Gets the current media type for an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The input media type has not been set. |
If the specified input stream does not yet have a media type, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputCurrentType. See Creating Hybrid DMO/MFT Objects.
Gets the current media type for an output stream on this Media Foundation transform (MFT).
Output stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The output media type has not been set. |
If the specified output stream does not yet have a media type, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputCurrentType. See Creating Hybrid DMO/MFT Objects.
Queries whether an input stream on this Media Foundation transform (MFT) can accept more data.
Input stream identifier. To get the list of stream identifiers, call
Receives a member of the _MFT_INPUT_STATUS_FLAGS enumeration, or zero. If the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The media type is not set on one or more streams. |
If the method returns the
Use this method to test whether the input stream is ready to accept more data, without incurring the overhead of allocating a new sample and calling ProcessInput.
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output (or both).
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputStatus. See Creating Hybrid DMO/MFT Objects.
Queries whether the Media Foundation transform (MFT) is ready to produce output data.
Receives a member of the _MFT_OUTPUT_STATUS_FLAGS enumeration, or zero. If the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| The media type is not set on one or more streams. |
If the method returns the
MFTs are not required to implement this method. If the method returns E_NOTIMPL, you must call ProcessOutput to determine whether the transform has output data.
If the MFT has more than one output stream, but it does not produce samples at the same time for each stream, it can set the
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStatus. See Creating Hybrid DMO/MFT Objects.
Sets the range of time stamps the client needs for output.
Specifies the earliest time stamp. The Media Foundation transform (MFT) will accept input until it can produce an output sample that begins at this time; or until it can produce a sample that ends at this time or later. If there is no lower bound, use the value MFT_OUTPUT_BOUND_LOWER_UNBOUNDED.
Specifies the latest time stamp. The MFT will not produce an output sample with time stamps later than this time. If there is no upper bound, use the value MFT_OUTPUT_BOUND_UPPER_UNBOUNDED.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| The media type is not set on one or more streams. |
This method can be used to optimize preroll, especially in formats that have gaps between time stamps, or formats where the data must start on a sync point, such as MPEG-2. Calling this method is optional, and implementation of this method by an MFT is optional. If the MFT does not implement the method, the return value is E_NOTIMPL.
If an MFT implements this method, it must limit its output data to the range of times specified by hnsLowerBound and hnsUpperBound. The MFT discards any input data that is not needed to produce output within this range. If the sample boundaries do not exactly match the range, the MFT should split the output samples, if possible. Otherwise, the output samples can overlap the range.
For example, suppose the output range is 100 to 150 milliseconds (ms), and the output format is video with each frame lasting 33 ms. A sample with a time stamp of 67 ms overlaps the range (67 + 33 = 100) and is produced as output. A sample with a time stamp of 66 ms is discarded (66 + 33 = 99). Similarly, a sample with a time stamp of 150 ms is produced as output, but a sample with a time stamp of 151 ms is discarded.
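The overlap rule in this example can be expressed as a small predicate; this is a sketch, the function name is illustrative, and all times are assumed to be in the same units as the bounds:

```cpp
#include <cassert>

// A sample with time stamp `start` and duration `dur` is produced as
// output when it overlaps the range [lower, upper]: it must end at or
// after the lower bound and start no later than the upper bound.
bool IsWithinOutputBounds(long long start, long long dur,
                          long long lower, long long upper)
{
    return (start + dur >= lower) && (start <= upper);
}
```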
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetOutputBounds. See Creating Hybrid DMO/MFT Objects.
Sends an event to an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| Invalid stream number. |
| The media type is not set on one or more streams. |
| The pipeline should not propagate the event. |
An MFT can handle sending the event downstream, or it can let the pipeline do this, as indicated by the return value:
To send the event downstream, the MFT adds the event to the collection object that is provided by the client in the pEvents member of the
Events must be serialized with the samples that come before and after them. Attach the event to the output sample that follows the event. (The pipeline will process the event first, and then the sample.) If an MFT holds back one or more samples between calls to
If an MFT does not hold back samples and does not need to examine any events, it can return E_NOTIMPL.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessEvent. See Creating Hybrid DMO/MFT Objects.
Sends a message to the Media Foundation transform (MFT).
The message to send, specified as a member of the
Message parameter. The meaning of this parameter depends on the message type.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. Applies to the |
| The media type is not set on one or more streams. |
Before calling this method, set the media types on all input and output streams.
The MFT might ignore certain message types. If so, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessMessage. See Creating Hybrid DMO/MFT Objects.
Delivers data to an input stream on this Media Foundation transform (MFT).
Input stream identifier. To get the list of stream identifiers, call
Pointer to the
Reserved. Must be zero.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid stream identifier. |
| The input sample requires a valid sample duration. To set the duration, call Some MFTs require that input samples have valid durations. Some MFTs do not require sample durations. |
| The input sample requires a time stamp. To set the time stamp, call Some MFTs require that input samples have valid time stamps. Some MFTs do not require time stamps. |
| The transform cannot process more input at this time. |
| The media type is not set on one or more streams. |
| The media type is not supported for DirectX Video Acceleration (DXVA). A DXVA-enabled decoder might return this error code. |
Note: If you are converting a DirectX Media Object (DMO) to an MFT, be aware that S_FALSE is not a valid return code for this method.

In most cases, if the method succeeds, the MFT stores the sample and holds a reference count on the
If the MFT already has enough input data to produce an output sample, it does not accept new input data, and ProcessInput returns
An exception to this rule is the
An MFT can process the input data in the ProcessInput method. However, most MFTs wait until the client calls ProcessOutput.
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output. It should never be in both states or neither state. An MFT should only accept as much input as it needs to generate at least one output sample, at which point ProcessInput returns
If an MFT encounters a non-fatal error in the input data, it can simply drop the data and attempt to recover when it gets more input data. To request more input data, the MFT returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessInput. See Creating Hybrid DMO/MFT Objects.
Generates output from the current input data.
Bitwise OR of zero or more flags from the _MFT_PROCESS_OUTPUT_FLAGS enumeration.
Number of elements in the pOutputSamples array. The value must be at least 1.
Pointer to an array of
Receives a bitwise OR of zero or more flags from the _MFT_PROCESS_OUTPUT_STATUS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ProcessOutput method was called on an asynchronous MFT that was not expecting this method call. |
| Invalid stream identifier in the dwStreamID member of one or more |
| The transform cannot produce output data until it receives more input data. |
| The format has changed on an output stream, or there is a new preferred format, or there is a new output stream. |
| You must set the media type on one or more streams of the MFT. |
Note: If you are converting a DirectX Media Object (DMO) to an MFT, be aware that S_FALSE is not a valid return code for this method.

The size of the pOutputSamples array must be equal to or greater than the number of selected output streams. The number of selected output streams equals the total number of output streams minus the number of deselected streams. A stream is deselected if it has the
This method generates output samples and can also generate events. If the method succeeds, at least one of the following conditions is true:
If MFT_UNIQUE_METHOD_NAMES is defined before including Mftransform.h, this method is renamed MFTProcessOutput. See Creating Hybrid DMO/MFT Objects.
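A simple synchronous processing loop built on ProcessInput and ProcessOutput can be sketched as follows; it assumes stream identifier 0 and an MFT that allocates its own output samples (MFT_OUTPUT_STREAM_PROVIDES_SAMPLES), and pMFT and pInputSample are assumed to exist:

```cpp
#include <mftransform.h>
#include <mferror.h>

// Sketch: deliver one input sample, then drain output until the MFT
// asks for more input. A caller-allocated outData.pSample would be
// required if the MFT does not provide its own samples.
HRESULT hr = pMFT->ProcessInput(0, pInputSample, 0);
while (SUCCEEDED(hr))
{
    MFT_OUTPUT_DATA_BUFFER outData = {};
    outData.dwStreamID = 0;
    DWORD dwStatus = 0;

    hr = pMFT->ProcessOutput(0, 1, &outData, &dwStatus);
    if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
    {
        hr = S_OK;   // deliver the next input sample and repeat
        break;
    }
    if (SUCCEEDED(hr) && outData.pSample)
    {
        // Consume the output sample here.
        outData.pSample->Release();
    }
    if (outData.pEvents)
        outData.pEvents->Release();
}
```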
Implemented by components that provide input trust authorities (ITAs). This interface is used to get the ITA for each of the component's streams.
Retrieves the input trust authority (ITA) for a specified stream.
The stream identifier for which the ITA is being requested.
The interface identifier (IID) of the interface being requested. Currently the only supported value is IID_IMFInputTrustAuthority.
Receives a reference to the ITA's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ITA does not expose the requested interface. |
Implemented by components that provide output trust authorities (OTAs). Any Media Foundation transform (MFT) or media sink that is designed to work within the protected media path (PMP) and also sends protected content outside the Media Foundation pipeline must implement this interface.
The policy engine uses this interface to negotiate what type of content protection should be applied to the content. Applications do not use this interface directly.
If an MFT supports
Gets the number of output trust authorities (OTAs) provided by this trusted output. Each OTA reports a single action.
Queries whether this output is a policy sink, meaning it handles the rights and restrictions required by the input trust authority (ITA).
A trusted output is generally considered to be a policy sink if it does not pass the media content that it receives anywhere else; or, if it does pass the media content elsewhere, either it protects the content using some proprietary method such as encryption, or it sufficiently devalues the content so as not to require protection.
Gets the number of output trust authorities (OTAs) provided by this trusted output. Each OTA reports a single action.
Receives the number of OTAs.
If this method succeeds, it returns
Gets an output trust authority (OTA), specified by index.
Zero-based index of the OTA to retrieve. To get the number of OTAs provided by this object, call
Receives a reference to the
If this method succeeds, it returns
Queries whether this output is a policy sink, meaning it handles the rights and restrictions required by the input trust authority (ITA).
Receives a Boolean value. If TRUE, this object is a policy sink. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
A trusted output is generally considered to be a policy sink if it does not pass the media content that it receives anywhere else; or, if it does pass the media content elsewhere, either it protects the content using some proprietary method such as encryption, or it sufficiently devalues the content so as not to require protection.
Limits the effective video resolution.
This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Limits the effective video resolution.
This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Queries whether the plug-in has any transient vulnerabilities at this time.
Receives a Boolean value. If TRUE, the plug-in has no transient vulnerabilities at the moment and can receive protected content. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method provides a way for the plug-in to report temporary conditions that would cause the input trust authority (ITA) to distrust the plug-in. For example, if an EVR presenter is in windowed mode, it is vulnerable to GDI screen captures.
To disable screen capture in Direct3D, the plug-in must do the following:
Create the Direct3D device in full-screen exclusive mode.
Specify the D3DCREATE_DISABLE_PRINTSCREEN flag when you create the device. For more information, see IDirect3D9::CreateDevice in the DirectX documentation.
In addition, the graphics adapter must support the Windows Vista Display Driver Model (WDDM) and the Direct3D extensions for Windows Vista (sometimes called D3D9Ex or D3D9L).
If these conditions are met, the presenter can return TRUE in the pYes parameter. Otherwise, it should return
The EVR calls this method whenever the device changes. If the plug-in returns
This method should be used only to report transient conditions. A plug-in that is never in a trusted state should not implement the
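The device-creation requirement above can be sketched as follows. This is a minimal illustration only, assuming a 1920×1080 progressive display mode and a valid focus window; a real presenter would enumerate the actual display mode.

```cpp
#include <d3d9.h>

// Sketch: create a Direct3D 9Ex device in full-screen exclusive mode with
// Print Screen disabled, per the two requirements listed above.
HRESULT CreateProtectedDevice(HWND hwnd, IDirect3D9Ex **ppD3D,
                              IDirect3DDevice9Ex **ppDevice)
{
    HRESULT hr = Direct3DCreate9Ex(D3D_SDK_VERSION, ppD3D);
    if (FAILED(hr)) return hr;

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = FALSE;               // full-screen exclusive mode
    pp.BackBufferWidth  = 1920;                // assumed display mode
    pp.BackBufferHeight = 1080;
    pp.BackBufferFormat = D3DFMT_X8R8G8B8;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow    = hwnd;

    D3DDISPLAYMODEEX mode = { sizeof(mode) };  // required when Windowed == FALSE
    mode.Width  = 1920;
    mode.Height = 1080;
    mode.RefreshRate = 60;
    mode.Format = D3DFMT_X8R8G8B8;
    mode.ScanLineOrdering = D3DSCANLINEORDERING_PROGRESSIVE;

    return (*ppD3D)->CreateDeviceEx(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
        D3DCREATE_HARDWARE_VERTEXPROCESSING |
        D3DCREATE_DISABLE_PRINTSCREEN,         // block GDI screen capture
        &pp, &mode, ppDevice);
}
```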
Queries whether the plug-in can limit the effective video resolution.
Receives a Boolean value. If TRUE, the plug-in can limit the effective video resolution. Otherwise, the plug-in cannot limit the video resolution. If the method fails, the EVR treats the value as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Constriction is a protection mechanism that limits the effective resolution of the video frame to a specified maximum number of pixels.
Video constriction can be implemented by either the mixer or the presenter.
If the method returns TRUE, the EVR might call
Limits the effective video resolution.
Maximum number of source pixels that may appear in the final video image, in thousands of pixels. If the value is zero, the video is disabled. If the value is MAXDWORD (0xFFFFFFFF), video constriction is removed and the video may be rendered at full resolution.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Enables or disables the ability of the plug-in to export the video image.
Boolean value. Specify TRUE to disable image exporting, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
An EVR plug-in might expose a way for the application to get a copy of the video frames. For example, the standard EVR presenter implements
If the plug-in supports image exporting, this method enables or disables it. Before this method has been called for the first time, the EVR assumes that the mechanism is enabled.
If the plug-in does not support image exporting, this method should return
While image exporting is disabled, any associated export method, such as GetCurrentImage, should return
Returns the device identifier supported by a video renderer component. This interface is implemented by mixers and presenters for the enhanced video renderer (EVR). If you replace either of these components, the mixer and presenter must report the same device identifier.
Returns the identifier of the video device supported by an EVR mixer or presenter.
If a mixer or presenter uses Direct3D 9, it must return the value IID_IDirect3DDevice9 in pDeviceID. The EVR's default mixer and presenter both return this value. If you write a custom mixer or presenter, it can return some other value. However, the mixer and presenter must use matching device identifiers.
Returns the identifier of the video device supported by an EVR mixer or presenter.
Receives the device identifier. Generally, the value is IID_IDirect3DDevice9.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
If a mixer or presenter uses Direct3D 9, it must return the value IID_IDirect3DDevice9 in pDeviceID. The EVR's default mixer and presenter both return this value. If you write a custom mixer or presenter, it can return some other value. However, the mixer and presenter must use matching device identifiers.
Controls how the Enhanced Video Renderer (EVR) displays video.
The EVR presenter implements this interface. To get a reference to the interface, call
If you implement a custom presenter for the EVR, the presenter can optionally expose this interface as a service.
Queries how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
Gets or sets the clipping window for the video.
There is no default clipping window. The application must set the clipping window.
Gets or sets the border color for the video.
The border color is used for areas where the enhanced video renderer (EVR) does not draw any video.
The border color is not used for letterboxing. To get the letterbox color, call IMFVideoProcessor::GetBackgroundColor.
Gets or sets various video rendering settings.
Queries whether the enhanced video renderer (EVR) is currently in full-screen mode.
Gets the size and aspect ratio of the video, prior to any stretching by the video renderer.
Receives the size of the native video rectangle. This parameter can be
Receives the aspect ratio of the video. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one of the parameters must be non- |
| The video renderer has been shut down. |
?
If no media types have been set on any video streams, the method succeeds but all parameters are set to zero.
You can set pszVideo or pszARVideo to
Gets the range of sizes that the enhanced video renderer (EVR) can display without significantly degrading performance or image quality.
Receives the minimum ideal size. This parameter can be
Receives the maximum ideal size. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
?
You can set pszMin or pszMax to
Sets the source and destination rectangles for the video.
Pointer to an
Specifies the destination rectangle. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
?
The source rectangle defines which portion of the video is displayed. It is specified in normalized coordinates. For more information, see
The destination rectangle defines a rectangle within the clipping window where the video appears. It is specified in pixels, relative to the client area of the window. To fill the entire window, set the destination rectangle to {0, 0, width, height}, where width and height are dimensions of the window client area. The default destination rectangle is {0, 0, 0, 0}.
To update just one of these rectangles, set the other parameter to
Before setting the destination rectangle (prcDest), you must set the video window by calling
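To fill the entire window as described above, a typical sequence sizes the destination rectangle to the client area. A minimal sketch, assuming pDisplay is a valid IMFVideoDisplayControl pointer and the video window has already been set:

```cpp
#include <evr.h>

// Sketch: size the destination rectangle to fill the window's client area.
// Passing NULL for the source rectangle leaves it unchanged.
HRESULT FitVideoToWindow(IMFVideoDisplayControl *pDisplay, HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);                 // {0, 0, width, height}
    return pDisplay->SetVideoPosition(NULL, &rc);
}
```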
Gets the source and destination rectangles for the video.
Pointer to an
Receives the current destination rectangle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more required parameters are |
| The video renderer has been shut down. |
?
Specifies how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
Bitwise OR of one or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid flags. |
| The video renderer has been shut down. |
?
Queries how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
Receives a bitwise OR of one or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
Sets the source and destination rectangles for the video.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
?
The source rectangle defines which portion of the video is displayed. It is specified in normalized coordinates. For more information, see
The destination rectangle defines a rectangle within the clipping window where the video appears. It is specified in pixels, relative to the client area of the window. To fill the entire window, set the destination rectangle to {0, 0, width, height}, where width and height are dimensions of the window client area. The default destination rectangle is {0, 0, 0, 0}.
To update just one of these rectangles, set the other parameter to
Before setting the destination rectangle (prcDest), you must set the video window by calling
Gets the clipping window for the video.
Receives a handle to the window where the enhanced video renderer (EVR) will draw the video.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
There is no default clipping window. The application must set the clipping window.
Repaints the current video frame. Call this method whenever the application receives a WM_PAINT message.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The EVR cannot repaint the frame at this time. This error can occur while the EVR is switching between full-screen and windowed mode. The caller can safely ignore this error. |
| The video renderer has been shut down. |
?
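A minimal WM_PAINT handler that follows the guidance above might look like this. pDisplay is assumed to be an IMFVideoDisplayControl pointer held by the application:

```cpp
#include <evr.h>

// Sketch: repaint the current frame whenever the window receives WM_PAINT.
void OnPaint(HWND hwnd, IMFVideoDisplayControl *pDisplay)
{
    PAINTSTRUCT ps;
    BeginPaint(hwnd, &ps);
    if (pDisplay)
    {
        // The transient "cannot repaint" error noted above can safely
        // be ignored, so the return value is not checked here.
        (void)pDisplay->RepaintVideo();
    }
    EndPaint(hwnd, &ps);
}
```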
Gets a copy of the current image being displayed by the video renderer.
Pointer to a BITMAPINFOHEADER structure that receives a description of the bitmap. Set the biSize member of the structure to sizeof(BITMAPINFOHEADER) before calling the method.
Receives a reference to a buffer that contains a packed Windows device-independent bitmap (DIB). The caller must free the memory for the bitmap by calling CoTaskMemFree.
Receives the size of the buffer returned in pDib, in bytes.
Receives the time stamp of the captured image.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The content is protected and the license does not permit capturing the image. |
| The video renderer has been shut down. |
?
This method can be called at any time. However, calling the method too frequently degrades the video playback performance.
This method retrieves a copy of the final composited image, which includes any substreams, alpha-blended bitmap, aspect ratio correction, background color, and so forth.
In windowed mode, the bitmap is the size of the destination rectangle specified in
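A sketch of the call pattern, including the CoTaskMemFree cleanup the caller is responsible for (pDisplay is assumed valid):

```cpp
#include <evr.h>

// Sketch: capture the current composited frame as a packed DIB.
HRESULT CaptureFrame(IMFVideoDisplayControl *pDisplay)
{
    BITMAPINFOHEADER bih = { sizeof(BITMAPINFOHEADER) }; // biSize set first
    BYTE *pDib = NULL;
    DWORD cbDib = 0;
    LONGLONG timeStamp = 0;

    HRESULT hr = pDisplay->GetCurrentImage(&bih, &pDib, &cbDib, &timeStamp);
    if (SUCCEEDED(hr))
    {
        // ... use the bitmap; bih describes the pixel data in pDib ...
        CoTaskMemFree(pDib);   // caller must free the buffer
    }
    return hr;
}
```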
Sets the border color for the video.
Specifies the border color as a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
By default, if the video window straddles two monitors, the enhanced video renderer (EVR) clips the video to one monitor and draws the border color on the remaining portion of the window. (To change the clipping behavior, call
The border color is not used for letterboxing. To change the letterbox color, call IMFVideoProcessor::SetBackgroundColor.
Gets the border color for the video.
Receives the border color, as a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
The border color is used for areas where the enhanced video renderer (EVR) does not draw any video.
The border color is not used for letterboxing. To get the letterbox color, call IMFVideoProcessor::GetBackgroundColor.
Sets various preferences related to video rendering.
Bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid flags. |
| The video renderer has been shut down. |
?
Gets various video rendering settings.
Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
[This API is not supported and may be altered or unavailable in the future. ]
Sets or unsets full-screen rendering mode.
To implement full-screen playback, an application should simply resize the video window to cover the entire area of the monitor. Also set the window to be a topmost window, so that the application receives all mouse-click messages. For more information about topmost windows, see the documentation for the SetWindowPos function.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
The default EVR presenter implements full-screen mode using Direct3D exclusive mode.
If you use this method to switch to full-screen mode, set the application window to be a topmost window and resize the window to cover the entire monitor. This ensures that the application window receives all mouse-click messages. Also set the keyboard focus to the application window. When you switch out of full-screen mode, restore the window's original size and position.
By default, the cursor is still visible in full-screen mode. To hide the cursor, call ShowCursor.
The transition to and from full-screen mode occurs asynchronously. To get the current mode, call
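The recommended windowed approach to full-screen playback (resize to cover the monitor, make the window topmost, take keyboard focus) can be sketched as:

```cpp
#include <windows.h>

// Sketch: cover the window's monitor and make the window topmost, as the
// remarks above recommend instead of calling SetFullscreen.
HRESULT EnterFullScreen(HWND hwndVideo)
{
    HMONITOR hMon = MonitorFromWindow(hwndVideo, MONITOR_DEFAULTTONEAREST);
    MONITORINFO mi = { sizeof(mi) };
    if (!GetMonitorInfo(hMon, &mi)) return E_FAIL;

    const RECT &rc = mi.rcMonitor;
    SetWindowPos(hwndVideo, HWND_TOPMOST, rc.left, rc.top,
                 rc.right - rc.left, rc.bottom - rc.top, SWP_SHOWWINDOW);
    SetFocus(hwndVideo);   // keyboard focus, per the remarks above
    return S_OK;
}
```

On exit from full-screen mode, the application restores the window's original size, position, and z-order.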
Queries whether the enhanced video renderer (EVR) is currently in full-screen mode.
Receives a Boolean value. If TRUE, the EVR is in full-screen mode. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The EVR is currently switching between full-screen and windowed mode. |
?
Represents a description of a video format.
If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
Represents a description of a video format.
If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
Represents a description of a video format.
If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
[This API is not supported and may be altered or unavailable in the future. Instead, applications should set the
Retrieves an alternative representation of the media type.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method is equivalent to
Instead of calling this method, applications should set the
Controls how the Enhanced Video Renderer (EVR) mixes video substreams. Applications can use this interface to control video mixing during playback.
The EVR mixer implements this interface. To get a reference to the interface, call
If you implement a custom mixer for the EVR, the mixer can optionally expose this interface as a service.
Sets the z-order of a video stream.
Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Z-order value. The z-order of the reference stream must be zero. The maximum z-order value is the number of streams minus one.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The value of dwZ is larger than the maximum z-order value. |
| Invalid z-order for this stream. For the reference stream, dwZ must be zero. For all other streams, dwZ must be greater than zero. |
| Invalid stream identifier. |
?
The EVR draws the video streams in the order of their z-order values, starting with zero. The reference stream must be first in the z-order, and the remaining streams can be in any order.
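The z-order rules above can be illustrated with a short sketch; pControl is assumed to be a valid IMFVideoMixerControl pointer and the stream identifiers are assumed:

```cpp
#include <evr.h>

// Sketch: the reference stream must have z-order zero; a substream is
// drawn on top with a z-order greater than zero.
HRESULT OrderStreams(IMFVideoMixerControl *pControl)
{
    HRESULT hr = pControl->SetStreamZOrder(0, 0);   // reference stream: z = 0
    if (SUCCEEDED(hr))
        hr = pControl->SetStreamZOrder(1, 1);       // substream drawn on top
    return hr;
}
```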
Retrieves the z-order of a video stream.
Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Receives the z-order value.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
?
Sets the position of a video stream within the composition rectangle.
Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The coordinates of the bounding rectangle given in pnrcOutput are not valid. |
| Invalid stream identifier. |
?
The mixer draws each video stream inside a bounding rectangle that is specified relative to the final video image. This bounding rectangle is given in normalized coordinates. For more information, see
The coordinates of the bounding rectangle must fall within the range [0.0, 1.0]. Also, the X and Y coordinates of the upper-left corner cannot exceed the X and Y coordinates of the lower-right corner. In other words, the bounding rectangle must fit entirely within the composition rectangle and cannot be flipped vertically or horizontally.
The following diagram shows how the EVR mixes substreams.
The output rectangle for the stream is specified by calling SetStreamOutputRect. The source rectangle is specified by calling
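As an example of the normalized bounding rectangle described above, the following sketch places substream 1 in the lower-right quarter of the composition rectangle (pControl and the stream identifier are assumptions):

```cpp
#include <evr.h>

// Sketch: bounding rectangle in normalized coordinates; all values must
// lie in [0.0, 1.0] and left/top must not exceed right/bottom.
HRESULT PlaceSubstream(IMFVideoMixerControl *pControl)
{
    MFVideoNormalizedRect nrc;
    nrc.left   = 0.5f;
    nrc.top    = 0.5f;
    nrc.right  = 1.0f;
    nrc.bottom = 1.0f;
    return pControl->SetStreamOutputRect(1, &nrc);
}
```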
Retrieves the position of a video stream within the composition rectangle.
The identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
?
Controls preferences for video deinterlacing.
The default video mixer for the Enhanced Video Renderer (EVR) implements this interface.
To get a reference to the interface, call
Gets or sets the current preferences for video deinterlacing.
Sets the preferences for video deinterlacing.
Bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets the current preferences for video deinterlacing.
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Maps a position on an input video stream to the corresponding position on an output video stream.
To obtain a reference to this interface, call
Maps output image coordinates to input image coordinates. This method provides the reverse transformation for components that map coordinates on the input image to different coordinates on the output image.
X-coordinate of the output image, normalized to the range [0...1].
Y-coordinate of the output image, normalized to the range [0...1].
Output stream index for the coordinate mapping.
Input stream index for the coordinate mapping.
Receives the mapped x-coordinate of the input image, normalized to the range [0...1].
Receives the mapped y-coordinate of the input image, normalized to the range [0...1].
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
?
In the following diagram, R(dest) is the destination rectangle for the video. You can obtain this rectangle by calling
The position of P relative to R(dest) in normalized coordinates is calculated as follows:
float xn = float(x + 0.5) / widthDest;
float yn = float(y + 0.5) / heightDest;
where widthDest and heightDest are the width and height of R(dest) in pixels.
To calculate the position of P relative to R1, call MapOutputCoordinateToInputStream as follows:
float x1 = 0, y1 = 0;
hr = pMap->MapOutputCoordinateToInputStream(xn, yn, 0, dwInputStreamIndex, &x1, &y1);
The values returned in x1 and y1 are normalized to the range [0...1]. To convert back to pixel coordinates, scale these values by the size of R1:
int scaledx = int(floor(x1 * widthR1));
int scaledy = int(floor(y1 * heightR1));
Note that x1 and y1 might fall outside the range [0...1] if P lies outside of R1.
Represents a video presenter. A video presenter is an object that receives video frames, typically from a video mixer, and presents them in some way, typically by rendering them to the display. The enhanced video renderer (EVR) provides a default video presenter, and applications can implement custom presenters.
The video presenter receives video frames as soon as they are available from upstream. The video presenter is responsible for presenting frames at the correct time and for synchronizing with the presentation clock.
Configures the Video Processor MFT.
This interface controls how the Video Processor MFT generates output frames.
Sets the border color.
Sets the source rectangle. The source rectangle is the portion of the input frame that is blitted to the destination surface.
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
Sets the destination rectangle. The destination rectangle is the portion of the output surface where the source rectangle is blitted.
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
Specifies whether to flip the video image.
Specifies whether to rotate the video to the correct orientation.
The original orientation of the video is specified by the
If eRotation is
Specifies the amount of downsampling to perform on the output.
Sets the border color.
A reference to an
If this method succeeds, it returns
Sets the source rectangle. The source rectangle is the portion of the input frame that is blitted to the destination surface.
A reference to a
If this method succeeds, it returns
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
Sets the destination rectangle. The destination rectangle is the portion of the output surface where the source rectangle is blitted.
A reference to a
If this method succeeds, it returns
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
Specifies whether to flip the video image.
An
If this method succeeds, it returns
Specifies whether to rotate the video to the correct orientation.
A
If this method succeeds, it returns
The original orientation of the video is specified by the
If eRotation is
Specifies the amount of downsampling to perform on the output.
The sampling size. To disable constriction, set this parameter to
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Configures the Video Processor MFT.
This interface controls how the Video Processor MFT generates output frames.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Overrides the rotation operation that is performed in the video processor.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Returns the list of supported effects in the currently configured video processor.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Overrides the rotation operation that is performed in the video processor.
Rotation value in degrees. Typically, you can only use values from the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables effects that were implemented with IDirectXVideoProcessor::VideoProcessorBlt.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Returns the list of supported effects in the currently configured video processor.
A combination of
If this method succeeds, it returns
Sets a new mixer or presenter for the Enhanced Video Renderer (EVR).
Both the EVR media sink and the DirectShow EVR filter implement this interface. To get a reference to the interface, call QueryInterface on the media sink or the filter. Do not use
The EVR activation object returned by the
Sets a new mixer or presenter for the enhanced video renderer (EVR).
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Either the mixer or the presenter is invalid. |
| The mixer and presenter cannot be replaced in the current state. (EVR media sink.) |
| The video renderer has been shut down. |
| One or more input pins are connected. (DirectShow EVR filter.) |
?
Call this method directly after creating the EVR, before you do any of the following:
Call
Call
Connect any pins on the EVR filter, or set any media types on EVR media sink.
The EVR filter returns VFW_E_WRONG_STATE if any of the filter's pins are connected. The EVR media sink returns
The device identifiers for the mixer and the presenter must match. The
If the video renderer is in the protected media path (PMP), the mixer and presenter objects must be certified safe components and pass any trust authority verification that is being enforced. Otherwise, this method will fail.
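Substituting a custom presenter immediately after creating the EVR media sink might look like the following sketch. CreateCustomPresenter is a hypothetical application-defined factory, and passing NULL for the mixer keeps the default mixer:

```cpp
#include <evr.h>
#include <mfidl.h>

// Hypothetical factory for the application's custom presenter.
extern HRESULT CreateCustomPresenter(IMFVideoPresenter **ppPresenter);

// Sketch: call InitializeRenderer before connecting pins or setting
// media types, as the remarks above require.
HRESULT UseCustomPresenter(IMFMediaSink *pEvrSink)
{
    IMFVideoRenderer  *pRenderer  = NULL;
    IMFVideoPresenter *pPresenter = NULL;

    HRESULT hr = pEvrSink->QueryInterface(IID_PPV_ARGS(&pRenderer));
    if (SUCCEEDED(hr))
        hr = CreateCustomPresenter(&pPresenter);
    if (SUCCEEDED(hr))
        hr = pRenderer->InitializeRenderer(NULL, pPresenter); // default mixer

    if (pPresenter) pPresenter->Release();
    if (pRenderer)  pRenderer->Release();
    return hr;
}
```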
Allocates video samples for a video media sink.
The stream sinks on the enhanced video renderer (EVR) expose this interface as a service. To obtain a reference to the interface, call
Specifies the Direct3D device manager for the video media sink to use.
The media sink uses the Direct3D device manager to obtain a reference to the Direct3D device, which it uses to allocate Direct3D surfaces. The device manager enables multiple objects in the pipeline (such as a video renderer and a video decoder) to share the same Direct3D device.
Specifies the Direct3D device manager for the video media sink to use.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The media sink uses the Direct3D device manager to obtain a reference to the Direct3D device, which it uses to allocate Direct3D surfaces. The device manager enables multiple objects in the pipeline (such as a video renderer and a video decoder) to share the same Direct3D device.
Releases all of the video samples that have been allocated.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Specifies the number of samples to allocate and the media type for the samples.
Number of samples to allocate.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid media type. |
?
Gets a video sample from the allocator.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The allocator was not initialized. Call |
| No samples are available. |
?
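Putting the allocator methods together, a typical sequence obtains the service, sets the device manager, initializes the pool, and draws a sample. This is a sketch; the stream sink, device manager, media type, and the pool size of 4 are assumptions:

```cpp
#include <evr.h>
#include <mfidl.h>

// Sketch: initialize the stream sink's sample allocator and get one sample.
HRESULT AllocSamples(IMFStreamSink *pStreamSink,
                     IDirect3DDeviceManager9 *pD3DManager,
                     IMFMediaType *pMediaType)
{
    IMFVideoSampleAllocator *pAlloc  = NULL;
    IMFSample               *pSample = NULL;

    HRESULT hr = MFGetService(pStreamSink, MR_VIDEO_ACCELERATION_SERVICE,
                              IID_PPV_ARGS(&pAlloc));
    if (SUCCEEDED(hr))
        hr = pAlloc->SetDirectXManager(pD3DManager);
    if (SUCCEEDED(hr))
        hr = pAlloc->InitializeSampleAllocator(4, pMediaType); // 4 samples
    if (SUCCEEDED(hr))
        hr = pAlloc->AllocateSample(&pSample);

    if (pSample) pSample->Release();
    if (pAlloc)  pAlloc->Release();
    return hr;
}
```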
Enables an application to track video samples allocated by the enhanced video renderer (EVR).
The stream sinks on the EVR expose this interface as a service. To get a reference to the interface, call the
Sets the callback object that receives notification whenever a video sample is returned to the allocator.
To get a video sample from the allocator, call the
The allocator holds at most one callback reference. Calling this method again replaces the previous callback reference.
Sets the callback object that receives notification whenever a video sample is returned to the allocator.
A reference to the
If this method succeeds, it returns
To get a video sample from the allocator, call the
The allocator holds at most one callback reference. Calling this method again replaces the previous callback reference.
Gets the number of video samples that are currently available for use.
Receives the number of available samples.
If this method succeeds, it returns
To get a video sample from the allocator, call the
Allocates video samples that contain Microsoft Direct3D 11 texture surfaces.
You can use this interface to allocate Direct3D 11 video samples, rather than allocate the texture surfaces and media samples directly. To get a reference to this interface, call the
To allocate video samples, perform the following steps:
Initializes the video sample allocator object.
The initial number of samples to allocate.
The maximum number of samples to allocate.
A reference to the
A reference to the
If this method succeeds, it returns
The callback for the
Called when a video sample is returned to the allocator.
If this method succeeds, it returns
To get a video sample from the allocator, call the
The callback for the
Called when allocator samples are released for pruning by the allocator, or when the allocator is removed.
The sample to be pruned.
If this method succeeds, it returns
Completes an asynchronous request to register the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
Call this method when the
Registers the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
A reference to the
A reference to the
If this method succeeds, it returns
Each source node in the topology defines one branch of the topology. The branch includes every topology node that receives data from that node. An application can assign each branch of a topology its own work queue and then associate those work queues with MMCSS tasks.
To use this method, perform the following steps.
The BeginRegisterTopologyWorkQueuesWithMMCSS method is asynchronous. When the operation completes, the callback object's
To unregister the topology work queues from MMCSS, call
Completes an asynchronous request to register the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Call this method when the
Unregisters the topology work queues from the Multimedia Class Scheduler Service (MMCSS).
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister the topology work queues from the Multimedia Class Scheduler Service (MMCSS).
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Call this method when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class for a specified branch of the current topology.
Identifies the work queue assigned to this topology branch. The application defines this value by setting the
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no work queue with the specified identifier. |
| The pwszClass buffer is too small to receive the class name. |
?
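Because pwszClass may be NULL, the class name is usually retrieved with the common two-call pattern: query the required size, allocate, then fill the buffer. A sketch, assuming pServices is a valid IMFWorkQueueServices pointer and that the sizing call succeeds when the buffer is NULL:

```cpp
#include <mfidl.h>

// Sketch: two-call pattern for retrieving the MMCSS class name of a
// topology work queue. The caller frees *ppszClass with CoTaskMemFree.
HRESULT GetBranchClass(IMFWorkQueueServices *pServices, DWORD dwQueueId,
                       WCHAR **ppszClass)
{
    DWORD cch = 0;
    HRESULT hr = pServices->GetTopologyWorkQueueMMCSSClass(
        dwQueueId, NULL, &cch);            // query required size (in chars)
    if (FAILED(hr)) return hr;

    WCHAR *psz = (WCHAR*)CoTaskMemAlloc(cch * sizeof(WCHAR));
    if (!psz) return E_OUTOFMEMORY;

    hr = pServices->GetTopologyWorkQueueMMCSSClass(dwQueueId, psz, &cch);
    if (SUCCEEDED(hr))
        *ppszClass = psz;
    else
        CoTaskMemFree(psz);
    return hr;
}
```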
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier for a specified branch of the current topology.
Identifies the work queue assigned to this topology branch. The application defines this value by setting the
Receives the task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Associates a platform work queue with a Multimedia Class Scheduler Service (MMCSS) task.
The platform work queue to register with MMCSS. See Work Queue Identifiers. To register all of the standard work queues to the same MMCSS task, set this parameter to
The name of the MMCSS task to be performed.
The unique task identifier. To obtain a new task identifier, set this value to zero.
A reference to the
A reference to the
If this method succeeds, it returns
This method is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS class, call
Completes an asynchronous request to associate a platform work queue with a Multimedia Class Scheduler Service (MMCSS) task.
Pointer to the
The unique task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Call this function when the
To unregister the work queue from the MMCSS class, call
Unregisters a platform work queue from a Multimedia Class Scheduler Service (MMCSS) task.
Platform work queue to register with MMCSS. See
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister a platform work queue from a Multimedia Class Scheduler Service (MMCSS) task.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class for a specified platform work queue.
Platform work queue to query. See
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pwszClass buffer is too small to receive the class name. |
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier for a specified platform work queue.
Platform work queue to query. See
Receives the task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Extends the
This interface allows applications to control both platform and topology work queues.
The
Retrieves the Multimedia Class Scheduler Service (MMCSS) string associated with the given topology work queue.
The id of the topology work queue.
Pointer to the buffer that the work queue's MMCSS class will be copied to.
If this method succeeds, it returns
Registers a platform work queue with Multimedia Class Scheduler Service (MMCSS) using the specified class and task id.
The id of one of the standard platform work queues.
The MMCSS class which the work queue should be registered with.
The task id which the work queue should be registered with. If dwTaskId is 0, a new MMCSS bucket will be created.
The priority.
Standard callback used for async operations in Media Foundation.
Standard state used for async operations in Media Foundation.
If this method succeeds, it returns
Gets the Multimedia Class Scheduler Service (MMCSS) priority associated with the specified platform work queue.
Topology work queue id for which the info will be returned.
Pointer to a buffer allocated by the caller that the work queue's MMCSS priority will be copied to.
Contains an image that is stored as metadata for a media source. This structure is used as the data item for the WM/Picture metadata attribute.
The WM/Picture attribute is defined in the Windows Media Format SDK. The attribute contains a picture related to the content, such as album art.
To get this attribute from a media source, call
Image data.
This format differs from the WM_PICTURE structure used in the Windows Media Format SDK. The WM_PICTURE structure contains internal references to two strings and the image data. If the structure is copied, these references become invalid. The
Contains synchronized lyrics stored as metadata for a media source. This structure is used as the data item for the WM/Lyrics_Synchronised metadata attribute.
The WM/Lyrics_Synchronised attribute is defined in the Windows Media Format SDK. The attribute contains lyrics synchronized to times in the source file.
To get this attribute from a media source, call
Null-terminated wide-character string that contains a description.
Lyric data. The format of the lyric data is described in the Windows Media Format SDK documentation.
This format differs from the WM_SYNCHRONISED_LYRICS structure used in the Windows Media Format SDK. The WM_SYNCHRONISED_LYRICS structure contains internal references to two strings and the lyric data. If the structure is copied, these references become invalid. The
Specifies the format of time stamps in the lyrics. This member is equivalent to the bTimeStampFormat member in the WM_SYNCHRONISED_LYRICS structure. The WM_SYNCHRONISED_LYRICS structure is documented in the Windows Media Format SDK.
Specifies the type of synchronized strings that are in the lyric data. This member is equivalent to the bContentType member in the WM_SYNCHRONISED_LYRICS structure.
Size, in bytes, of the lyric data.
Describes the indexing configuration for a stream and type of index.
Number of bytes used for each index entry. If the value is MFASFINDEXER_PER_ENTRY_BYTES_DYNAMIC, the index entries have variable size.
Optional text description of the index.
Indexing interval. The units of this value depend on the index type. A value of MFASFINDEXER_NO_FIXED_INTERVAL indicates that there is no fixed indexing interval.
Specifies an index for the ASF indexer object.
The index object of an ASF file can contain a number of distinct indexes. Each index is identified by the type of index and the stream number. No ASF index object can contain more than one index for a particular combination of stream number and index type.
The type of index. Currently this value must be GUID_NULL, which specifies time-based indexing.
The stream number to which this structure applies.
Contains statistics about the progress of the ASF multiplexer.
Use
Number of frames written by the ASF multiplexer.
Number of frames dropped by the ASF multiplexer.
Describes a 4:4:4:4 Y'Cb'Cr' sample.
Cr (chroma difference) value.
Cb (chroma difference) value.
Y (luma) value.
Alpha value.
Specifies the buffering parameters for a network byte stream.
Size of the file, in bytes. If the total size is unknown, set this member to -1.
Size of the playable media data in the file, excluding any trailing data that is not useful for playback. If this value is unknown, set this member to -1.
Pointer to an array of
The number of elements in the prgBuckets array.
Amount of data to buffer from the network, in 100-nanosecond units. This value is in addition to the buffer windows defined in the prgBuckets member.
Amount of additional data to buffer when seeking, in 100-nanosecond units. This value reflects the fact that downloading must start from the previous key frame before the seek point. If the value is unknown, set this member to zero.
The playback duration of the file, in 100-nanosecond units. If the duration is unknown, set this member to zero.
Playback rate.
Specifies a range of bytes.
The offset, in bytes, of the start of the range.
The offset, in bytes, of the end of the range.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A transform describing the location of a camera relative to other cameras or an established external reference.
The Position value should be expressed in real-world coordinates in units of meters. The coordinate system of both position and orientation should be right-handed Cartesian as shown in the following diagram.
Important: The position and orientation are expressed as transforms toward the reference frame or origin. For example, a Position value of {-5, 0, 0} means that the origin is 5 meters to the left of the sensor, and therefore the sensor is 5 meters to the right of the origin. A sensor that is positioned 2 meters above the origin should specify a Position of {0, -2, 0} because that is the translation from the sensor to the origin.
If the sensor is aligned with the origin, the rotation is the identity quaternion and the forward vector is along the -Z axis {0, 0, -1}. If the sensor is rotated +30 degrees around the Y axis from the origin, then the Orientation value should be a rotation of -30 degrees around the Y axis, because it represents the rotation from the sensor to the origin.
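Because the stored transform points from the sensor toward the origin, recovering the sensor's pose in the origin's frame means inverting it: conjugate the rotation, then apply it to the negated translation. A sketch under an assumed, simplified vector/quaternion layout (not the actual Media Foundation types):

```cpp
struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };  // unit rotation quaternion

// Rotate v by unit quaternion q: v' = v + w*t + u x t, where t = 2*(u x v).
static Vec3 Rotate(const Quat& q, const Vec3& v) {
    auto cross = [](const Vec3& a, const Vec3& b) {
        return Vec3{a.y * b.z - a.z * b.y,
                    a.z * b.x - a.x * b.z,
                    a.x * b.y - a.y * b.x};
    };
    Vec3 u{q.x, q.y, q.z};
    Vec3 t = cross(u, v);
    t = Vec3{2 * t.x, 2 * t.y, 2 * t.z};
    Vec3 ut = cross(u, t);
    return Vec3{v.x + q.w * t.x + ut.x,
                v.y + q.w * t.y + ut.y,
                v.z + q.w * t.z + ut.z};
}

// Given the transform-toward-origin (orientation, position) from the
// calibration data, recover the sensor's position in the origin's frame:
// invert the transform, i.e. rotate the negated translation by the
// conjugate quaternion.
Vec3 SensorPositionInOriginFrame(const Quat& orientation, const Vec3& position) {
    Quat conj{-orientation.x, -orientation.y, -orientation.z, orientation.w};
    return Rotate(conj, Vec3{-position.x, -position.y, -position.z});
}
```

With the identity orientation and a Position of {-5, 0, 0}, this yields a sensor position of {5, 0, 0}, matching the example in the note above.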
A reference
The transform position.
The transform rotation.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes the location of a camera relative to other cameras or an established external reference.
The number of transforms in the CalibratedTransforms array.
The array of transforms in the extrinsic data.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a polynomial lens distortion model.
The first radial distortion coefficient.
The second radial distortion coefficient.
The third radial distortion coefficient.
The first tangential distortion coefficient.
The second tangential distortion coefficient.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a pinhole camera model.
For square pixels, the X and Y fields of the FocalLength should be the same.
The PrincipalPoint field is expressed in pixels, not in normalized coordinates. The origin [0,0] is the bottom-left corner of the image.
The focal length of the camera.
The principal point of the camera.
This structure contains blob information for the EV compensation feedback for the photo captured.
A KSCAMERA_EXTENDEDPROP_EVCOMP_XXX step flag.
The EV compensation value in units of the step specified.
The CapturedMetadataISOGains structure describes the blob format for MF_CAPTURE_METADATA_ISO_GAINS.
The CapturedMetadataISOGains structure only describes the blob format for the MF_CAPTURE_METADATA_ISO_GAINS attribute. The metadata item structure for ISO gains (KSCAMERA_METADATA_ITEMHEADER + ISO gains metadata payload) is up to the driver and must be 8-byte aligned.
This structure describes the blob format for the MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute.
The MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute contains the white balance gains applied to R, G, B by the sensor or ISP when the preview frame was captured. These gain values are unitless.
The CapturedMetadataWhiteBalanceGains structure only describes the blob format for the MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute. The metadata item structure for white balance gains (KSCAMERA_METADATA_ITEMHEADER + white balance gains metadata payload) is up to the driver and must be 8-byte aligned.
The R value of the blob.
The G value of the blob.
The B value of the blob.
Defines the properties of a clock.
The interval at which the clock correlates its clock time with the system time, in 100-nanosecond units. If the value is zero, the correlation is made whenever the
The unique identifier of the underlying device that provides the time. If two clocks have the same unique identifier, they are based on the same device. If the underlying device is not shared between two clocks, the value can be GUID_NULL.
A bitwise OR of flags from the
The clock frequency in Hz. A value of MFCLOCK_FREQUENCY_HNS means that the clock has a frequency of 10 MHz (100-nanosecond ticks), which is the standard MFTIME time unit in Media Foundation. If the
The amount of inaccuracy that may be present on the clock, in parts per billion (ppb). For example, an inaccuracy of 50 ppb means the clock might drift up to 50 seconds per billion seconds of real time. If the tolerance is not known, the value is MFCLOCK_TOLERANCE_UNKNOWN. This constant is equal to 50 parts per million (ppm).
The amount of jitter that may be present, in 100-nanosecond units. Jitter is the variation in the frequency due to sampling the underlying clock. Jitter does not include inaccuracies caused by drift, which is reflected in the value of dwClockTolerance.
For clocks based on a single device, the minimum jitter is the length of the tick period (the inverse of the frequency). For example, if the frequency is 10 Hz, the jitter is 0.1 second, which is 1,000,000 in MFTIME units. This value reflects the fact that the clock might be sampled just before the next tick, resulting in a clock time that is one period less than the actual time. If the frequency is greater than 10 MHz, the jitter should be set to 1 (the minimum value).
If a clock's underlying hardware device does not directly time stamp the incoming data, the jitter also includes the time required to dispatch the driver's interrupt service routine (ISR). In that case, the expected jitter should include the following values:
Value | Meaning |
---|---|
| Jitter due to time stamping during the device driver's ISR. |
| Jitter due to time stamping during the deferred procedure call (DPC) processing. |
| Jitter due to dropping to normal thread execution before time stamping. |
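The minimum jitter described above, one tick period expressed in 100-nanosecond (MFTIME) units, can be sketched as a small helper. The function name is illustrative, not part of the API:

```cpp
#include <cstdint>

// Minimum jitter for a single-device clock, in 100-nanosecond (MFTIME) units:
// one tick period, i.e. 1/frequency seconds. For clocks at or above 10 MHz
// the jitter is clamped to 1, the minimum value, as described above.
uint64_t MinClockJitterHns(uint64_t frequencyHz) {
    const uint64_t kHnsPerSecond = 10000000;  // 100-ns units per second
    if (frequencyHz >= kHnsPerSecond) return 1;
    return kHnsPerSecond / frequencyHz;
}
```

For a 10 Hz clock this gives 1,000,000 MFTIME units (0.1 second), matching the example in the text.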
Contains information about the data that you want to provide as input to a protection system function.
The identifier of the function that you need to run. This value is defined by the implementation of the protection system.
The size of the private data that the implementation of the security processor reserved. You can determine this value by calling the
The size of the data provided as input to the protection system function that you want to run.
Reserved.
The data to provide as input to the protection system function.
If the value of the PrivateDataByteCount member is greater than 0, bytes 0 through PrivateDataByteCount - 1 are reserved for use by the independent hardware vendor (IHV). Bytes PrivateDataByteCount through HWProtectionDataByteCount + PrivateDataByteCount - 1 contain the input data for the protection system function.
The protection system specification defines the format and size of the DRM function.
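The buffer layout described above, IHV-reserved private bytes followed by the function input, can be sketched as a layout helper. The function name is hypothetical:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative layout helper for the input buffer described above: the first
// PrivateDataByteCount bytes are reserved for the IHV (left zeroed here), and
// the protection system function's input occupies the bytes that follow.
std::vector<uint8_t> BuildInputBuffer(size_t privateBytes,
                                      const std::vector<uint8_t>& functionInput) {
    std::vector<uint8_t> buf(privateBytes + functionInput.size(), 0);
    std::copy(functionInput.begin(), functionInput.end(),
              buf.begin() + privateBytes);
    return buf;
}
```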
Contains information about the data you received as output from a protection system function.
The size of the private data that the implementation of the security processor reserves, in bytes. You can determine this value by calling the
The maximum size of data that the independent hardware vendor (IHV) can return in the output buffer, in bytes.
The size of the data that the IHV wrote to the output buffer, in bytes.
The result of the protection system function.
The number of 100-nanosecond units spent transporting the data.
The number of 100-nanosecond units spent running the protection system function.
The output of the protection system function.
If the value of the PrivateDataByteCount member is greater than 0, bytes 0 through PrivateDataByteCount - 1 are reserved for IHV use. Bytes PrivateDataByteCount through MaxHWProtectionDataByteCount + PrivateDataByteCount - 1 contain the region of the array into which the driver should return the output data from the protection system function.
The protection system specification defines the format and size of the function.
Advises the secure processor of the Multimedia Class Scheduler service (MMCSS) parameters so that real-time tasks can be scheduled at the expected priority.
The identifier for the MMCSS task.
The name of the MMCSS task.
The base priority of the thread that runs the MMCSS task.
The
This structure is identical to the DirectShow
Major type
Subtype
If TRUE, samples are of a fixed size. This field is informational only. For audio, it is generally set to TRUE. For video, it is usually TRUE for uncompressed video and
If TRUE, samples are compressed using temporal (interframe) compression. (A value of TRUE indicates that not all frames are key frames.) This field is informational only.
Size of the sample in bytes. For compressed data, the value can be zero.
Format type | Format structure |
---|---|
| DVINFO |
| |
| |
| None. |
| |
| |
| |
Not used. Set to
Size of the format block of the media type.
Pointer to the format structure. The structure type is specified by the formattype member. The format structure must be present, unless formattype is GUID_NULL or FORMAT_None.
The FaceCharacterization structure describes the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute.
The MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute contains the blink and facial expression state for the face ROIs identified in MF_CAPTURE_METADATA_FACEROIS. For a device that does not support blink or facial expression detection, this attribute should be omitted.
The facial expressions that can be detected are defined as follows:
#define MF_METADATAFACIALEXPRESSION_SMILE 0x00000001
The FaceCharacterizationBlobHeader and FaceCharacterization structures only describe the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute. The metadata item structure for the face characterizations (KSCAMERA_METADATA_ITEMHEADER + face characterizations metadata payload) is up to the driver and must be 8-byte aligned.
0 indicates no blink for the left eye, 100 indicates definite blink for the left eye (0 - 100).
0 indicates no blink for the right eye, 100 indicates definite blink for the right eye (0 - 100).
A defined facial expression value.
0 indicates the defined facial expression is absent; 100 indicates the defined facial expression is definitely present (0 - 100).
The FaceCharacterizationBlobHeader structure describes the size and count information of the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute.
Size of this header + all following FaceCharacterization structures.
Number of FaceCharacterization structures in the blob. Must match the number of FaceRectInfo structures in FaceRectInfoBlobHeader.
The FaceRectInfo structure describes the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute.
The MF_CAPTURE_METADATA_FACEROIS attribute contains the face rectangle info detected by the driver. By default, the driver/MFT0 should provide the face information on the preview stream. If the driver advertises the capability on other streams, the driver/MFT must provide the face info on the corresponding streams if the application enables face detection on those streams. When video stabilization is enabled on the driver, the face information should be provided post-video stabilization. The dominant face must be the first FaceRectInfo in the blob.
The FaceRectInfoBlobHeader and FaceRectInfo structures only describe the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute. The metadata item structure for face ROIs (KSCAMERA_METADATA_ITEMHEADER + face ROIs metadata payload) is up to the driver and must be 8-byte aligned.
Relative coordinates on the frame that face detection is running (Q31 format).
Confidence level of the region being a face (0 - 100).
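One common reading of the Q31 relative coordinates above, assumed here for illustration since the exact mapping is driver-defined, is an unsigned fixed-point fraction with 31 fractional bits, normalized by 2^31:

```cpp
#include <cstdint>

// Convert a Q31 fixed-point relative coordinate to a normalized value,
// assuming the value/2^31 interpretation (an assumption, not spelled out
// in the blob format above).
double Q31ToNormalized(uint32_t q31) {
    return static_cast<double>(q31) / 2147483648.0;  // divide by 2^31
}
```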
The FaceRectInfoBlobHeader structure describes the size and count information of the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute.
Size of this header + all following FaceRectInfo structures.
Number of FaceRectInfo structures in the blob.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A vector with two components.
X component of the vector.
Y component of the vector.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A vector with three components.
X component of the vector.
Y component of the vector.
Z component of the vector.
Contains coefficients used to transform multichannel audio into a smaller number of audio channels. This process is called fold-down.
To specify this information in the media type, set the
The ASF media source supports fold-down from six channels (5.1 audio) to two channels (stereo). It gets the information from the g_wszFold6To2Channels3 attribute in the ASF header. This attribute is documented in the Windows Media Format SDK documentation.
Size of the structure, in bytes.
Number of source channels.
Number of destination channels.
Specifies the assignment of audio channels to speaker positions in the transformed audio. This member is a bitwise OR of flags that define the speaker positions. For a list of valid flags, see
Array that contains the fold-down coefficients. The number of coefficients is cSrcChannels × cDstChannels. If the number of coefficients is less than the size of the array, the remaining elements in the array are ignored. For more information about how the coefficients are applied, see Windows Media Audio Professional Codec Features.
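The fold-down described above is a matrix mix: each destination channel is a weighted sum of the source channels. A sketch assuming a row-major [source][destination] coefficient ordering (the actual ordering and coefficient scaling are codec-specific):

```cpp
#include <cstddef>
#include <vector>

// Mix one frame of cSrc input samples into cDst output channels using a
// cSrc x cDst coefficient matrix, stored row-major as coeffs[s * cDst + d].
// The ordering and any coefficient normalization are assumptions here.
std::vector<float> FoldDown(const std::vector<float>& srcFrame,  // cSrc samples
                            const std::vector<float>& coeffs,    // cSrc * cDst
                            size_t cSrc, size_t cDst) {
    std::vector<float> dst(cDst, 0.0f);
    for (size_t s = 0; s < cSrc; ++s)
        for (size_t d = 0; d < cDst; ++d)
            dst[d] += srcFrame[s] * coeffs[s * cDst + d];
    return dst;
}
```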
The HistogramBlobHeader structure describes the blob size and the number of histograms in the blob for the MF_CAPTURE_METADATA_HISTOGRAM attribute.
Size of the entire histogram blob in bytes.
Number of histograms in the blob. Each histogram is identified by a HistogramHeader.
The HistogramDataHeader structure describes the blob format for the MF_CAPTURE_METADATA_HISTOGRAM attribute.
Size in bytes of this header + all following histogram data.
Mask of the color channel for the histogram data.
1 if linear, 0 if nonlinear.
The HistogramGrid structure describes the blob format for MF_CAPTURE_METADATA_HISTOGRAM.
Width of the sensor output that the histogram is collected from.
Height of the sensor output that the histogram is collected from.
Absolute coordinates of the region on the sensor output that the histogram is collected for.
The HistogramHeader structure describes the blob format for MF_CAPTURE_METADATA_HISTOGRAM.
The MF_CAPTURE_METADATA_HISTOGRAM attribute contains a histogram when a preview frame is captured.
For the ChannelMasks field, the following bitmasks indicate the available channels in the histogram:
#define MF_HISTOGRAM_CHANNEL_Y 0x00000001
#define MF_HISTOGRAM_CHANNEL_R 0x00000002
#define MF_HISTOGRAM_CHANNEL_G 0x00000004
#define MF_HISTOGRAM_CHANNEL_B 0x00000008
#define MF_HISTOGRAM_CHANNEL_Cb 0x00000010
#define MF_HISTOGRAM_CHANNEL_Cr 0x00000020
Each blob can contain multiple histograms collected from different regions or different color spaces of the same frame. Each histogram in the blob is identified by its own HistogramHeader, and each has an associated region and sensor output size. For a full-frame histogram, the region matches the sensor output size specified in HistogramGrid.
Histogram data for all available channels is grouped under one histogram. The data for each channel is identified by a HistogramDataHeader immediately preceding that data. ChannelMasks indicates which channels have histogram data; it is the bitwise OR of the supported MF_HISTOGRAM_CHANNEL_* bitmasks defined above. ChannelMask indicates which channel the data is for, identified by exactly one of the MF_HISTOGRAM_CHANNEL_* bitmasks.
Histogram data is an array of ULONG values, each entry representing the number of pixels whose tonal values fall in that bin. The array runs from bin 0 to bin N-1, where N is the number of bins in the histogram (HistogramHeader.Bins).
For Windows 10, if KSPROPERTY_CAMERACONTROL_EXTENDED_HISTOGRAM is supported, at minimum a full-frame histogram with the Y channel must be provided, and it must be the first histogram in the histogram blob. Note that HistogramBlobHeader, HistogramHeader, HistogramDataHeader and the histogram data only describe the blob format for the MF_CAPTURE_METADATA_HISTOGRAM attribute. The metadata item structure for the histogram (KSCAMERA_METADATA_ITEMHEADER + all histogram metadata payload) is up to the driver and must be 8-byte aligned.
Size of this header + (HistogramDataHeader + histogram data following) * number of channels available.
Number of bins in the histogram.
Color space that the histogram is collected from.
Masks of the color channels that the histogram is collected for.
Grid that the histogram is collected from.
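The per-histogram size formula above (header + one data header and bin array per channel) can be sketched as a helper. The header sizes are parameters here, since the real structure sizes are not stated in this text; each bin is assumed to be a 4-byte ULONG:

```cpp
#include <cstdint>

// Size in bytes of one histogram entry as described above:
// header + channelCount * (data header + bins * sizeof(ULONG)).
// headerSize and dataHeaderSize are passed in because the real
// structure sizes are platform-defined; 4 bytes per bin is assumed.
uint32_t HistogramEntrySize(uint32_t headerSize, uint32_t dataHeaderSize,
                            uint32_t bins, uint32_t channelCount) {
    return headerSize + channelCount * (dataHeaderSize + bins * 4u);
}
```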
Describes an action requested by an output trust authority (OTA). The request is sent to an input trust authority (ITA).
Specifies the action as a member of the
Pointer to a buffer that contains a ticket object, provided by the OTA.
Size of the ticket object, in bytes.
Contains parameters for the
Specifies the buffering requirements of a file.
This structure describes the buffering requirements for content encoded at the bit rate specified in the dwBitrate member. The msBufferWindow member indicates how much data should be buffered before starting playback. The size of the buffer in bytes is msBufferWindow × dwBitrate / 8000.
Bit rate, in bits per second.
Size of the buffer window, in milliseconds.
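The buffer-size formula above converts milliseconds times bits per second into bytes (divide by 1000 for seconds, then by 8 for bytes, hence 8000):

```cpp
#include <cstdint>

// Buffer size in bytes implied by the structure above:
// msBufferWindow (milliseconds) * dwBitrate (bits per second) / 8000.
uint64_t BufferWindowBytes(uint64_t msBufferWindow, uint64_t dwBitrate) {
    return msBufferWindow * dwBitrate / 8000;
}
```

For example, a 5-second window at 128 kbps requires 80,000 bytes of buffering.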
The MetadataTimeStamps structure describes the blob format for the MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute.
The MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute contains the time stamp information for the face ROIs identified in MF_CAPTURE_METADATA_FACEROIS. For a device that cannot provide the time stamp for face ROIs, this attribute should be omitted.
For the Flags field, the following bit flags indicate which time stamp is valid:
#define MF_METADATATIMESTAMPS_DEVICE 0x00000001
#define MF_METADATATIMESTAMPS_PRESENTATION 0x00000002
MFT0 must set Flags to MF_METADATATIMESTAMPS_DEVICE and the appropriate QPC time for Device, if the driver provides the timestamp metadata for the face ROIs.
The MetadataTimeStamps structure only describes the blob format for the MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute. The metadata item structure for timestamp (KSCAMERA_METADATA_ITEMHEADER + timestamp metadata payload) is up to the driver and must be 8-byte aligned.
Bitwise OR of the MF_METADATATIMESTAMPS_* flags.
QPC time for the sample the face rectangle is derived from, in 100-nanosecond units.
PTS for the sample the face rectangle is derived from, in 100-nanosecond units.
Provides information on a screen-to-screen move and a dirty rectangle copy operation.
A
A
Contains encoding statistics from the Digital Living Network Alliance (DLNA) media sink.
This structure is used with the
Contains format data for a binary stream in an Advanced Streaming Format (ASF) file.
This structure is used with the
This structure corresponds to the first 60 bytes of the Type-Specific Data field of the Stream Properties Object, in files where the stream type is ASF_Binary_Media. For more information, see the ASF specification.
The Format Data field of the Type-Specific Data field is contained in the
Major media type. This value is the
Media subtype.
If TRUE, samples have a fixed size in bytes. Otherwise, samples have variable size.
If TRUE, the data in this stream uses temporal compression. Otherwise, samples are independent of each other.
If bFixedSizeSamples is TRUE, this member specifies the sample size in bytes. Otherwise, the value is ignored and should be 0.
Format type
Defines custom color primaries for a video source. The color primaries define how to convert colors from RGB color space to CIE XYZ color space.
This structure is used with the
Red x-coordinate.
Red y-coordinate.
Green x-coordinate.
Green y-coordinate.
Blue x-coordinate.
Blue y-coordinate.
White point x-coordinate.
White point y-coordinate.
Contains the authentication information for the credential manager.
The response code of the authentication challenge. For example, NS_E_PROXY_ACCESSDENIED.
Set this flag to TRUE if the currently logged-on user's credentials should be used as the default credentials.
If TRUE, the authentication package will send unencrypted credentials over the network. Otherwise, the authentication package encrypts the credentials.
The original URL that requires authentication.
The name of the site or proxy that requires authentication.
The name of the realm for this authentication.
The name of the authentication package. For example, "Digest" or "MBS_BASIC".
The number of times that the credential manager should retry after authentication fails.
Specifies an offset as a fixed-point real number.
The value of the number is value + (fract / 65536.0f).
The fractional part of the number.
The integer part of the number.
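The fixed-point formula above (value + fract / 65536.0f) converts directly to a float. A sketch with assumed member types (a signed 16-bit integer part and an unsigned 16-bit fractional part):

```cpp
#include <cstdint>

// Convert the fixed-point offset described above to a float:
// value + fract / 65536.0f. The int16_t/uint16_t widths are assumptions
// matching a 16.16-style split, not taken from this text.
float OffsetToFloat(int16_t value, uint16_t fract) {
    return value + fract / 65536.0f;
}
```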
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If the flags member contains the
To cancel authentication, set fProceedWithAuthentication equal to
By default, MFPlay uses the network source's implementation of
Contains one palette entry in a color table.
This union can be used to represent both RGB palettes and Y'Cb'Cr' palettes. The video format that defines the palette determines which union member should be used.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
This event is not used to signal the failure of an asynchronous
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains information that is common to every type of MFPlay event.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a pinhole camera intrinsic model for a specified resolution.
The width for the pinhole camera intrinsic model.
The height for the pinhole camera intrinsic model.
The pinhole camera model.
The lens distortion model.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Contains zero or one pinhole camera intrinsic model describing how to project a 3D point in the physical world onto the 2D image frame of a camera.
The number of camera intrinsic models in the IntrinsicModels array.
The array of camera intrinsic models in the intrinsic data.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Media items are created asynchronously. If multiple items are created, the operations can complete in any order, not necessarily in the same order as the method calls. You can use the dwUserData member to identify the items, if you have simultaneous requests pending.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If one or more streams could not be connected to a media sink, the event property store contains the MFP_PKEY_StreamRenderingResults property. The value of the property is an array of
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If MFEventType is
Property | Description |
---|---|
MFP_PKEY_StreamIndex | The index of the stream whose format changed. |
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A four-dimensional vector, used to represent a rotation.
X component of the vector.
Y component of the vector.
Z component of the vector.
W component of the vector.
Represents a ratio.
Numerator of the ratio.
Denominator of the ratio.
Defines a region of interest.
The bounds of the region.
Specifies the quantization parameter delta for the specified region relative to the rest of the frame.
Contains information about a revoked component.
Specifies the reason for the revocation. The following values are defined.
Value | Meaning |
---|---|
| A boot driver could not be verified. |
| A certificate in a trusted component's certificate chain was revoked. |
| The high-security certificate for authenticating the protected environment (PE) was revoked. The high-security certificate is typically used by ITAs that handle high-definition content and next-generation formats such as HD-DVD. |
| A certificate's extended key usage (EKU) object is invalid. |
| The root certificate is not valid. |
| The low-security certificate for authenticating the PE was revoked. The low-security certificate is typically used by ITAs that handle standard-definition content and current-generation formats. |
| A trusted component was revoked. |
| The GRL was not found. |
| Could not load the global revocation list (GRL). |
| The GRL signature is invalid. |
| A certificate chain was not well-formed, or a boot driver is unsigned or is signed with an untrusted certificate. |
| A component was signed by a test certificate. |
In addition, one of the following flags might be present, indicating the type of component that failed to load.
Value | Meaning |
---|---|
| User-mode component. |
| Kernel-mode component. |
Contains a hash of the file header.
Contains a hash of the public key in the component's certificate.
File name of the revoked component.
Contains information about one or more revoked components.
Revocation information version.
Number of elements in the pRRComponents array.
Array of
Contains statistics about the performance of the sink writer.
The size of the structure, in bytes.
The time stamp of the most recent sample given to the sink writer. The sink writer updates this value each time the application calls
The time stamp of the most recent sample to be encoded. The sink writer updates this value whenever it calls
The time stamp of the most recent sample given to the media sink. The sink writer updates this value whenever it calls
The time stamp of the most recent stream tick. The sink writer updates this value whenever the application calls
The system time of the most recent sample request from the media sink. The sink writer updates this value whenever it receives an
The number of samples received.
The number of samples encoded.
The number of samples given to the media sink.
The number of stream ticks received.
The amount of data, in bytes, currently waiting to be processed.
The total amount of data, in bytes, that has been sent to the media sink.
The number of pending sample requests.
The average rate, in media samples per 100 nanoseconds, at which the application sent samples to the sink writer.
The average rate, in media samples per 100 nanoseconds, at which the sink writer sent samples to the encoder.
The average rate, in media samples per 100 nanoseconds, at which the sink writer sent samples to the media sink.
Not for application use.
This structure is used internally by the Microsoft Media Foundation AVStream proxy.
Reserved.
Reserved.
Contains information about an input stream on a Media Foundation transform (MFT). To get these values, call
Before the media types are set, the only values that should be considered valid are the
The
The
After you set a media type on all of the input and output streams (not including optional streams), all of the values returned by the GetInputStreamInfo method are valid. They might change if you set different media types.
Specifies a new attribute value for a topology node.
Due to an error in the structure declaration, the u64 member is declared as a 32-bit integer, not a 64-bit integer. Therefore, any 64-bit value passed to the
The identifier of the topology node to update. To get the identifier of a topology node, call
Attribute type, specified as a member of the
Attribute value (unsigned 32-bit integer). This member is used when attrType equals
Attribute value (unsigned 64-bit integer). This member is used when attrType equals
Attribute value (floating point). This member is used when attrType equals
Contains information about an output buffer for a Media Foundation transform. This structure is used in the
You must provide an
MFTs can support two different allocation models for output samples:
To find which model the MFT supports for a given output stream, call
Flag | Allocation Model |
---|---|
| The MFT allocates the output samples for the stream. Set pSample to |
| The MFT supports both allocation models. |
Neither (default) | The client must allocate the output samples for the stream. |
The behavior of ProcessOutput depends on the initial value of pSample and the value of the dwFlags parameter in the ProcessOutput method.
If pSample is
Restriction: This output stream must have the
If pSample is
Restriction: This output stream must have the
If pSample is non-
Restriction: This output stream must not have the
Any other combinations are invalid and cause ProcessOutput to return E_INVALIDARG.
Each call to ProcessOutput can produce zero or more events and up to one sample per output stream.
Contains information about an output stream on a Media Foundation transform (MFT). To get these values, call
Before the media types are set, the only value that should be considered valid is the
After you set a media type on all of the input and output streams (not including optional streams), all of the values returned by the GetOutputStreamInfo method are valid. They might change if you set different media types.
Contains information about the audio and video streams for the transcode sink activation object.
To get the information stored in this structure, call
The
Contains media type information for registering a Media Foundation transform (MFT).
The major media type. For a list of possible values, see Major Media Types.
The media subtype. For a list of possible values, see the following topics:
Contains parameters for the
Specifies a rectangular area within a video frame.
An
An
A
Contains information about a video compression format. This structure is used in the
For uncompressed video formats, set the structure members to zero.
Describes a video format.
Applications should avoid using this structure. Instead, it is recommended that applications use attributes to describe the video format. For a list of media type attributes, see Media Type Attributes. With attributes, you can set just the format information that you know, which is easier (and more likely to be accurate) than trying to fill in complete format information for the
To initialize a media type object from an
You can use the
Size of the structure, in bytes. This value includes the size of the palette entries that may appear after the surfaceInfo member.
Video subtype. See Video Subtype GUIDs.
Contains video format information that applies to both compressed and uncompressed formats.
This structure is used in the
Developers are encouraged to use media type attributes instead of using the
Structure Member | Media Type Attribute |
---|---|
dwWidth, dwHeight | |
PixelAspectRatio | |
SourceChromaSubsampling | |
InterlaceMode | |
TransferFunction | |
ColorPrimaries | |
TransferMatrix | |
SourceLighting | |
FramesPerSecond | |
NominalRange | |
GeometricAperture | |
MinimumDisplayAperture | |
PanScanAperture | |
VideoFlags | See |
Defines a normalized rectangle, which is used to specify sub-rectangles in a video rectangle. When a rectangle N is normalized relative to some other rectangle R, it means the following:
The coordinate (0.0, 0.0) on N is mapped to the upper-left corner of R.
The coordinate (1.0, 1.0) on N is mapped to the lower-right corner of R.
Any coordinates of N that fall outside the range [0...1] are mapped to positions outside the rectangle R.
A normalized rectangle can be used to specify a region within a video rectangle without knowing the resolution or even the aspect ratio of the video. For example, the upper-left quadrant is defined as {0.0, 0.0, 0.5, 0.5}.
X-coordinate of the upper-left corner of the rectangle.
Y-coordinate of the upper-left corner of the rectangle.
X-coordinate of the lower-right corner of the rectangle.
Y-coordinate of the lower-right corner of the rectangle.
Contains information about an uncompressed video format. This structure is used in the
Applies to: desktop apps | Metro style apps
Initializes Microsoft Media Foundation.
An application must call this function before using Media Foundation. Before your application quits, call
Do not call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Shuts down the Microsoft Media Foundation platform. Call this function once for every call to
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed: