- DiffColor0 - gray
- DiffTextureOp0 - TextureOperation.Multiply
- DiffTexture0 - tex1.png
- DiffTextureOp1 - TextureOperation.Add
- DiffTexture1 - tex2.png
-
-
- diffFinal = DiffColor0 * sampleTex(DiffTexture0, UV0) + sampleTex(DiffTexture1, UV0) * diffContrib;
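The stack above can be evaluated by folding each layer's blend operation into a running result. A minimal language-agnostic sketch (Python; scalar intensities stand in for full RGBA colors, and the blend-strength factor such as diffContrib is omitted for brevity):

```python
def eval_diffuse_stack(diff_color, layers):
    """Evaluate a diffuse texture stack like the example above.

    layers: list of (op, sample) pairs, where op names the layer's
    TextureOperation and sample is the texel value sampled from that
    layer's texture at the vertex UV.
    """
    result = diff_color
    for op, sample in layers:
        if op == 'multiply':
            result *= sample
        elif op == 'add':
            result += sample
        else:
            raise ValueError(f"unsupported operation: {op}")
    return result

# DiffColor0 multiplied by the first texture sample, then the second
# texture sample added, mirroring the diffFinal formula above:
print(eval_diffuse_stack(0.5, [('multiply', 0.5), ('add', 0.25)]))  # 0.5
```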
-
-
- String: Name of the Blob
- int: Length of Binary Data
- byte[]: Binary Data
- bool: If has next data blob
- String: Name of nested blob
- int: Length of nested blob binary data
- byte[]: Nested blob binary data
- bool: If nested blob has next data blob
- ....
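The chained layout above can be sketched as a simple writer/reader pair. This is an illustrative Python sketch only: the int32 length prefixes, little-endian byte order, UTF-8 names, and single-byte bool are assumptions, and the actual on-disk encoding depends on the serializer in use.

```python
import io
import struct

def write_blobs(stream, blobs):
    """Serialize a chain of (name, data) blobs: name, data length,
    raw bytes, then a bool flagging whether another blob follows."""
    for i, (name, data) in enumerate(blobs):
        encoded = name.encode('utf-8')
        stream.write(struct.pack('<i', len(encoded)))  # name length prefix
        stream.write(encoded)
        stream.write(struct.pack('<i', len(data)))     # binary data length
        stream.write(data)
        stream.write(struct.pack('<?', i + 1 < len(blobs)))  # has-next flag

def read_blobs(stream):
    """Read blobs until the has-next flag is False."""
    blobs = []
    while True:
        (name_len,) = struct.unpack('<i', stream.read(4))
        name = stream.read(name_len).decode('utf-8')
        (data_len,) = struct.unpack('<i', stream.read(4))
        data = stream.read(data_len)
        blobs.append((name, data))
        (has_next,) = struct.unpack('<?', stream.read(1))
        if not has_next:
            return blobs

buf = io.BytesIO()
write_blobs(buf, [('Root', b'\x01\x02'), ('Child', b'\x03')])
buf.seek(0)
print(read_blobs(buf))  # [('Root', b'\x01\x02'), ('Child', b'\x03')]
```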
-
- Atten = 1 / (att0 + att1 * d + att2 * d*d)
.
- Atten = 1 / (att0 + att1 * d + att2 * d*d)
- Atten = 1 / (att0 + att1 * d + att2 * d*d)
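The attenuation formula above can be illustrated with a short sketch (Python; the coefficient values are hypothetical):

```python
def attenuation(d, att0, att1, att2):
    """Distance attenuation per the formula above:
    Atten = 1 / (att0 + att1*d + att2*d*d)."""
    return 1.0 / (att0 + att1 * d + att2 * d * d)

# Constant-only attenuation (att0 = 1) leaves intensity unchanged:
print(attenuation(10.0, 1.0, 0.0, 0.0))  # 1.0
# Adding a linear term makes the light fade with distance:
print(attenuation(10.0, 1.0, 0.1, 0.0))  # 0.5
```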
.
- value1 + (value2 - value1) * amount
.
- Passing a value of 0 for amount returns value1; a value of 1 returns value2.
- ((1 - amount) * value1) + (value2 * amount)
.
- Passing a value of 0 for amount returns value1; a value of 1 returns value2.
- This method does not have the floating-point precision issue that the value1 + (value2 - value1) * amount form has.
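The difference between the two interpolation formulas can be demonstrated directly. Both functions below are illustrative sketches of the formulas given above, not the library's implementation:

```python
def lerp(value1, value2, amount):
    # Standard form: value1 + (value2 - value1) * amount
    return value1 + (value2 - value1) * amount

def lerp_precise(value1, value2, amount):
    # Precision-friendly form: ((1 - amount) * value1) + (value2 * amount)
    return ((1 - amount) * value1) + (value2 * amount)

# At moderate magnitudes both agree:
print(lerp(0.0, 10.0, 0.5))  # 5.0

# With a huge magnitude gap and amount = 1, the standard form can fail
# to return value2 exactly, while the precise form always does:
big, small = 1e16, 1.0
print(lerp_precise(big, small, 1.0))   # 1.0
print(lerp(big, small, 1.0) == 1.0)    # False (precision loss)
```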
- var name = await KeyboardInput.Show("Name", "What's your name?", "Player");
-
-
- var nameTask = KeyboardInput.Show("Name", "What's your name?", "Player");
- KeyboardInput.Cancel("John Doe");
- var name = await nameTask;
-
-
- var color = await MessageBox.Show("Color", "What's your favorite color?", new[] { "Red", "Green", "Blue" });
-
-
- var colorTask = MessageBox.Show("Color", "What's your favorite color?", new[] { "Red", "Green", "Blue" });
- MessageBox.Cancel(0);
- var color = await colorTask;
-
- true
.
- On OpenGL platforms, it is true if both framebuffer sRGB and texture sRGB are supported.
- true
.
- For OpenGL Desktop platforms it is always true
.
- For OpenGL Mobile platforms it requires `GL_EXT_color_buffer_float`.
- If the requested format is not supported, a NotSupportedException will be thrown.
- true
.
- For OpenGL Desktop platforms it is always true
.
- For OpenGL Mobile platforms it requires `GL_EXT_color_buffer_half_float`.
- If the requested format is not supported, a NotSupportedException will be thrown.
- true
- When false, it will instead do a soft full screen by maximizing the window and making it borderless.
- Using this operation it is easy to set certain vertex elements in a VertexBuffer.
-
- For example, to set only the position component of the vertex data:
- Vector3[] positions = new Vector3[numVertices];
- vertexBuffer.SetData(0, positions, 0, numVertices, vertexBuffer.VertexDeclaration.VertexStride);
-
-
- Continuing from the previous example, if you want to set only the texture coordinate component of the vertex data,
- you would call this method as follows (note the use of the 12-byte offset, which skips the Vector3 position in each vertex):
- Vector2[] texCoords = new Vector2[numVertices];
- vertexBuffer.SetData(12, texCoords, 0, numVertices, vertexBuffer.VertexDeclaration.VertexStride);
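The offset-and-stride arithmetic above can be illustrated with a language-agnostic sketch (Python's struct module here; the 20-byte position-plus-texcoord vertex layout is an assumption for this example):

```python
import struct

# One vertex: Vector3 position (12 bytes) followed by Vector2 texcoord (8 bytes)
STRIDE = 20
TEXCOORD_OFFSET = 12  # texcoords start after the 12-byte position

def set_texcoords(vertex_bytes, texcoords):
    """Overwrite only the texcoord component of interleaved vertex data,
    stepping through the buffer one stride at a time -- the same idea as
    passing offsetInBytes = 12 and the vertex stride to SetData above."""
    buf = bytearray(vertex_bytes)
    for i, (u, v) in enumerate(texcoords):
        struct.pack_into('<2f', buf, i * STRIDE + TEXCOORD_OFFSET, u, v)
    return bytes(buf)

verts = bytearray(2 * STRIDE)  # two zeroed vertices
updated = set_texcoords(verts, [(0.0, 1.0), (1.0, 0.0)])
# Read back the second vertex's texcoords; positions remain untouched:
print(struct.unpack_from('<2f', updated, STRIDE + TEXCOORD_OFFSET))  # (1.0, 0.0)
```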
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(fileToCompress + ".zlib"))
- {
- using (Stream compressor = new ZlibStream(raw, CompressionMode.Compress))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(fileToCompress & ".zlib")
- Using compressor As Stream = New ZlibStream(raw, CompressionMode.Compress)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(fileToCompress + ".zlib"))
- {
- using (Stream compressor = new ZlibStream(raw,
- CompressionMode.Compress,
- CompressionLevel.BestCompression))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
-
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(fileToCompress & ".zlib")
- Using compressor As Stream = New ZlibStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- using (var output = System.IO.File.Create(fileToCompress + ".zlib"))
- {
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (Stream compressor = new ZlibStream(output, CompressionMode.Compress, CompressionLevel.BestCompression, true))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- // can write additional data to the output stream here
- }
-
-
- Using output As FileStream = File.Create(fileToCompress & ".zlib")
- Using input As Stream = File.OpenRead(fileToCompress)
- Using compressor As Stream = New ZlibStream(output, CompressionMode.Compress, CompressionLevel.BestCompression, True)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- ' can write additional data to the output stream here.
- End Using
-
-
- private void InflateBuffer()
- {
- int bufferSize = 1024;
- byte[] buffer = new byte[bufferSize];
- ZlibCodec decompressor = new ZlibCodec();
-
- Console.WriteLine("\n============================================");
- Console.WriteLine("Size of Buffer to Inflate: {0} bytes.", CompressedBytes.Length);
- MemoryStream ms = new MemoryStream(DecompressedBytes);
-
- int rc = decompressor.InitializeInflate();
-
- decompressor.InputBuffer = CompressedBytes;
- decompressor.NextIn = 0;
- decompressor.AvailableBytesIn = CompressedBytes.Length;
-
- decompressor.OutputBuffer = buffer;
-
- // pass 1: inflate
- do
- {
- decompressor.NextOut = 0;
- decompressor.AvailableBytesOut = buffer.Length;
- rc = decompressor.Inflate(FlushType.None);
-
- if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
- throw new Exception("inflating: " + decompressor.Message);
-
- ms.Write(decompressor.OutputBuffer, 0, buffer.Length - decompressor.AvailableBytesOut);
- }
- while (decompressor.AvailableBytesIn > 0 || decompressor.AvailableBytesOut == 0);
-
- // pass 2: finish and flush
- do
- {
- decompressor.NextOut = 0;
- decompressor.AvailableBytesOut = buffer.Length;
- rc = decompressor.Inflate(FlushType.Finish);
-
- if (rc != ZlibConstants.Z_STREAM_END && rc != ZlibConstants.Z_OK)
- throw new Exception("inflating: " + decompressor.Message);
-
- if (buffer.Length - decompressor.AvailableBytesOut > 0)
- ms.Write(buffer, 0, buffer.Length - decompressor.AvailableBytesOut);
- }
- while (decompressor.AvailableBytesIn > 0 || decompressor.AvailableBytesOut == 0);
-
- decompressor.EndInflate();
- }
-
-
-
- int bufferSize = 40000;
- byte[] CompressedBytes = new byte[bufferSize];
- byte[] DecompressedBytes = new byte[bufferSize];
-
- ZlibCodec compressor = new ZlibCodec();
-
- compressor.InitializeDeflate(CompressionLevel.Default);
-
- compressor.InputBuffer = System.Text.Encoding.ASCII.GetBytes(TextToCompress);
- compressor.NextIn = 0;
- compressor.AvailableBytesIn = compressor.InputBuffer.Length;
-
- compressor.OutputBuffer = CompressedBytes;
- compressor.NextOut = 0;
- compressor.AvailableBytesOut = CompressedBytes.Length;
-
- while (compressor.TotalBytesIn != TextToCompress.Length && compressor.TotalBytesOut < bufferSize)
- {
- compressor.Deflate(FlushType.None);
- }
-
- while (true)
- {
- int rc = compressor.Deflate(FlushType.Finish);
- if (rc == ZlibConstants.Z_STREAM_END) break;
- }
-
- compressor.EndDeflate();
-
-
-
- private void DeflateBuffer(CompressionLevel level)
- {
- int bufferSize = 1024;
- byte[] buffer = new byte[bufferSize];
- ZlibCodec compressor = new ZlibCodec();
-
- Console.WriteLine("\n============================================");
- Console.WriteLine("Size of Buffer to Deflate: {0} bytes.", UncompressedBytes.Length);
- MemoryStream ms = new MemoryStream();
-
- int rc = compressor.InitializeDeflate(level);
-
- compressor.InputBuffer = UncompressedBytes;
- compressor.NextIn = 0;
- compressor.AvailableBytesIn = UncompressedBytes.Length;
-
- compressor.OutputBuffer = buffer;
-
- // pass 1: deflate
- do
- {
- compressor.NextOut = 0;
- compressor.AvailableBytesOut = buffer.Length;
- rc = compressor.Deflate(FlushType.None);
-
- if (rc != ZlibConstants.Z_OK && rc != ZlibConstants.Z_STREAM_END)
- throw new Exception("deflating: " + compressor.Message);
-
- ms.Write(compressor.OutputBuffer, 0, buffer.Length - compressor.AvailableBytesOut);
- }
- while (compressor.AvailableBytesIn > 0 || compressor.AvailableBytesOut == 0);
-
- // pass 2: finish and flush
- do
- {
- compressor.NextOut = 0;
- compressor.AvailableBytesOut = buffer.Length;
- rc = compressor.Deflate(FlushType.Finish);
-
- if (rc != ZlibConstants.Z_STREAM_END && rc != ZlibConstants.Z_OK)
- throw new Exception("deflating: " + compressor.Message);
-
- if (buffer.Length - compressor.AvailableBytesOut > 0)
- ms.Write(buffer, 0, buffer.Length - compressor.AvailableBytesOut);
- }
- while (compressor.AvailableBytesIn > 0 || compressor.AvailableBytesOut == 0);
-
- compressor.EndDeflate();
-
- ms.Seek(0, SeekOrigin.Begin);
- CompressedBytes = new byte[compressor.TotalBytesOut];
- ms.Read(CompressedBytes, 0, CompressedBytes.Length);
- }
-
-
- var adler = Adler.Adler32(0, null, 0, 0);
- adler = Adler.Adler32(adler, buffer, index, length);
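The same seed-then-update pattern applies in other zlib bindings. For instance, Python's zlib.adler32 can be chained incrementally over chunks and yields the same checksum as a single call over the whole buffer:

```python
import zlib

data = b"hello world"
adler = zlib.adler32(b"")              # initial/seed value
adler = zlib.adler32(data[:5], adler)  # update with the first chunk
adler = zlib.adler32(data[5:], adler)  # update with the rest

# Incremental result matches the one-shot checksum:
print(adler == zlib.adler32(data))  # True
```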
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(outputFile))
- {
- using (Stream compressor = new GZipStream(raw, CompressionMode.Compress))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
- Dim outputFile As String = (fileToCompress & ".compressed")
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(outputFile)
- Using compressor As Stream = New GZipStream(raw, CompressionMode.Compress)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- private void GunZipFile(string filename)
- {
- if (!filename.EndsWith(".gz"))
- throw new ArgumentException("filename");
- var DecompressedFile = filename.Substring(0, filename.Length - 3);
- byte[] working = new byte[WORKING_BUFFER_SIZE];
- int n = 1;
- using (System.IO.Stream input = System.IO.File.OpenRead(filename))
- {
- using (Stream decompressor = new Ionic.Zlib.GZipStream(input, CompressionMode.Decompress, true))
- {
- using (var output = System.IO.File.Create(DecompressedFile))
- {
- while (n != 0)
- {
- n = decompressor.Read(working, 0, working.Length);
- if (n > 0)
- {
- output.Write(working, 0, n);
- }
- }
- }
- }
- }
- }
-
-
-
- Private Sub GunZipFile(ByVal filename As String)
- If Not (filename.EndsWith(".gz")) Then
- Throw New ArgumentException("filename")
- End If
- Dim DecompressedFile As String = filename.Substring(0, filename.Length - 3)
- Dim working(WORKING_BUFFER_SIZE) As Byte
- Dim n As Integer = 1
- Using input As Stream = File.OpenRead(filename)
- Using decompressor As Stream = new Ionic.Zlib.GZipStream(input, CompressionMode.Decompress, True)
- Using output As Stream = File.Create(DecompressedFile)
- Do
- n = decompressor.Read(working, 0, working.Length)
- If n > 0 Then
- output.Write(working, 0, n)
- End If
- Loop While (n > 0)
- End Using
- End Using
- End Using
- End Sub
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(fileToCompress + ".gz"))
- {
- using (Stream compressor = new GZipStream(raw,
- CompressionMode.Compress,
- CompressionLevel.BestCompression))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
-
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(fileToCompress & ".gz")
- Using compressor As Stream = New GZipStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- using (System.IO.Stream input = System.IO.File.OpenRead(fileToCompress))
- {
- using (var raw = System.IO.File.Create(outputFile))
- {
- using (Stream compressor = new GZipStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression, true))
- {
- byte[] buffer = new byte[WORKING_BUFFER_SIZE];
- int n;
- while ((n = input.Read(buffer, 0, buffer.Length)) != 0)
- {
- compressor.Write(buffer, 0, n);
- }
- }
- }
- }
-
-
- Dim outputFile As String = (fileToCompress & ".compressed")
- Using input As Stream = File.OpenRead(fileToCompress)
- Using raw As FileStream = File.Create(outputFile)
- Using compressor As Stream = New GZipStream(raw, CompressionMode.Compress, CompressionLevel.BestCompression, True)
- Dim buffer As Byte() = New Byte(4096) {}
- Dim n As Integer = -1
- Do While (n <> 0)
- If (n > 0) Then
- compressor.Write(buffer, 0, n)
- End If
- n = input.Read(buffer, 0, buffer.Length)
- Loop
- End Using
- End Using
- End Using
-
-
- byte[] working = new byte[WORKING_BUFFER_SIZE];
- using (System.IO.Stream input = System.IO.File.OpenRead(_CompressedFile))
- {
- using (Stream decompressor = new Ionic.Zlib.GZipStream(input, CompressionMode.Decompress, true))
- {
- using (var output = System.IO.File.Create(_DecompressedFile))
- {
- int n;
- while ((n = decompressor.Read(working, 0, working.Length)) != 0)
- {
- output.Write(working, 0, n);
- }
- }
- }
- }
-
- [This documentation is preliminary and is subject to change.]
A function-linking-graph interface is used for constructing shaders that consist of a sequence of precompiled function calls that pass values to each other.
-To get a function-linking-graph interface, call
You can use the function-linking-graph (FLG) interface methods to construct shaders that consist of a sequence of precompiled function calls that pass values to each other. You don't need to write HLSL and then call the HLSL compiler. Instead, the shader structure is specified programmatically via a C++ API. FLG nodes represent input and output signatures and invocations of precompiled library functions. The order of registering the function-call nodes defines the sequence of invocations. You must specify the input signature node first and the output signature node last. FLG edges define how values are passed from one node to another. The data types of passed values must be the same; there is no implicit type conversion. Shape and swizzling rules follow the HLSL behavior. Values can only be passed forward in this sequence.
-A function-linking-graph interface is used for constructing shaders that consist of a sequence of precompiled function calls that pass values to each other.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time.
-To get a function-linking-graph interface, call
You can use the function-linking-graph (FLG) interface methods to construct shaders that consist of a sequence of precompiled function calls that pass values to each other. You don't need to write HLSL and then call the HLSL compiler. Instead, the shader structure is specified programmatically via a C++ API. FLG nodes represent input and output signatures and invocations of precompiled library functions. The order of registering the function-call nodes defines the sequence of invocations. You must specify the input signature node first and the output signature node last. FLG edges define how values are passed from one node to another. The data types of passed values must be the same; there is no implicit type conversion. Shape and swizzling rules follow the HLSL behavior. Values can only be passed forward in this sequence.
Note: [This documentation is preliminary and is subject to change.]
Sets the input signature of the function-linking-graph.
-An array of
A reference to the
[This documentation is preliminary and is subject to change.]
Sets the output signature of the function-linking-graph.
-An array of
A reference to the
[This documentation is preliminary and is subject to change.]
Initializes a shader module from the function-linking-graph object.
-A reference to an
[This documentation is preliminary and is subject to change.]
Creates a call-function linking node to use in the function-linking-graph.
-A reference to the
The name of the function.
A reference to a variable that receives a reference to the
[This documentation is preliminary and is subject to change.]
Passes the return value from a source linking node to a destination linking node.
-A reference to the
A reference to the
The zero-based index of the destination parameter.
Returns
Gets the error from the last function call of the function-linking-graph.
-Initializes a shader module from the function-linking-graph object.
-The address of a reference to an
An optional reference to a variable that receives a reference to the ID3DBlob interface that you can use to access compiler error messages, or
Returns
Sets the input signature of the function-linking-graph.
-An array of
The number of input parameters in the pInputParameters array.
A reference to a variable that receives a reference to the
Returns
Sets the output signature of the function-linking-graph.
-An array of
The number of output parameters in the pOutputParameters array.
A reference to a variable that receives a reference to the
Returns
Creates a call-function linking node to use in the function-linking-graph.
- The optional namespace for the function, or
A reference to the
The name of the function.
A reference to a variable that receives a reference to the
Passes a value from a source linking node to a destination linking node.
-A reference to the
The zero-based index of the source parameter.
A reference to the
The zero-based index of the destination parameter.
Returns
Passes a value with swizzle from a source linking node to a destination linking node.
-A reference to the
The zero-based index of the source parameter.
The name of the source swizzle.
A reference to the
The zero-based index of the destination parameter.
The name of the destination swizzle.
Returns
Gets the error from the last function call of the function-linking-graph.
-A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access the error.
Returns
Generates Microsoft High Level Shader Language (HLSL) shader code that represents the function-linking-graph.
-Reserved
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access the HLSL shader source code that represents the function-linking-graph. You can compile this HLSL code, but first you must add code or include statements for the functions called in the function-linking-graph.
Returns
A function-reflection interface accesses function info.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time.
-To get a function-reflection interface, call
Returns all constant buffers provided by this function
-All references to
Returns all function parameters
-All references to
Gets a description of how a resource is bound to a function.
-A zero-based resource index.
A reference to a
A shader consists of executable code (the compiled HLSL functions) and a set of resources that supply the shader with input data. GetResourceBindingDesc gets info about how one resource in the set is bound as an input to the shader. The ResourceIndex parameter specifies the index for the resource.
-Gets a description of how a resource is bound to a function.
-Resource name.
A reference to a
A shader consists of executable code (the compiled HLSL functions) and a set of resources that supply the shader with input data. GetResourceBindingDescByName gets info about how one resource in the set is bound as an input to the shader. The Name parameter specifies the name of the resource.
-Fills the function descriptor structure for the function.
-Fills the function descriptor structure for the function.
-A reference to a
Returns one of the Direct3D 11 Return Codes.
Gets a constant buffer by index for a function.
-Zero-based index.
A reference to a
A constant buffer supplies either scalar constants or texture constants to a shader. A shader can use one or more constant buffers. For best performance, separate constants into buffers based on the frequency they are updated.
-Gets a constant buffer by name for a function.
-The constant-buffer name.
A reference to a
A constant buffer supplies either scalar constants or texture constants to a shader. A shader can use one or more constant buffers. For best performance, separate constants into buffers based on the frequency they are updated.
-Gets a description of how a resource is bound to a function.
-A zero-based resource index.
A reference to a
Returns one of the Direct3D 11 Return Codes.
A shader consists of executable code (the compiled HLSL functions) and a set of resources that supply the shader with input data. GetResourceBindingDesc gets info about how one resource in the set is bound as an input to the shader. The ResourceIndex parameter specifies the index for the resource.
-Gets a variable by name.
-A reference to a string containing the variable name.
Returns a
Gets a description of how a resource is bound to a function.
-The constant-buffer name of the resource.
A reference to a
Returns one of the Direct3D 11 Return Codes.
A shader consists of executable code (the compiled HLSL functions) and a set of resources that supply the shader with input data. GetResourceBindingDescByName gets info about how one resource in the set is bound as an input to the shader. The Name parameter specifies the name of the resource.
-Gets the function parameter reflector.
-The zero-based index of the function parameter reflector to retrieve.
A reference to a
Values that identify the intended use of a constant-data buffer.
-Bind the constant buffer to an input slot defined in HLSL code (instead of letting the compiler choose the input slot).
Values that identify the intended use of constant-buffer data.
-A buffer containing scalar constants.
A buffer containing texture data.
A buffer containing interface references.
A buffer containing binding information.
Values that indicate the location of a shader #include file.
-You pass a
The local directory.
The system directory.
Values that indicate how the pipeline interprets geometry or hull shader input primitives.
- The
The shader has not been initialized with an input primitive type.
Interpret the input primitive as a point.
Interpret the input primitive as a line.
Interpret the input primitive as a triangle.
Interpret the input primitive as a line with adjacency data.
Interpret the input primitive as a triangle with adjacency data.
Interpret the input primitive as a control point patch with 1 control point.
Interpret the input primitive as a control point patch with 2 control points.
Interpret the input primitive as a control point patch with 3 control points.
Interpret the input primitive as a control point patch with 4 control points.
Interpret the input primitive as a control point patch with 5 control points.
Interpret the input primitive as a control point patch with 6 control points.
Interpret the input primitive as a control point patch with 7 control points.
Interpret the input primitive as a control point patch with 8 control points.
Interpret the input primitive as a control point patch with 9 control points.
Interpret the input primitive as a control point patch with 10 control points.
Interpret the input primitive as a control point patch with 11 control points.
Interpret the input primitive as a control point patch with 12 control points.
Interpret the input primitive as a control point patch with 13 control points.
Interpret the input primitive as a control point patch with 14 control points.
Interpret the input primitive as a control point patch with 15 control points.
Interpret the input primitive as a control point patch with 16 control points.
Interpret the input primitive as a control point patch with 17 control points.
Interpret the input primitive as a control point patch with 18 control points.
Interpret the input primitive as a control point patch with 19 control points.
Interpret the input primitive as a control point patch with 20 control points.
Interpret the input primitive as a control point patch with 21 control points.
Interpret the input primitive as a control point patch with 22 control points.
Interpret the input primitive as a control point patch with 23 control points.
Interpret the input primitive as a control point patch with 24 control points.
Interpret the input primitive as a control point patch with 25 control points.
Interpret the input primitive as a control point patch with 26 control points.
Interpret the input primitive as a control point patch with 27 control points.
Interpret the input primitive as a control point patch with 28 control points.
Interpret the input primitive as a control point patch with 29 control points.
Interpret the input primitive as a control point patch with 30 control points.
Interpret the input primitive as a control point patch with 31 control points.
Interpret the input primitive as a control point patch with 32 control points.
Indicates semantic flags for function parameters.
-The parameter has no semantic flags.
Indicates an input parameter.
Indicates an output parameter.
Values that identify the data types that can be stored in a register.
-A register component type is specified in the ComponentType member of the
The data type is unknown.
32-bit unsigned integer.
32-bit signed integer.
32-bit floating-point number.
Values that identify the return type of a resource.
-A resource return type is specified in the ReturnType member of the
Return type is an unsigned integer value normalized to a value between 0 and 1.
Return type is a signed integer value normalized to a value between -1 and 1.
Return type is a signed integer.
Return type is an unsigned integer.
Return type is a floating-point number.
Return type is unknown.
Return type is a double-precision value.
Return type is a multiple-dword type, such as a double or uint64, and the component is continued from the previous component that was declared. The first component represents the lower bits.
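For instance, the normalized-integer return types above can be sketched numerically. This is an illustrative Python sketch with 8-bit widths chosen for the example; the "-128 and -127 both map to -1.0" clamp follows the usual SNORM convention:

```python
def unorm8_to_float(x):
    """Map an 8-bit unsigned value to [0, 1] (UNORM-style)."""
    return x / 255.0

def snorm8_to_float(x):
    """Map an 8-bit signed value to [-1, 1] (SNORM-style); the lowest
    two encodings both clamp to -1.0."""
    return max(x / 127.0, -1.0)

print(unorm8_to_float(255))   # 1.0
print(unorm8_to_float(0))     # 0.0
print(snorm8_to_float(-128))  # -1.0
print(snorm8_to_float(127))   # 1.0
```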
Values that identify parts of the content of an arbitrary length data buffer.
-These values are passed to the
Values that identify shader-input options.
-Assign a shader input to a register based on the register assignment in the HLSL code (instead of letting the compiler choose the register).
Use a comparison sampler, which uses the SampleCmp (DirectX HLSL Texture Object) and SampleCmpLevelZero (DirectX HLSL Texture Object) sampling functions.
A 2-bit value for encoding texture components.
A 2-bit value for encoding texture components.
A 2-bit value for encoding texture components.
This value is reserved.
Values that identify resource types that can be bound to a shader and that are reflected as part of the resource description for the shader.
-The shader resource is a constant buffer.
The shader resource is a texture buffer.
The shader resource is a texture.
The shader resource is a sampler.
The shader resource is a read-and-write buffer.
The shader resource is a structured buffer.
For more information about structured buffers, see the Remarks section.
The shader resource is a read-and-write structured buffer.
The shader resource is a byte-address buffer.
The shader resource is a read-and-write byte-address buffer.
The shader resource is an append-structured buffer.
The shader resource is a consume-structured buffer.
The shader resource is a read-and-write structured buffer that uses the built-in counter to append or consume.
Values that identify the class of a shader variable.
-The class of a shader variable is not a programming class; the class identifies the variable class such as scalar, vector, object, and so on.
The shader variable is a scalar.
The shader variable is a vector.
The shader variable is a row-major matrix.
The shader variable is a column-major matrix.
The shader variable is an object.
The shader variable is a structure.
The shader variable is a class.
The shader variable is an interface.
Values that identify information about a shader variable.
-A call to the
Indicates that the registers assigned to this shader variable were explicitly declared in shader code (instead of automatically assigned by the compiler).
Indicates that this variable is used by this shader. This value confirms that a particular shader variable (which can be common to many different shaders) is indeed used by a particular shader.
Indicates that this variable is an interface.
Indicates that this variable is a parameter of an interface.
Values that identify various data, texture, and buffer types that can be assigned to a shader variable.
- A call to the
The types in a structured buffer describe the structure of the elements in the buffer. The layout of these types generally match their C++ struct counterparts. The following examples show structured buffers:
struct mystruct { float4 val; uint ind; };
RWStructuredBuffer<mystruct> rwbuf;
RWStructuredBuffer<float3> rwbuf2;
The variable is a void reference.
The variable is a boolean.
The variable is an integer.
The variable is a floating-point number.
The variable is a string.
The variable is a texture.
The variable is a 1D texture.
The variable is a 2D texture.
The variable is a 3D texture.
The variable is a texture cube.
The variable is a sampler.
The variable is a 1D sampler.
The variable is a 2D sampler.
The variable is a 3D sampler.
The variable is a cube sampler.
The variable is a pixel shader.
The variable is a vertex shader.
The variable is a pixel fragment.
The variable is a vertex fragment.
The variable is an unsigned integer.
The variable is an 8-bit unsigned integer.
The variable is a geometry shader.
The variable is a rasterizer-state object.
The variable is a depth-stencil-state object.
The variable is a blend-state object.
The variable is a buffer.
The variable is a constant buffer.
The variable is a texture buffer.
The variable is a 1D-texture array.
The variable is a 2D-texture array.
The variable is a render-target view.
The variable is a depth-stencil view.
The variable is a 2D-multisampled texture.
The variable is a 2D-multisampled-texture array.
The variable is a texture-cube array.
The variable holds a compiled hull-shader binary.
The variable holds a compiled domain-shader binary.
The variable is an interface.
The variable holds a compiled compute-shader binary.
The variable is a double precision (64-bit) floating-point number.
The variable is a 1D read-and-write texture.
The variable is an array of 1D read-and-write textures.
The variable is a 2D read-and-write texture.
The variable is an array of 2D read-and-write textures.
The variable is a 3D read-and-write texture.
The variable is a read-and-write buffer.
The variable is a byte-address buffer.
The variable is a read-and-write byte-address buffer.
The variable is a structured buffer.
For more information about structured buffers, see the Remarks section.
The variable is a read-and-write structured buffer.
The variable is an append structured buffer.
The variable is a consume structured buffer.
The variable is an 8-byte FLOAT.
The variable is a 10-byte FLOAT.
The variable is a 16-byte FLOAT.
The variable is a 12-byte INT.
The variable is a 16-byte INT.
The variable is a 16-byte UINT.
Indicates shader type.
-Pixel shader.
Vertex shader.
Geometry shader.
Hull shader.
Domain shader.
Compute shader.
Indicates the end of the enumeration constants.
Strip flag options.
-These flags are used by
Values that identify shader parameters that use system-value semantics.
- The
This parameter does not use a predefined system-value semantic.
This parameter contains position data.
This parameter contains clip-distance data.
This parameter contains cull-distance data.
This parameter contains a render-target-array index.
This parameter contains a viewport-array index.
This parameter contains a vertex ID.
This parameter contains a primitive ID.
This parameter contains an instance ID.
This parameter contains data that identifies whether or not the primitive faces the camera.
This parameter contains a sampler-array index.
This parameter contains one of four tessellation factors that correspond to the number of parts into which a quad patch is broken along the given edge. This flag is used to tessellate a quad patch.
This parameter contains one of two tessellation factors that correspond to the number of parts into which a quad patch is broken vertically and horizontally within the patch. This flag is used to tessellate a quad patch.
This parameter contains one of three tessellation factors that correspond to the number of parts into which a tri patch is broken along the given edge. This flag is used to tessellate a tri patch.
This parameter contains the tessellation factor that corresponds to the number of parts into which a tri patch is broken within the patch. This flag is used to tessellate a tri patch.
This parameter contains the tessellation factor that corresponds to the number of lines into which the patch is broken. This flag is used to tessellate an isolines patch.
This parameter contains the tessellation factor that corresponds to the number of lines that are created within the patch. This flag is used to tessellate an isolines patch.
This parameter contains render-target data.
This parameter contains depth data.
This parameter contains alpha-coverage data.
This parameter signifies that the value is greater than or equal to a reference value. This flag is used to specify conservative depth for a pixel shader.
This parameter signifies that the value is less than or equal to a reference value. This flag is used to specify conservative depth for a pixel shader.
This parameter contains a stencil reference. See Shader Specified Stencil Reference Value.
This parameter contains inner input coverage data. See Conservative Rasterization.
Values that identify domain options for tessellator data.
-The data domain defines the type of data. This enumeration is used by
The data type is undefined.
Isoline data.
Triangle data.
Quad data.
Values that identify output primitive types.
-The output primitive type determines how the tessellator output data is organized; this enumeration is used by
The output primitive type is undefined.
The output primitive type is a point.
The output primitive type is a line.
The output primitive type is a clockwise triangle.
The output primitive type is a counterclockwise triangle.
Values that identify partitioning options.
-During tessellation, the partition option helps to determine how the algorithm chooses the next partition value; this enumeration is used by
The partitioning type is undefined.
Partition with integers only.
Partition with a power-of-two number only.
Partition with an odd, fractional number.
Partition with an even, fractional number.
Reads a file that is on disk into memory.
-A reference to a constant null-terminated string that contains the name of the file to read into memory.
A reference to a variable that receives a reference to the ID3DBlob interface that contains information that
Returns one of the Direct3D 11 return codes.
Writes a memory blob to a file on disk.
-A reference to a ID3DBlob interface that contains the memory blob to write to the file that the pFileName parameter specifies.
A reference to a constant null-terminated string that contains the name of the file to which to write.
A Boolean value that specifies whether to overwrite information in the pFileName file. TRUE specifies to overwrite information and FALSE specifies not to.
Returns one of the Direct3D 11 return codes.
Compile HLSL code or an effect file into bytecode for a given target.
-A reference to uncompiled shader data; either ASCII HLSL code or a compiled effect.
Length of pSrcData.
You can use this parameter for strings that specify error messages. If not used, set to
An array of
Optional. A reference to an
#define D3D_COMPILE_STANDARD_FILE_INCLUDE ((ID3DInclude*)(UINT_PTR)1)
The name of the shader entry point function where shader execution begins. When you compile using a fx profile (for example, fx_4_0, fx_5_0, and so on),
A string that specifies the shader target or set of shader features to compile against. The shader target can be shader model 2, shader model 3, shader model 4, or shader model 5. The target can also be an effect type (for example, fx_4_1). For info about the targets that various profiles support, see Specifying Compiler Targets.
Flags defined by D3D compile constants.
Flags defined by D3D compile effect constants. When you compile a shader and not an effect file,
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access the compiled code.
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access compiler error messages, or
Returns one of the Direct3D 11 return codes.
The difference between
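As an illustrative sketch only (it requires a Windows build environment with d3dcompiler.h and d3dcompiler.lib, and the shader source, entry-point name, and flag choices here are examples, not prescriptions), a typical call compiling a pixel-shader entry point follows the parameter order described above:

```cpp
#include <d3dcompiler.h>  // link against d3dcompiler.lib

// Example HLSL source; "main" and "ps_5_0" are illustrative choices.
const char hlsl[] =
    "float4 main() : SV_Target { return float4(1, 0, 0, 1); }";

ID3DBlob* code = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompile(
    hlsl, sizeof(hlsl) - 1,
    "example.hlsl",                    // name used in error messages
    nullptr,                           // no preprocessor defines
    D3D_COMPILE_STANDARD_FILE_INCLUDE, // default #include handler
    "main",                            // entry point
    "ps_5_0",                          // target profile
    D3DCOMPILE_ENABLE_STRICTNESS,      // compile flags
    0,                                 // effect flags (0 when not compiling an effect)
    &code, &errors);

if (FAILED(hr) && errors) {
    OutputDebugStringA((const char*)errors->GetBufferPointer());
}
```

On success, the blob in `code` holds the compiled bytecode; on failure, `errors` (when non-null) holds the compiler messages.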
Compiles Microsoft High Level Shader Language (HLSL) code into bytecode for a given target.
-A reference to uncompiled shader data (ASCII HLSL code).
The size, in bytes, of the block of memory that pSrcData points to.
An optional reference to a constant null-terminated string containing the name that identifies the source data to use in error messages. If not used, set to
An optional array of
A reference to an
#define D3D_COMPILE_STANDARD_FILE_INCLUDE ((ID3DInclude*)(UINT_PTR)1)
A reference to a constant null-terminated string that contains the name of the shader entry point function where shader execution begins. When you compile an effect,
A reference to a constant null-terminated string that specifies the shader target or set of shader features to compile against. The shader target can be a shader model (for example, shader model 2, shader model 3, shader model 4, or shader model 5). The target can also be an effect type (for example, fx_4_1). For info about the targets that various profiles support, see Specifying Compiler Targets.
A combination of shader D3D compile constants that are combined by using a bitwise OR operation. The resulting value specifies how the compiler compiles the HLSL code.
A combination of effect D3D compile effect constants that are combined by using a bitwise OR operation. The resulting value specifies how the compiler compiles the effect. When you compile a shader and not an effect file,
A combination of the following flags that are combined by using a bitwise OR operation. The resulting value specifies how the compiler compiles the HLSL code.
Flag | Description |
---|---|
Merge unordered access view (UAV) slots in the secondary data that the pSecondaryData parameter points to. | |
Preserve template slots in the secondary data that the pSecondaryData parameter points to. | |
Require that templates in the secondary data that the pSecondaryData parameter points to match when the compiler compiles the HLSL code. |
If pSecondaryData is
A reference to secondary data. If you don't pass secondary data, set to
The size, in bytes, of the block of memory that pSecondaryData points to. If pSecondaryData is
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access the compiled code.
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access compiler error messages, or
Returns one of the Direct3D 11 return codes.
The difference between
Compiles Microsoft High Level Shader Language (HLSL) code into bytecode for a given target.
-Returns one of the Direct3D 11 return codes.
Preprocesses uncompiled HLSL code.
-A reference to uncompiled shader data; either ASCII HLSL code or a compiled effect.
Length of pSrcData.
The name of the file that contains the uncompiled HLSL code.
An array of
A reference to an
#define D3D_COMPILE_STANDARD_FILE_INCLUDE ((ID3DInclude*)(UINT_PTR)1)
The address of a ID3DBlob that contains the compiled code.
A reference to an ID3DBlob that contains compiler error messages, or
Returns one of the Direct3D 11 return codes.
Gets shader debug information.
-A reference to source data; either uncompiled or compiled HLSL code.
Length of pSrcData.
A reference to a buffer that receives the ID3DBlob interface that contains debug information.
Returns one of the Direct3D 11 return codes.
Debug information is embedded in the body of the shader after calling
Gets a reference to a reflection interface.
-A reference to source data as compiled HLSL code.
Length of pSrcData.
The reference
A reference to a reflection interface.
Returns one of the Direct3D 11 return codes.
Shader code contains metadata that can be inspected using the reflection APIs.
The following code illustrates retrieving a
pd3dDevice->CreatePixelShader( pPixelShaderBuffer->GetBufferPointer(), pPixelShaderBuffer->GetBufferSize(), g_pPSClassLinkage, &g_pPixelShader );
ID3D11ShaderReflection* pReflector = NULL;
D3DReflect( pPixelShaderBuffer->GetBufferPointer(), pPixelShaderBuffer->GetBufferSize(), IID_ID3D11ShaderReflection, (void**) &pReflector );
Creates a library-reflection interface from source data that contains an HLSL library of functions.
Note: This function is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. -A reference to source data as an HLSL library of functions.
The size, in bytes, of the block of memory that pSrcData points to.
The reference
A reference to a variable that receives a reference to a library-reflection interface,
Returns
Disassembles compiled HLSL code.
-A reference to source data as compiled HLSL code.
Length of pSrcData.
Flags affecting the behavior of
Flag | Description |
---|---|
Enable the output of color codes. | |
Enable the output of default values. | |
Enable instruction numbering. | |
No effect. | |
Disable debug information. | |
Enable instruction offsets. | |
Disassemble instructions only. | |
Use hex symbols in disassemblies. | |
The comment string at the top of the shader that identifies the shader constants and variables.
A reference to a buffer that receives the ID3DBlob interface that accesses assembly text.
Returns one of the Direct3D 11 return codes.
Disassembles a specific region of compiled Microsoft High Level Shader Language (HLSL) code.
-A reference to compiled shader data.
The size, in bytes, of the block of memory that pSrcData points to.
A combination of zero or more of the following flags that are combined by using a bitwise OR operation. The resulting value specifies how
Flag | Description |
---|---|
Enable the output of color codes. | |
Enable the output of default values. | |
Enable instruction numbering. | |
No effect. | |
Disable the output of debug information. | |
Enable the output of instruction offsets. | |
This flag has no effect in |
A reference to a constant null-terminated string at the top of the shader that identifies the shader constants and variables.
The number of bytes offset into the compiled shader data where
The number of instructions to disassemble.
A reference to a variable that receives the number of bytes offset into the compiled shader data where
A reference to a buffer that receives the ID3DBlob interface that accesses the disassembled HLSL code.
Returns one of the Direct3D 11 return codes.
Creates a linker interface.
Note: This function is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. -A reference to a variable that receives a reference to the
Returns
Creates a shader module interface from source data for the shader module.
Note: This function is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. -A reference to the source data for the shader module.
The size, in bytes, of the block of memory that pSrcData points to.
A reference to a variable that receives a reference to the
Returns
Creates a function-linking-graph interface.
Note: This function is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. -Reserved
A reference to a variable that receives a reference to the
Returns
Retrieves the byte offsets for instructions within a section of shader code.
-A reference to the compiled shader data.
The size, in bytes, of the block of memory that pSrcData points to.
A combination of the following flags that are combined by using a bitwise OR operation. The resulting value specifies how
Flag | Description |
---|---|
D3D_GET_INST_OFFSETS_INCLUDE_NON_EXECUTABLE (0x01) | Include non-executable code in the retrieved information. |
The index of the instruction in the compiled shader data for which
The number of instructions for which
A reference to a variable that receives the total number of instructions in the section of shader code.
A reference to a variable that receives the actual number of offsets.
A new kind of Microsoft High Level Shader Language (HLSL) debugging information from a program database (PDB) file uses instruction-byte offsets within a shader blob (arbitrary-length data buffer). You use
Gets the input signature from a compilation result.
-Returns one of the Direct3D 11 return codes.
Gets the output signature from a compilation result.
-Returns one of the Direct3D 11 return codes.
Gets the input and output signatures from a compilation result.
-Returns one of the Direct3D 11 return codes.
Removes unwanted blobs from a compilation result.
-A reference to source data as compiled HLSL code.
Length of pSrcData.
Strip flag options, represented by
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access the unwanted stripped out shader code.
Returns one of the Direct3D 11 return codes.
Retrieves a specific part from a compilation result.
-A reference to uncompiled shader data; either ASCII HLSL code or a compiled effect.
Length of uncompiled shader data that pSrcData points to.
A
Flags that indicate how to retrieve the blob part. Currently, no flags are defined.
The address of a reference to the ID3DBlob interface that is used to retrieve the specified part of the buffer.
Returns one of the Direct3D 11 return codes.
Sets information in a compilation result.
-A reference to compiled shader data.
The length of the compiled shader data that pSrcData points to.
A
Flags that indicate how to set the blob part. Currently, no flags are defined; therefore, set to zero.
A reference to data to set in the compilation result.
The length of the data that pPart points to.
A reference to a buffer that receives the ID3DBlob interface for the new shader in which the new part data is set.
Returns one of the Direct3D 11 return codes.
Creates a buffer.
-Number of bytes in the blob.
The address of a reference to the ID3DBlob interface that is used to retrieve the buffer.
Returns one of the Direct3D 11 return codes.
The latest D3dcompiler_nn.dll contains the
Compresses a set of shaders into a more compact form.
-The number of shaders to compress.
An array of
Flags that indicate how to compress the shaders. Currently, only the D3D_COMPRESS_SHADER_KEEP_ALL_PARTS (0x00000001) flag is defined.
The address of a reference to the ID3DBlob interface that is used to retrieve the compressed shader data.
Returns one of the Direct3D 11 return codes.
Decompresses one or more shaders from a compressed set.
-A reference to uncompiled shader data; either ASCII HLSL code or a compiled effect.
Length of uncompiled shader data that pSrcData points to.
The number of shaders to decompress.
The index of the first shader to decompress.
An array of indexes that represent the shaders to decompress.
Flags that indicate how to decompress. Currently, no flags are defined.
The address of a reference to the ID3DBlob interface that is used to retrieve the decompressed shader data.
A reference to a variable that receives the total number of shaders that
Returns one of the Direct3D 11 return codes.
This shader-reflection interface provides access to a constant buffer.
- To create a constant-buffer interface, call
Get a constant-buffer description.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a constant-buffer description.
-A reference to a
Returns one of the following Direct3D 11 Return Codes.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-reflection variable by index.
-Zero-based index.
A reference to a shader-reflection variable interface (see
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-reflection variable by name.
-Variable name.
Returns a sentinel object (end of list marker). To determine if GetVariableByName successfully completed, call
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-A function-parameter-reflection interface accesses function-parameter info.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. -To get a function-parameter-reflection interface, call
Fills the parameter descriptor structure for the function's parameter.
-Fills the parameter descriptor structure for the function's parameter.
-A reference to a
Returns one of the Direct3D 11 Return Codes.
To use this interface, create an interface that inherits from
A library-reflection interface accesses library info.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. - To get a library-reflection interface, call
Fills the library descriptor structure for the library reflection.
-Fills the library descriptor structure for the library reflection.
-A reference to a
Returns one of the Direct3D 11 Return Codes.
Gets the function reflector.
-The zero-based index of the function reflector to retrieve.
A reference to a
Returns all function reflectors provided by this library
-All references to
A linker interface is used to link a shader module.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. - To get a linker interface, call
[This documentation is preliminary and is subject to change.]
Links the shader and produces a shader blob that the Direct3D runtime can use.
-Links the shader and produces a shader blob that the Direct3D runtime can use.
- A reference to the
The name of the shader module instance to link from.
The name for the shader blob that is produced.
Reserved.
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access the compiled shader code.
A reference to a variable that receives a reference to the ID3DBlob interface that you can use to access compiler error messages.
Returns
Adds an instance of a library module to be used for linking.
-A reference to the
Returns
Adds a clip plane with the plane coefficients taken from a cbuffer entry for 10Level9 shaders.
-Returns
[This documentation is preliminary and is subject to change.]
Links the shader and produces a shader blob that the Direct3D runtime can use.
-A reference to the
The name of the shader module instance to link from.
The name for the shader blob that is produced.
Reserved
Returns the compiled
A linking-node interface is used for shader linking.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. - To get a linking-node interface, call
A module interface creates an instance of a module that is used for resource rebinding.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. - To get a module interface, call
[This documentation is preliminary and is subject to change.]
Initializes an instance of a shader module that is used for resource rebinding.
-Initializes an instance of a shader module that is used for resource rebinding.
-The name of a shader module to initialize. This can be
The address of a reference to an
Returns
A module-instance interface is used for resource rebinding.
Note: This interface is part of the HLSL shader linking technology that you can use on all Direct3D 11 platforms to create precompiled HLSL functions, package them into libraries, and link them into full shaders at run time. - To get a module-instance interface, call
[This documentation is preliminary and is subject to change.]
Rebinds a resource by name as an unordered access view (UAV) to destination slots.
-Rebinds a constant buffer from a source slot to a destination slot.
-The source slot number for rebinding.
The destination slot number for rebinding.
The offset in bytes of the destination slot for rebinding. The offset must have 16-byte alignment.
Returns:
Rebinds a constant buffer by name to a destination slot.
-The name of the constant buffer for rebinding.
The destination slot number for rebinding.
The offset in bytes of the destination slot for rebinding. The offset must have 16-byte alignment.
Returns:
Rebinds a texture or buffer from source slot to destination slot.
-The first source slot number for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds a texture or buffer by name to destination slots.
-The name of the texture or buffer for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds a sampler from source slot to destination slot.
-The first source slot number for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds a sampler by name to destination slots.
-The name of the sampler for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds an unordered access view (UAV) from source slot to destination slot.
-The first source slot number for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds an unordered access view (UAV) by name to destination slots.
-The name of the UAV for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds a resource as an unordered access view (UAV) from source slot to destination slot.
-The first source slot number for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
Rebinds a resource by name as an unordered access view (UAV) to destination slots.
-The name of the resource for rebinding.
The first destination slot number for rebinding.
The number of slots for rebinding.
Returns:
The address of a reference to an
The name of a shader module to initialize. This can be
The address of a reference to an
A shader-reflection interface accesses shader information.
- An
pd3dDevice->CreatePixelShader( pPixelShaderBuffer->GetBufferPointer(), pPixelShaderBuffer->GetBufferSize(), g_pPSClassLinkage, &g_pPixelShader );
ID3D11ShaderReflection* pReflector = NULL;
D3DReflect( pPixelShaderBuffer->GetBufferPointer(), pPixelShaderBuffer->GetBufferSize(), IID_ID3D11ShaderReflection, (void**) &pReflector );
Get a shader description.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of Mov instructions.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of Movc instructions.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of conversion instructions.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of bitwise instructions.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the geometry-shader input-primitive description.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Indicates whether a shader is a sample frequency shader.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of interface slots in a shader.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the minimum feature level.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets a group of flags that indicates the requirements of a shader.
-Here is how the D3D11Shader.h header defines the shader requirements flags:
#define D3D_SHADER_REQUIRES_DOUBLES 0x00000001
#define D3D_SHADER_REQUIRES_EARLY_DEPTH_STENCIL 0x00000002
#define D3D_SHADER_REQUIRES_UAVS_AT_EVERY_STAGE 0x00000004
#define D3D_SHADER_REQUIRES_64_UAVS 0x00000008
#define D3D_SHADER_REQUIRES_MINIMUM_PRECISION 0x00000010
#define D3D_SHADER_REQUIRES_11_1_DOUBLE_EXTENSIONS 0x00000020
#define D3D_SHADER_REQUIRES_11_1_SHADER_EXTENSIONS 0x00000040
#define D3D_SHADER_REQUIRES_LEVEL_9_COMPARISON_FILTERING 0x00000080
Get a shader description.
-A reference to a shader description. See
Returns one of the following Direct3D 11 Return Codes.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a constant buffer by index.
-Zero-based index.
A reference to a constant buffer (see
A constant buffer supplies either scalar constants or texture constants to a shader. A shader can use one or more constant buffers. For best performance, separate constants into buffers based on how frequently they are updated.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a constant buffer by name.
-The constant-buffer name.
A reference to a constant buffer (see
A constant buffer supplies either scalar constants or texture constants to a shader. A shader can use one or more constant buffers. For best performance, separate constants into buffers based on how frequently they are updated.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a description of how a resource is bound to a shader.
-A zero-based resource index.
A reference to an input-binding description. See
A shader consists of executable code (the compiled HLSL functions) and a set of resources that supply the shader with input data. GetResourceBindingDesc gets information about how one resource in the set is bound as an input to the shader. The ResourceIndex parameter specifies the index for the resource.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get an input-parameter description for a shader.
-A zero-based parameter index.
A reference to a shader-input-signature description. See
An input-parameter description is also called a shader signature. The shader signature contains information about the input parameters, such as the order of the parameters, their data types, and their semantics.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get an output-parameter description for a shader.
-A zero-based parameter index.
A reference to a shader-output-parameter description. See
An output-parameter description is also called a shader signature. The shader signature contains information about the output parameters, such as the order of the parameters, their data types, and their semantics.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a patch-constant parameter description for a shader.
-A zero-based parameter index.
A reference to a shader-input-signature description. See
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets a variable by name.
-A reference to a string containing the variable name.
Returns a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a description of how a resource is bound to a shader.
-The constant-buffer name of the resource.
A reference to an input-binding description. See
A shader consists of executable code (the compiled HLSL functions) and a set of resources that supply the shader with input data. GetResourceBindingDescByName gets information about how one resource in the set is bound as an input to the shader. The Name parameter specifies the name of the resource.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of Mov instructions.
-Returns the number of Mov instructions.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of Movc instructions.
-Returns the number of Movc instructions.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of conversion instructions.
-Returns the number of conversion instructions.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of bitwise instructions.
-The number of bitwise instructions.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the geometry-shader input-primitive description.
- The input-primitive description. See
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Indicates whether a shader is a sample frequency shader.
-Returns true if the shader is a sample frequency shader; otherwise returns false.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of interface slots in a shader.
-The number of interface slots in the shader.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the minimum feature level.
- A reference to one of the enumerated values in
Returns one of the following Direct3D 11 Return Codes.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Retrieves the sizes, in units of threads, of the X, Y, and Z dimensions of the shader's thread-group grid.
-A reference to the size, in threads, of the x-dimension of the thread-group grid. The maximum size is 1024.
A reference to the size, in threads, of the y-dimension of the thread-group grid. The maximum size is 1024.
A reference to the size, in threads, of the z-dimension of the thread-group grid. The maximum size is 64.
Returns the total size, in threads, of the thread-group grid by calculating the product of the size of each dimension.
*pSizeX * *pSizeY * *pSizeZ;
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
When a compute shader is written, it defines the actions of a single thread group only. If multiple thread groups are required, it is the role of the
Gets a group of flags that indicates the requirements of a shader.
-A value that contains a combination of one or more shader requirements flags; each flag specifies a requirement of the shader. A default value of 0 means there are no requirements.
Shader requirement flag | Description
---|---
D3D_SHADER_REQUIRES_DOUBLES | Shader requires that the graphics driver and hardware support the double data type.
D3D_SHADER_REQUIRES_EARLY_DEPTH_STENCIL | Shader requires an early depth stencil.
D3D_SHADER_REQUIRES_UAVS_AT_EVERY_STAGE | Shader requires unordered access views (UAVs) at every pipeline stage.
D3D_SHADER_REQUIRES_64_UAVS | Shader requires 64 UAVs.
D3D_SHADER_REQUIRES_MINIMUM_PRECISION | Shader requires the graphics driver and hardware to support minimum precision. For more info, see Using HLSL minimum precision.
D3D_SHADER_REQUIRES_11_1_DOUBLE_EXTENSIONS | Shader requires that the graphics driver and hardware support extended doubles instructions. For more info, see the ExtendedDoublesShaderInstructions member of D3D11_FEATURE_DATA_D3D11_OPTIONS.
D3D_SHADER_REQUIRES_11_1_SHADER_EXTENSIONS | Shader requires that the graphics driver and hardware support the msad4 intrinsic function in shaders. For more info, see the SAD4ShaderInstructions member of D3D11_FEATURE_DATA_D3D11_OPTIONS.
D3D_SHADER_REQUIRES_LEVEL_9_COMPARISON_FILTERING | Shader requires that the graphics driver and hardware support Direct3D 9 shadow support.
D3D_SHADER_REQUIRES_TILED_RESOURCES | Shader requires that the graphics driver and hardware support tiled resources. For more info, see GetResourceTiling.
Here is how the D3D11Shader.h header defines the shader requirements flags:
#define D3D_SHADER_REQUIRES_DOUBLES                         0x00000001
#define D3D_SHADER_REQUIRES_EARLY_DEPTH_STENCIL             0x00000002
#define D3D_SHADER_REQUIRES_UAVS_AT_EVERY_STAGE             0x00000004
#define D3D_SHADER_REQUIRES_64_UAVS                         0x00000008
#define D3D_SHADER_REQUIRES_MINIMUM_PRECISION               0x00000010
#define D3D_SHADER_REQUIRES_11_1_DOUBLE_EXTENSIONS          0x00000020
#define D3D_SHADER_REQUIRES_11_1_SHADER_EXTENSIONS          0x00000040
#define D3D_SHADER_REQUIRES_LEVEL_9_COMPARISON_FILTERING    0x00000080
Get an interface by index.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get the description of a shader-reflection-variable type.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the base class of a class.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets an
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of interfaces.
-This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get the description of a shader-reflection-variable type.
-A reference to a shader-type description (see
Returns one of the following Direct3D 11 Return Codes.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-reflection-variable type by index.
-Zero-based index.
A reference to a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-reflection-variable type by name.
-Member name.
A reference to a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-reflection-variable type.
-Zero-based index.
The variable type.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Indicates whether two
Returns
IsEqual indicates whether the sources of the
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the base class of a class.
-Returns a reference to a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets an
Returns A reference to a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the number of interfaces.
-Returns the number of interfaces.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get an interface by index.
-Zero-based index.
A reference to a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Indicates whether a variable is of the specified type.
-A reference to a
Returns
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Indicates whether a class type implements an interface.
-A reference to a
Returns
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Gets the corresponding interface slot for a variable that represents an interface reference.
-GetInterfaceSlot gets the corresponding slot in a dynamic linkage array for an interface instance. The returned slot number is used to set an interface instance to a particular class instance. See the HLSL Interfaces and Classes overview for additional information.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-variable description.
-This method can be used to determine if the
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-This method returns the buffer of the current
Get a shader-variable description.
-A reference to a shader-variable description (see
Returns one of the following Direct3D 11 Return Codes.
This method can be used to determine if the
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Get a shader-variable type.
-A reference to a
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-This method returns the buffer of the current
Returns a reference to the
Gets the corresponding interface slot for a variable that represents an interface reference.
-Index of the array element to get the slot number for. For a non-array variable this value will be zero.
Returns the index of the interface in the interface array.
GetInterfaceSlot gets the corresponding slot in a dynamic linkage array for an interface instance. The returned slot number is used to set an interface instance to a particular class instance. See the HLSL Interfaces and Classes overview for additional information.
This method's interface is hosted in the out-of-box DLL D3DCompiler_xx.dll.
-Describes a shader constant-buffer.
-Constants are supplied to shaders in a shader-constant buffer. Get the description of a shader-constant-buffer by calling
The name of the buffer.
A
The number of unique variables.
Buffer size (in bytes).
A combination of
Describes a function.
-The shader version.
The name of the originator of the function.
A combination of D3DCOMPILE Constants that are combined by using a bitwise OR operation. The resulting value specifies shader compilation and parsing.
The number of constant buffers for the function.
The number of bound resources for the function.
The number of emitted instructions for the function.
The number of temporary registers used by the function.
The number of temporary arrays used by the function.
The number of constant defines for the function.
The number of declarations (input + output) for the function.
The number of non-categorized texture instructions for the function.
The number of texture load instructions for the function.
The number of texture comparison instructions for the function.
The number of texture bias instructions for the function.
The number of texture gradient instructions for the function.
The number of floating point arithmetic instructions used by the function.
The number of signed integer arithmetic instructions used by the function.
The number of unsigned integer arithmetic instructions used by the function.
The number of static flow control instructions used by the function.
The number of dynamic flow control instructions used by the function.
The number of macro instructions used by the function.
The number of array instructions used by the function.
The number of mov instructions used by the function.
The number of movc instructions used by the function.
The number of type conversion instructions used by the function.
The number of bitwise arithmetic instructions used by the function.
A
A value that contains a combination of one or more shader requirements flags; each flag specifies a requirement of the shader. A default value of 0 means there are no requirements. For a list of values, see
The name of the function.
The number of logical parameters in the function signature, not including the return value.
Indicates whether the function returns a value. TRUE indicates it returns a value; otherwise,
Indicates whether there is a Direct3D 10Level9 vertex shader blob. TRUE indicates there is a 10Level9 vertex shader blob; otherwise,
Indicates whether there is a Direct3D 10Level9 pixel shader blob. TRUE indicates there is a 10Level9 pixel shader blob; otherwise,
Describes how a shader resource is bound to a shader input.
-Get a shader-input-signature description by calling
Name of the shader resource.
A
Starting bind point.
Number of contiguous bind points for arrays.
A combination of
If the input is a texture, the
A
The number of samples for a multisampled texture; when a texture isn't multisampled, the value is set to -1 (0xFFFFFFFF).
Describes a library.
-The name of the originator of the library.
A combination of D3DCOMPILE Constants that are combined by using a bitwise OR operation. The resulting value specifies how the compiler compiles.
The number of functions exported from the library.
Describes a function parameter.
-Get a function-parameter description by calling
The name of the function parameter.
The HLSL semantic that is associated with this function parameter. This name includes the index, for example, SV_Target[n].
A
A
The number of rows for a matrix parameter.
The number of columns for a matrix parameter.
A
A combination of
The first input register for this parameter.
The first input register component for this parameter.
The first output register for this parameter.
The first output register component for this parameter.
Describes shader data.
-An array of
A reference to shader data.
Length of shader data that pBytecode points to.
Describes a shader.
-A shader is written in HLSL and compiled into an intermediate language by the HLSL compiler. The shader description returns information about the compiled shader. Get a shader description by calling
Shader version.
The name of the originator of the shader.
Shader compilation/parse flags.
The number of shader-constant buffers.
The number of resource (textures and buffers) bound to a shader.
The number of parameters in the input signature.
The number of parameters in the output signature.
The number of intermediate-language instructions in the compiled shader.
The number of temporary registers in the compiled shader.
Number of temporary arrays used.
Number of constant defines.
Number of declarations (input + output).
Number of non-categorized texture instructions.
Number of texture load instructions.
Number of texture comparison instructions.
Number of texture bias instructions.
Number of texture gradient instructions.
Number of floating point arithmetic instructions used.
Number of signed integer arithmetic instructions used.
Number of unsigned integer arithmetic instructions used.
Number of static flow control instructions used.
Number of dynamic flow control instructions used.
Number of macro instructions used.
Number of array instructions used.
Number of cut instructions used.
Number of emit instructions used.
The
Geometry shader maximum output vertex count.
The
Number of parameters in the patch-constant signature.
Number of geometry shader instances.
Number of control points in the hull shader and domain shader.
The
The
The
Number of barrier instructions in a compute shader.
Number of interlocked instructions in a compute shader.
Number of texture writes in a compute shader.
Describes a shader signature.
-A shader can take n inputs and can produce m outputs. The order of the input (or output) parameters, their associated types, and any attached semantics make up the shader signature. Each shader has an input and an output signature.
When compiling a shader or an effect, some API calls validate shader signatures. That is, they compare the output signature of one shader (like a vertex shader) with the input signature of another shader (like a pixel shader). This ensures that a shader outputs data that is compatible with a downstream shader that is consuming that data. Compatible means that a shader signature is an exact-match subset of the preceding shader stage. Exact match means parameter types and semantics must exactly match. Subset means that a downstream stage that does not require a parameter does not need to include that parameter in its shader signature.
Get a shader-signature from a shader or an effect by calling APIs such as
A per-parameter string that identifies how the data will be used. For more info, see Semantics.
Semantic index that modifies the semantic. Used to differentiate different parameters that use the same semantic.
The register that will contain this variable's data.
A
A
Mask which indicates which components of a register are used.
Mask which indicates whether a given component is never written (if the signature is an output signature) or always read (if the signature is an input signature).
Indicates which stream the geometry shader is using for the signature parameter.
A
Describes a shader-variable type.
-Get a shader-variable-type description by calling
A
A
Number of rows in a matrix. Otherwise a numeric type returns 1, any other type returns 0.
Number of columns in a matrix. Otherwise a numeric type returns 1, any other type returns 0.
Number of elements in an array; otherwise 0.
Number of members in the structure; otherwise 0.
Offset, in bytes, between the start of the parent structure and this variable. Can be 0 if not a structure member.
Name of the shader-variable type. This member can be
Describes a shader variable.
- Get a shader-variable description using reflection by calling
As of the June 2010 update, DefaultValue emits default values for reflection.
-The variable name.
Offset from the start of the parent structure to the beginning of the variable.
Size of the variable (in bytes).
A combination of
The default value for initializing the variable.
Offset from the start of the variable to the beginning of the texture.
The size of the texture, in bytes.
Offset from the start of the variable to the beginning of the sampler.
The size of the sampler, in bytes.
The
A display subsystem is often referred to as a video card; however, on some machines the display subsystem is part of the motherboard.
To enumerate the display subsystems, use
To get an interface to the adapter for a particular device, use
To create a software adapter, use
Windows Phone 8: This API is supported.
-Gets a DXGI 1.0 description of an adapter (or video card).
-Graphics apps can use the DXGI API to retrieve an accurate set of graphics memory values on systems that have Windows Display Driver Model (WDDM) drivers. The following are the critical steps involved.
BOOL HasWDDMDriver()
{
    LPDIRECT3DCREATE9EX pD3D9Create9Ex = NULL;
    HMODULE hD3D9 = NULL;
    hD3D9 = LoadLibrary( L"d3d9.dll" );
    if ( NULL == hD3D9 )
    {
        return FALSE;
    }
    /* Try to create IDirect3D9Ex interface (also known as a DX9L interface).
       This interface can only be created if the driver is a WDDM driver. */
    pD3D9Create9Ex = (LPDIRECT3DCREATE9EX) GetProcAddress( hD3D9, "Direct3DCreate9Ex" );
    return pD3D9Create9Ex != NULL;
}

IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter;
pDXGIDevice->GetAdapter(&pDXGIAdapter);
DXGI_ADAPTER_DESC adapterDesc;
pDXGIAdapter->GetDesc(&adapterDesc);
Enumerate adapter (video card) outputs.
-The index of the output.
The address of a reference to an
A code that indicates success or failure (see DXGI_ERROR).
If the adapter came from a device created using
When the EnumOutputs method succeeds and fills the ppOutput parameter with the address of the reference to the output interface, EnumOutputs increments the output interface's reference count. To avoid a memory leak, when you finish using the output interface, call the Release method to decrement the reference count.
EnumOutputs first returns the output on which the desktop primary is displayed. This output corresponds with an index of zero. EnumOutputs then returns other outputs.
-Gets a DXGI 1.0 description of an adapter (or video card).
-A reference to a
Returns
Graphics apps can use the DXGI API to retrieve an accurate set of graphics memory values on systems that have Windows Display Driver Model (WDDM) drivers. The following are the critical steps involved.
BOOL HasWDDMDriver()
{
    LPDIRECT3DCREATE9EX pD3D9Create9Ex = NULL;
    HMODULE hD3D9 = NULL;
    hD3D9 = LoadLibrary( L"d3d9.dll" );
    if ( NULL == hD3D9 )
    {
        return FALSE;
    }
    /* Try to create IDirect3D9Ex interface (also known as a DX9L interface).
       This interface can only be created if the driver is a WDDM driver. */
    pD3D9Create9Ex = (LPDIRECT3DCREATE9EX) GetProcAddress( hD3D9, "Direct3DCreate9Ex" );
    return pD3D9Create9Ex != NULL;
}

IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter;
pDXGIDevice->GetAdapter(&pDXGIAdapter);
DXGI_ADAPTER_DESC adapterDesc;
pDXGIAdapter->GetDesc(&adapterDesc);
Checks whether the system supports a device interface for a graphics component.
-The
The user mode driver version of InterfaceName. This is returned only if the interface is supported, otherwise this parameter will be
An
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Returns the adapter for the specified device.
-If the GetAdapter method succeeds, the reference count on the adapter interface will be incremented. To avoid a memory leak, be sure to release the interface when you are finished using it.
-Gets or sets the GPU thread priority.
-Returns the adapter for the specified device.
-The address of an
Returns
If the GetAdapter method succeeds, the reference count on the adapter interface will be incremented. To avoid a memory leak, be sure to release the interface when you are finished using it.
-Returns a surface. This method is used internally and you should not call it directly in your application.
-A reference to a
The number of surfaces to create.
A DXGI_USAGE flag that specifies how the surface is expected to be used.
An optional reference to a
The address of an
Returns
The CreateSurface method creates a buffer to exchange data between one or more devices. It is used internally, and you should not directly call it.
The runtime automatically creates an
Gets the residency status of an array of resources.
-An array of
An array of
The number of resources in the ppResources argument array and pResidencyStatus argument array.
Returns
The information returned by the pResidencyStatus argument array describes the residency status at the time that the QueryResourceResidency method was called.
Note: The residency status will constantly change. If you call the QueryResourceResidency method during a device removed state, the pResidencyStatus argument will return the
Sets the GPU thread priority.
-A value that specifies the required GPU thread priority. This value must be between -7 and 7, inclusive, where 0 represents normal priority.
Return
The values for the Priority parameter function as follows:
To use the SetGPUThreadPriority method, you should have a comprehensive understanding of GPU scheduling. You should profile your application to ensure that it behaves as intended. If used inappropriately, the SetGPUThreadPriority method can impede rendering speed and result in a poor user experience.
-Gets the GPU thread priority.
-A reference to a variable that receives a value that indicates the current GPU thread priority. The value will be between -7 and 7, inclusive, where 0 represents normal priority.
Return
Inherited from objects that are tied to the device so that they can retrieve a reference to it.
-Windows Phone 8: This API is supported.
-Retrieves the device.
-The reference id for the device.
The address of a reference to the device.
A code that indicates success or failure (see DXGI_ERROR).
The type of interface that is returned can be any interface published by the device. For example, it could be an
An
Windows Phone 8: This API is supported.
-Sets application-defined data to the object and associates that data with a
A
The size of the object's data.
A reference to the object's data.
Returns one of the DXGI_ERROR values.
SetPrivateData makes a copy of the specified data and stores it with the object.
Private data that SetPrivateData stores in the object occupies the same storage space as private data that is stored by associated Direct3D objects (for example, by a Microsoft Direct3D 11 device through
The debug layer reports memory leaks by outputting a list of object interface references along with their friendly names. The default friendly name is "<unnamed>". You can set the friendly name so that you can determine if the corresponding object interface reference caused the leak. To set the friendly name, use the SetPrivateData method and the well-known private data
static const char c_szName[] = "My name";
hr = pContext->SetPrivateData( WKPDID_D3DDebugObjectName, sizeof( c_szName ) - 1, c_szName );
You can use
Set an interface in the object's private data.
-A
The interface to set.
Returns one of the following DXGI_ERROR.
This API associates an interface reference with the object.
When the interface is set, its reference count is incremented. When the data are overwritten (by calling SetPrivateData or SetPrivateDataInterface with the same
Get a reference to the object's data.
-A
The size of the data.
Pointer to the data.
Returns one of the following DXGI_ERROR.
If the data returned is a reference to an
You can pass GUID_DeviceType in the Name parameter of GetPrivateData to retrieve the device type from the display adapter object (
To get the type of device on which the display adapter was created
On Windows 7 or earlier, this type is either a value from D3D10_DRIVER_TYPE or
Gets the parent of the object.
-The ID of the requested interface.
The address of a reference to the parent object.
Returns one of the DXGI_ERROR values.
An
Create a factory by calling CreateDXGIFactory.
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain. You can request the
IDXGIDevice * pDXGIDevice = nullptr;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter = nullptr;
hr = pDXGIDevice->GetAdapter( &pDXGIAdapter );
IDXGIFactory * pIDXGIFactory = nullptr;
pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), (void **)&pIDXGIFactory);
Windows Phone 8: This API is supported.
-Enumerates the adapters (video cards).
-The index of the adapter to enumerate.
The address of a reference to an
Returns
When you create a factory, the factory enumerates the set of adapters that are available in the system. Therefore, if you change the adapters in a system, you must destroy and recreate the
When the EnumAdapters method succeeds and fills the ppAdapter parameter with the address of the reference to the adapter interface, EnumAdapters increments the adapter interface's reference count. When you finish using the adapter interface, call the Release method to decrement the reference count before you destroy the reference.
EnumAdapters first returns the adapter with the output on which the desktop primary is displayed. This adapter corresponds with an index of zero. EnumAdapters next returns other adapters with outputs. EnumAdapters finally returns adapters without outputs.
-Allows DXGI to monitor an application's message queue for the alt-enter key sequence (which causes the application to switch from windowed to full screen or vice versa).
-The handle of the window that is to be monitored. This parameter can be
One or more of the following values:
The combination of WindowHandle and Flags informs DXGI to stop monitoring window messages for the previously-associated window.
If the application switches to full-screen mode, DXGI will choose a full-screen resolution to be the smallest supported resolution that is larger or the same size as the current back buffer size.
Applications can make some changes to make the transition from windowed to full screen more efficient. For example, on a WM_SIZE message, the application should release any outstanding swap-chain back buffers, call
While windowed, the application can, if it chooses, restrict the size of its window's client area to sizes to which it is comfortable rendering. A fully flexible application would make no such restriction, but UI elements or other design considerations can, of course, make this flexibility untenable. If the application further chooses to restrict its window's client area to just those that match supported full-screen resolutions, the application can field WM_SIZING, then check against
Applications that want to handle mode changes or Alt+Enter themselves should call MakeWindowAssociation with the
Get the window through which the user controls the transition to and from full screen.
-A reference to a window handle.
[Starting with Direct3D 11.1, we recommend not to use CreateSwapChain anymore to create a swap chain. Instead, use CreateSwapChainForHwnd, CreateSwapChainForCoreWindow, or CreateSwapChainForComposition depending on how you want to create the swap chain.]
Creates a swap chain.
-
If you attempt to create a swap chain in full-screen mode, and full-screen mode is unavailable, the swap chain will be created in windowed mode and
If the buffer width or the buffer height is zero, the sizes will be inferred from the output window size in the swap-chain description.
Because the target output can't be chosen explicitly when the swap chain is created, we recommend not to create a full-screen swap chain. This can reduce presentation performance if the swap chain size and the output window size do not match. Here are two ways to ensure that the sizes match:
If the swap chain is in full-screen mode, before you release it you must use SetFullscreenState to switch it to windowed mode. For more information about releasing a swap chain, see the "Destroying a Swap Chain" section of DXGI Overview.
After the runtime renders the initial frame in full screen, the runtime might unexpectedly exit full screen during a call to
// Detect if newly created full-screen swap chain isn't actually full screen.
IDXGIOutput * pTarget;
BOOL bFullscreen;
if (SUCCEEDED(pSwapChain->GetFullscreenState(&bFullscreen, &pTarget)))
{
    pTarget->Release();
}
else
    bFullscreen = FALSE;
// If not full screen, enable full screen again.
if (!bFullscreen)
{
    ShowWindow(hWnd, SW_MINIMIZE);
    ShowWindow(hWnd, SW_RESTORE);
    pSwapChain->SetFullscreenState(TRUE, NULL);
}
You can specify
However, to use stereo presentation and to change resize behavior for the flip model, applications must use the
Create an adapter interface that represents a software adapter.
-Handle to the software adapter's dll. HMODULE can be obtained with GetModuleHandle or LoadLibrary.
Address of a reference to an adapter (see
A software adapter is a DLL that implements the entirety of a device driver interface, plus emulation, if necessary, of kernel-mode graphics components for Windows. Details on implementing a software adapter can be found in the Windows Vista Driver Development Kit. This is a very complex development task, and is not recommended for general readers.
Calling this method will increment the module's reference count by one. The reference count can be decremented by calling FreeLibrary.
The typical calling scenario is to call LoadLibrary, pass the handle to CreateSoftwareAdapter, then immediately call FreeLibrary on the DLL and forget the DLL's HMODULE. Since the software adapter calls FreeLibrary when it is destroyed, the lifetime of the DLL will now be owned by the adapter, and the application is free of any further consideration of its lifetime.
-The
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
To create a factory, call the CreateDXGIFactory1 function.
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain.
- You can request the
-IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter;
hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);
IDXGIFactory * pIDXGIFactory;
pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), (void **)&pIDXGIFactory);
Informs an application of the possible need to re-enumerate adapters.
-This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
-Enumerates both adapters (video cards) with or without outputs.
-The index of the adapter to enumerate.
The address of a reference to an
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
When you create a factory, the factory enumerates the set of adapters that are available in the system. Therefore, if you change the adapters in a system, you must destroy and recreate the
When the EnumAdapters1 method succeeds and fills the ppAdapter parameter with the address of the reference to the adapter interface, EnumAdapters1 increments the adapter interface's reference count. When you finish using the adapter interface, call the Release method to decrement the reference count before you destroy the reference.
EnumAdapters1 first returns the adapter with the output on which the desktop primary is displayed. This adapter corresponds with an index of zero. EnumAdapters1 next returns other adapters with outputs. EnumAdapters1 finally returns adapters without outputs.
-Informs an application of the possible need to re-enumerate adapters.
-IsCurrent returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
- The
To create a Microsoft DirectX Graphics Infrastructure (DXGI) 1.2 factory interface, pass
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain.
- You can request the
-IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter;
hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);
IDXGIFactory * pIDXGIFactory;
pDXGIAdapter->GetParent(__uuidof(IDXGIFactory), (void **)&pIDXGIFactory);
Determines whether to use stereo mode.
-We recommend that windowed applications call IsWindowedStereoEnabled before they attempt to use stereo. IsWindowedStereoEnabled returns TRUE if both of the following items are true:
The creation of a windowed stereo swap chain succeeds if the first requirement is met. However, if the adapter can't scan out stereo, the output on that adapter is reduced to mono.
The Direct3D 11.1 Simple Stereo 3D Sample shows how to add a stereoscopic 3D effect and how to respond to system stereo changes.
-Determines whether to use stereo mode.
-Indicates whether to use stereo mode. TRUE indicates that you can use stereo mode; otherwise,
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, IsWindowedStereoEnabled always returns
We recommend that windowed applications call IsWindowedStereoEnabled before they attempt to use stereo. IsWindowedStereoEnabled returns TRUE if both of the following items are true:
The creation of a windowed stereo swap chain succeeds if the first requirement is met. However, if the adapter can't scan out stereo, the output on that adapter is reduced to mono.
The Direct3D 11.1 Simple Stereo 3D Sample shows how to add a stereoscopic 3D effect and how to respond to system stereo changes.
-Creates a swap chain that is associated with an
CreateSwapChainForHwnd returns:
Platform Update for Windows 7:
If you specify the width, height, or both (Width and Height members of
Because you can associate only one flip presentation model swap chain at a time with an
For info about how to choose a format for the swap chain's back buffer, see Converting data for the color space.
-Creates a swap chain that is associated with the CoreWindow object for the output window for the swap chain.
-CreateSwapChainForCoreWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, CreateSwapChainForCoreWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
If you specify the width, height, or both (Width and Height members of
Because you can associate only one flip presentation model swap chain (per layer) at a time with a CoreWindow, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain. For more info about this situation, see Deferred Destruction Issues with Flip Presentation Swap Chains.
For info about how to choose a format for the swap chain's back buffer, see Converting data for the color space.
-Identifies the adapter on which a shared resource object was created.
-A handle to a shared resource object. The
A reference to a variable that receives a locally unique identifier (
GetSharedResourceAdapterLuid returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, GetSharedResourceAdapterLuid fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
You cannot share resources across adapters. Therefore, you cannot open a shared resource on an adapter other than the adapter on which the resource was created. Call GetSharedResourceAdapterLuid before you open a shared resource to ensure that the resource was created on the appropriate adapter. To open a shared resource, call the
Registers an application window to receive notification messages of changes of stereo status.
-The handle of the window to send a notification message to when stereo status change occurs.
Identifies the notification message to send.
A reference to a key value that an application can pass to the
RegisterStereoStatusWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterStereoStatusWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Registers to receive notification of changes in stereo status by using event signaling.
-A handle to the event object that the operating system sets when notification of stereo status change occurs. The CreateEvent or OpenEvent function returns this handle.
A reference to a key value that an application can pass to the
RegisterStereoStatusEvent returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterStereoStatusEvent fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Unregisters a window or an event to stop it from receiving notification when stereo status changes.
-A key value for the window or event to unregister. The
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, UnregisterStereoStatus has no effect. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Registers an application window to receive notification messages of changes of occlusion status.
-The handle of the window to send a notification message to when occlusion status change occurs.
Identifies the notification message to send.
A reference to a key value that an application can pass to the
RegisterOcclusionStatusWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterOcclusionStatusWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Apps choose the Windows message that Windows sends when occlusion status changes.
-Registers to receive notification of changes in occlusion status by using event signaling.
-A handle to the event object that the operating system sets when notification of occlusion status change occurs. The CreateEvent or OpenEvent function returns this handle.
A reference to a key value that an application can pass to the
RegisterOcclusionStatusEvent returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, RegisterOcclusionStatusEvent fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
If you call RegisterOcclusionStatusEvent multiple times with the same event handle, RegisterOcclusionStatusEvent fails with
If you call RegisterOcclusionStatusEvent multiple times with different event handles, RegisterOcclusionStatusEvent properly registers the events.
-Unregisters a window or an event to stop it from receiving notification when occlusion status changes.
-A key value for the window or event to unregister. The
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, UnregisterOcclusionStatus has no effect. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Creates a swap chain that you can use to send Direct3D content into the DirectComposition API or the Windows.UI.Xaml framework to compose in a window.
-CreateSwapChainForComposition returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, CreateSwapChainForComposition fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
You can use composition swap chains with either DirectComposition's
The
For info about how to choose a format for the swap chain's back buffer, see Converting data for the color space.
-Enables creating Microsoft DirectX Graphics Infrastructure (DXGI) objects.
- Outputs the
Returns
For Direct3D 12, it's no longer possible to backtrack from a device to the
Provides an adapter which can be provided to
The globally unique identifier (
The address of an
Returns
For more information, see DXGI 1.4 Improvements.
-Identifies the type of DXGI adapter.
-The
Specifies no flags.
Value always set to 0. This flag is reserved.
Specifies a software adapter. For more info about this flag, see new info in Windows 8 about enumerating adapters.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Identifies the type of DXGI adapter.
-The
Specifies no flags.
Value always set to 0. This flag is reserved.
Specifies a software adapter. For more info about this flag, see new info in Windows 8 about enumerating adapters.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Forces this enumeration to compile to 32 bits in size. Without this value, some compilers would allow this enumeration to compile to a size other than 32 bits. This value is not used.
Identifies the alpha value, transparency behavior, of a surface.
-For more information about alpha mode, see
Indicates that the transparency behavior is not specified.
Indicates that the transparency behavior is premultiplied. Each color is first scaled by the alpha value. The alpha value itself is the same in both straight and premultiplied alpha. Typically, no color channel value is greater than the alpha channel value. If a color channel value in a premultiplied format is greater than the alpha channel, the standard source-over blending math results in an additive blend.
Indicates that the transparency behavior is not premultiplied. The alpha channel indicates the transparency of the color.
Indicates to ignore the transparency behavior.
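The premultiplied-versus-straight distinction above can be made concrete with a few lines of code. The following is a minimal illustrative sketch (not a DXGI API; the `Color`, `Premultiply`, and `BlendOver` names are invented here): it converts a straight-alpha color to premultiplied form and performs the source-over blend that premultiplied alpha enables without a per-pixel multiply of the source color.

```cpp
#include <cassert>

// Straight alpha: color channels are stored independently of alpha.
// Premultiplied alpha: each color channel is pre-scaled by alpha, so
// typically no channel value exceeds the alpha value.
struct Color { float r, g, b, a; };

// Convert a straight-alpha color to premultiplied form.
inline Color Premultiply(Color c) {
    return { c.r * c.a, c.g * c.a, c.b * c.a, c.a };
}

// Source-over blend for premultiplied colors: out = src + (1 - src.a) * dst.
// If a premultiplied channel exceeds alpha, this math becomes additive.
inline Color BlendOver(Color src, Color dst) {
    float inv = 1.0f - src.a;
    return { src.r + inv * dst.r, src.g + inv * dst.g,
             src.b + inv * dst.b, src.a + inv * dst.a };
}
```

For example, half-transparent red composed over opaque blue yields equal parts red and blue, with full alpha in the result.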
Specifies color space types.
-This enum is used within DXGI in the CheckColorSpaceSupport, SetColorSpace1 and CheckOverlayColorSpaceSupport methods. It is also referenced in D3D11 video methods such as
The following color parameters are defined:
-Property | Value |
Colorspace | RGB |
Range | 0-255 |
Gamma | 2.2 |
Siting | Image |
Primaries | BT.709 |
This is the standard definition for sRGB. Note that this is often implemented with a linear segment, but in that case the exponent is corrected to stay aligned with a gamma 2.2 curve. This is usually used with 8 bit and 10 bit color channels.
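The "linear segment with a corrected exponent" behavior described above is the standard sRGB transfer function. As an illustrative sketch (the constants are the published sRGB ones, not anything defined by DXGI itself), the encode/decode pair looks like this:

```cpp
#include <cassert>
#include <cmath>

// sRGB opto-electronic transfer: a small linear segment near black, then a
// power segment whose exponent (1/2.4) and offset keep the overall curve
// close to a pure gamma-2.2 response.
inline float LinearToSrgb(float L) {
    return (L <= 0.0031308f)
        ? 12.92f * L
        : 1.055f * std::pow(L, 1.0f / 2.4f) - 0.055f;
}

// Inverse transfer (sRGB-encoded value back to linear light).
inline float SrgbToLinear(float V) {
    return (V <= 0.04045f)
        ? V / 12.92f
        : std::pow((V + 0.055f) / 1.055f, 2.4f);
}
```

The linear segment avoids the infinite slope a pure power curve would have at zero, which matters for quantized 8 bit and 10 bit channels.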
Property | Value |
Colorspace | RGB |
Range | 0-255 |
Gamma | 1.0 |
Siting | Image |
Primaries | BT.709 |
This is the standard definition for scRGB, and is usually used with 16 bit integer, 16 bit floating point, and 32 bit floating point channels.
Property | Value |
Colorspace | RGB |
Range | 16-235 |
Gamma | 2.2 |
Siting | Image |
Primaries | BT.709 |
This is the standard definition for ITU-R Recommendation BT.709. Note that due to the inclusion of a linear segment, the transfer curve looks similar to a pure exponential gamma of 1.9. This is usually used with 8 bit and 10 bit color channels.
Property | Value |
Colorspace | RGB |
Range | 16-235 |
Gamma | 2.2 |
Siting | Image |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Reserved.
Property | Value |
Colorspace | YCbCr |
Range | 0-255 |
Gamma | 2.2 |
Siting | Image |
Primaries | BT.709 |
Transfer | BT.601 |
This definition is commonly used for JPG, and is usually used with 8, 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 16-235 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.601 |
This definition is commonly used for MPEG2, and is usually used with 8, 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 0-255 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.601 |
This is sometimes used for H.264 camera capture, and is usually used with 8, 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 16-235 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.709 |
This definition is commonly used for H.264 and HEVC, and is usually used with 8, 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 0-255 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.709 |
This is sometimes used for H.264 camera capture, and is usually used with 8, 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 16-235 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.2020 |
This definition may be used by HEVC, and is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 0-255 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | RGB |
Range | 0-255 |
Gamma | 2084 |
Siting | Image |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 16-235 |
Gamma | 2084 |
Siting | Video |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | RGB |
Range | 16-235 |
Gamma | 2084 |
Siting | Image |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 16-235 |
Gamma | 2.2 |
Siting | Video |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | YCbCr |
Range | 16-235 |
Gamma | 2084 |
Siting | Video |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
Property | Value |
Colorspace | RGB |
Range | 0-255 |
Gamma | 2.2 |
Siting | Image |
Primaries | BT.2020 |
This is usually used with 10, 12, or 16 bit color channels.
A custom color definition is used.
A custom color definition is used.
Identifies the granularity at which the graphics processing unit (GPU) can be preempted from performing its current compute task.
-You call the
Indicates the preemption granularity as a compute packet.
Indicates the preemption granularity as a dispatch (for example, a call to the
Indicates the preemption granularity as a thread group. A thread group is a part of a dispatch.
Indicates the preemption granularity as a thread in a thread group. A thread is a part of a thread group.
Indicates the preemption granularity as a compute instruction in a thread.
Flags that indicate how the back buffers should be rotated to fit the physical rotation of a monitor.
-Unspecified rotation.
Specifies no rotation.
Specifies 90 degrees of rotation.
Specifies 180 degrees of rotation.
Specifies 270 degrees of rotation.
Flags indicating how an image is stretched to fit a given monitor's resolution.
-Selecting the CENTERED or STRETCHED modes can result in a mode change even if you specify the native resolution of the display in the
This enum is used by the
Unspecified scaling.
Specifies no scaling. The image is centered on the display. This flag is typically used for a fixed-dot-pitch display (such as an LED display).
Specifies stretched scaling.
Flags indicating the method the raster uses to create an image on a surface.
-This enum is used by the
Scanline order is unspecified.
The image is created from the first scanline to the last without skipping any.
The image is created beginning with the upper field.
The image is created beginning with the lower field.
Status codes that can be returned by DXGI functions.
-The
#define _FACDXGI 0x87a
#define MAKE_DXGI_STATUS(code) MAKE_HRESULT(0, _FACDXGI, code)
For example, DXGI_STATUS_OCCLUDED is defined as:
#define DXGI_STATUS_OCCLUDED MAKE_DXGI_STATUS(1)
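Since MAKE_HRESULT packs severity, facility, and code into fixed bit fields of a 32-bit HRESULT (bit 31 = severity, bits 16 and up = facility, low 16 bits = code), the macros above can be reproduced portably. The following sketch mirrors that layout; the `MakeHresult` and `MakeDxgiStatus` names are illustrative stand-ins for the Windows macros so the example runs on any platform.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the Windows HRESULT type.
typedef int32_t HRESULT_t;

// Mirrors MAKE_HRESULT(sev, fac, code):
// bit 31 = severity, facility starts at bit 16, code in the low 16 bits.
constexpr HRESULT_t MakeHresult(uint32_t sev, uint32_t fac, uint32_t code) {
    return static_cast<HRESULT_t>((sev << 31) | (fac << 16) | code);
}

constexpr uint32_t FACDXGI = 0x87a;  // the _FACDXGI facility value

// Mirrors MAKE_DXGI_STATUS: severity 0 means a success/status code,
// so SUCCEEDED() (a non-negative HRESULT) holds for these values.
constexpr HRESULT_t MakeDxgiStatus(uint32_t code) {
    return MakeHresult(0, FACDXGI, code);
}
```

For example, `MakeDxgiStatus(1)` evaluates to 0x087A0001, the documented value of DXGI_STATUS_OCCLUDED.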
Specifies a range of hardware features, to be used when checking for feature support.
-This enum is used by the CheckFeatureSupport method.
-The display supports tearing, a requirement of variable refresh rate displays.
Resource data formats, including fully-typed and typeless formats. A list of modifiers at the bottom of the page more fully describes each format type.
-The format is not known.
A four-component, 128-bit typeless format that supports 32 bits per channel including alpha.
A four-component, 128-bit floating-point format that supports 32 bits per channel including alpha. 1,5,8
A four-component, 128-bit unsigned-integer format that supports 32 bits per channel including alpha.
A four-component, 128-bit signed-integer format that supports 32 bits per channel including alpha.
A three-component, 96-bit typeless format that supports 32 bits per color channel.
A three-component, 96-bit floating-point format that supports 32 bits per color channel.5,8
A three-component, 96-bit unsigned-integer format that supports 32 bits per color channel.
A three-component, 96-bit signed-integer format that supports 32 bits per color channel.
A four-component, 64-bit typeless format that supports 16 bits per channel including alpha.
A four-component, 64-bit floating-point format that supports 16 bits per channel including alpha.5,7
A four-component, 64-bit unsigned-normalized-integer format that supports 16 bits per channel including alpha.
A four-component, 64-bit unsigned-integer format that supports 16 bits per channel including alpha.
A four-component, 64-bit signed-normalized-integer format that supports 16 bits per channel including alpha.
A four-component, 64-bit signed-integer format that supports 16 bits per channel including alpha.
A two-component, 64-bit typeless format that supports 32 bits for the red channel and 32 bits for the green channel.
A two-component, 64-bit floating-point format that supports 32 bits for the red channel and 32 bits for the green channel.5,8
A two-component, 64-bit unsigned-integer format that supports 32 bits for the red channel and 32 bits for the green channel.
A two-component, 64-bit signed-integer format that supports 32 bits for the red channel and 32 bits for the green channel.
A two-component, 64-bit typeless format that supports 32 bits for the red channel, 8 bits for the green channel, and 24 bits are unused.
A 32-bit floating-point component, and two unsigned-integer components (with an additional 32 bits). This format supports 32-bit depth, 8-bit stencil, and 24 bits are unused.
A 32-bit floating-point component, and two typeless components (with an additional 32 bits). This format supports a 32-bit red channel, 8 bits are unused, and 24 bits are unused.
A 32-bit typeless component, and two unsigned-integer components (with an additional 32 bits). This format has 32 bits unused, 8 bits for the green channel, and 24 bits are unused.
A four-component, 32-bit typeless format that supports 10 bits for each color and 2 bits for alpha.
A four-component, 32-bit unsigned-normalized-integer format that supports 10 bits for each color and 2 bits for alpha.
A four-component, 32-bit unsigned-integer format that supports 10 bits for each color and 2 bits for alpha.
Three partial-precision floating-point numbers encoded into a single 32-bit value (a variant of s10e5, which is sign bit, 10-bit mantissa, and 5-bit biased (15) exponent). There are no sign bits, and there is a 5-bit biased (15) exponent for each channel, 6-bit mantissa for R and G, and a 5-bit mantissa for B, as shown in the following illustration.5,7
A four-component, 32-bit typeless format that supports 8 bits per channel including alpha.
A four-component, 32-bit unsigned-normalized-integer format that supports 8 bits per channel including alpha.
A four-component, 32-bit unsigned-normalized integer sRGB format that supports 8 bits per channel including alpha.
A four-component, 32-bit unsigned-integer format that supports 8 bits per channel including alpha.
A four-component, 32-bit signed-normalized-integer format that supports 8 bits per channel including alpha.
A four-component, 32-bit signed-integer format that supports 8 bits per channel including alpha.
A two-component, 32-bit typeless format that supports 16 bits for the red channel and 16 bits for the green channel.
A two-component, 32-bit floating-point format that supports 16 bits for the red channel and 16 bits for the green channel.5,7
A two-component, 32-bit unsigned-normalized-integer format that supports 16 bits each for the green and red channels.
A two-component, 32-bit unsigned-integer format that supports 16 bits for the red channel and 16 bits for the green channel.
A two-component, 32-bit signed-normalized-integer format that supports 16 bits for the red channel and 16 bits for the green channel.
A two-component, 32-bit signed-integer format that supports 16 bits for the red channel and 16 bits for the green channel.
A single-component, 32-bit typeless format that supports 32 bits for the red channel.
A single-component, 32-bit floating-point format that supports 32 bits for depth.5,8
A single-component, 32-bit floating-point format that supports 32 bits for the red channel.5,8
A single-component, 32-bit unsigned-integer format that supports 32 bits for the red channel.
A single-component, 32-bit signed-integer format that supports 32 bits for the red channel.
A two-component, 32-bit typeless format that supports 24 bits for the red channel and 8 bits for the green channel.
A 32-bit z-buffer format that supports 24 bits for depth and 8 bits for stencil.
A 32-bit format that contains a 24-bit, single-component, unsigned-normalized integer, with an additional typeless 8 bits. This format has a 24-bit red channel and 8 bits unused.
A 32-bit format that contains a 24-bit, single-component, typeless format, with an additional 8-bit unsigned-integer component. This format has 24 bits unused and an 8-bit green channel.
A two-component, 16-bit typeless format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit unsigned-normalized-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit unsigned-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit signed-normalized-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A two-component, 16-bit signed-integer format that supports 8 bits for the red channel and 8 bits for the green channel.
A single-component, 16-bit typeless format that supports 16 bits for the red channel.
A single-component, 16-bit floating-point format that supports 16 bits for the red channel.5,7
A single-component, 16-bit unsigned-normalized-integer format that supports 16 bits for depth.
A single-component, 16-bit unsigned-normalized-integer format that supports 16 bits for the red channel.
A single-component, 16-bit unsigned-integer format that supports 16 bits for the red channel.
A single-component, 16-bit signed-normalized-integer format that supports 16 bits for the red channel.
A single-component, 16-bit signed-integer format that supports 16 bits for the red channel.
A single-component, 8-bit typeless format that supports 8 bits for the red channel.
A single-component, 8-bit unsigned-normalized-integer format that supports 8 bits for the red channel.
A single-component, 8-bit unsigned-integer format that supports 8 bits for the red channel.
A single-component, 8-bit signed-normalized-integer format that supports 8 bits for the red channel.
A single-component, 8-bit signed-integer format that supports 8 bits for the red channel.
A single-component, 8-bit unsigned-normalized-integer format for alpha only.
A single-component, 1-bit unsigned-normalized integer format that supports 1 bit for the red channel.
Three partial-precision floating-point numbers encoded into a single 32-bit value all sharing the same 5-bit exponent (variant of s10e5, which is sign bit, 10-bit mantissa, and 5-bit biased (15) exponent). There is no sign bit, and there is a shared 5-bit biased (15) exponent and a 9-bit mantissa for each channel, as shown in the following illustration. 2,6,7.
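The shared-exponent layout just described (DXGI_FORMAT_R9G9B9E5_SHAREDEXP) can be decoded with simple bit manipulation. This is an illustrative sketch, assuming the little-endian channel ordering DXGI formats use (R in the low bits, the shared 5-bit exponent in the top bits); the `Rgb` and `DecodeR9G9B9E5` names are invented here.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

struct Rgb { float r, g, b; };

// Decode one R9G9B9E5 texel: bits 0-8 = R mantissa, 9-17 = G mantissa,
// 18-26 = B mantissa, 27-31 = shared biased exponent. No sign bits.
inline Rgb DecodeR9G9B9E5(uint32_t v) {
    const uint32_t rm = v & 0x1FF;
    const uint32_t gm = (v >> 9) & 0x1FF;
    const uint32_t bm = (v >> 18) & 0x1FF;
    const int e = static_cast<int>(v >> 27);          // 5-bit exponent, bias 15
    // Each channel = mantissa * 2^(e - bias - 9 mantissa bits).
    const float scale = std::ldexp(1.0f, e - 15 - 9);
    return { rm * scale, gm * scale, bm * scale };
}
```

For instance, a texel with R mantissa 256 and exponent 16 decodes to exactly 1.0 in the red channel (256 × 2⁻⁸).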
A four-component, 32-bit unsigned-normalized-integer format. This packed RGB format is analogous to the UYVY format. Each 32-bit block describes a pair of pixels: (R8, G8, B8) and (R8, G8, B8) where the R8/B8 values are repeated, and the G8 values are unique to each pixel.
Width must be even.
A four-component, 32-bit unsigned-normalized-integer format. This packed RGB format is analogous to the YUY2 format. Each 32-bit block describes a pair of pixels: (R8, G8, B8) and (R8, G8, B8) where the R8/B8 values are repeated, and the G8 values are unique to each pixel.
Width must be even.
Four-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format for sRGB data. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format for sRGB data. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Four-component block-compression format for sRGB data. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
One-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
One-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
One-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Two-component typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Two-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Two-component block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A three-component, 16-bit unsigned-normalized-integer format that supports 5 bits for blue, 6 bits for green, and 5 bits for red.
Direct3D 10 through Direct3D 11: This value is defined for DXGI. However, Direct3D 10, 10.1, or 11 devices do not support this format.
Direct3D 11.1: This value is not supported until Windows 8.
A four-component, 16-bit unsigned-normalized-integer format that supports 5 bits for each color channel and 1-bit alpha.
Direct3D 10 through Direct3D 11: This value is defined for DXGI. However, Direct3D 10, 10.1, or 11 devices do not support this format.
Direct3D 11.1: This value is not supported until Windows 8.
A four-component, 32-bit unsigned-normalized-integer format that supports 8 bits for each color channel and 8-bit alpha.
A four-component, 32-bit unsigned-normalized-integer format that supports 8 bits for each color channel and 8 bits unused.
A four-component, 32-bit 2.8-biased fixed-point format that supports 10 bits for each color channel and 2-bit alpha.
A four-component, 32-bit typeless format that supports 8 bits for each channel including alpha.
A four-component, 32-bit unsigned-normalized standard RGB format that supports 8 bits for each channel including alpha.
A four-component, 32-bit typeless format that supports 8 bits for each color channel, and 8 bits are unused.
A four-component, 32-bit unsigned-normalized standard RGB format that supports 8 bits for each color channel, and 8 bits are unused.
A typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A typeless block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
A block-compression format. For information about block-compression formats, see Texture Block Compression in Direct3D 11.
Most common YUV 4:4:4 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
10-bit per channel packed YUV 4:4:4 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
16-bit per channel packed YUV 4:4:4 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
Most common YUV 4:2:0 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes. The first (SysMemPitch * height) bytes are the Y plane, the remaining (SysMemPitch * (height / 2)) bytes are the UV plane.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
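The NV12 size formula above is easy to get wrong, so here is a small illustrative helper (not a DXGI API; the `Nv12Layout` and `Nv12Sizes` names are invented) that computes the Y and UV plane sizes and the UV plane offset from the row pitch and height, following the layout just described: a full-resolution Y plane followed by an interleaved UV plane at half vertical resolution.

```cpp
#include <cassert>
#include <cstddef>

// Plane layout of an NV12 (4:2:0) staging buffer. Width and height
// must be even, and rowPitch >= width in bytes.
struct Nv12Layout {
    size_t ySize;      // rowPitch * height
    size_t uvOffset;   // UV plane starts immediately after the Y plane
    size_t uvSize;     // rowPitch * (height / 2)
    size_t totalSize;  // rowPitch * (height + height / 2)
};

inline Nv12Layout Nv12Sizes(size_t rowPitch, size_t height) {
    Nv12Layout l;
    l.ySize = rowPitch * height;
    l.uvOffset = l.ySize;
    l.uvSize = rowPitch * (height / 2);
    l.totalSize = l.ySize + l.uvSize;
    return l;
}
```

For a 640x480 surface with a 640-byte pitch this gives a 307200-byte Y plane followed by a 153600-byte UV plane, 460800 bytes total, matching the (rowPitch * (height + height / 2)) formula above.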
10-bit per channel planar YUV 4:2:0 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes. The first (SysMemPitch * height) bytes are the Y plane, the remaining (SysMemPitch * (height / 2)) bytes are the UV plane.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
16-bit per channel planar YUV 4:2:0 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes. The first (SysMemPitch * height) bytes are the Y plane, the remaining (SysMemPitch * (height / 2)) bytes are the UV plane.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
8-bit per channel planar YUV 4:2:0 video resource format. This format is subsampled where each pixel has its own Y value, but each 2x2 pixel block shares a single U and V value. The runtime requires that the width and height of all resources that are created with this format are multiples of 2. The runtime also requires that the left, right, top, and bottom members of any
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width and height must be even. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * (height + (height / 2))) bytes.
An app using the YUV 4:2:0 formats must map the luma (Y) plane separately from the chroma (UV) planes. Developers do this by calling
Direct3D 11.1: This value is not supported until Windows 8.
Most common YUV 4:2:2 video resource format. Valid view formats for this video resource format are
A unique valid view format for this video resource format is
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be even.
Direct3D 11.1: This value is not supported until Windows 8.
10-bit per channel packed YUV 4:2:2 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be even.
Direct3D 11.1: This value is not supported until Windows 8.
16-bit per channel packed YUV 4:2:2 video resource format. Valid view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be even.
Direct3D 11.1: This value is not supported until Windows 8.
Most common planar YUV 4:1:1 video resource format. Valid luminance data view formats for this video resource format are
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Width must be a multiple of 4. Direct3D 11 staging resources and initData parameters for this format use (rowPitch * height * 2) bytes. The first (SysMemPitch * height) bytes are the Y plane, the next ((SysMemPitch / 2) * height) bytes are the UV plane, and the remainder is padding.
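For the 4:1:1 layout just described, the plane and padding sizes follow directly from the formulas in the text. A minimal sketch (hypothetical helper names, not DXGI API):

```cpp
#include <cassert>
#include <cstddef>

// Planar 4:1:1 staging layout: Y plane, then a half-pitch UV plane,
// then padding, for a total of rowPitch * height * 2 bytes.
std::size_t Yuv411TotalSize(std::size_t rowPitch, std::size_t height) {
    return rowPitch * height * 2;
}

std::size_t Yuv411UvPlaneOffset(std::size_t rowPitch, std::size_t height) {
    return rowPitch * height;              // UV plane follows the Y plane
}

std::size_t Yuv411PaddingSize(std::size_t rowPitch, std::size_t height) {
    // total - Y plane - UV plane
    return Yuv411TotalSize(rowPitch, height)
         - rowPitch * height
         - (rowPitch / 2) * height;
}
```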
Direct3D 11.1: This value is not supported until Windows 8.
4-bit palettized YUV format that is commonly used for DVD subpicture.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
4-bit palettized YUV format that is commonly used for DVD subpicture.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
8-bit palettized format that is used for palettized RGB data when the processor processes ISDB-T data and for palettized YUV data when the processor processes Blu-ray data.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
8-bit palettized format with 8 bits of alpha that is used for palettized YUV data when the processor processes Blu-ray data.
For more info about YUV formats for video rendering, see Recommended 8-Bit YUV Formats for Video Rendering.
Direct3D 11.1: This value is not supported until Windows 8.
A four-component, 16-bit unsigned-normalized integer format that supports 4 bits for each channel including alpha.
Direct3D 11.1: This value is not supported until Windows 8.
A video format; an 8-bit version of a hybrid planar 4:2:2 format.
An 8-bit YCbCrA 4:4 rendering format.
An 8-bit YCbCrA 4:4:4:4 rendering format.
Indicates options for presenting frames to the swap chain.
-This enum is used by the
Specifies that the presentation mode is a composition surface, meaning that the conversion from YUV to RGB is happening once per output refresh (for example, 60 Hz). When this value is returned, the media app should discontinue use of the decode swap chain and perform YUV to RGB conversion itself, reducing the frequency of YUV to RGB conversion to once per video frame.
Specifies that the presentation mode is an overlay surface, meaning that the YUV to RGB conversion is happening efficiently in hardware (once per video frame). When this value is returned, the media app can continue to use the decode swap chain. See
No presentation is specified.
An issue occurred that caused content protection to be invalidated in a swap chain with hardware content protection, usually because the system ran out of hardware-protected memory. The app will need to do one of the following:
Note that simply re-creating the swap chain or the device will usually have no impact as the DWM will continue to run out of memory and will return the same failure.
Identifies the granularity at which the graphics processing unit (GPU) can be preempted from performing its current graphics rendering task.
-You call the
The following figure shows granularity of graphics rendering tasks.
-Indicates the preemption granularity as a DMA buffer.
Indicates the preemption granularity as a graphics primitive. A primitive is a section in a DMA buffer and can be a group of triangles.
Indicates the preemption granularity as a triangle. A triangle is a part of a primitive.
Indicates the preemption granularity as a pixel. A pixel is a part of a triangle.
Indicates the preemption granularity as a graphics instruction. A graphics instruction operates on a pixel.
Specifies the header metadata type.
-This enum is used by the SetHDRMetaData method.
-Indicates there is no header metadata.
Indicates the header metadata is held by a
Get a reference to the data contained in the surface, and deny GPU access to the surface.
-Use
A reference to the surface data (see
CPU read-write flags. These flags can be combined with a logical OR.
Specifies the memory segment group to use.
-This enum is used by QueryVideoMemoryInfo and SetVideoMemoryReservation.
Refer to the remarks for
The grouping of segments which is considered local to the video adapter, and represents the fastest available memory to the GPU. Applications should target the local segment group as the target size for their working set.
The grouping of segments which is considered non-local to the video adapter, and may have slower performance than the local segment group.
Options for swap-chain color space.
-This enum is used by SetColorSpace.
-Specifies nominal range YCbCr, which isn't an absolute color space, but a way of encoding RGB info.
Specifies BT.709, which standardizes the format of high-definition television and has 16:9 (widescreen) aspect ratio.
Specifies xvYCC or extended-gamut YCC (also x.v.Color) color space that can be used in the video electronics of television sets to support a gamut 1.8 times as large as that of the sRGB color space.
Specifies flags for the OfferResources1 method.
- Identifies the importance of a resource's content when you call the
Priority determines how likely the operating system is to discard an offered resource. Resources offered with lower priority are discarded first.
-Identifies the type of reference shape.
-The reference type is a monochrome mouse reference, which is a monochrome bitmap. The bitmap's size is specified by width and height in a 1 bit per pixel (bpp) device-independent bitmap (DIB) format AND mask that is followed by another 1 bpp DIB format XOR mask of the same size.
The reference type is a color mouse reference, which is a color bitmap. The bitmap's size is specified by width and height in a 32 bpp ARGB DIB format.
The reference type is a masked color mouse reference. A masked color mouse reference is a 32 bpp ARGB format bitmap with the mask value in the alpha bits. The only allowed mask values are 0 and 0xFF. When the mask value is 0, the RGB value should replace the screen pixel. When the mask value is 0xFF, an XOR operation is performed on the RGB value and the screen pixel; the result replaces the screen pixel.
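The replace/XOR rule for the masked color reference can be sketched as a single per-pixel function (illustrative only; the function name is hypothetical, and the screen pixel is treated as a 24-bit RGB value):

```cpp
#include <cassert>
#include <cstdint>

// Composites one masked-color cursor pixel over a screen pixel, per the rule
// above: alpha-mask 0x00 -> replace the screen pixel, 0xFF -> XOR with it.
std::uint32_t ApplyMaskedCursorPixel(std::uint32_t cursorArgb,
                                     std::uint32_t screenRgb) {
    std::uint32_t mask = cursorArgb >> 24;        // alpha byte holds the mask
    std::uint32_t rgb  = cursorArgb & 0x00FFFFFF; // cursor RGB value
    if (mask == 0x00) {
        return rgb;              // mask 0: replace the screen pixel
    }
    return rgb ^ screenRgb;      // mask 0xFF: XOR with the screen pixel
}
```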
Specifies support for overlay color space.
-Overlay color space support is present.
Specifies overlay support to check for in a call to
Presents a rendered image to the user.
-Starting with Direct3D 11.1, consider using
For the best performance when flipping swap-chain buffers in a full-screen application, see Full-Screen Application Performance Hints.
Because calling Present might cause the render thread to wait on the message-pump thread, be careful when calling this method in an application that uses multiple threads. For more details, see Multithreading Considerations.
Differences between Direct3D 9 and Direct3D 10: Specifying
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
-An integer that specifies how to synchronize presentation of a frame with the vertical blank.
For the bit-block transfer (bitblt) model (
For the flip model (
For an example that shows how sync-interval values affect a flip presentation queue, see Remarks.
If the update region straddles more than one output (each represented by
An integer value that contains swap-chain presentation options. These options are defined by the DXGI_PRESENT constants.
Specifies result flags for the ReclaimResources1 method.
-Flags indicating the memory location of a resource.
-This enum is used by QueryResourceResidency.
-The resource is located in video memory.
At least some of the resource is located in CPU memory.
At least some of the resource has been paged out to the hard drive.
Set the priority for evicting the resource from memory.
-The eviction priority is a memory-management variable that is used by DXGI for determining how to populate overcommitted memory.
You can set priority levels other than the defined values when appropriate. For example, you can set a resource with a priority level of 0x78000001 to indicate that the resource is slightly above normal.
-The priority is one of the following values:
| Value | Meaning |
|---|---|
| DXGI_RESOURCE_PRIORITY_MINIMUM | The resource is unused and can be evicted as soon as another resource requires the memory that the resource occupies. |
| DXGI_RESOURCE_PRIORITY_LOW | The eviction priority of the resource is low. The placement of the resource is not critical, and minimal work is performed to find a location for the resource. For example, if a GPU can render with a vertex buffer from either local or non-local memory with little difference in performance, that vertex buffer is low priority. Other more critical resources (for example, a render target or texture) can then occupy the faster memory. |
| DXGI_RESOURCE_PRIORITY_NORMAL | The eviction priority of the resource is normal. The placement of the resource is important, but not critical, for performance. The resource is placed in its preferred location instead of a low-priority resource. |
| DXGI_RESOURCE_PRIORITY_HIGH | The eviction priority of the resource is high. The resource is placed in its preferred location instead of a low-priority or normal-priority resource. |
| DXGI_RESOURCE_PRIORITY_MAXIMUM | The resource is evicted from memory only if there is no other way of resolving the memory requirement. |
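The eviction-priority bands can also be expressed numerically. The values below are the DXGI_RESOURCE_PRIORITY_* constants as defined in dxgi.h (reproduced here as a sketch; verify against your SDK headers before relying on them), and the "slightly above normal" example from the text is simply the normal band plus one:

```cpp
#include <cassert>
#include <cstdint>

// Standard DXGI eviction-priority bands (values believed to match dxgi.h).
// Any value between two bands is a valid custom priority.
constexpr std::uint32_t kPriorityMinimum = 0x28000000u; // DXGI_RESOURCE_PRIORITY_MINIMUM
constexpr std::uint32_t kPriorityLow     = 0x50000000u; // DXGI_RESOURCE_PRIORITY_LOW
constexpr std::uint32_t kPriorityNormal  = 0x78000000u; // DXGI_RESOURCE_PRIORITY_NORMAL
constexpr std::uint32_t kPriorityHigh    = 0xA0000000u; // DXGI_RESOURCE_PRIORITY_HIGH
constexpr std::uint32_t kPriorityMaximum = 0xC8000000u; // DXGI_RESOURCE_PRIORITY_MAXIMUM

// "Slightly above normal", as in the 0x78000001 example in the text.
constexpr std::uint32_t kSlightlyAboveNormal = kPriorityNormal + 1;
```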
Identifies resize behavior when the back-buffer size does not match the size of the target output.
-The
float aspectRatio = backBufferWidth / float(backBufferHeight);

// Horizontal fill
float scaledWidth = outputWidth;
float scaledHeight = outputWidth / aspectRatio;
if (scaledHeight >= outputHeight)
{
    // Do vertical fill
    scaledWidth = outputHeight * aspectRatio;
    scaledHeight = outputHeight;
}

float offsetX = (outputWidth - scaledWidth) * 0.5f;
float offsetY = (outputHeight - scaledHeight) * 0.5f;

rect.left = static_cast<LONG>(offsetX);
rect.top = static_cast<LONG>(offsetY);
rect.right = static_cast<LONG>(offsetX + scaledWidth);
rect.bottom = static_cast<LONG>(offsetY + scaledHeight);

rect.left = std::max<LONG>(0, rect.left);
rect.top = std::max<LONG>(0, rect.top);
rect.right = std::min<LONG>(static_cast<LONG>(outputWidth), rect.right);
rect.bottom = std::min<LONG>(static_cast<LONG>(outputHeight), rect.bottom);
-
Note that outputWidth and outputHeight are the pixel sizes of the presentation target size. In the case of CoreWindow, this requires converting the logicalWidth and logicalHeight values from DIPS to pixels using the window's DPI property.
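The DIPs-to-pixels conversion mentioned above follows the standard Windows convention of 96 DIPs per logical inch; a minimal sketch (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Converts device-independent pixels (DIPs) to physical pixels.
// Windows defines 96 DIPs per logical inch, so the scale factor is dpi / 96.
long DipsToPixels(float dips, float dpi) {
    return std::lround(dips * dpi / 96.0f);
}
```

At 144 DPI (150% scaling), a 100-DIP logical width becomes a 150-pixel presentation target width.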
-Directs DXGI to make the back-buffer contents scale to fit the presentation target size. This is the implicit behavior of DXGI when you call the
Directs DXGI to make the back-buffer contents appear without any scaling when the presentation target size is not equal to the back-buffer size. The top edges of the back buffer and presentation target are aligned together. If the WS_EX_LAYOUTRTL style is associated with the
This value specifies that all target areas outside the back buffer of a swap chain are filled with the background color that you specify in a call to
Directs DXGI to make the back-buffer contents scale to fit the presentation target size, while preserving the aspect ratio of the back-buffer. If the scaled back-buffer does not fill the presentation area, it will be centered with black borders.
This constant is supported on Windows Phone 8 and Windows 10.
Note that with legacy Win32 window swapchains, this works the same as
Specifies color space support for the swap chain.
-Color space support is present.
Overlay color space support is present.
Options for swap-chain behavior.
-This enumeration is used by the
This enumeration is also used by the
You don't need to set
Swap chains that you create with the
When you call
Set this flag to turn off automatic image rotation; that is, do not perform a rotation when transferring the contents of the front buffer to the monitor. Use this flag to avoid a bandwidth penalty when an application expects to handle rotation. This option is valid only during full-screen mode.
Set this flag to enable an application to switch modes by calling
Set this flag to enable an application to render using GDI on a swap chain or a surface. This will allow the application to call
Set this flag to indicate that the swap chain might contain protected content; therefore, the operating system supports the creation of the swap chain only when driver and hardware protection is used. If the driver and hardware do not support content protection, the call to create a resource for the swap chain fails.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Set this flag to indicate that shared resources that are created within the swap chain must be protected by using the driver's mechanism for restricting access to shared surfaces.
Direct3D 11: This enumeration value is supported starting with Windows 8.
Set this flag to restrict presented content to the local displays. Therefore, the presented content is not accessible via remote accessing or through the desktop duplication APIs.
This flag supports the window content protection features of Windows. Applications can use this flag to protect their own onscreen window content from being captured or copied through a specific set of public operating system features and APIs.
If you use this flag with windowed (
Direct3D 11: This enumeration value is supported starting with Windows 8.
Set this flag to create a waitable object you can use to ensure rendering does not begin while a frame is still being presented. When this flag is used, the swapchain's latency must be set with the
Note: This enumeration value is supported starting with Windows 8.1.
Set this flag to create a swap chain in the foreground layer for multi-plane rendering. This flag can only be used with CoreWindow swap chains, which are created with CreateSwapChainForCoreWindow. Apps should not create foreground swap chains if
Note that
Note: This enumeration value is supported starting with Windows 8.1.
Set this flag to create a swap chain for full-screen video.
Note: This enumeration value is supported starting with Windows 8.1.
Set this flag to create a swap chain for YUV video.
Note: This enumeration value is supported starting with Windows 8.1.
Indicates that the swap chain should be created such that all underlying resources can be protected by the hardware. Resource creation will fail if hardware content protection is not supported.
This flag has the following restrictions:
Note: This enumeration value is supported starting with Windows 10.
Tearing support is a requirement to enable displays that support variable refresh rates to function properly when the application presents a swap chain tied to a full screen borderless window. Win32 apps can already achieve tearing in fullscreen exclusive mode by calling SetFullscreenState(TRUE), but the recommended approach for Win32 developers is to use this tearing flag instead.
To check for hardware support of this feature, refer to
Options for handling pixels in a display surface after calling
This enumeration is used by the
To use multisampling with
The primary difference between presentation models is how back-buffer contents get to the Desktop Window Manager (DWM) for composition. In the bitblt model, which is used with the
When you call
Regardless of whether the flip model is more efficient, an application still might choose the bitblt model because the bitblt model is the only way to mix GDI and DirectX presentation. In the flip model, the application must create the swap chain with
For more info about the flip-model swap chain and optimizing presentation, see Enhancing presentation with the flip model, dirty rectangles, and scrolled areas.
-Creates a DXGI 1.1 factory that you can use to generate other DXGI objects.
-The globally unique identifier (
Address of a reference to an
Returns
Use a DXGI 1.1 factory to generate objects that enumerate adapters, create swap chains, and associate a window with the alt+enter key sequence for toggling to and from the full-screen display mode.
If the CreateDXGIFactory1 function succeeds, the reference count on the
This entry point is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Note: Do not mix the use of DXGI 1.0 and DXGI 1.1 factories in an application.
Creates a DXGI 1.3 factory that you can use to generate other DXGI objects.
In Windows 8, any DXGI factory created while DXGIDebug.dll was present on the system would load and use it. Starting in Windows 8.1, apps explicitly request that DXGIDebug.dll be loaded instead. Use CreateDXGIFactory2 and specify the
Valid values include the
The globally unique identifier (
Address of a reference to an
Returns
This function accepts a flag indicating whether DXGIDebug.dll is loaded. The function otherwise behaves identically to CreateDXGIFactory1.
- The
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
A display sub-system is often referred to as a video card; however, on some machines the display sub-system is part of the motherboard.
To enumerate the display sub-systems, use
Windows Phone 8: This API is supported.
-Gets a DXGI 1.1 description of an adapter (or video card).
-This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Use the GetDesc1 method to get a DXGI 1.1 description of an adapter. To get a DXGI 1.0 description, use the
Gets a DXGI 1.1 description of an adapter (or video card).
-A reference to a
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Use the GetDesc1 method to get a DXGI 1.1 description of an adapter. To get a DXGI 1.0 description, use the
The
A display subsystem is often referred to as a video card; however, on some computers, the display subsystem is part of the motherboard.
To enumerate the display subsystems, use
To get an interface to the adapter for a particular device, use
To create a software adapter, use
Gets a Microsoft DirectX Graphics Infrastructure (DXGI) 1.2 description of an adapter or video card. This description includes information about the granularity at which the graphics processing unit (GPU) can be preempted from performing its current task.
-Use the GetDesc2 method to get a DXGI 1.2 description of an adapter. To get a DXGI 1.1 description, use the
The Windows Display Driver Model (WDDM) scheduler can preempt the GPU's execution of application tasks. The granularity at which the GPU can be preempted from performing its current task in the WDDM 1.1 or earlier driver model is a direct memory access (DMA) buffer for graphics tasks or a compute packet for compute tasks. The GPU can switch between tasks only after it completes the currently executing unit of work, a DMA buffer or a compute packet.
A DMA buffer is the largest independent unit of graphics work that the WDDM scheduler can submit to the GPU. This buffer contains a set of GPU instructions that the WDDM driver and GPU use. A compute packet is the largest independent unit of compute work that the WDDM scheduler can submit to the GPU. A compute packet contains dispatches (for example, calls to the
Gets a Microsoft DirectX Graphics Infrastructure (DXGI) 1.2 description of an adapter or video card. This description includes information about the granularity at which the graphics processing unit (GPU) can be preempted from performing its current task.
-A reference to a
Returns
Use the GetDesc2 method to get a DXGI 1.2 description of an adapter. To get a DXGI 1.1 description, use the
The Windows Display Driver Model (WDDM) scheduler can preempt the GPU's execution of application tasks. The granularity at which the GPU can be preempted from performing its current task in the WDDM 1.1 or earlier driver model is a direct memory access (DMA) buffer for graphics tasks or a compute packet for compute tasks. The GPU can switch between tasks only after it completes the currently executing unit of work, a DMA buffer or a compute packet.
A DMA buffer is the largest independent unit of graphics work that the WDDM scheduler can submit to the GPU. This buffer contains a set of GPU instructions that the WDDM driver and GPU use. A compute packet is the largest independent unit of compute work that the WDDM scheduler can submit to the GPU. A compute packet contains dispatches (for example, calls to the
This interface adds some memory residency methods for budgeting and reserving physical memory.
-For more details, refer to the Residency section of the D3D12 documentation.
-Registers to receive notification of hardware content protection teardown events.
-A handle to the event object that the operating system sets when hardware content protection teardown occurs. The CreateEvent or OpenEvent function returns this handle.
A reference to a key value that an application can pass to the
Call
Unregisters an event to stop it from receiving notification of hardware content protection teardown events.
-A key value for the window or event to unregister. The
This method informs the process of the current budget and process usage.
-Specifies the device's physical adapter for which the video memory information is queried. For single-GPU operation, set this to zero. If there are multiple GPU nodes, set this to the index of the node (the device's physical adapter) for which the video memory information is queried. See Multi-Adapter.
Specifies a
Fills in a
Applications must explicitly manage their physical memory usage and keep it within the budget assigned to the application process. Processes that cannot keep their usage within their assigned budgets will likely experience stuttering, as they are intermittently frozen and paged out to allow other processes to run.
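The bookkeeping this implies can be sketched with a struct that mirrors two fields of DXGI_QUERY_VIDEO_MEMORY_INFO (the struct and helper below are illustrative stand-ins, not the DXGI types themselves):

```cpp
#include <cassert>
#include <cstdint>

// Mirrors two fields of DXGI_QUERY_VIDEO_MEMORY_INFO, for illustration only.
struct VideoMemoryInfo {
    std::uint64_t Budget;        // OS-provided budget for this process
    std::uint64_t CurrentUsage;  // process usage counted against that budget
};

// Returns how many bytes the application should release to get back within
// budget; zero means usage is already within the assigned budget.
std::uint64_t BytesOverBudget(const VideoMemoryInfo& info) {
    return info.CurrentUsage > info.Budget ? info.CurrentUsage - info.Budget
                                           : 0;
}
```

An application would populate the real structure via QueryVideoMemoryInfo and, when over budget, trim caches or evict resources until the excess reaches zero.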
-This method sends the minimum required physical memory for an application to the OS.
-Specifies the device's physical adapter for which the video memory information is being set. For single-GPU operation, set this to zero. If there are multiple GPU nodes, set this to the index of the node (the device's physical adapter) for which the video memory information is being set. See Multi-Adapter.
Specifies a
Specifies a UINT64 that sets the minimum required physical memory, in bytes.
Returns
Applications are encouraged to set a video reservation to denote the amount of physical memory they cannot go without. This value helps the OS quickly minimize the impact of large memory pressure situations.
-This method establishes a correlation between a CPU synchronization object and the budget change event.
-Specifies a HANDLE for the event.
A key value for the window or event to unregister. The
Instead of calling QueryVideoMemoryInfo regularly, applications can use CPU synchronization objects to efficiently wake threads when budget changes occur.
-This method stops notifying a CPU synchronization object whenever a budget change occurs. An application may switch back to polling the information regularly.
-A key value for the window or event to unregister. The
An application may switch back to polling for the information regularly.
- The
A display subsystem is often referred to as a video card; however, on some machines the display subsystem is part of the motherboard.
To enumerate the display subsystems, use
To get an interface to the adapter for a particular device, use
To create a software adapter, use
Windows Phone 8: This API is supported.
-Represents a swap chain that is used by desktop media apps to decode video data and show it on a DirectComposition surface.
-Decode swap chains are intended for use primarily with YUV surface formats. When using decode buffers created with an RGB surface format, the TargetRect and DestSize must be set equal to the buffer dimensions. SourceRect cannot exceed the buffer dimensions.
In clone mode, the decode swap chain is only guaranteed to be shown on the primary output.
Decode swap chains cannot be used with dirty rects.
-Gets or sets the source region that is used for the swap chain.
-Gets or sets the rectangle that defines the target region for the video processing blit operation.
-Gets or sets the color space used by the swap chain.
-Presents a frame on the output adapter. The frame is a subresource of the
This method returns
Sets the rectangle that defines the source region for the video processing blit operation.
The source rectangle is the portion of the input surface that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface.
-A reference to a
This method returns
Sets the rectangle that defines the target region for the video processing blit operation.
The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface.
-A reference to a
This method returns
Sets the size of the destination surface to use for the video processing blit operation.
The destination rectangle is the portion of the output surface that receives the blit for this stream. The destination rectangle is given in pixel coordinates, relative to the output surface.
-The width of the destination size, in pixels.
The height of the destination size, in pixels.
This method returns
Gets the source region that is used for the swap chain.
-A reference to a
This method returns
Gets the rectangle that defines the target region for the video processing blit operation.
-A reference to a
This method returns
Gets the size of the destination surface to use for the video processing blit operation.
-A reference to a variable that receives the width in pixels.
A reference to a variable that receives the height in pixels.
This method returns
Sets the color space used by the swap chain.
-A reference to a combination of
This method returns
Gets the color space used by the swap chain.
-A combination of
An
This interface is not supported by Direct3D 12 devices. Direct3D 12 applications have direct control over their swapchain management, so better latency control should be handled by the application. You can make use of Waitable objects (refer to
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
* pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof( ), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Gets or sets the number of frames that the system is allowed to queue for rendering.
-This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
-Sets the number of frames that the system is allowed to queue for rendering.
-The maximum number of back buffer frames that a driver can queue. The value defaults to 3, but can range from 1 to 16. A value of 0 will reset latency to the default. For multi-head devices, this value is specified per-head.
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
-Gets the number of frames that the system is allowed to queue for rendering.
-This value is set to the number of frames that can be queued for render. This value defaults to 3, but can range from 1 to 16.
Returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Frame latency is the number of frames that are allowed to be stored in a queue before submission for rendering. Latency is often used to control how the CPU chooses between responding to user input and frames that are in the render queue. It is often beneficial for applications that have no user input (for example, video playback) to queue more than 3 frames of data.
- The
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
* pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof( ), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Allows the operating system to free the video memory of resources by discarding their content.
-The number of resources in the ppResources argument array.
An array of references to
A
OfferResources returns:
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the IDXGIDevice2::ReclaimResource method to reclaim the resource. You cannot call OfferResources to offer immutable resources.
To offer shared resources, call OfferResources on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that OfferResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
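The offer/reclaim contract above can be sketched as follows (assuming pDevice2 is an IDXGIDevice2* and pResource is an IDXGIResource* that is neither mapped nor bound; the names are illustrative):

```cpp
// Offer the resource's video memory while the app is idle.
IDXGIResource* resources[1] = { pResource };
HRESULT hr = pDevice2->OfferResources(1, resources,
                                      DXGI_OFFER_RESOURCE_PRIORITY_NORMAL);

// Before mapping or binding the resource again, it must be reclaimed.
BOOL discarded = FALSE;
hr = pDevice2->ReclaimResources(1, resources, &discarded);
if (discarded)
{
    // The OS discarded the contents; regenerate them before use.
}
```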
-Restores access to resources that were previously offered by calling
ReclaimResources returns:
After you call
To reclaim shared resources, call ReclaimResources only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
Platform Update for Windows 7: The runtime validates that ReclaimResources is used correctly on non-shared resources but doesn't perform the intended functionality. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Flushes any outstanding rendering commands and sets the specified event object to the signaled state after all previously submitted rendering commands complete.
-A handle to the event object. The CreateEvent or OpenEvent function returns this handle. All types of event objects (manual-reset, auto-reset, and so on) are supported.
The handle must have the EVENT_MODIFY_STATE access right. For more information about access rights, see Synchronization Object Security and Access Rights.
Returns
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, EnqueueSetEvent fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
EnqueueSetEvent calls the SetEvent function on the event object after all previously submitted rendering commands complete or the device is removed.
After an application calls EnqueueSetEvent, it can immediately call the WaitForSingleObject function to put itself to sleep until rendering commands complete.
You cannot use EnqueueSetEvent to determine work completion that is associated with presentation (
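A minimal sketch of the wait pattern described above (assuming pDevice2 is an IDXGIDevice2*):

```cpp
// Flush outstanding rendering commands and sleep until they complete.
HANDLE hEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr); // auto-reset event
if (hEvent != nullptr)
{
    if (SUCCEEDED(pDevice2->EnqueueSetEvent(hEvent)))
    {
        WaitForSingleObject(hEvent, INFINITE); // signaled after GPU work finishes
    }
    CloseHandle(hEvent);
}
```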
The
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
IDXGIDevice3 * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice3), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Trims the graphics memory allocated by the
For apps that render with DirectX, graphics drivers periodically allocate internal memory buffers in order to speed up subsequent rendering requests. These memory allocations count against the app's memory usage for PLM and in general lead to increased memory usage by the overall system.
Starting in Windows?8.1, apps that render with Direct2D and/or Direct3D (including CoreWindow and XAML interop) must call Trim in response to the PLM suspend callback. The Direct3D runtime and the graphics driver will discard internal memory buffers allocated for the app, reducing its memory footprint.
Calling this method does not change the rendering state of the graphics device, and it has no effect on rendering operations. There is a brief performance hit when internal buffers are reallocated during the first rendering operations after the Trim call; therefore, apps should only call Trim when going idle for a period of time (in response to PLM suspend, for example).
Apps should ensure that they call Trim as one of the last D3D operations done before going idle. Direct3D will normally defer the destruction of D3D objects. Calling Trim, however, forces Direct3D to destroy objects immediately. For this reason, it is not guaranteed that releasing the final reference on Direct3D objects after calling Trim will cause the object to be destroyed and memory to be deallocated before the app suspends.
Similar to
It is also prudent to release references on middleware before calling Trim, as that middleware may also need to release references to Direct3D objects.
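The suspend-time pattern above might look like this (a sketch; the handler name and the m_d3dDevice member are illustrative assumptions):

```cpp
#include <wrl/client.h>
#include <d3d11.h>
#include <dxgi1_3.h>

// Assumed to have been created elsewhere during app startup.
Microsoft::WRL::ComPtr<ID3D11Device> m_d3dDevice;

// Called from the PLM Suspending handler, as one of the last
// D3D operations before the app goes idle.
void OnSuspending()
{
    Microsoft::WRL::ComPtr<IDXGIDevice3> dxgiDevice;
    if (SUCCEEDED(m_d3dDevice.As(&dxgiDevice)))
    {
        dxgiDevice->Trim(); // lets the runtime/driver discard internal buffers
    }
}
```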
- An
The
The Direct3D create device functions return a Direct3D device object. This Direct3D device object implements the
IDXGIDevice4 * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice4), (void **)&pDXGIDevice);
Windows Phone 8: This API is supported.
-Allows the operating system to free the video memory of resources, including both discarding the content and de-committing the memory.
-The number of resources in the ppResources argument array.
An array of references to
A
Specifies the
This method returns an
OfferResources1 (an extension of the original
OfferResources1 and ReclaimResources1 may not be used interchangeably with OfferResources and ReclaimResources.
The priority value that the Priority parameter specifies describes how valuable the caller considers the content to be. The operating system uses the priority value to discard resources in order of priority. The operating system discards a resource that is offered with low priority before it discards a resource that is offered with a higher priority.
If you call OfferResources1 to offer a resource while the resource is bound to the pipeline, the resource is unbound. You cannot call OfferResources1 on a resource that is mapped. After you offer a resource, the resource cannot be mapped or bound to the pipeline until you call the ReclaimResources1 method to reclaim the resource. You cannot call OfferResources1 to offer immutable resources.
To offer shared resources, call OfferResources1 on only one of the sharing devices. To ensure exclusive access to the resources, you must use an
The user mode display driver might not immediately offer the resources that you specified in a call to OfferResources1. The driver can postpone offering them until the next call to
Restores access to resources that were previously offered by calling
This method returns an
After you call OfferResources1 to offer one or more resources, you must call ReclaimResources1 before you can use those resources again.
To reclaim shared resources, call ReclaimResources1 only on one of the sharing devices. To ensure exclusive access to the resources, you must use an
The
We recommend that you not use
Call QueryInterface from a factory object (
IDXGIDisplayControl * pDXGIDisplayControl;
hr = g_pDXGIFactory->QueryInterface(__uuidof(IDXGIDisplayControl), (void **)&pDXGIDisplayControl);
The operating system processes changes to stereo-enabled configuration asynchronously. Therefore, these changes might not be immediately visible in every process that calls IsStereoEnabled.
Platform Update for Windows 7: Stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Retrieves a Boolean value that indicates whether the operating system's stereoscopic 3D display behavior is enabled.
-You pass a Boolean value to the
Set a Boolean value to either enable or disable the operating system's stereoscopic 3D display behavior.
-Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetStereoEnabled doesn't change stereoscopic 3D display behavior because stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Retrieves a Boolean value that indicates whether the operating system's stereoscopic 3D display behavior is enabled.
-IsStereoEnabled returns TRUE when the operating system's stereoscopic 3D display behavior is enabled and FALSE when it is disabled.
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, IsStereoEnabled always returns FALSE because stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7.
You pass a Boolean value to the
Set a Boolean value to either enable or disable the operating system's stereoscopic 3D display behavior.
-A Boolean value that either enables or disables the operating system's stereoscopic 3D display behavior. TRUE enables the operating system's stereoscopic 3D display behavior and FALSE disables it.
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetStereoEnabled doesn't change stereoscopic 3D display behavior because stereoscopic 3D display behavior isn't available with the Platform Update for Windows 7. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
-Enables creating Microsoft DirectX Graphics Infrastructure (DXGI) objects.
-Gets the flags that were used when a Microsoft DirectX Graphics Infrastructure (DXGI) object was created.
-The GetCreationFlags method returns flags that were passed to the CreateDXGIFactory2 function, or were implicitly constructed by CreateDXGIFactory, CreateDXGIFactory1,
Gets the flags that were used when a Microsoft DirectX Graphics Infrastructure (DXGI) object was created.
-The creation flags.
The GetCreationFlags method returns flags that were passed to the CreateDXGIFactory2 function, or were implicitly constructed by CreateDXGIFactory, CreateDXGIFactory1,
This interface enables a single method to support variable refresh rate displays.
-Used to check for hardware feature support.
-Specifies one member of
Specifies a reference to a buffer that will be filled with data that describes the feature support.
The size, in bytes, of pFeatureSupportData.
This method returns an
Refer to the description of
Creates swap chains for desktop media apps that use DirectComposition surfaces to decode and display video.
- To create a Microsoft DirectX Graphics Infrastructure (DXGI) media factory interface, pass
Because you can create a Direct3D device without creating a swap chain, you might need to retrieve the factory that is used to create the device in order to create a swap chain. You can request the
IDXGIDevice * pDXGIDevice;
hr = g_pd3dDevice->QueryInterface(__uuidof(IDXGIDevice), (void **)&pDXGIDevice);
IDXGIAdapter * pDXGIAdapter;
hr = pDXGIDevice->GetParent(__uuidof(IDXGIAdapter), (void **)&pDXGIAdapter);
IDXGIFactoryMedia * pIDXGIFactory;
pDXGIAdapter->GetParent(__uuidof(IDXGIFactoryMedia), (void **)&pIDXGIFactory);
Creates a YUV swap chain for an existing DirectComposition surface handle.
-CreateSwapChainForCompositionSurfaceHandle returns:
Creates a YUV swap chain for an existing DirectComposition surface handle. The swap chain is created with pre-existing buffers and very few descriptive elements are required. Instead, this method requires a DirectComposition surface handle and an
CreateDecodeSwapChainForCompositionSurfaceHandle returns:
The
Enables performing bulk operations across all SurfaceImageSource objects created in the same process.
-Flushes all current GPU work for all SurfaceImageSource or VirtualSurfaceImageSource objects associated with the given device.
-If this method succeeds, it returns
The FlushAllSurfacesWithDevice method flushes current GPU work for all SurfaceImageSource objects that were created with the given device. This GPU work includes Direct2D rendering work and internal GPU work done by the framework associated with rendering. This is useful if an application has created multiple SurfaceImageSource objects and needs to flush the GPU work for all of these surfaces from the background rendering thread. By flushing this work from the background thread, it can be better parallelized with the work done on the UI thread, improving performance.
You can call the FlushAllSurfacesWithDevice method from a non-UI thread.
-Provides the implementation of a shared fixed-size surface for Direct2D drawing.
Note: If the surface is larger than the screen size, use VirtualSurfaceImageSource instead. This interface provides the native implementation of the SurfaceImageSource Windows runtime type. To obtain a reference to the ISurfaceImageSourceNative interface, cast the SurfaceImageSource object to IInspectable and call QueryInterface:
Microsoft::WRL::ComPtr<ISurfaceImageSourceNative> m_sisNative;
// ...
IInspectable* sisInspectable = reinterpret_cast<IInspectable*>(surfaceImageSource);
sisInspectable->QueryInterface(__uuidof(ISurfaceImageSourceNative), (void **)&m_sisNative);
Sets the DXGI device, created with
Sets the DXGI device, created with
Pointer to the DXGI device interface.
If this method succeeds, it returns
Opens the supplied DXGI surface for drawing.
-The region of the surface that will be drawn into.
Receives the point (x,y) offset of the surface that will be drawn into.
Receives a reference to the surface for drawing.
If the app window that contains the SurfaceImageSource isn't active, like when it's suspended, calling the BeginDraw method returns an error.
-Closes the surface draw operation.
-If this method succeeds, it returns
Provides the implementation of a shared Microsoft DirectX surface which is displayed in a SurfaceImageSource or VirtualSurfaceImageSource.
-The
Microsoft::WRL::ComPtr<ISurfaceImageSourceNativeWithD2D> m_sisD2DNative;
// ...
IInspectable* sisInspectable = reinterpret_cast<IInspectable*>(surfaceImageSource);
sisInspectable->QueryInterface(__uuidof(ISurfaceImageSourceNativeWithD2D), (void **)&m_sisD2DNative);
The
The
Only call the SetDevice, BeginDraw, and EndDraw methods on
In order to support batching updates to multiple surfaces to improve performance, you can pass an
To draw to the surface from a background thread, you must set any DirectX resources, including the Microsoft Direct3D device, Direct3D device context, Direct2D device, and Direct2D device context, to enable multithreading support.
You can call the BeginDraw, SuspendDraw, and ResumeDraw methods from any background thread to enable high-performance multithreaded drawing.
Always call the EndDraw method on the UI thread in order to synchronize updating the DirectX content with the current XAML UI thread frame. You can call BeginDraw on a background thread, call SuspendDraw when you're done drawing on the background thread, and call EndDraw on the UI thread.
Use SuspendDraw and ResumeDraw to suspend and resume drawing on any background or UI thread.
Handle the SurfaceContentsLost event to determine when you need to recreate content which may be lost if the system resets the GPU.
-Sets the Microsoft DirectX Graphics Infrastructure (DXGI) or Direct2D device, created with
Sets the Microsoft DirectX Graphics Infrastructure (DXGI) or Direct2D device, created with
Pointer to the DXGI device interface. You can pass an
This method fails when the SurfaceImageSource is larger than the maximum texture size supported by the Direct3D device. Apps should use VirtualSurfaceImageSource for surfaces larger than the maximum texture size supported by the Direct3D device.
Initiates an update to the associated SurfaceImageSource or VirtualSurfaceImageSource.
-If this method succeeds, it returns
Closes the surface draw operation.
-If this method succeeds, it returns
Always call the EndDraw method on the UI thread in order to synchronize updating the Microsoft DirectX content with the current XAML UI thread frame.
-Suspends the drawing operation.
-If this method succeeds, it returns
Resume the drawing operation.
-If this method succeeds, it returns
Sets the DirectX swap chain for SwapChainBackgroundPanel.
-Sets the DirectX swap chain for SwapChainBackgroundPanel.
-If this method succeeds, it returns
Provides interoperation between XAML and a DirectX swap chain. Unlike SwapChainBackgroundPanel, a SwapChainPanel can appear at any level in the XAML display tree, and more than one can be present in any given tree.
-This interface provides the native implementation of the Windows::UI::XAML::Control::SwapChainPanel Windows Runtime type. To obtain a reference to
Microsoft::WRL::ComPtr<ISwapChainPanelNative> m_swapChainNative;
// ...
IInspectable* panelInspectable = reinterpret_cast<IInspectable*>(swapChainPanel);
panelInspectable->QueryInterface(__uuidof(ISwapChainPanelNative), (void **)&m_swapChainNative);
Sets the DirectX swap chain for SwapChainPanel.
-Sets the DirectX swap chain for SwapChainPanel.
-If this method succeeds, it returns
Provides interoperation between XAML and a DirectX swap chain. Unlike SwapChainBackgroundPanel, a SwapChainPanel can appear at any level in the XAML display tree, and more than one can be present in any given tree.
-This interface provides the native implementation of the Windows::UI::XAML::Control::SwapChainPanel Windows Runtime type. To obtain a reference to
Microsoft::WRL::ComPtr<ISwapChainPanelNative2> m_swapChainNative2;
// ...
IInspectable* panelInspectable = reinterpret_cast<IInspectable*>(swapChainPanel);
panelInspectable->QueryInterface(__uuidof(ISwapChainPanelNative2), (void **)&m_swapChainNative2);
Sets the DirectX swap chain for SwapChainPanel using a handle to the swap chain.
-SetSwapChain(HANDLE swapChainHandle) allows a swap chain to be rendered by referencing a shared handle to the swap chain. This enables scenarios where a swap chain is created in one process and needs to be passed to another process.
XAML supports setting a DXGI swap chain as the content of a SwapChainPanel element. Apps accomplish this by querying for the
This process works for in-process swap chain references. However, it doesn't work for VoIP apps, which use a two-process model to enable continuing calls on a background process when a foreground process is suspended or shut down. This two-process implementation requires the ability to pass a shared handle to a swap chain, rather than a reference, created on the background process to the foreground process to be rendered in a XAML SwapChainPanel in the foreground app.
<!-- XAML markup -->
<Page>
    <SwapChainPanel x:Name="captureStreamDisplayPanel" />
</Page>

// Definitions
ComPtr<IDXGISwapChain1> m_swapChain;
HANDLE m_swapChainHandle;
ComPtr<ID3D11Device> m_d3dDevice;
ComPtr<IDXGIAdapter> dxgiAdapter;
ComPtr<IDXGIFactory2> dxgiFactory;
ComPtr<IDXGIFactoryMedia> dxgiFactoryMedia;
ComPtr<IDXGIDevice> dxgiDevice;
DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};

// Get DXGI factory (assume standard boilerplate has created D3D11Device)
m_d3dDevice.As(&dxgiDevice);
dxgiDevice->GetAdapter(&dxgiAdapter);
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), &dxgiFactory);

// Create swap chain and get handle
DCompositionCreateSurfaceHandle(GENERIC_ALL, nullptr, &m_swapChainHandle);
dxgiFactory.As(&dxgiFactoryMedia);
dxgiFactoryMedia->CreateSwapChainForCompositionSurfaceHandle(
    m_d3dDevice.Get(), m_swapChainHandle, &swapChainDesc, nullptr, &m_swapChain);

// Set swap chain to display in a SwapChainPanel
ComPtr<ISwapChainPanelNative2> panelNative;
reinterpret_cast<IUnknown*>(captureStreamDisplayPanel)->QueryInterface(IID_PPV_ARGS(&panelNative));
panelNative->SetSwapChainHandle(m_swapChainHandle);
Sets the DirectX swap chain for SwapChainPanel using a handle to the swap chain.
-If this method succeeds, it returns
Provides an interface for the implementation of drawing behaviors when a VirtualSurfaceImageSource requests an update.
-This interface is implemented by the developer to provide specific drawing behaviors for updates to a VirtualSurfaceImageSource. Classes that implement this interface are provided to the
Gets the boundaries of the visible region of the shared surface.
-Invalidates a specific region of the shared surface for drawing.
-The region of the surface to invalidate.
If this method succeeds, it returns
Gets the total number of regions of the surface that must be updated.
-Receives the number of regions to update.
Gets the set of regions that must be updated on the shared surface.
-The number of regions that must be updated. You obtain this by calling GetUpdateRectCount.
Receives a list of regions that must be updated.
If this method succeeds, it returns
Gets the boundaries of the visible region of the shared surface.
-Receives a rectangle that specifies the visible region of the shared surface.
If this method succeeds, it returns
Registers for the callback that will perform the drawing when an update to the shared surface is requested.
-Pointer to an implementation of
If this method succeeds, it returns
Resizes the surface.
-The updated width of the surface.
The updated height of the surface.
If this method succeeds, it returns
Performs the drawing behaviors when an update to VirtualSurfaceImageSource is requested.
-This method is implemented by the developer.
-Performs the drawing behaviors when an update to VirtualSurfaceImageSource is requested.
-This method is implemented by the developer.
-Performs the drawing behaviors when an update to VirtualSurfaceImageSource is requested.
-If this method succeeds, it returns
This method is implemented by the developer.
-Represents a keyed mutex, which allows exclusive access to a shared resource that is used by multiple devices.
-The
An
For information about creating a keyed mutex, see the
Using a key, acquires exclusive rendering access to a shared resource.
-A value that indicates which device to give access to. This method will succeed when the device that currently owns the surface calls the ReleaseSync method using the same value. This value can be any UINT64 value.
The time-out interval, in milliseconds. This method returns if the interval elapses and the keyed mutex has not been released using the specified Key. If this value is set to zero, the AcquireSync method tests whether the keyed mutex has been released and returns immediately. If this value is set to INFINITE, the time-out interval never elapses.
Return
If the owning device attempted to create another keyed mutex on the same shared resource, AcquireSync returns E_FAIL.
AcquireSync can also return the following DWORD constants. Therefore, you should explicitly check for these constants. If you only use the SUCCEEDED macro on the return value to determine if AcquireSync succeeded, you will not catch these constants.
The AcquireSync method creates a lock to a surface that is shared between multiple devices, allowing only one device to render to a surface at a time. This method uses a key to determine which device currently has exclusive access to the surface.
When a surface is created using the D3D10_RESOURCE_MISC_SHARED_KEYEDMUTEX value of the D3D10_RESOURCE_MISC_FLAG enumeration, you must call the AcquireSync method before rendering to the surface. You must call the ReleaseSync method when you are done rendering to a surface.
To acquire a reference to the keyed mutex object of a shared resource, call the QueryInterface method of the resource and pass in the UUID of the
The AcquireSync method uses the key as follows, depending on the state of the surface:
Using a key, releases exclusive rendering access to a shared resource.
-A value that indicates which device to give access to. This method succeeds when the device that currently owns the surface calls the ReleaseSync method using the same value. This value can be any UINT64 value.
Returns
If the device attempted to release a keyed mutex that is not valid or owned by the device, ReleaseSync returns E_FAIL.
The ReleaseSync method releases a lock to a surface that is shared between multiple devices. This method uses a key to determine which device currently has exclusive access to the surface.
When a surface is created using the D3D10_RESOURCE_MISC_SHARED_KEYEDMUTEX value of the D3D10_RESOURCE_MISC_FLAG enumeration, you must call the
After you call the ReleaseSync method, the shared resource is unset from the rendering pipeline.
To acquire a reference to the keyed mutex object of a shared resource, call the QueryInterface method of the resource and pass in the UUID of the
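The acquire/render/release contract described above can be sketched as follows (assuming pMutex is an IDXGIKeyedMutex* obtained via QueryInterface on the shared resource):

```cpp
// Key 0, five-second timeout. Note that WAIT_TIMEOUT and WAIT_ABANDONED are
// positive DWORDs, so SUCCEEDED(hr) alone would not catch them: check explicitly.
HRESULT hr = pMutex->AcquireSync(0, 5000);
if (hr == WAIT_TIMEOUT || hr == WAIT_ABANDONED)
{
    // Did not get ownership; handle the timeout or abandoned mutex.
}
else if (SUCCEEDED(hr))
{
    // ... render to the shared surface ...
    pMutex->ReleaseSync(0); // release ownership with the same key
}
```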
An
To see the outputs available, use
Get a description of the output.
-On a high DPI desktop, GetDesc returns the visualized screen size unless the app is marked high DPI aware. For info about writing DPI-aware Win32 apps, see High DPI.
-Gets a description of the gamma-control capabilities.
-Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Gets or sets the gamma control settings.
-Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Gets statistics about recently rendered frames.
-This API is similar to
Note: Calling this method is only supported while in full-screen mode. -
Get a description of the output.
-A reference to the output description (see
Returns a code that indicates success or failure.
On a high DPI desktop, GetDesc returns the visualized screen size unless the app is marked high DPI aware. For info about writing DPI-aware Win32 apps, see High DPI.
-[Starting with Direct3D 11.1, we recommend not to use GetDisplayModeList anymore to retrieve the matching display mode. Instead, use
Gets the display modes that match the requested format and other input options.
-Returns one of the following DXGI_ERROR. It is rare, but possible, that the display modes available can change immediately after calling this method, in which case
In general, when switching from windowed to full-screen mode, a swap chain automatically chooses a display mode that meets (or exceeds) the resolution, color depth and refresh rate of the swap chain. To exercise more control over the display mode, use this API to poll the set of display modes that are validated against monitor capabilities, or all modes that match the desktop (if the desktop settings are not validated against the monitor).
As shown, this API is designed to be called twice. First to get the number of modes available, and second to return a description of the modes.
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;
pOutput->GetDisplayModeList( format, flags, &num, 0);
...
DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
pOutput->GetDisplayModeList( format, flags, &num, pDescs);
[Starting with Direct3D 11.1, we recommend not to use FindClosestMatchingMode anymore to find the display mode that most closely matches the requested display mode. Instead, use
Finds the display mode that most closely matches the requested display mode.
-Returns one of the following DXGI_ERROR.
FindClosestMatchingMode behaves similarly to the
Halt a thread until the next vertical blank occurs.
-Returns one of the following DXGI_ERROR.
A vertical blank occurs when the raster moves from the lower right corner to the upper left corner to begin drawing the next frame.
-Takes ownership of an output.
-A reference to the
Set to TRUE to enable other threads or applications to take ownership of the device; otherwise, set to
Returns one of the DXGI_ERROR values.
When you are finished with the output, call
TakeOwnership should not be called directly by applications, since results will be unpredictable. It is called implicitly by the DXGI swap chain object during full-screen transitions, and should not be used as a substitute for swap-chain methods.
-Releases ownership of the output.
-If you are not using a swap chain, get access to an output by calling
Gets a description of the gamma-control capabilities.
-A reference to a description of the gamma-control capabilities (see
Returns one of the DXGI_ERROR values.
Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Sets the gamma controls.
-A reference to a
Returns one of the DXGI_ERROR values.
Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Gets the gamma control settings.
-An array of gamma control settings (see
Returns one of the DXGI_ERROR values.
Note: Calling this method is only supported while in full-screen mode.
For info about using gamma correction, see Using gamma correction.
-Changes the display mode.
-A reference to a surface (see
Returns one of the DXGI_ERROR values.
This method should only be called between
[Starting with Direct3D 11.1, we recommend not to use GetDisplaySurfaceData anymore to retrieve the current display surface. Instead, use
Gets a copy of the current display surface.
-Returns one of the DXGI_ERROR values.
Use
Gets statistics about recently rendered frames.
-A reference to frame statistics (see
If this function succeeds, it returns
This API is similar to
Note: Calling this method is only supported while in full-screen mode. -
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;
pOutput->GetDisplayModeList( format, flags, &num, 0);
...
DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
pOutput->GetDisplayModeList( format, flags, &num, pDescs);
-
-
- An
To determine the outputs that are available from the adapter, use
[Starting with Direct3D 11.1, we recommend not to use GetDisplayModeList anymore to retrieve the matching display mode. Instead, use
Gets the display modes that match the requested format and other input options.
-Returns one of the following DXGI_ERROR. It is rare, but possible, that the display modes available can change immediately after calling this method, in which case
In general, when switching from windowed to full-screen mode, a swap chain automatically chooses a display mode that meets (or exceeds) the resolution, color depth and refresh rate of the swap chain. To exercise more control over the display mode, use this API to poll the set of display modes that are validated against monitor capabilities, or all modes that match the desktop (if the desktop settings are not validated against the monitor).
As shown, this API is designed to be called twice. First to get the number of modes available, and second to return a description of the modes.
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;
pOutput->GetDisplayModeList( format, flags, &num, 0);
...
DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
pOutput->GetDisplayModeList( format, flags, &num, pDescs);
Finds the display mode that most closely matches the requested display mode.
-A reference to the
A reference to the
A reference to the Direct3D device interface. If this parameter is
Returns one of the error codes described in the DXGI_ERROR topic.
Direct3D devices require UNORM formats.
FindClosestMatchingMode1 finds the closest matching available display mode to the mode that you specify in pModeToMatch.
If you set the Stereo member in the
FindClosestMatchingMode1 resolves similarly ranked members of display modes (that is, all specified, or all unspecified, and so on) in the following order:
When FindClosestMatchingMode1 determines the closest value for a particular member, it uses previously matched members to filter the display mode list choices, and ignores other members. For example, when FindClosestMatchingMode1 matches Resolution, it already filtered the display mode list by a certain ScanlineOrdering, Scaling, and Format, while it ignores RefreshRate. This ordering doesn't define the absolute ordering for every usage scenario of FindClosestMatchingMode1, because the application can choose some values initially, which effectively changes the order of resolving members.
FindClosestMatchingMode1 matches members of the display mode one at a time, generally in a specified order.
If a member is unspecified, FindClosestMatchingMode1 gravitates toward the values for the desktop related to this output. If this output is not part of the desktop, FindClosestMatchingMode1 uses the default desktop output to find values. If an application uses a fully unspecified display mode, FindClosestMatchingMode1 typically returns a display mode that matches the desktop settings for this output. Because unspecified members are lower priority than specified members, FindClosestMatchingMode1 resolves unspecified members later than specified members.
-Copies the display surface (front buffer) to a user-provided resource.
-A reference to a resource interface that represents the resource to which GetDisplaySurfaceData1 copies the display surface.
Returns one of the error codes described in the DXGI_ERROR topic.
GetDisplaySurfaceData1 is similar to
GetDisplaySurfaceData1 returns an error if the input resource is not a 2D texture (represented by the
The original
You can call GetDisplaySurfaceData1 only when an output is in full-screen mode. If GetDisplaySurfaceData1 succeeds, it fills the destination resource.
Use
Creates a desktop duplication interface from the
If an application wants to duplicate the entire desktop, it must create a desktop duplication interface on each active output on the desktop. This interface does not provide an explicit way to synchronize the timing of each output image. Instead, the application must use the time stamp of each output, and then determine how to combine the images.
For DuplicateOutput to succeed, you must create pDevice from
If the current mode is a stereo mode, the desktop duplication interface provides the image for the left stereo image only.
By default, only four processes can use a
For improved performance, consider using DuplicateOutput1.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Gets the display modes that match the requested format and other input options.
-A
A combination of DXGI_ENUM_MODES-typed values that are combined by using a bitwise OR operation. The resulting value specifies options for display modes to include. You must specify
GetDisplayModeList1 is updated from GetDisplayModeList to return a list of
The GetDisplayModeList1 method does not enumerate stereo modes unless you specify the
In general, when you switch from windowed to full-screen mode, a swap chain automatically chooses a display mode that meets (or exceeds) the resolution, color depth, and refresh rate of the swap chain. To exercise more control over the display mode, use GetDisplayModeList1 to poll the set of display modes that are validated against monitor capabilities, or all modes that match the desktop (if the desktop settings are not validated against the monitor).
The following example code shows that you need to call GetDisplayModeList1 twice. First call GetDisplayModeList1 to get the number of modes available, and second call GetDisplayModeList1 to return a description of the modes.
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
UINT flags = DXGI_ENUM_MODES_INTERLACED;
pOutput->GetDisplayModeList1( format, flags, &num, 0);
...
DXGI_MODE_DESC1 * pDescs = new DXGI_MODE_DESC1[num];
pOutput->GetDisplayModeList1( format, flags, &num, pDescs);
- An
To see the outputs available, use
Queries an adapter output for multiplane overlay support. If this API returns TRUE, multiple swap chain composition takes place in a performant manner using overlay hardware. If this API returns FALSE, apps should avoid using foreground swap chains (that is, avoid using swap chains created with the
TRUE if the output adapter is the primary adapter and it supports multiplane overlays, otherwise returns
See CreateSwapChainForCoreWindow for info on creating a foreground swap chain.
-[This documentation is preliminary and is subject to change.]
Queries an adapter output for multiplane overlay support.
-TRUE if the output adapter is the primary adapter and it supports multiplane overlays, otherwise returns
Represents an adapter output (such as a monitor). The
Checks for overlay support.
-A
A reference to the Direct3D device interface. CheckOverlaySupport returns only support info about this scan-out device.
A reference to a variable that receives a combination of
Represents an adapter output (such as a monitor). The
Checks for overlay color space support.
-A
A
A reference to the Direct3D device interface. CheckOverlayColorSpaceSupport returns only support info about this scan-out device.
A reference to a variable that receives a combination of
An
To see the outputs available, use
Allows specifying a list of supported formats for fullscreen surfaces that can be returned by the
This method allows directly receiving the original back buffer format used by a running fullscreen application. For comparison, using the original DuplicateOutput function always converts the fullscreen surface to a 32-bit BGRA format. In cases where the current fullscreen application is using a different buffer format, a conversion to 32-bit BGRA incurs a performance penalty. Besides the performance benefit of being able to skip format conversion, using DuplicateOutput1 also allows receiving the full gamut of colors in cases where a high-color format (such as R10G10B10A2) is being presented.
The pSupportedFormats array should only contain display scan-out formats. See Format Support for Direct3D Feature Level 11.0 Hardware for required scan-out formats at each feature level. If the current fullscreen buffer format is not contained in the pSupportedFormats array, DXGI will pick one of the supplied formats and convert the fullscreen buffer to that format before returning from
An
To see the outputs available, use
The
A collaboration application can use
An application can use
The following components of the operating system can generate the desktop image:
All current
Examples of situations in which
In these situations, the application must release the
While the application processes each desktop image, the operating system accumulates all the desktop image updates into a single update. For more information about desktop updates, see Updating the desktop image data.
The desktop image is always in the
The
Retrieves a description of a duplicated output. This description specifies the dimensions of the surface that contains the desktop image.
-After an application creates an
Retrieves a description of a duplicated output. This description specifies the dimensions of the surface that contains the desktop image.
-A reference to a
After an application creates an
Indicates that the application is ready to process the next desktop image.
-The time-out interval, in milliseconds. This interval specifies the amount of time that this method waits for a new frame before it returns to the caller. This method returns if the interval elapses, and a new desktop image is not available.
For more information about the time-out interval, see Remarks.
A reference to a memory location that receives the
A reference to a variable that receives the
AcquireNextFrame returns:
When AcquireNextFrame returns successfully, the calling application can access the desktop image that AcquireNextFrame returns in the variable at ppDesktopResource. - If the caller specifies a zero time-out interval in the TimeoutInMilliseconds parameter, AcquireNextFrame verifies whether there is a new desktop image available, returns immediately, and indicates its outcome with the return value. If the caller specifies an INFINITE time-out interval in the TimeoutInMilliseconds parameter, the time-out interval never elapses.
Note: You cannot cancel the wait that you specified in the TimeoutInMilliseconds parameter. Therefore, if you must periodically check for other conditions (for example, a terminate signal), you should specify a non-INFINITE time-out interval. After the time-out interval elapses, you can check for these other conditions and then call AcquireNextFrame again to wait for the next frame.
AcquireNextFrame acquires a new desktop frame when the operating system either updates the desktop bitmap image or changes the shape or position of a hardware reference. The new frame that AcquireNextFrame acquires might have only the desktop image updated, only the reference shape or position updated, or both.
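Taken together, the timeout advice and the acquire/release contract suggest a capture loop along these lines. This is a hypothetical sketch, not code from these docs; `pDuplication`, `pQuit`, and the 500 ms timeout are assumptions, and error handling is reduced to the essentials.

```cpp
// Hypothetical sketch: a capture loop over an IDXGIOutputDuplication object
// created with DuplicateOutput. Requires the Windows SDK (DXGI 1.2).
#include <dxgi1_2.h>

void CaptureLoop(IDXGIOutputDuplication* pDuplication, volatile bool* pQuit)
{
    while (!*pQuit)
    {
        DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
        IDXGIResource* pDesktopResource = nullptr;

        // Use a finite timeout so the loop can periodically check *pQuit;
        // an INFINITE wait here could never be cancelled.
        HRESULT hr = pDuplication->AcquireNextFrame(500, &frameInfo,
                                                    &pDesktopResource);
        if (hr == DXGI_ERROR_WAIT_TIMEOUT)
            continue;                 // no new frame yet; check *pQuit again
        if (FAILED(hr))
            break;                    // e.g. access to the output was lost

        // ... process the desktop image in pDesktopResource ...

        pDesktopResource->Release();
        pDuplication->ReleaseFrame(); // must release before the next acquire
    }
}
```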
-Gets information about dirty rectangles for the current desktop frame.
-The size in bytes of the buffer that the caller passed to the pDirtyRectsBuffer parameter.
A reference to an array of
Pointer to a variable that receives the number of bytes that GetFrameDirtyRects needs to store information about dirty regions in the buffer at pDirtyRectsBuffer.
For more information about returning the required buffer size, see Remarks.
GetFrameDirtyRects returns:
GetFrameDirtyRects stores a size value in the variable at pDirtyRectsBufferSizeRequired. This value specifies the number of bytes that GetFrameDirtyRects needs to store information about dirty regions. You can use this value in the following situations to determine the amount of memory to allocate for future buffers that you pass to pDirtyRectsBuffer:
The caller can also use the value returned at pDirtyRectsBufferSizeRequired to determine the number of
The buffer contains the list of dirty
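The two-call sizing pattern for GetFrameDirtyRects can be sketched as follows. This is a hypothetical illustration, not code from these docs; it assumes that a call with a too-small (here, zero-size) buffer fails with DXGI_ERROR_MORE_DATA and reports the required byte count, which is then used to size the real buffer.

```cpp
// Hypothetical sketch: retrieving the dirty rectangles for the current frame
// of an IDXGIOutputDuplication object. Requires the Windows SDK (DXGI 1.2).
#include <dxgi1_2.h>
#include <vector>

std::vector<RECT> GetDirtyRects(IDXGIOutputDuplication* pDuplication)
{
    UINT requiredSize = 0;
    std::vector<RECT> rects;

    // First call with a zero-size buffer to learn the required size (assumed
    // to be reported via DXGI_ERROR_MORE_DATA and *pDirtyRectsBufferSizeRequired).
    HRESULT hr = pDuplication->GetFrameDirtyRects(0, nullptr, &requiredSize);
    if (hr == DXGI_ERROR_MORE_DATA && requiredSize > 0)
    {
        // The required byte count also gives the number of RECTs.
        rects.resize(requiredSize / sizeof(RECT));
        hr = pDuplication->GetFrameDirtyRects(requiredSize, rects.data(),
                                              &requiredSize);
        if (FAILED(hr))
            rects.clear();
    }
    return rects;
}
```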
Gets information about the moved rectangles for the current desktop frame.
-The size in bytes of the buffer that the caller passed to the pMoveRectBuffer parameter.
A reference to an array of
Pointer to a variable that receives the number of bytes that GetFrameMoveRects needs to store information about moved regions in the buffer at pMoveRectBuffer.
For more information about returning the required buffer size, see Remarks.
GetFrameMoveRects returns:
GetFrameMoveRects stores a size value in the variable at pMoveRectsBufferSizeRequired. This value specifies the number of bytes that GetFrameMoveRects needs to store information about moved regions. You can use this value in the following situations to determine the amount of memory to allocate for future buffers that you pass to pMoveRectBuffer:
The caller can also use the value returned at pMoveRectsBufferSizeRequired to determine the number of
The buffer contains the list of move RECTs for the current frame.
Note: To produce a visually accurate copy of the desktop, an application must first process all move RECTs before it processes dirty RECTs.
-Gets information about the new reference shape for the current desktop frame.
-The size in bytes of the buffer that the caller passed to the pPointerShapeBuffer parameter.
A reference to a buffer to which GetFramePointerShape copies and returns pixel data for the new reference shape.
Pointer to a variable that receives the number of bytes that GetFramePointerShape needs to store the new reference shape pixel data in the buffer at pPointerShapeBuffer.
For more information about returning the required buffer size, see Remarks.
Pointer to a
GetFramePointerShape returns:
GetFramePointerShape stores a size value in the variable at pPointerShapeBufferSizeRequired. This value specifies the number of bytes that pPointerShapeBufferSizeRequired needs to store the new reference shape pixel data. You can use the value in the following situations to determine the amount of memory to allocate for future buffers that you pass to pPointerShapeBuffer:
The pPointerShapeInfo parameter describes the new reference shape.
-Provides the CPU with efficient access to a desktop image if that desktop image is already in system memory.
-A reference to a
MapDesktopSurface returns:
You can successfully call MapDesktopSurface if the DesktopImageInSystemMemory member of the
Invalidates the reference to the desktop image that was retrieved by using
UnMapDesktopSurface returns:
Indicates that the application finished processing the frame.
-ReleaseFrame returns:
The application must release the frame before it acquires the next frame. After the frame is released, the surface that contains the desktop bitmap becomes invalid; you will not be able to use the surface in a DirectX graphics operation.
For performance reasons, we recommend that you release the frame just before you call the
Set the priority for evicting the resource from memory.
-The eviction priority is a memory-management variable that is used by DXGI for determining how to populate overcommitted memory.
You can set priority levels other than the defined values when appropriate. For example, you can set a resource with a priority level of 0x78000001 to indicate that the resource is slightly above normal.
-[Starting with Direct3D 11.1, we recommend not to use GetSharedHandle anymore to retrieve the handle to a shared resource. Instead, use
Gets the handle to a shared resource.
-GetSharedHandle returns a handle for the resource that you created as shared (that is, you set the
The creator of a shared resource must not destroy the resource until all intended entities have opened the resource. The validity of the handle is tied to the lifetime of the underlying video memory. If no resource objects exist on any devices that refer to this resource, the handle is no longer valid. To extend the lifetime of the handle and video memory, you must open the shared resource on a device.
GetSharedHandle can also return handles for resources that were passed into
GetSharedHandle fails if the resource to which it wants to get a handle is not shared.
-Gets or sets the eviction priority.
-The eviction priority is a memory-management variable that is used by DXGI to determine how to manage overcommitted memory.
Priority levels other than the defined values are used when appropriate. For example, a resource with a priority level of 0x78000001 indicates that the resource is slightly above normal.
-[Starting with Direct3D 11.1, we recommend not to use GetSharedHandle anymore to retrieve the handle to a shared resource. Instead, use
Gets the handle to a shared resource.
-Returns one of the DXGI_ERROR values.
GetSharedHandle returns a handle for the resource that you created as shared (that is, you set the
The creator of a shared resource must not destroy the resource until all intended entities have opened the resource. The validity of the handle is tied to the lifetime of the underlying video memory. If no resource objects exist on any devices that refer to this resource, the handle is no longer valid. To extend the lifetime of the handle and video memory, you must open the shared resource on a device.
GetSharedHandle can also return handles for resources that were passed into
GetSharedHandle fails if the resource to which it wants to get a handle is not shared.
-Get the expected resource usage.
-A reference to a usage flag (see DXGI_USAGE). For Direct3D 10, a surface can be used as a shader input or a render-target output.
Returns one of the following DXGI_ERROR.
Set the priority for evicting the resource from memory.
-The priority is one of the following values:
| Value | Meaning |
|---|---|
| | The resource is unused and can be evicted as soon as another resource requires the memory that the resource occupies. |
| | The eviction priority of the resource is low. The placement of the resource is not critical, and minimal work is performed to find a location for the resource. For example, if a GPU can render with a vertex buffer from either local or non-local memory with little difference in performance, that vertex buffer is low priority. Other more critical resources (for example, a render target or texture) can then occupy the faster memory. |
| | The eviction priority of the resource is normal. The placement of the resource is important, but not critical, for performance. The resource is placed in its preferred location instead of a low-priority resource. |
| | The eviction priority of the resource is high. The resource is placed in its preferred location instead of a low-priority or normal-priority resource. |
| | The resource is evicted from memory only if there is no other way of resolving the memory requirement. |
Returns one of the following DXGI_ERROR.
The eviction priority is a memory-management variable that is used by DXGI for determining how to populate overcommitted memory.
You can set priority levels other than the defined values when appropriate. For example, you can set a resource with a priority level of 0x78000001 to indicate that the resource is slightly above normal.
-Get the eviction priority.
-A reference to the eviction priority, which determines when a resource can be evicted from memory.
The following defined values are possible.
| Value | Meaning |
|---|---|
| | The resource is unused and can be evicted as soon as another resource requires the memory that the resource occupies. |
| | The eviction priority of the resource is low. The placement of the resource is not critical, and minimal work is performed to find a location for the resource. For example, if a GPU can render with a vertex buffer from either local or non-local memory with little difference in performance, that vertex buffer is low priority. Other more critical resources (for example, a render target or texture) can then occupy the faster memory. |
| | The eviction priority of the resource is normal. The placement of the resource is important, but not critical, for performance. The resource is placed in its preferred location instead of a low-priority resource. |
| | The eviction priority of the resource is high. The resource is placed in its preferred location instead of a low-priority or normal-priority resource. |
| | The resource is evicted from memory only if there is no other way of resolving the memory requirement. |
Returns one of the following DXGI_ERROR.
The eviction priority is a memory-management variable that is used by DXGI to determine how to manage overcommitted memory.
Priority levels other than the defined values are used when appropriate. For example, a resource with a priority level of 0x78000001 indicates that the resource is slightly above normal.
- An
To determine the type of memory a resource is currently located in, use
You can retrieve the
IDXGIResource * pDXGIResource;
hr = g_pd3dTexture2D->QueryInterface(__uuidof(IDXGIResource), (void **)&pDXGIResource);
Windows Phone 8: This API is supported.
-Creates a subresource surface object.
-The index of the subresource surface object to enumerate.
The address of a reference to a
Returns
A subresource is a valid surface if the original resource would have been a valid surface had its array size been equal to 1.
Subresource surface objects implement the
CreateSubresourceSurface creates a subresource surface that is based on the resource interface on which CreateSubresourceSurface is called. For example, if the original resource interface object is a 2D texture, the created subresource surface is also a 2D texture.
You can use CreateSubresourceSurface to create parts of a stereo resource so you can use Direct2D on either the left or right part of the stereo resource.
-Creates a handle to a shared resource. You can then use the returned handle with multiple Direct3D devices.
-A reference to a
Set this parameter to
The lpSecurityDescriptor member of the structure specifies a SECURITY_DESCRIPTOR for the resource. Set this member to
The requested access rights to the resource. In addition to the generic access rights, DXGI defines the following values:
You can combine these values by using a bitwise OR operation.
The name of the resource to share. The name is limited to MAX_PATH characters. Name comparison is case sensitive. You will need the resource name if you call the
If lpName matches the name of an existing resource, CreateSharedHandle fails with
The name can have a "Global\" or "Local\" prefix to explicitly create the object in the global or session namespace. The remainder of the name can contain any character except the backslash character (\). For more information, see Kernel Object Namespaces. Fast user switching is implemented using Terminal Services sessions. Kernel object names must follow the guidelines outlined for Terminal Services so that applications can support multiple users.
The object can be created in a private namespace. For more information, see Object Namespaces.
A reference to a variable that receives the NT HANDLE value to the resource to share. You can use this handle in calls to access the resource.
CreateSharedHandle only returns the NT handle when you created the resource as shared and specified that it uses NT handles (that is, you set the
You can pass the handle that CreateSharedHandle returns in a call to the
Because the handle that CreateSharedHandle returns is an NT handle, you can use the handle with CloseHandle, DuplicateHandle, and so on. You can call CreateSharedHandle only once for a shared resource; later calls fail. If you need more handles to the same shared resource, call DuplicateHandle. When you no longer need the shared resource handle, call CloseHandle to close the handle, in order to avoid memory leaks.
If you pass a name for the resource to lpName when you call CreateSharedHandle to share the resource, you can subsequently pass this name in a call to the
If you created the resource as shared and did not specify that it uses NT handles, you cannot use CreateSharedHandle to get a handle for sharing because CreateSharedHandle will fail.
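A minimal sketch of the CreateSharedHandle usage rules above, assuming a resource that was created as shared with NT-handle sharing enabled. This is a hypothetical illustration, not code from these docs; the access-flag combination and omitted name are assumptions.

```cpp
// Hypothetical sketch: creating an NT handle for a shared resource.
// Requires the Windows SDK (DXGI 1.2).
#include <dxgi1_2.h>

HANDLE ShareResource(IDXGIResource1* pResource1)
{
    HANDLE hShared = nullptr;

    // No security descriptor, read/write access, no name (unnamed resource).
    HRESULT hr = pResource1->CreateSharedHandle(
        nullptr,
        DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
        nullptr,
        &hShared);
    if (FAILED(hr))
        return nullptr;

    // CreateSharedHandle may be called only once per resource; use
    // DuplicateHandle for additional handles, and CloseHandle when done
    // to avoid leaking the handle.
    return hShared;
}
```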
-The
An image-data object is a 2D section of memory, commonly called a surface. To get the surface from an output, call
The runtime automatically creates an
Get a description of the surface.
-Get a description of the surface.
-A reference to the surface description (see
Returns
Get a reference to the data contained in the surface, and deny GPU access to the surface.
-A reference to the surface data (see
CPU read-write flags. These flags can be combined with a logical OR.
Returns
Use
Get a reference to the data contained in the surface, and deny GPU access to the surface.
-Returns
Use
The
This interface is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
An image-data object is a 2D section of memory, commonly called a surface. To get the surface from an output, call
Any object that supports
The runtime automatically creates an
Returns a device context (DC) that allows you to render to a Microsoft DirectX Graphics Infrastructure (DXGI) surface using Windows Graphics Device Interface (GDI).
-A Boolean value that specifies whether to preserve Direct3D contents in the GDI DC. TRUE directs the runtime not to preserve Direct3D contents in the GDI DC; that is, the runtime discards the Direct3D contents.
A reference to an
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
After you use the GetDC method to retrieve a DC, you can render to the DXGI surface by using GDI. The GetDC method readies the surface for GDI rendering and allows inter-operation between DXGI and GDI technologies.
Keep the following in mind when using this method:
You can also call GetDC on the back buffer at index 0 of a swap chain by obtaining an
-IDXGISwapChain * g_pSwapChain = null;
IDXGISurface1 * g_pSurface1 = null;
...
//Setup the device and swapchain
g_pSwapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**) &g_pSurface1);
g_pSurface1->GetDC( FALSE, &g_hDC );
...
//Draw on the DC using GDI
...
//When finished drawing, release the DC
g_pSurface1->ReleaseDC( null );
Releases the GDI device context (DC) that is associated with the current surface and allows you to use Direct3D to render.
-A reference to a
You can pass a reference to an empty
If this method succeeds, it returns
This method is not supported by DXGI 1.0, which shipped in Windows Vista and Windows Server 2008. DXGI 1.1 support is required, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista with Service Pack 2 (SP2) (KB 971644) and Windows Server 2008 (KB 971512).
Use the ReleaseDC method to release the DC and indicate that your application finished all GDI rendering to this surface. You must call the ReleaseDC method before you can use Direct3D to perform additional rendering.
Prior to resizing buffers you must release all outstanding DCs.
-The
An image-data object is a 2D section of memory, commonly called a surface. To get the surface from an output, call
Any object that supports
The runtime automatically creates an
You can call the
Gets the parent resource and subresource index that support a subresource surface.
-The globally unique identifier (
A reference to a buffer that receives a reference to the parent resource object for the subresource surface.
A reference to a variable that receives the index of the subresource surface.
Returns
For subresource surface objects that the
Current objects that implement
An
You can create a swap chain by
- calling
[Starting with Direct3D 11.1, we recommend not to use GetDesc anymore to get a description of the swap chain. Instead, use
Get a description of the swap chain.
-Get the output (the display monitor) that contains the majority of the client area of the target window.
-If the method succeeds, the output interface will be filled and its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
You cannot call GetContainingOutput on a swap chain that you created with
Gets the number of times that
For info about presentation statistics for a frame, see
Presents a rendered image to the user.
-An integer that specifies how to synchronize presentation of a frame with the vertical blank.
For the bit-block transfer (bitblt) model (
For the flip model (
For an example that shows how sync-interval values affect a flip presentation queue, see Remarks.
If the update region straddles more than one output (each represented by
An integer value that contains swap-chain presentation options. These options are defined by the DXGI_PRESENT constants.
Possible return values include:
Starting with Direct3D 11.1, consider using
For the best performance when flipping swap-chain buffers in a full-screen application, see Full-Screen Application Performance Hints.
Because calling Present might cause the render thread to wait on the message-pump thread, be careful when calling this method in an application that uses multiple threads. For more details, see Multithreading Considerations.
Differences between Direct3D 9 and Direct3D 10: Specifying |
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
-Accesses one of the swap-chain's back buffers.
-A zero-based buffer index.
If the swap chain's swap effect is
If the swap chain's swap effect is either
The type of interface used to manipulate the buffer.
A reference to a back-buffer interface.
Returns one of the following DXGI_ERROR.
Sets the display state to windowed or full screen.
-A Boolean value that specifies whether to set the display state to windowed or full screen. TRUE for full screen, and
If you pass TRUE to the Fullscreen parameter to set the display state to full screen, you can optionally set this parameter to a reference to an
This method returns:
When this error is returned, an application can continue to run in windowed mode and try to switch to full-screen mode later.
DXGI may change the display state of a swap chain in response to end user or system requests.
We recommend that you create a windowed swap chain and allow the end user to change the swap chain to full screen through SetFullscreenState; that is, do not set the Windowed member of
Get the state associated with full-screen mode.
-A reference to a boolean whose value is either:
A reference to the output target (see
Returns one of the following DXGI_ERROR.
When the swap chain is in full-screen mode, a reference to the target output will be returned and its reference count will be incremented.
-[Starting with Direct3D 11.1, we recommend not to use GetDesc anymore to get a description of the swap chain. Instead, use
Get a description of the swap chain.
-Returns one of the following DXGI_ERROR.
Changes the swap chain's back buffer size, format, and number of buffers. This should be called when the application window is resized.
-The number of buffers in the swap chain (including all back and front buffers). This number can be different from the number of buffers with which you created the swap chain. This number can't be greater than DXGI_MAX_SWAP_CHAIN_BUFFERS. Set this number to zero to preserve the existing number of buffers in the swap chain. You can't specify less than two buffers for the flip presentation model.
The new width of the back buffer. If you specify zero, DXGI will use the width of the client area of the target window. You can't specify the width as zero if you called the
The new height of the back buffer. If you specify zero, DXGI will use the height of the client area of the target window. You can't specify the height as zero if you called the
A
A combination of
Returns
You can't resize a swap chain unless you release all outstanding references to its back buffers. You must release all of its direct and indirect references on the back buffers in order for ResizeBuffers to succeed.
Direct references are held by the application after it calls AddRef on a resource.
Indirect references are held by views to a resource, binding a view of the resource to a device context, a command list that used the resource, a command list that used a view to that resource, a command list that executed another command list that used the resource, and so on.
Before you call ResizeBuffers, ensure that the application releases all references (by calling the appropriate number of Release invocations) on the resources, any views to the resource, and any command lists that use either the resources or views, and ensure that neither the resource nor a view is still bound to a device context. You can use
For swap chains that you created with
We recommend that you call ResizeBuffers when a client window is resized (that is, when an application receives a WM_SIZE message).
The only difference between
Resizes the output target.
-A reference to a
Returns a code that indicates success or failure.
ResizeTarget resizes the target window when the swap chain is in windowed mode, and changes the display mode on the target output when the swap chain is in full-screen mode. Therefore, apps can call ResizeTarget to resize the target window (rather than using a Win32 API such as SetWindowPos) without knowledge of the swap chain display mode.
If a Windows Store app calls ResizeTarget, it fails with
You cannot call ResizeTarget on a swap chain that you created with
Apps must still call
Get the output (the display monitor) that contains the majority of the client area of the target window.
-A reference to the output interface (see
Returns one of the following DXGI_ERROR.
If the method succeeds, the output interface will be filled and its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
You cannot call GetContainingOutput on a swap chain that you created with
Gets performance statistics about the last render frame.
-A reference to a
Returns one of the DXGI_ERROR values.
You cannot use GetFrameStatistics for swap chains that both use the bit-block transfer (bitblt) presentation model and draw in windowed mode.
You can only use GetFrameStatistics for swap chains that either use the flip presentation model or draw in full-screen mode. You set the
Gets the number of times that
Returns one of the DXGI_ERROR values.
For info about presentation statistics for a frame, see
Gets performance statistics about the last render frame.
-You cannot use GetFrameStatistics for swap chains that both use the bit-block transfer (bitblt) presentation model and draw in windowed mode.
You can only use GetFrameStatistics for swap chains that either use the flip presentation model or draw in full-screen mode. You set the
[Starting with Direct3D 11.1, we recommend not to use Present anymore to present a rendered image. Instead, use
Presents a rendered image to the user.
-Possible return values include:
Note: The Present method can return either
Starting with Direct3D 11.1, we recommend that you instead use
For the best performance when flipping swap-chain buffers in a full-screen application, see Full-Screen Application Performance Hints.
Because calling Present might cause the render thread to wait on the message-pump thread, be careful when calling this method in an application that uses multiple threads. For more details, see Multithreading Considerations.
Differences between Direct3D 9 and Direct3D 10: Specifying |
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
-Provides presentation capabilities that are enhanced from
You can create a swap chain by
- calling
Gets a description of the swap chain.
-Gets a description of a full-screen swap chain.
-The semantics of GetFullscreenDesc are identical to that of the IDXGISwapchain::GetDesc method for
Retrieves the underlying
Applications call the
Determines whether a swap chain supports "temporary mono".
-Temporary mono is a feature where a stereo swap chain can be presented using only the content in the left buffer. To present using the left buffer as a mono buffer, an application calls the
Gets the output (the display monitor) to which you can restrict the contents of a present operation.
-If the method succeeds, the runtime fills the buffer at ppRestrictToOutput with a reference to the restrict-to output interface. This restrict-to output interface has its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
-Retrieves or sets the background color of the swap chain.
-Gets or sets the rotation of the back buffers for the swap chain.
-Gets a description of the swap chain.
-A reference to a
Returns
Gets a description of a full-screen swap chain.
-A reference to a
GetFullscreenDesc returns:
The semantics of GetFullscreenDesc are identical to that of the IDXGISwapchain::GetDesc method for
Retrieves the underlying
Returns
If pHwnd receives
Applications call the
Retrieves the underlying CoreWindow object for this swap-chain object.
-GetCoreWindow returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, GetCoreWindow fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
Applications call the
Presents a frame on the display screen.
-An integer that specifies how to synchronize presentation of a frame with the vertical blank.
For the bit-block transfer (bitblt) model (
For the flip model (
For an example that shows how sync-interval values affect a flip presentation queue, see Remarks.
If the update region straddles more than one output (each represented by
An integer value that contains swap-chain presentation options. These options are defined by the DXGI_PRESENT constants.
A reference to a
Possible return values include:
An app can use Present1 to optimize presentation by specifying scroll and dirty rectangles. When the runtime has information about these rectangles, the runtime can perform the necessary bitblts during presentation more efficiently and pass this metadata to the Desktop Window Manager (DWM). The DWM can then use the metadata to optimize presentation and pass the metadata to indirect displays and terminal servers to optimize traffic over the wire. An app must confine its modifications to only the dirty regions that it passes to Present1, and must modify the entire dirty region, to avoid exposing undefined resource contents.
For flip presentation model swap chains that you create with the
For info about how data values change when you present content to the screen, see Converting data for the color space.
For info about calling Present1 when your app uses multiple threads, see Multithread Considerations and Multithreading and DXGI.
-Determines whether a swap chain supports "temporary mono".
-Indicates whether to use the swap chain in temporary mono mode. TRUE indicates that you can use temporary-mono mode; otherwise,
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, IsTemporaryMonoSupported always returns
Temporary mono is a feature where a stereo swap chain can be presented using only the content in the left buffer. To present using the left buffer as a mono buffer, an application calls the
Gets the output (the display monitor) to which you can restrict the contents of a present operation.
- A reference to a buffer that receives a reference to the
Returns
If the method succeeds, the runtime fills the buffer at ppRestrictToOutput with a reference to the restrict-to output interface. This restrict-to output interface has its reference count incremented. When you are finished with it, be sure to release the interface to avoid a memory leak.
The output is also owned by the adapter on which the swap chain's device was created.
-Changes the background color of the swap chain.
-A reference to a DXGI_RGBA structure that specifies the background color to set.
SetBackgroundColor returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetBackgroundColor fails with E_NOTIMPL. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
The background color affects only swap chains that you create with
When you set the background color, it is not immediately realized. It takes effect in conjunction with your next call to the
When you call the
Retrieves the background color of the swap chain.
-A reference to a DXGI_RGBA structure that receives the background color of the swap chain.
GetBackgroundColor returns:
Sets the rotation of the back buffers for the swap chain.
-A
SetRotation returns:
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, SetRotation fails with
You can only use SetRotation to rotate the back buffers for flip-model swap chains that you present in windowed mode.
SetRotation isn't supported for rotating the back buffers for flip-model swap chains that you present in full-screen mode. In this situation, SetRotation doesn't fail, but you must ensure that you specify no rotation (
Gets the rotation of the back buffers for the swap chain.
-A reference to a variable that receives a
Returns
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, GetRotation fails with
Extends
You can create a swap chain by
- calling
Gets or sets the number of frames that the swap chain is allowed to queue for rendering.
-Returns a waitable handle that signals when the DXGI adapter has finished presenting a new frame.
Windows 8.1 introduces new APIs that allow lower-latency rendering by waiting until the previous frame is presented to the display before drawing the next frame. To use this method, first create the DXGI swap chain with the
Gets or sets the transform matrix that will be applied to a composition swap chain upon the next present.
Starting with Windows 8.1, Windows Store apps are able to place DirectX swap chain visuals in XAML pages using the SwapChainPanel element, which can be placed and sized arbitrarily. This exposes the DirectX swap chain visuals to touch scaling and translation scenarios using touch UI. The GetMatrixTransform and SetMatrixTransform methods are used to synchronize scaling of the DirectX swap chain with its associated SwapChainPanel element. Only simple scale/translation elements in the matrix are allowed; the call will fail if the matrix contains skew/rotation elements.
-Sets the source region to be used for the swap chain.
Use SetSourceSize to specify the portion of the swap chain from which the operating system presents. This allows an effective resize without calling the more-expensive
This method can return:
Gets the source region used for the swap chain.
Use GetSourceSize to get the portion of the swap chain from which the operating system presents. The source rectangle is always defined by the region [0, 0, Width, Height]. Use SetSourceSize to set this portion of the swap chain.
-This method can return error codes that are described in the DXGI_ERROR topic.
Sets the number of frames that the swap chain is allowed to queue for rendering.
-The maximum number of back buffer frames that will be queued for the swap chain. This value is 3 by default.
Returns
This method is only valid for use on swap chains created with
Gets the number of frames that the swap chain is allowed to queue for rendering.
-The maximum number of back buffer frames that will be queued for the swap chain. This value is 1 by default, but should be set to 2 if the scene takes longer than one vertical refresh (typically about 16 ms) to draw.
Returns
Returns a waitable handle that signals when the DXGI adapter has finished presenting a new frame.
Windows?8.1 introduces new APIs that allow lower-latency rendering by waiting until the previous frame is presented to the display before drawing the next frame. To use this method, first create the DXGI swap chain with the
A handle to the waitable object, or
Sets the transform matrix that will be applied to a composition swap chain upon the next present.
Starting with Windows 8.1, Windows Store apps are able to place DirectX swap chain visuals in XAML pages using the SwapChainPanel element, which can be placed and sized arbitrarily. This exposes the DirectX swap chain visuals to touch scaling and translation scenarios using touch UI. The GetMatrixTransform and SetMatrixTransform methods are used to synchronize scaling of the DirectX swap chain with its associated SwapChainPanel element. Only simple scale/translation elements in the matrix are allowed; the call will fail if the matrix contains skew/rotation elements.
-SetMatrixTransform returns:
Gets the transform matrix that will be applied to a composition swap chain upon the next present.
Starting with Windows 8.1, Windows Store apps are able to place DirectX swap chain visuals in XAML pages using the SwapChainPanel element, which can be placed and sized arbitrarily. This exposes the DirectX swap chain visuals to touch scaling and translation scenarios using touch UI. The GetMatrixTransform and SetMatrixTransform methods are used to synchronize scaling of the DirectX swap chain with its associated SwapChainPanel element. Only simple scale/translation elements in the matrix are allowed; the call will fail if the matrix contains skew/rotation elements.
-GetMatrixTransform returns:
[This documentation is preliminary and is subject to change.]
Gets the source region used for the swap chain.
Use GetSourceSize to get the portion of the swap chain from which the operating system presents. The source rectangle is always defined by the region [0, 0, Width, Height]. Use SetSourceSize to set this portion of the swap chain.
-This method can return error codes that are described in the DXGI_ERROR topic.
Extends
Gets the index of the swap chain's current back buffer.
-Sets the color space used by the swap chain.
-Gets the index of the swap chain's current back buffer.
-Returns the index of the current back buffer.
Checks the swap chain's support for color space.
-A
A reference to a variable that receives a combination of
Sets the color space used by the swap chain.
-A
This method returns
Changes the swap chain's back buffer size, format, and number of buffers, where the swap chain was created using a D3D12 command queue as an input device. This should be called when the application window is resized.
-The number of buffers in the swap chain (including all back and front buffers). This number can be different from the number of buffers with which you created the swap chain. This number can't be greater than DXGI_MAX_SWAP_CHAIN_BUFFERS. Set this number to zero to preserve the existing number of buffers in the swap chain. You can't specify less than two buffers for the flip presentation model.
The new width of the back buffer. If you specify zero, DXGI will use the width of the client area of the target window. You can't specify the width as zero if you called the
The new height of the back buffer. If you specify zero, DXGI will use the height of the client area of the target window. You can't specify the height as zero if you called the
A
A combination of
An array of UINTs, of total size BufferCount, where the value indicates which node the back buffer should be created on. Buffers created using ResizeBuffers1 with a non-null pCreationNodeMask array are visible to all nodes.
An array of command queues (
Returns
This method is only valid to call when the swapchain was created using a D3D12 command queue (
When a swapchain is created on a multi-GPU adapter, the backbuffers are all created on node 1 and only a single command queue is supported. ResizeBuffers1 enables applications to create backbuffers on different nodes, allowing a different command queue to be used with each node. These capabilities enable Alternate Frame Rendering (AFR) techniques to be used with the swapchain. See Direct3D 12 Multi-Adapters.
The only difference between
Also see the Remarks section in
Changes the swap chain's back buffer size, format, and number of buffers, where the swap chain was created using a D3D12 command queue as an input device. This should be called when the application window is resized.
-The number of buffers in the swap chain (including all back and front buffers). This number can be different from the number of buffers with which you created the swap chain. This number can't be greater than DXGI_MAX_SWAP_CHAIN_BUFFERS. Set this number to zero to preserve the existing number of buffers in the swap chain. You can't specify less than two buffers for the flip presentation model.
The new width of the back buffer. If you specify zero, DXGI will use the width of the client area of the target window. You can't specify the width as zero if you called the
The new height of the back buffer. If you specify zero, DXGI will use the height of the client area of the target window. You can't specify the height as zero if you called the
A
A combination of
An array of UINTs, of total size BufferCount, where the value indicates which node the back buffer should be created on. Buffers created using ResizeBuffers1 with a non-null pCreationNodeMask array are visible to all nodes.
An array of command queues (
Returns
This method is only valid to call when the swapchain was created using a D3D12 command queue (
When a swapchain is created on a multi-GPU adapter, the backbuffers are all created on node 1 and only a single command queue is supported. ResizeBuffers1 enables applications to create backbuffers on different nodes, allowing a different command queue to be used with each node. These capabilities enable Alternate Frame Rendering (AFR) techniques to be used with the swapchain. See Direct3D 12 Multi-Adapters.
The only difference between
Also see the Remarks section in
Changes the swap chain's back buffer size, format, and number of buffers, where the swap chain was created using a D3D12 command queue as an input device. This should be called when the application window is resized.
-The number of buffers in the swap chain (including all back and front buffers). This number can be different from the number of buffers with which you created the swap chain. This number can't be greater than DXGI_MAX_SWAP_CHAIN_BUFFERS. Set this number to zero to preserve the existing number of buffers in the swap chain. You can't specify less than two buffers for the flip presentation model.
The new width of the back buffer. If you specify zero, DXGI will use the width of the client area of the target window. You can't specify the width as zero if you called the
The new height of the back buffer. If you specify zero, DXGI will use the height of the client area of the target window. You can't specify the height as zero if you called the
A
A combination of
An array of UINTs, of total size BufferCount, where the value indicates which node the back buffer should be created on. Buffers created using ResizeBuffers1 with a non-null pCreationNodeMask array are visible to all nodes.
An array of command queues (
Returns
This method is only valid to call when the swapchain was created using a D3D12 command queue (
When a swapchain is created on a multi-GPU adapter, the backbuffers are all created on node 1 and only a single command queue is supported. ResizeBuffers1 enables applications to create backbuffers on different nodes, allowing a different command queue to be used with each node. These capabilities enable Alternate Frame Rendering (AFR) techniques to be used with the swapchain. See Direct3D 12 Multi-Adapters.
The only difference between
Also see the Remarks section in
An
You can create a swap chain by
- calling
This method sets High Dynamic Range (HDR) and Wide Color Gamut (WCG) header metadata.
-Specifies one member of the
Specifies the size of pMetaData, in bytes.
Specifies a void reference that references the metadata, if it exists. Refer to the
This method returns an
This method sets metadata to enable a monitor's output to be adjusted depending on its capabilities.
-This swap chain interface allows desktop media applications to request a seamless change to a specific refresh rate.
For example, a media application presenting video at a typical frame rate of 23.976 frames per second can request a custom refresh rate of 24 or 48 Hz to eliminate jitter. If the request is approved, the app starts presenting frames at the custom refresh rate immediately, without the typical 'mode switch' a user would experience when changing the refresh rate themselves by using the control panel.
-Seamless changes to custom framerates can only be done on integrated panels. Custom frame rates cannot be applied to external displays. If the DXGI output adapter is attached to an external display then CheckPresentDurationSupport will return (0, 0) for upper and lower bounds, indicating that the device does not support seamless refresh rate changes.
Custom refresh rates can be used when displaying video with a dynamic frame rate. However, the refresh rate change should be kept imperceptible to the user. A best practice for keeping the transition imperceptible is to set the custom frame rate only if the app determines it can present at that rate for at least 5 seconds.
-Queries the system for a
Requests a custom presentation duration (custom refresh rate).
-Queries the system for a
This method returns
Requests a custom presentation duration (custom refresh rate).
-The custom presentation duration, specified in hundreds of nanoseconds.
This method returns
Queries the graphics driver for a supported frame present duration corresponding to a custom refresh rate.
-Indicates the frame duration to check. This value is the duration of one frame at the desired refresh rate, specified in hundreds of nanoseconds. For example, set this field to 166667 (that is, 10,000,000 / 60, rounded) to check for 60 Hz refresh rate support.
A variable that will be set to the closest supported frame present duration that's smaller than the requested value, or zero if the device does not support any lower duration.
A variable that will be set to the closest supported frame present duration that's larger than the requested value, or zero if the device does not support any higher duration.
This method returns
If the DXGI output adapter does not support custom refresh rates (for example, an external display) then the display driver will set upper and lower bounds to (0, 0).
-Describes an adapter (or video card) by using DXGI 1.0.
-The
A string that contains the adapter description. On feature level 9 graphics hardware, GetDesc returns "Software Adapter" for the description string.
The PCI ID of the hardware vendor. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware vendor.
The PCI ID of the hardware device. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware device.
The PCI ID of the sub system. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the sub system.
The PCI ID of the revision number of the adapter. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
Describes an adapter (or video card) using DXGI 1.1.
-The
A string that contains the adapter description. On feature level 9 graphics hardware, GetDesc1 returns "Software Adapter" for the description string.
The PCI ID of the hardware vendor. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the hardware vendor.
The PCI ID of the hardware device. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the hardware device.
The PCI ID of the sub system. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the sub system.
The PCI ID of the revision number of the adapter. On feature level 9 graphics hardware, GetDesc1 returns zeros for the PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
A value of the
Describes an adapter (or video card) that uses Microsoft DirectX Graphics Infrastructure (DXGI) 1.2.
-The
A string that contains the adapter description.
The PCI ID of the hardware vendor.
The PCI ID of the hardware device.
The PCI ID of the sub system.
The PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
A value of the
A value of the
A value of the
Describes an adapter (or video card) by using DXGI 1.0.
-The
A string that contains the adapter description. On feature level 9 graphics hardware, GetDesc returns "Software Adapter" for the description string.
The PCI ID of the hardware vendor. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware vendor.
The PCI ID of the hardware device. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the hardware device.
The PCI ID of the sub system. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the sub system.
The PCI ID of the revision number of the adapter. On feature level 9 graphics hardware, GetDesc returns zeros for the PCI ID of the revision number of the adapter.
The number of bytes of dedicated video memory that are not shared with the CPU.
The number of bytes of dedicated system memory that are not shared with the CPU. This memory is allocated from available system memory at boot time.
The number of bytes of shared system memory. This is the maximum value of system memory that may be consumed by the adapter during operation. Any incidental memory consumed by the driver as it manages and uses video memory is additional.
A unique value that identifies the adapter. See
Used with
Describes timing and presentation statistics for a frame.
-You initialize the
You can only use
The values in the PresentCount and PresentRefreshCount members indicate information about when a frame was presented on the display screen. You can use these values to determine whether a glitch occurred. The values in the SyncRefreshCount and SyncQPCTime members indicate timing information that you can use for audio and video synchronization or very precise animation. If the swap chain draws in full-screen mode, these values are based on when the computer booted. If the swap chain draws in windowed mode, these values are based on when the swap chain is created.
-A value that represents the running total count of times that an image was presented to the monitor since the computer booted.
Note: The number of times that an image was presented to the monitor is not necessarily the same as the number of times that you called
A value that represents the running total count of v-blanks at which the last image was presented to the monitor and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the running total count of v-blanks when the scheduler last sampled the machine time by calling QueryPerformanceCounter and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the high-resolution performance counter timer. This value is the same as the value returned by the QueryPerformanceCounter function.
Reserved. Always returns 0.
Used to verify system approval for the app's custom present duration (custom refresh rate). Approval should be continuously verified on a frame-by-frame basis.
-This structure is used with the GetFrameStatisticsMedia method.
-A value that represents the running total count of times that an image was presented to the monitor since the computer booted.
Note: The number of times that an image was presented to the monitor is not necessarily the same as the number of times that you called
A value that represents the running total count of v-blanks at which the last image was presented to the monitor and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the running total count of v-blanks when the scheduler last sampled the machine time by calling QueryPerformanceCounter and that have happened since the computer booted (for windowed mode, since the swap chain was created).
A value that represents the high-resolution performance counter timer. This value is the same as the value returned by the QueryPerformanceCounter function.
Reserved. Always returns 0.
A value indicating the composition presentation mode. This value is used to determine whether the app should continue to use the decode swap chain. See
If the system approves an app's custom present duration request, this field is set to the approved custom present duration.
If the app's custom present duration request is not approved, this field is set to zero.
Controls the settings of a gamma curve.
-The
For info about using gamma correction, see Using gamma correction.
-A
A
An array of
Controls the gamma capabilities of an adapter.
-To get a list of the capabilities for controlling gamma correction, call
For info about using gamma correction, see Using gamma correction.
-True if scaling and offset operations are supported during gamma correction; otherwise, false.
A value describing the maximum range of the control-point positions.
A value describing the minimum range of the control-point positions.
A value describing the number of control points in the array.
An array of values describing control points; the maximum length of control points is 1025.
Describes the 10-bit display metadata, and is usually used for video. This is used to adjust the output to best match a display's capabilities.
-The X and Y coordinates of the parameters are the xy chromaticity coordinates in the CIE 1931 color space. The values are normalized to 50000, so to get a value between 0.0 and 1.0, divide by 50000.
This structure is used in conjunction with the SetHDRMetaData method.
-The chromaticity coordinates of the 1.0 red value. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The chromaticity coordinates of the 1.0 green value. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The chromaticity coordinates of the 1.0 blue value. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The chromaticity coordinates of the white point. Index 0 contains the X coordinate and index 1 contains the Y coordinate.
The maximum number of nits of the display used to master the content. Units are 0.0001 nit, so if the value is 1 nit, the value should be 10,000.
The minimum number of nits (in units of 0.00001 nit) of the display used to master the content.
The maximum nit value (in units of 0.00001 nit) used anywhere in the content.
The per-frame average of the maximum nit values (in units of 0.00001 nit).
Describes a JPEG AC huffman table.
-The number of codes for each code length.
The Huffman code values, in order of increasing code length.
Describes a JPEG DC huffman table.
-The number of codes for each code length.
The Huffman code values, in order of increasing code length.
Describes a JPEG quantization table.
-An array of bytes containing the elements of the quantization table.
Describes a mapped rectangle that is used to access a surface.
-The
A value that describes the width, in bytes, of the surface.
A reference to the image buffer of the surface.
Describes a display mode.
-This structure is used by the GetDisplayModeList and FindClosestMatchingMode methods.
The following format values are valid for display modes and when you create a bit-block transfer (bitblt) model swap chain. The valid values depend on the feature level that you are working with.
Feature level >= 9.1
Feature level >= 10.0
Feature level >= 11.0
You can pass one of these format values to
Starting with Windows 8 for a flip model swap chain (that is, a swap chain that has the
Because of the relaxed render target creation rules that Direct3D 11 has for back buffers, applications can create a
A value that describes the resolution width. If you specify the width as zero when you call the
A value describing the resolution height. If you specify the height as zero when you call the
A
A
A member of the
A member of the
Describes a display mode and whether the display mode supports stereo.
-This structure is used by the GetDisplayModeList1 and FindClosestMatchingMode1 methods.
-A value that describes the resolution width.
A value that describes the resolution height.
A
A
A
A
Specifies whether the full-screen display mode is stereo. TRUE if stereo; otherwise,
Describes an output or physical connection between the adapter (video card) and a device.
-The
A string that contains the name of the output device.
A
True if the output is attached to the desktop; otherwise, false.
A member of the
An
Describes an output or physical connection between the adapter (video card) and a device.
-The
A string that contains the name of the output device.
A
True if the output is attached to the desktop; otherwise, false.
A member of the
An
The
This structure is used by GetDesc.
-The
A non-zero LastMouseUpdateTime indicates an update to either a mouse reference position or a mouse reference position and shape. That is, the mouse reference position is always valid for a non-zero LastMouseUpdateTime; however, the application must check the value of the PointerShapeBufferSize member to determine whether the shape was updated too.
If only the reference was updated (that is, the desktop image was not updated), the AccumulatedFrames, TotalMetadataBufferSize, and LastPresentTime members are set to zero.
An AccumulatedFrames value of one indicates that the application completed processing the last frame before a new desktop image was presented. If the AccumulatedFrames value is greater than one, more desktop image updates have occurred while the application processed the last desktop update. In this situation, the operating system accumulated the update regions. For more information about desktop updates, see Desktop Update Data.
A non-zero TotalMetadataBufferSize indicates the total size of the buffers that are required to store all the desktop update metadata. An application cannot determine the size of each type of metadata. The application must call the
The time stamp of the last update of the desktop image. The operating system calls the QueryPerformanceCounter function to obtain the value. A zero value indicates that the desktop image was not updated since an application last called the
The time stamp of the last update to the mouse. The operating system calls the QueryPerformanceCounter function to obtain the value. A zero value indicates that the position or shape of the mouse was not updated since an application last called the
The number of frames that the operating system accumulated in the desktop image surface since the calling application processed the last desktop image. For more information about this number, see Remarks.
Specifies whether the operating system accumulated updates by coalescing dirty regions. Therefore, the dirty regions might contain unmodified pixels. TRUE if dirty regions were accumulated; otherwise,
Specifies whether the desktop image might contain protected content that was already blacked out in the desktop image. TRUE if protected content was already blacked; otherwise,
A
Size in bytes of the buffers to store all the desktop update metadata for this frame. For more information about this size, see Remarks.
Size in bytes of the buffer to hold the new pixel data for the mouse shape. For more information about this size, see Remarks.
The
This structure is used by GetFrameMoveRects.
-The starting position of a rectangle.
The target region to which to move a rectangle.
The
The Position member is valid only if the Visible member's value is set to TRUE.
-The position of the hardware cursor relative to the top-left of the adapter output.
Specifies whether the hardware cursor is visible. TRUE if visible; otherwise,
The
An application draws the cursor shape with the top-left-hand corner drawn at the position that the Position member of the
An application calls the
A
The width in pixels of the mouse cursor.
The height in scan lines of the mouse cursor.
The width in bytes of the mouse cursor.
The position of the cursor's hot spot relative to its upper-left pixel. An application does not use the hot spot when it determines where to draw the cursor shape.
Describes information about present that helps the operating system optimize presentation.
-This structure is used by the Present1 method.
The scroll rectangle and the list of dirty rectangles could overlap. In this situation, the dirty rectangles take priority. Applications can then have pieces of dynamic content on top of a scrolled area. For example, an application could scroll a page and play video at the same time.
The following diagram and coordinates illustrate this example.
DirtyRectsCount = 2
- pDirtyRects[ 0 ] = { 10, 30, 40, 50 } // Video
- pDirtyRects[ 1 ] = { 0, 70, 50, 80 } // New line
- *pScrollRect = { 0, 0, 50, 70 }
- *pScrollOffset = { 0, -10 }
-
Parts of the previous frame and content that the application renders are combined to produce the final frame that the operating system presents on the display screen. Most of the window is scrolled from the previous frame. The application must update the video frame with the new chunk of content that appears due to scrolling.
The dashed rectangle shows the scroll rectangle in the current frame. The scroll rectangle is specified by the pScrollRect member. The arrow shows the scroll offset. The scroll offset is specified by the pScrollOffset member. Filled rectangles show dirty rectangles that the application updated with new content. The filled rectangles are specified by the DirtyRectsCount and pDirtyRects members.
The scroll rectangle and offset are not supported for the
The actual implementation of composition and necessary bitblts is different for the bitblt model and the flip model. For more info about these models, see DXGI Flip Model.
For more info about the flip-model swap chain and optimizing presentation, see Enhancing presentation with the flip model, dirty rectangles, and scrolled areas.
-The number of updated rectangles that you update in the back buffer for the presented frame. The operating system uses this information to optimize presentation. You can set this member to 0 to indicate that you update the whole frame.
A list of updated rectangles that you update in the back buffer for the presented frame. An application must update every single pixel in each rectangle that it reports to the runtime; the application cannot assume that the pixels are saved from the previous frame. For more information about updating dirty rectangles, see Remarks. You can set this member to
A reference to the scrolled rectangle. The scrolled rectangle is the rectangle of the previous frame from which the runtime bit-block transfers (bitblts) content. The runtime also uses the scrolled rectangle to optimize presentation in terminal server and indirect display scenarios.
The scrolled rectangle also describes the destination rectangle, that is, the region on the current frame that is filled with scrolled content. You can set this member to
A reference to the offset of the scrolled area that goes from the source rectangle (of previous frame) to the destination rectangle (of current frame). You can set this member to
Describes the current video memory budgeting parameters.
-Use this structure with QueryVideoMemoryInfo.
Refer to the remarks for
Specifies the OS-provided video memory budget, in bytes, that the application should target. If CurrentUsage is greater than Budget, the application may incur stuttering or performance penalties due to background activity by the OS to provide other applications with a fair usage of video memory.
Specifies the application's current video memory usage, in bytes.
The amount of video memory, in bytes, that the application has available for reservation. To reserve this video memory, the application should call
The amount of video memory, in bytes, that is reserved by the application. The OS uses the reservation as a hint to determine the application's minimum working set. Applications should attempt to ensure that their video memory usage can be trimmed to meet this requirement.
Represents a rational number.
-This structure is a member of the
The
An unsigned integer value representing the top of the rational number.
An unsigned integer value representing the bottom of the rational number.
Describes multi-sampling parameters for a resource.
-This structure is a member of the
The default sampler mode, with no anti-aliasing, has a count of 1 and a quality level of 0.
If multi-sample antialiasing is being used, all bound render targets and depth buffers must have the same sample counts and quality levels.
Differences between Direct3D 10.0 and Direct3D 10.1 and between Direct3D 10.0 and Direct3D 11: Direct3D 10.1 has defined two standard quality levels: D3D10_STANDARD_MULTISAMPLE_PATTERN and D3D10_CENTER_MULTISAMPLE_PATTERN in the D3D10_STANDARD_MULTISAMPLE_QUALITY_LEVELS enumeration in D3D10_1.h. Direct3D 11 has defined two standard quality levels:
-The number of multisamples per pixel.
The image quality level. The higher the quality, the lower the performance. The valid range is between zero and one less than the level returned by ID3D10Device::CheckMultisampleQualityLevels for Direct3D 10 or
For Direct3D 10.1 and Direct3D 11, you can use two special quality level values. For more information about these quality level values, see Remarks.
Represents a handle to a shared resource.
-To create a shared surface, pass a shared-resource handle into the
A handle to a shared resource.
Describes a surface.
-This structure is used by the GetDesc and CreateSurface methods.
-A value describing the surface width.
A value describing the surface height.
A member of the
A member of the
Describes a swap chain.
-This structure is used by the GetDesc and CreateSwapChain methods.
In full-screen mode, there is a dedicated front buffer; in windowed mode, the desktop is the front buffer.
If you create a swap chain with one buffer, specifying
For performance information about flipping swap-chain buffers in full-screen application, see Full-Screen Application Performance Hints.
-A
A
A member of the DXGI_USAGE enumerated type that describes the surface usage and CPU access options for the back buffer. The back buffer can be used for shader input or render-target output.
A value that describes the number of buffers in the swap chain. When you call
An
A Boolean value that specifies whether the output is in windowed mode. TRUE if the output is in windowed mode; otherwise,
We recommend that you create a windowed swap chain and allow the end user to change the swap chain to full screen through
For more information about choosing windowed versus full screen, see
A member of the
A member of the
Describes a swap chain.
-This structure is used by the CreateSwapChainForHwnd, CreateSwapChainForCoreWindow, CreateSwapChainForComposition, CreateSwapChainForCompositionSurfaceHandle, and GetDesc1 methods.
Note: You cannot cast a
In full-screen mode, there is a dedicated front buffer; in windowed mode, the desktop is the front buffer.
For a flip-model swap chain (that is, a swap chain that has the
A value that describes the resolution width. If you specify the width as zero when you call the
A value that describes the resolution height. If you specify the height as zero when you call the
A
Specifies whether the full-screen display mode or the swap-chain back buffer is stereo. TRUE if stereo; otherwise,
A
A DXGI_USAGE-typed value that describes the surface usage and CPU access options for the back buffer. The back buffer can be used for shader input or render-target output.
A value that describes the number of buffers in the swap chain. When you create a full-screen swap chain, you typically include the front buffer in this value.
A
A
A
A combination of
Describes full-screen mode for a swap chain.
-This structure is used by the CreateSwapChainForHwnd and GetFullscreenDesc methods.
-A
A member of the
A member of the
A Boolean value that specifies whether the swap chain is in windowed mode. TRUE if the swap chain is in windowed mode; otherwise,
The blend-state interface holds a description for blending state that you can bind to the output-merger stage.
-Blending applies a simple function to combine output values from a pixel shader with data in a render target. You have control over how the pixels are blended by using a predefined set of blending operations and preblending operations.
To create a blend-state object, call
Gets the description for blending state that you used to create the blend-state object.
-You use the description for blending state in a call to the
Gets the description for blending state that you used to create the blend-state object.
-A reference to a
You use the description for blending state in a call to the
The blend-state interface holds a description for blending state that you can bind to the output-merger stage. This blend-state interface supports logical operations as well as blending operations.
-Blending applies a simple function to combine output values from a pixel shader with data in a render target. You have control over how the pixels are blended by using a predefined set of blending operations and preblending operations.
To create a blend-state object, call
Gets the description for blending state that you used to create the blend-state object.
-You use the description for blending state in a call to the
Gets the description for blending state that you used to create the blend-state object.
-A reference to a
You use the description for blending state in a call to the
Describes the blend state that you use in a call to
Here are the default values for blend state.
State | Default Value |
---|---|
AlphaToCoverageEnable | |
IndependentBlendEnable | |
RenderTarget[0].BlendEnable | |
RenderTarget[0].SrcBlend | |
RenderTarget[0].DestBlend | |
RenderTarget[0].BlendOp | |
RenderTarget[0].SrcBlendAlpha | |
RenderTarget[0].DestBlendAlpha | |
RenderTarget[0].BlendOpAlpha | |
RenderTarget[0].RenderTargetWriteMask | |
Note: If the driver type is set to
Describes the blend state that you use in a call to
Here are the default values for blend state.
State | Default Value |
---|---|
AlphaToCoverageEnable | |
IndependentBlendEnable | |
RenderTarget[0].BlendEnable | |
RenderTarget[0].LogicOpEnable | |
RenderTarget[0].SrcBlend | |
RenderTarget[0].DestBlend | |
RenderTarget[0].BlendOp | |
RenderTarget[0].SrcBlendAlpha | |
RenderTarget[0].DestBlendAlpha | |
RenderTarget[0].BlendOpAlpha | |
RenderTarget[0].LogicOp | |
RenderTarget[0].RenderTargetWriteMask | |
If the driver type is set to
When you set the LogicOpEnable member of the first element of the RenderTarget array (RenderTarget[0]) to TRUE, you must also set the BlendEnable member of RenderTarget[0] to
A buffer interface accesses a buffer resource, which is unstructured memory. Buffers typically store vertex or index data.
-There are three types of buffers: vertex, index, or a shader-constant buffer. Create a buffer resource by calling
A buffer must be bound to the pipeline before it can be accessed. Buffers can be bound to the input-assembler stage by calls to
Buffers can be bound to multiple pipeline stages simultaneously for reading. A buffer can also be bound to a single pipeline stage for writing; however, the same buffer cannot be bound for reading and writing simultaneously.
-Get the properties of a buffer resource.
-Get the properties of a buffer resource.
-Pointer to a resource description (see
Describes a buffer resource.
-This structure is used by
In addition to this structure, you can also use the CD3D11_BUFFER_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a buffer description.
If the bind flag is
Size of the buffer in bytes.
Identify how the buffer is expected to be read from and written to. Frequency of update is a key factor. The most common value is typically
Identify how the buffer will be bound to the pipeline. Flags (see
CPU access flags (see
Miscellaneous flags (see
The size of each element in the buffer structure (in bytes) when the buffer represents a structured buffer. For more info about structured buffers, see Structured Buffer.
The size value in StructureByteStride must match the size of the format that you use for views of the buffer. For example, if you use a shader resource view (SRV) to read a buffer in a pixel shader, the SRV format size must match the size value in StructureByteStride.
This interface encapsulates an HLSL class.
-This interface is created by calling
Gets the
For more information about using the
Windows Phone 8: This API is supported.
-Gets a description of the current HLSL class.
- For more information about using the
An instance is not restricted to being used for a single type in a single shader. An instance is flexible and can be used for any shader that used the same type name or instance name when the instance was generated.
An instance does not replace the importance of reflection for a particular shader since a gotten instance will not know its slot location and a created instance only specifies a type name.
Windows Phone 8: This API is supported.
- Gets the
For more information about using the
Windows Phone 8: This API is supported.
-Gets a description of the current HLSL class.
- A reference to a
For more information about using the
An instance is not restricted to being used for a single type in a single shader. An instance is flexible and can be used for any shader that used the same type name or instance name when the instance was generated.
An instance does not replace the importance of reflection for a particular shader since a gotten instance will not know its slot location and a created instance only specifies a type name.
Windows Phone 8: This API is supported.
-Gets the instance name of the current HLSL class.
-The instance name of the current HLSL class.
The length of the pInstanceName parameter.
GetInstanceName will return a valid name only for instances acquired using
For more information about using the
Windows Phone 8: This API is supported.
-Gets the type of the current HLSL class.
-Type of the current HLSL class.
The length of the pTypeName parameter.
GetTypeName will return a valid name only for instances acquired using
For more information about using the
Windows Phone 8: This API is supported.
-This interface encapsulates an HLSL dynamic linkage.
-A class linkage object can hold up to 64K gotten instances. A gotten instance is a handle that references a variable name in any shader that is created with that linkage object. When you create a shader with a class linkage object, the runtime gathers these instances and stores them in the class linkage object. For more information about how a class linkage object is used, see Storing Variables and Types for Shaders to Share.
An
Gets the class-instance object that represents the specified HLSL class.
-The name of a class for which to get the class instance.
The index of the class instance.
The address of a reference to an
For more information about using the
A class instance must have at least 1 data member in order to be available for the runtime to use with
Windows Phone 8: This API is supported.
-Initializes a class-instance object that represents an HLSL class instance.
-The type name of a class to initialize.
Identifies the constant buffer that contains the class data.
The four-component vector offset from the start of the constant buffer where the class data will begin. Consequently, this is not a byte offset.
The texture slot for the first texture; there may be multiple textures following the offset.
The sampler slot for the first sampler; there may be multiple samplers following the offset.
The address of a reference to an
Returns
Instances can be created (or gotten) before or after a shader is created. Use the same shader linkage object to acquire a class instance and create the shader the instance is going to be used in.
For more information about using the
Windows Phone 8: This API is supported.
-A compute-shader interface manages an executable program (a compute shader) that controls the compute-shader stage.
-The compute-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a compute-shader interface, call
This interface is defined in D3D11.h.
-This interface encapsulates methods for measuring GPU performance.
-A counter can be created with
This is a derived class of
Counter data is gathered by issuing an
Counters are best suited for profiling.
For a list of the types of performance counters, see
Get a counter description.
-Get a counter description.
-Pointer to a counter description (see
The depth-stencil-state interface holds a description for depth-stencil state that you can bind to the output-merger stage.
-To create a depth-stencil-state object, call
Gets the description for depth-stencil state that you used to create the depth-stencil-state object.
-You use the description for depth-stencil state in a call to the
Gets the description for depth-stencil state that you used to create the depth-stencil-state object.
-A reference to a
You use the description for depth-stencil state in a call to the
Describes depth-stencil state.
-Pass a reference to
Depth-stencil state controls how depth-stencil testing is performed by the output-merger stage.
The following table shows the default values of depth-stencil states.
State | Default Value |
---|---|
DepthEnable | TRUE |
DepthWriteMask | |
DepthFunc | |
StencilEnable | |
StencilReadMask | D3D11_DEFAULT_STENCIL_READ_MASK |
StencilWriteMask | D3D11_DEFAULT_STENCIL_WRITE_MASK |
FrontFace.StencilFunc and BackFace.StencilFunc | |
FrontFace.StencilDepthFailOp and BackFace.StencilDepthFailOp | |
FrontFace.StencilPassOp and BackFace.StencilPassOp | |
FrontFace.StencilFailOp and BackFace.StencilFailOp | |
The formats that support stenciling are
Enable depth testing.
Identify a portion of the depth-stencil buffer that can be modified by depth data (see
A function that compares depth data against existing depth data. The function options are listed in
Enable stencil testing.
Identify a portion of the depth-stencil buffer for reading stencil data.
Identify a portion of the depth-stencil buffer for writing stencil data.
Identify how to use the results of the depth test and the stencil test for pixels whose surface normal is facing towards the camera (see
Identify how to use the results of the depth test and the stencil test for pixels whose surface normal is facing away from the camera (see
A depth-stencil-view interface accesses a texture resource during depth-stencil testing.
-To create a depth-stencil view, call
To bind a depth-stencil view to the pipeline, call
A depth-stencil-view interface accesses a texture resource during depth-stencil testing.
-To create a depth-stencil view, call
To bind a depth-stencil view to the pipeline, call
A depth-stencil-view interface accesses a texture resource during depth-stencil testing.
-To create a depth-stencil view, call
To bind a depth-stencil view to the pipeline, call
The device interface represents a virtual adapter; it is used to create resources.
- A device is created using
Windows Phone 8: This API is supported.
- IDXGIResource* pOtherResource(NULL);
- hr = pOtherDeviceResource->QueryInterface( __uuidof(IDXGIResource), (void**)&pOtherResource );
- HANDLE sharedHandle;
- pOtherResource->GetSharedHandle(&sharedHandle);
- The only resources that can be shared are 2D non-mipmapped textures. To share a resource between a Direct3D 9 device and a Direct3D 10 device, the texture must have been created using the pSharedHandle argument of CreateTexture. The shared Direct3D 9 handle is then passed to OpenSharedResource in the hResource argument. The following code illustrates the method calls involved.
- sharedHandle = NULL; // must be set to NULL to create, can use a valid handle here to open in D3D9
- pDevice9->CreateTexture(..., pTex2D_9, &sharedHandle);
- ...
- pDevice10->OpenSharedResource(sharedHandle, __uuidof(ID3D10Resource), (void**)(&tempResource10));
- tempResource10->QueryInterface(__uuidof(ID3D10Texture2D), (void**)(&pTex2D_10));
- tempResource10->Release();
- // now use pTex2D_10 with pDevice10
- Textures being shared from D3D9 to D3D10 have the following restrictions: textures must be 2D; only 1 mip level is allowed; the texture must have default usage; the texture must be write only; MSAA textures are not allowed; bind flags must have SHADER_RESOURCE and RENDER_TARGET set; only R10G10B10A2_UNORM, R16G16B16A16_FLOAT and R8G8B8A8_UNORM formats are allowed. If a shared texture is updated on one device
Gets information about the features
Gets information about the features
Gets information about whether the driver supports the non-powers-of-2-unconditionally feature. TRUE for hardware at Direct3D 10 and higher feature levels.
-Gets information about whether a rendering device batches rendering commands and performs multipass rendering into tiles or bins over a render area. Certain API usage patterns that are fine for tile-based deferred renderers (TBDRs) can perform worse on non-TBDRs and vice versa. Applications that are careful about rendering can be friendly to both TBDR and non-TBDR architectures.
-Creates a device that uses Direct3D 11 functionality in Direct3D 12, specifying a pre-existing D3D12 device to use for D3D11 interop.
- Specifies a pre-existing D3D12 device to use for D3D11 interop. May not be
Any of those documented for D3D11CreateDeviceAndSwapChain. Specifies which runtime layers to enable (see the
An array of any of the following:
The first feature level which is less than or equal to the D3D12 device's feature level will be used to perform D3D11 validation. Creation will fail if no acceptable feature levels are provided. Providing
An array of unique queues for D3D11On12 to use. Valid queue types: 3D command queue.
The function signature PFN_D3D11ON12_CREATE_DEVICE is provided as a typedef, so that you can use dynamic linking techniques (GetProcAddress) instead of statically linking.
-Gets the feature level of the hardware device.
-Feature levels determine the capabilities of your device.
-Get the flags used during the call to create the device with
Get the reason why the device was removed.
-Gets an immediate context, which can play back command lists.
-The GetImmediateContext method returns an
The GetImmediateContext method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Gets or sets the exception-mode flags.
-An exception-mode flag is used to elevate an error condition to a non-continuable exception.
-Creates a buffer (vertex buffer, index buffer, or shader-constant buffer).
- A reference to a
A reference to a
If you don't pass anything to pInitialData, the initial content of the memory for the buffer is undefined. In this case, you need to write the buffer content some other way before the resource is read.
Address of a reference to the
This method returns E_OUTOFMEMORY if there is insufficient memory to create the buffer. See Direct3D 11 Return Codes for other possible return values.
For example code, see How to: Create a Vertex Buffer, How to: Create an Index Buffer or How to: Create a Constant Buffer.
For a constant buffer (BindFlags of
The Direct3D 11.1 runtime, which is available on Windows 8 and later operating systems, provides the following new functionality for CreateBuffer:
You can create a constant buffer that is larger than the maximum constant buffer size that a shader can access (4096 four-component 32-bit constants, or 64 KB). When you bind the constant buffer to the pipeline (for example, via PSSetConstantBuffers or PSSetConstantBuffers1), you can define a range of the buffer that the shader can access that fits within the 4096-constant limit.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher. On existing drivers that are implemented to feature level 10 and higher, a call to CreateBuffer to request a constant buffer that is larger than 4096 fails.
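The arithmetic behind the limit described above can be sketched as follows; the helper names are illustrative, not part of the D3D11 API:

```cpp
#include <cassert>
#include <cstddef>

// Each shader constant is one four-component 32-bit vector: 16 bytes.
constexpr std::size_t kBytesPerConstant = 4 * sizeof(float);  // 16
constexpr std::size_t kMaxConstantsPerBind = 4096;            // shader-visible window

// Largest constant-buffer range a shader can see through one binding.
constexpr std::size_t MaxVisibleBytes() {
    return kMaxConstantsPerBind * kBytesPerConstant;          // 65536 bytes = 64 KB
}

// Number of 4096-constant windows needed to cover a larger buffer
// (the range passed when binding is specified in constants, not bytes).
constexpr std::size_t WindowsNeeded(std::size_t totalConstants) {
    return (totalConstants + kMaxConstantsPerBind - 1) / kMaxConstantsPerBind;
}
```

A buffer of exactly 4096 constants fits in one binding; anything larger must be exposed to the shader one window at a time.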
-Creates an array of 1D textures.
-If the method succeeds, the return code is
CreateTexture1D creates a 1D texture resource, which can contain a number of 1D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
For a texture with a width of 32 and a full mipmap chain, the pInitialData array has the following 6 elements:
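As a sketch of why the array has 6 elements: a full mip chain halves the largest dimension until it reaches 1 (MipLevels is an illustrative helper, not a D3D11 API):

```cpp
// Full mip chain length: halve the largest dimension until it reaches 1.
// For width 32 the chain is 32, 16, 8, 4, 2, 1 -> 6 levels, hence the
// 6 pInitialData elements mentioned above.
unsigned MipLevels(unsigned maxDimension) {
    unsigned levels = 1;
    while (maxDimension > 1) {
        maxDimension /= 2;
        ++levels;
    }
    return levels;
}
```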
Create an array of 2D textures.
-If the method succeeds, the return code is
CreateTexture2D creates a 2D texture resource, which can contain a number of 2D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
For a 32 x 32 texture with a full mipmap chain, the pInitialData array has the following 6 elements:
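For texture arrays, subresources are ordered mip-major within each array slice; this mirrors the D3D11CalcSubresource helper, sketched here as a plain function:

```cpp
// Mirrors D3D11CalcSubresource: the index of a subresource for a given
// mip level and array slice, with mips laid out contiguously per slice.
unsigned CalcSubresource(unsigned mipSlice, unsigned arraySlice, unsigned mipLevels) {
    return mipSlice + arraySlice * mipLevels;
}
```

For the 6-mip 2D texture above, mip 2 of array slice 1 is subresource 8.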
Create a single 3D texture.
-If the method succeeds, the return code is
CreateTexture3D creates a 3D texture resource, which can contain a number of 3D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of
Each element of pInitialData provides all of the slices that are defined for a given miplevel. For example, for a 32 x 32 x 4 volume texture with a full mipmap chain, the array has the following 6 elements:
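Since each element of pInitialData covers all slices of one mip level, the slice count per level is the initial depth halved each level and clamped to 1 (illustrative helper, assuming the 32 x 32 x 4 texture above):

```cpp
// Depth (number of slices) of a 3D texture at a given mip level:
// halved per level, never below 1. For a 32 x 32 x 4 texture the six
// mip levels hold 4, 2, 1, 1, 1, 1 slices respectively.
unsigned DepthAtMip(unsigned depth0, unsigned mip) {
    unsigned d = depth0 >> mip;
    return d ? d : 1;
}
```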
Create a shader-resource view for accessing data in a resource.
- Pointer to the resource that will serve as input to a shader. This resource must have been created with the
Pointer to a shader-resource view description (see
Address of a reference to an
This method returns one of the following Direct3D 11 Return Codes.
A resource is made up of one or more subresources; a view identifies which subresources to allow the pipeline to access. In addition, each resource is bound to the pipeline using a view. A shader-resource view is designed to bind any buffer or texture resource to the shader stages using the following API methods:
Because a view is fully typed, this means that typeless resources become fully typed when bound to the pipeline.
Note: To successfully create a shader-resource view from a typeless buffer (for example,
The Direct3D 11.1 runtime, which is available starting with Windows 8, allows you to use CreateShaderResourceView for the following new purpose.
You can create shader-resource views of video resources so that Direct3D shaders can process those shader-resource views. These video resources are either Texture2D or Texture2DArray. The value in the ViewDimension member of the
The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity. Therefore, the runtime does not allow an application to read from luma while the application simultaneously renders to chroma in the same surface even though the hardware might allow these simultaneous operations.
Windows Phone 8: This API is supported.
-Creates a view for accessing an unordered access resource.
-This method returns one of the Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, allows you to use CreateUnorderedAccessView for the following new purpose.
You can create unordered-access views of video resources so that Direct3D shaders can process those unordered-access views. These video resources are either Texture2D or Texture2DArray. The value in the ViewDimension member of the
The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity. Therefore, the runtime does not allow an application to read from luma while the application simultaneously renders to chroma in the same surface even though the hardware might allow these simultaneous operations.
-Creates a render-target view for accessing resource data.
-Pointer to a
Pointer to a
Address of a reference to an
This method returns one of the Direct3D 11 Return Codes.
A render-target view can be bound to the output-merger stage by calling
The Direct3D 11.1 runtime, which is available starting with Windows 8, allows you to use CreateRenderTargetView for the following new purpose.
You can create render-target views of video resources so that Direct3D shaders can process those render-target views. These video resources are either Texture2D or Texture2DArray. The value in the ViewDimension member of the
The runtime read+write conflict prevention logic (which stops a resource from being bound as an SRV and RTV or UAV at the same time) treats views of different parts of the same video surface as conflicting for simplicity. Therefore, the runtime does not allow an application to read from luma while the application simultaneously renders to chroma in the same surface even though the hardware might allow these simultaneous operations.
-Create a depth-stencil view for accessing resource data.
-Pointer to the resource that will serve as the depth-stencil surface. This resource must have been created with the
Pointer to a depth-stencil-view description (see
Address of a reference to an
This method returns one of the following Direct3D 11 Return Codes.
A depth-stencil view can be bound to the output-merger stage by calling
Create an input-layout object to describe the input-buffer data for the input-assembler stage.
- An array of the input-assembler stage input data types; each type is described by an element description (see
The number of input-data types in the array of input-elements.
A reference to the compiled shader. The compiled shader code contains an input signature which is validated against the array of elements. See remarks.
Size of the compiled shader.
A reference to the input-layout object created (see
If the method succeeds, the return code is
After creating an input layout object, it must be bound to the input-assembler stage before calling a draw API.
Once an input-layout object is created from a shader signature, the input-layout object can be reused with any other shader that has an identical input signature (semantics included). This can simplify the creation of input-layout objects when you are working with many shaders with identical inputs.
If a data type in the input-layout declaration does not match the data type in a shader-input signature, CreateInputLayout will generate a warning during compilation. The warning is simply to call attention to the fact that the data may be reinterpreted when read from a register. You may either disregard this warning (if reinterpretation is intentional) or make the data types match in both declarations to eliminate the warning.
Windows Phone 8: This API is supported.
-Create a vertex-shader object from a compiled shader.
-A reference to the compiled shader.
Size of the compiled vertex shader.
A reference to a class linkage interface (see
Address of a reference to a
This method returns one of the Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateVertexShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a vertex shader, you can successfully pass the compiled vertex shader to pShaderBytecode. That is, the call to CreateVertexShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateVertexShader fails. CreateVertexShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Create a geometry shader.
-A reference to the compiled shader.
Size of the compiled geometry shader.
A reference to a class linkage interface (see
Address of a reference to a
This method returns one of the following Direct3D 11 Return Codes.
After it is created, the shader can be set to the device by calling
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateGeometryShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a geometry shader, you can successfully pass the compiled geometry shader to pShaderBytecode. That is, the call to CreateGeometryShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateGeometryShader fails. CreateGeometryShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Creates a geometry shader that can write to streaming output buffers.
-A reference to the compiled geometry shader for a standard geometry shader plus stream output. For info on how to get this reference, see Getting a Pointer to a Compiled Shader.
To create the stream output without using a geometry shader, pass a reference to the output signature for the prior stage. To obtain this output signature, call the
Size of the compiled geometry shader.
Pointer to a
The number of entries in the stream output declaration ( ranges from 0 to
An array of buffer strides; each stride is the size of an element for that buffer.
The number of strides (or buffers) in pBufferStrides (ranges from 0 to
The index number of the stream to be sent to the rasterizer stage (ranges from 0 to
A reference to a class linkage interface (see
Address of a reference to an
This method returns one of the Direct3D 11 Return Codes.
For more info about using CreateGeometryShaderWithStreamOutput, see Create a Geometry-Shader Object with Stream Output.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateGeometryShaderWithStreamOutput.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a geometry shader, you can successfully pass the compiled geometry shader to pShaderBytecode. That is, the call to CreateGeometryShaderWithStreamOutput succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateGeometryShaderWithStreamOutput fails. CreateGeometryShaderWithStreamOutput also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Windows Phone 8: This API is supported.
-Create a pixel shader.
-A reference to the compiled shader.
Size of the compiled pixel shader.
A reference to a class linkage interface (see
Address of a reference to a
This method returns one of the following Direct3D 11 Return Codes.
After creating the pixel shader, you can set it to the device using
Create a hull shader.
-This method returns one of the Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateHullShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a hull shader, you can successfully pass the compiled hull shader to pShaderBytecode. That is, the call to CreateHullShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateHullShader fails. CreateHullShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Create a domain shader.
-This method returns one of the following Direct3D 11 Return Codes.
The Direct3D 11.1 runtime, which is available starting with Windows 8, provides the following new functionality for CreateDomainShader.
The following shader model 5.0 instructions are available to just pixel shaders and compute shaders in the Direct3D 11.0 runtime. For the Direct3D 11.1 runtime, because unordered access views (UAV) are available at all shader stages, you can use these instructions in all shader stages.
Therefore, if you use the following shader model 5.0 instructions in a domain shader, you can successfully pass the compiled domain shader to pShaderBytecode. That is, the call to CreateDomainShader succeeds.
If you pass a compiled shader to pShaderBytecode that uses any of the following instructions on a device that doesn't support UAVs at every shader stage (including existing drivers that are not implemented to support UAVs at every shader stage), CreateDomainShader fails. CreateDomainShader also fails if the shader tries to use a UAV slot beyond the set of UAV slots that the hardware supports.
Create a compute shader.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the compute shader. See Direct3D 11 Return Codes for other possible return values.
For an example, see How To: Create a Compute Shader and HDRToneMappingCS11 Sample.
-Creates class linkage libraries to enable dynamic shader linkage.
-A reference to a class-linkage interface reference (see
This method returns one of the following Direct3D 11 Return Codes.
The
Create a blend-state object that encapsulates blend state for the output-merger stage.
- Pointer to a blend-state description (see
Address of a reference to the blend-state object created (see
This method returns E_OUTOFMEMORY if there is insufficient memory to create the blend-state object. See Direct3D 11 Return Codes for other possible return values.
An application can create up to 4096 unique blend-state objects. For each object created, the runtime checks to see if a previous object has the same state. If such a previous object exists, the runtime will return a reference to the previous instance instead of creating a duplicate object.
Windows Phone 8: This API is supported.
-Create a depth-stencil state object that encapsulates depth-stencil test information for the output-merger stage.
-Pointer to a depth-stencil state description (see
Address of a reference to the depth-stencil state object created (see
This method returns one of the following Direct3D 11 Return Codes.
4096 unique depth-stencil state objects can be created on a device at a time.
If an application attempts to create a depth-stencil-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique depth-stencil state objects will stay the same.
-Create a rasterizer state object that tells the rasterizer stage how to behave.
-Pointer to a rasterizer state description (see
Address of a reference to the rasterizer state object created (see
This method returns E_OUTOFMEMORY if there is insufficient memory to create the rasterizer state object. See Direct3D 11 Return Codes for other possible return values.
4096 unique rasterizer state objects can be created on a device at a time.
If an application attempts to create a rasterizer-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique rasterizer state objects will stay the same.
-Create a sampler-state object that encapsulates sampling information for a texture.
-Pointer to a sampler state description (see
Address of a reference to the sampler state object created (see
This method returns one of the following Direct3D 11 Return Codes.
4096 unique sampler state objects can be created on a device at a time.
If an application attempts to create a sampler-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique sampler state objects will stay the same.
-This interface encapsulates methods for querying information from the GPU.
-Pointer to a query description (see
Address of a reference to the query object created (see
This method returns E_OUTOFMEMORY if there is insufficient memory to create the query object. See Direct3D 11 Return Codes for other possible return values.
Creates a predicate.
-Pointer to a query description where the type of query must be a
Address of a reference to a predicate (see
This method returns one of the following Direct3D 11 Return Codes.
Create a counter object for measuring GPU performance.
-Pointer to a counter description (see
Address of a reference to a counter (see
If this function succeeds, it will return
E_INVALIDARG is returned whenever an out-of-range well-known or device-dependent counter is requested, or when the simultaneously active counters have been exhausted.
Creates a deferred context, which can record command lists.
-Reserved for future use. Pass 0.
Upon completion of the method, the passed reference to an
Returns
A deferred context is a thread-safe context that you can use to record graphics commands on a thread other than the main rendering thread. Using a deferred context, you can record graphics commands into a command list that is encapsulated by the
You can create multiple deferred contexts.
Note: If you use the
For more information about deferred contexts, see Immediate and Deferred Rendering.
Windows Phone 8: This API is supported.
-Give a device access to a shared resource created on a different device.
-A resource handle. See remarks.
The globally unique identifier (
Address of a reference to the resource we are gaining access to.
This method returns one of the following Direct3D 11 Return Codes.
The REFIID, or
The unique handle of the resource is obtained differently depending on the type of device that originally created the resource.
To share a resource between two Direct3D 11 devices the resource must have been created with the
The REFIID, or
When sharing a resource between two Direct3D 10/11 devices the unique handle of the resource can be obtained by querying the resource for the
- IDXGIResource* pOtherResource(NULL);
- hr = pOtherDeviceResource->QueryInterface( __uuidof(IDXGIResource), (void**)&pOtherResource );
- HANDLE sharedHandle;
- pOtherResource->GetSharedHandle(&sharedHandle);
The only resources that can be shared are 2D non-mipmapped textures.
To share a resource between a Direct3D 9 device and a Direct3D 11 device the texture must have been created using the pSharedHandle argument of CreateTexture. The shared Direct3D 9 handle is then passed to OpenSharedResource in the hResource argument.
The following code illustrates the method calls involved.
- sharedHandle = NULL; // must be set to NULL to create, can use a valid handle here to open in D3D9
- pDevice9->CreateTexture(..., pTex2D_9, &sharedHandle);
- ...
- pDevice11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Resource), (void**)(&tempResource11));
- tempResource11->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&pTex2D_11));
- tempResource11->Release();
- // now use pTex2D_11 with pDevice11
Textures being shared from D3D9 to D3D11 have the following restrictions.
If a shared texture is updated on one device
Get the support of a given format on the installed video device.
-A
A bitfield of
Get the number of quality levels available during multisampling.
-The texture format. See
The number of samples during multisampling.
Number of quality levels supported by the adapter. See remarks.
When multisampling a texture, the number of quality levels available for an adapter is dependent on the texture format used and the number of samples requested. The maximum number of quality levels is defined by
Furthermore, the definition of a quality level is left to each hardware vendor; Direct3D provides no facility to help discover this information.
Note that FEATURE_LEVEL_10_1 devices are required to support 4x MSAA for all render targets except R32G32B32A32 and R32G32B32. FEATURE_LEVEL_11_0 devices are required to support 4x MSAA for all render target formats, and 8x MSAA for all render target formats except R32G32B32A32 formats.
-Get a counter's information.
-Get the type, name, units of measure, and a description of an existing counter.
- Pointer to a counter description (see
Pointer to the data type of a counter (see
Pointer to the number of hardware counters that are needed for this counter type to be created. All instances of the same counter type use the same hardware counters.
String to be filled with a brief name for the counter. May be
Length of the string returned to szName. Can be
Name of the units a counter measures, provided the memory the reference points to has enough room to hold the string. Can be
Length of the string returned to szUnits. Can be
A description of the counter, provided the memory the reference points to has enough room to hold the string. Can be
Length of the string returned to szDescription. Can be
This method returns one of the following Direct3D 11 Return Codes.
Length parameters can be
Windows Phone 8: This API is supported.
-Gets information about the features that are supported by the current graphics driver.
-A member of the
Upon completion of the method, the passed structure is filled with data that describes the feature support.
The size of the structure passed to the pFeatureSupportData parameter.
Returns
To query for multi-threading support, pass the
Calling CheckFeatureSupport with Feature set to
Get application-defined data from a device.
-Guid associated with the data.
A reference to a variable that on input contains the size, in bytes, of the buffer that pData points to, and on output contains the size, in bytes, of the amount of data that GetPrivateData retrieved.
A reference to a buffer that GetPrivateData fills with data from the device if pDataSize points to a value that specifies a buffer large enough to hold the data.
This method returns one of the codes described in the topic Direct3D 11 Return Codes.
Set data to a device and associate that data with a guid.
-Guid associated with the data.
Size of the data.
Pointer to the data to be stored with this device. If pData is
This method returns one of the following Direct3D 11 Return Codes.
The data stored in the device with this method can be retrieved with
The data and guid set with this method will typically be application-defined.
The debug layer reports memory leaks by outputting a list of object interface references along with their friendly names. The default friendly name is "<unnamed>". You can set the friendly name so that you can determine if the corresponding object interface reference caused the leak. To set the friendly name, use the SetPrivateData method and the
- static const char c_szName[] = "My name";
- hr = pContext->SetPrivateData( , sizeof( c_szName ) - 1, c_szName );
Associate an
Guid associated with the interface.
Pointer to an
This method returns one of the following Direct3D 11 Return Codes.
Gets the feature level of the hardware device.
-A member of the
Feature levels determine the capabilities of your device.
-Get the flags used during the call to create the device with
A bitfield containing the flags used to create the device. See
Get the reason why the device was removed.
-Possible return values include:
For more detail on these return codes, see DXGI_ERROR.
Gets an immediate context, which can play back command lists.
-Upon completion of the method, the passed reference to an
The GetImmediateContext method returns an
The GetImmediateContext method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Get the exception-mode flags.
-A value that contains one or more exception flags; each flag specifies a condition which will cause an exception to be raised. The flags are listed in D3D11_RAISE_FLAG. A default value of 0 means there are no flags.
This method returns one of the following Direct3D 11 Return Codes.
Set an exception-mode flag to elevate an error condition to a non-continuable exception.
Whenever an error occurs, a Direct3D device enters the DEVICEREMOVED state and if the appropriate exception flag has been set, an exception is raised. A raised exception is designed to terminate an application. Before termination, the last chance an application has to persist data is by using an UnhandledExceptionFilter (see Structured Exception Handling). In general, UnhandledExceptionFilters are leveraged to try to persist data when an application is crashing (to disk, for example). Any code that executes during an UnhandledExceptionFilter is not guaranteed to reliably execute (due to possible process corruption). Any data that the UnhandledExceptionFilter manages to persist, before the UnhandledExceptionFilter crashes again, should be treated as suspect, and therefore inspected by a new, non-corrupted process to see if it is usable.
-Get the exception-mode flags.
-A value that contains one or more exception flags; each flag specifies a condition which will cause an exception to be raised. The flags are listed in D3D11_RAISE_FLAG. A default value of 0 means there are no flags.
An exception-mode flag is used to elevate an error condition to a non-continuable exception.
-The device interface represents a virtual adapter; it is used to create resources.
- Gets an immediate context, which can play back command lists.
-GetImmediateContext1 returns an
GetImmediateContext1 increments the reference count of the immediate context by one. So, call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Gets an immediate context, which can play back command lists.
-Upon completion of the method, the passed reference to an
GetImmediateContext1 returns an
GetImmediateContext1 increments the reference count of the immediate context by one. So, call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a deferred context, which can record command lists.
-Reserved for future use. Pass 0.
Upon completion of the method, the passed reference to an
Returns
A deferred context is a thread-safe context that you can use to record graphics commands on a thread other than the main rendering thread. By using a deferred context, you can record graphics commands into a command list that is encapsulated by the
You can create multiple deferred contexts.
Note: If you use the
For more information about deferred contexts, see Immediate and Deferred Rendering.
Windows Phone 8: This API is supported.
-Creates a blend-state object that encapsulates blend state for the output-merger stage and allows the configuration of logic operations.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the blend-state object. See Direct3D 11 Return Codes for other possible return values.
The logical operations (those that enable bitwise logical operations between pixel-shader output and render-target contents; refer to the LogicOp member of D3D11_RENDER_TARGET_BLEND_DESC1) are supported only on certain feature levels.
An app can create up to 4096 unique blend-state objects. For each object created, the runtime checks whether a previous object has the same state. If such a previous object exists, the runtime returns a reference to the previous instance instead of creating a duplicate object.
-Creates a rasterizer state object that informs the rasterizer stage how to behave and forces the sample count while UAV rendering or rasterizing.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the rasterizer state object. See Direct3D 11 Return Codes for other possible return values.
An app can create up to 4096 unique rasterizer state objects. For each object created, the runtime checks whether a previous object has the same state. If such a previous object exists, the runtime returns a reference to the previous instance instead of creating a duplicate object.
-Creates a context state object that holds all Microsoft Direct3D state and some Direct3D behavior.
- A combination of D3D11_1_CREATE_DEVICE_CONTEXT_STATE_FLAG values that are combined by using a bitwise OR operation.
If you set the single-threaded flag for both the context state object and the device, you guarantee that you will call the whole set of context methods and device methods only from one thread. You therefore do not need to use critical sections to synchronize access to the device context, and the runtime can avoid working with those processor-intensive critical sections.
A reference to an array of D3D_FEATURE_LEVEL values. This array can contain the following elements:
{ D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1 };
The number of elements in pFeatureLevels. Unlike D3D11CreateDevice, there is no default feature-level array, so FeatureLevels must be greater than 0.
The SDK version. You must set this parameter to D3D11_SDK_VERSION.
The globally unique identifier (GUID) for the emulated interface. This value specifies the behavior of the device when the context state object is active.
A reference to a variable that receives a D3D_FEATURE_LEVEL value from the pFeatureLevels array; this is the first feature level that is supported.
The address of a reference to an ID3DDeviceContextState object that represents the state of a Direct3D device.
This method returns one of the Direct3D 11 Return Codes.
The REFIID value of the emulated interface is a GUID obtained by use of the __uuidof operator. For example, __uuidof(ID3D11Device)
gets the GUID of the interface that a Direct3D 11 device implements.
Call the ID3D11DeviceContext1::SwapDeviceContextState method to activate the context state object.
When a context state object is active, the runtime disables certain methods on the device and context interfaces. For example, a context state object that is created with __uuidof(ID3D11Device)
will cause the runtime to turn off most of the Microsoft Direct3D 10 device interfaces, and a context state object that is created with __uuidof(ID3D10Device1)
or __uuidof(ID3D10Device)
will cause the runtime to turn off most of the ID3D11DeviceContext methods.
For example, suppose the tessellation stage is made active through the ID3D11DeviceContext interface. Because the Direct3D 10 interfaces are inactive, they can neither retrieve nor alter that state.
The following table shows the methods that are active and inactive for each emulated interface.
Emulated interface | Active device or immediate context interfaces | Inactive device or immediate context interfaces |
---|---|---|
ID3D11Device | ID3D11Device, ID3D11DeviceContext | ID3D10Device, ID3D10Device1 |
ID3D10Device1 or ID3D10Device | ID3D10Device, ID3D10Device1 | ID3D11Device, ID3D11DeviceContext |
The following table shows the immediate context methods that the runtime disables when the indicated context state objects are active.
Methods of ID3D11DeviceContext when __uuidof(ID3D10Device1) or __uuidof(ID3D10Device) is active | Methods of ID3D10Device when __uuidof(ID3D11Device) is active |
---|---|
ClearDepthStencilView | ClearDepthStencilView |
ClearRenderTargetView | ClearRenderTargetView |
ClearState | ClearState |
ClearUnorderedAccessViewUint | |
ClearUnorderedAccessViewFloat | |
CopyResource | CopyResource |
CopyStructureCount | |
CopySubresourceRegion | CopySubresourceRegion |
CSGetConstantBuffers | |
CSGetSamplers | |
CSGetShader | |
CSGetShaderResources | |
CSGetUnorderedAccessViews | |
CSSetConstantBuffers | |
CSSetSamplers | |
CSSetShader | |
CSSetShaderResources | |
CSSetUnorderedAccessViews | |
Dispatch | |
DispatchIndirect | |
CreateBlendState | |
Draw | Draw |
DrawAuto | DrawAuto |
DrawIndexed | DrawIndexed |
DrawIndexedInstanced | DrawIndexedInstanced |
DrawIndexedInstancedIndirect | |
DrawInstanced | DrawInstanced |
DrawInstancedIndirect | |
DSGetConstantBuffers | |
DSGetSamplers | |
DSGetShader | |
DSGetShaderResources | |
DSSetConstantBuffers | |
DSSetSamplers | |
DSSetShader | |
DSSetShaderResources | |
ExecuteCommandList | |
FinishCommandList | |
Flush | Flush |
GenerateMips | GenerateMips |
GetPredication | GetPredication |
GetResourceMinLOD | |
GetType | |
GetTextFilterSize | |
GSGetConstantBuffers | GSGetConstantBuffers |
GSGetSamplers | GSGetSamplers |
GSGetShader | GSGetShader |
GSGetShaderResources | GSGetShaderResources |
GSSetConstantBuffers | GSSetConstantBuffers |
GSSetSamplers | GSSetSamplers |
GSSetShader | GSSetShader |
GSSetShaderResources | GSSetShaderResources |
HSGetConstantBuffers | |
HSGetSamplers | |
HSGetShader | |
HSGetShaderResources | |
HSSetConstantBuffers | |
HSSetSamplers | |
HSSetShader | |
HSSetShaderResources | |
IAGetIndexBuffer | IAGetIndexBuffer |
IAGetInputLayout | IAGetInputLayout |
IAGetPrimitiveTopology | IAGetPrimitiveTopology |
IAGetVertexBuffers | IAGetVertexBuffers |
IASetIndexBuffer | IASetIndexBuffer |
IASetInputLayout | IASetInputLayout |
IASetPrimitiveTopology | IASetPrimitiveTopology |
IASetVertexBuffers | IASetVertexBuffers |
OMGetBlendState | OMGetBlendState |
OMGetDepthStencilState | OMGetDepthStencilState |
OMGetRenderTargets | OMGetRenderTargets |
OMGetRenderTargetsAndUnorderedAccessViews | |
OMSetBlendState | OMSetBlendState |
OMSetDepthStencilState | OMSetDepthStencilState |
OMSetRenderTargets | OMSetRenderTargets |
OMSetRenderTargetsAndUnorderedAccessViews | |
PSGetConstantBuffers | PSGetConstantBuffers |
PSGetSamplers | PSGetSamplers |
PSGetShader | PSGetShader |
PSGetShaderResources | PSGetShaderResources |
PSSetConstantBuffers | PSSetConstantBuffers |
PSSetSamplers | PSSetSamplers |
PSSetShader | PSSetShader |
PSSetShaderResources | PSSetShaderResources |
ResolveSubresource | ResolveSubresource |
RSGetScissorRects | RSGetScissorRects |
RSGetState | RSGetState |
RSGetViewports | RSGetViewports |
RSSetScissorRects | RSSetScissorRects |
RSSetState | RSSetState |
RSSetViewports | RSSetViewports |
SetPredication | SetPredication |
SetResourceMinLOD | |
SetTextFilterSize | |
SOGetTargets | SOGetTargets |
SOSetTargets | SOSetTargets |
UpdateSubresource | UpdateSubresource |
VSGetConstantBuffers | VSGetConstantBuffers |
VSGetSamplers | VSGetSamplers |
VSGetShader | VSGetShader |
VSGetShaderResources | VSGetShaderResources |
VSSetConstantBuffers | VSSetConstantBuffers |
VSSetSamplers | VSSetSamplers |
VSSetShader | VSSetShader |
VSSetShaderResources | VSSetShaderResources |
The following table shows the immediate context methods that the runtime does not disable when the indicated context state objects are active.
Methods of ID3D11DeviceContext when __uuidof(ID3D10Device1) or __uuidof(ID3D10Device) is active | Methods of ID3D10Device when __uuidof(ID3D11Device) is active |
---|---|
Begin | |
End | |
GetCreationFlags | |
GetPrivateData | |
GetContextFlags | |
GetData | |
Map | |
Unmap | |
The following table shows the ID3D10Device interface methods that the runtime does not disable because they are not immediate context methods.
Methods of ID3D10Device |
---|
CheckCounter |
CheckCounterInfo |
Create*, like CreateQuery |
GetDeviceRemovedReason |
GetExceptionMode |
OpenSharedResource |
SetExceptionMode |
SetPrivateData |
SetPrivateDataInterface |
Windows Phone 8: This API is supported.
-Give a device access to a shared resource created on a different device.
-A resource handle. See remarks.
The globally unique identifier (GUID) for the resource interface. See remarks.
Address of a reference to the resource we are gaining access to.
This method returns one of the following Direct3D 11 Return Codes.
The REFIID, or GUID, of the interface to the resource can be obtained by using the __uuidof() macro. For example, __uuidof(ID3D11Buffer) will get the GUID of the interface to a buffer resource.
The unique handle of the resource is obtained differently depending on the type of device that originally created the resource.
To share a resource between two Direct3D 11 devices the resource must have been created with the D3D11_RESOURCE_MISC_SHARED flag.
The REFIID, or GUID, of the interface to the resource can be obtained by using the __uuidof() macro. For example, __uuidof(ID3D11Buffer) will get the GUID of the interface to a buffer resource.
When sharing a resource between two Direct3D 10/11 devices the unique handle of the resource can be obtained by querying the resource for the IDXGIResource interface and then calling GetSharedHandle.
IDXGIResource* pOtherResource(NULL);
hr = pOtherDeviceResource->QueryInterface( __uuidof(IDXGIResource), (void**)&pOtherResource );
HANDLE sharedHandle;
pOtherResource->GetSharedHandle(&sharedHandle);
The only resources that can be shared are 2D non-mipmapped textures.
To share a resource between a Direct3D 9 device and a Direct3D 11 device the texture must have been created using the pSharedHandle argument of CreateTexture. The shared Direct3D 9 handle is then passed to OpenSharedResource in the hResource argument.
The following code illustrates the method calls involved.
sharedHandle = NULL; // must be set to NULL to create, can use a valid handle here to open in D3D9
pDevice9->CreateTexture(..., &pTex2D_9, &sharedHandle);
...
pDevice11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Resource), (void**)(&tempResource11));
tempResource11->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&pTex2D_11));
tempResource11->Release();
// now use pTex2D_11 with pDevice11
Textures being shared from D3D9 to D3D11 have the following restrictions.
If a shared texture is updated on one device, ID3D11DeviceContext::Flush must be called on that device.
Gives a device access to a shared resource that is referenced by name and that was created on a different device. You must have previously created the resource as shared and specified that it uses NT handles (that is, you set the D3D11_RESOURCE_MISC_SHARED_NTHANDLE flag).
This method returns one of the Direct3D 11 return codes. This method also returns E_ACCESSDENIED if the permissions to access the resource aren't valid.
Platform Update for Windows 7: On Windows 7 or Windows Server 2008 R2 with the Platform Update for Windows 7 installed, OpenSharedResourceByName fails with E_NOTIMPL because NT handles are used. For more info about the Platform Update for Windows 7, see Platform Update for Windows 7.
The behavior of OpenSharedResourceByName is similar to the behavior of the ID3D11Device1::OpenSharedResource1 method.
To share a resource between two devices
The device interface represents a virtual adapter; it is used to create resources.
Gets an immediate context, which can play back command lists.
-The GetImmediateContext2 method returns an ID3D11DeviceContext2 object that represents an immediate context.
The GetImmediateContext2 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Gets an immediate context, which can play back command lists.
-The GetImmediateContext2 method returns an ID3D11DeviceContext2 object that represents an immediate context.
The GetImmediateContext2 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a deferred context, which can record command lists.
- Returns S_OK if successful; otherwise, returns one of the Direct3D 11 Return Codes.
A deferred context is a thread-safe context that you can use to record graphics commands on a thread other than the main rendering thread. By using a deferred context, you can record graphics commands into a command list that is encapsulated by the ID3D11CommandList interface.
You can create multiple deferred contexts.
Note: If you use the D3D11_CREATE_DEVICE_SINGLETHREADED value to create the device, you cannot create a deferred context on that device. For more information about deferred contexts, see Immediate and Deferred Rendering.
-Gets info about how a tiled resource is broken into tiles.
-A reference to the tiled resource to get info about.
A reference to a variable that receives the number of tiles needed to store the entire tiled resource.
A reference to a D3D11_PACKED_MIP_DESC structure that GetResourceTiling fills with info about how the tiled resource's mipmaps are packed.
A reference to a D3D11_TILE_SHAPE structure that GetResourceTiling fills with info about the tile shape.
A reference to a variable that contains the number of tiles in the subresource. On input, this is the number of subresources to query tilings for; on output, this is the number that was actually retrieved at pSubresourceTilingsForNonPackedMips (clamped to what's available).
The number of the first subresource tile to get. GetResourceTiling ignores this parameter if the number that pNumSubresourceTilings points to is 0.
A reference to a D3D11_SUBRESOURCE_TILING structure that GetResourceTiling fills with info about subresource tiles.
If subresource tiles are part of packed mipmaps, GetResourceTiling sets the members of D3D11_SUBRESOURCE_TILING to zeros, except that it sets StartTileIndexInOverallResource to D3D11_PACKED_TILE (0xFFFFFFFF).
For more info about tiled resources, see Tiled resources.
-Get the number of quality levels available during multisampling.
-The texture format during multisampling.
The number of samples during multisampling.
A combination of D3D11_CHECK_MULTISAMPLE_QUALITY_LEVELS_FLAGS values that are combined by using a bitwise OR operation. Currently, only D3D11_CHECK_MULTISAMPLE_QUALITY_LEVELS_TILED_RESOURCE is supported.
A reference to a variable that receives the number of quality levels supported by the adapter. See Remarks.
When you multisample a texture, the number of quality levels available for an adapter is dependent on the texture format that you use and the number of samples that you request. The maximum number of quality levels is defined by D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT in D3D11.h.
Furthermore, the definition of a quality level is left to each hardware vendor; Direct3D provides no facility to discover this information.
Note that FEATURE_LEVEL_10_1 devices are required to support 4x MSAA for all render targets except R32G32B32A32 and R32G32B32. FEATURE_LEVEL_11_0 devices are required to support 4x MSAA for all render target formats, and 8x MSAA for all render target formats except R32G32B32A32 formats.
-The device interface represents a virtual adapter; it is used to create resources.
Gets an immediate context, which can play back command lists.
- The GetImmediateContext3 method outputs an ID3D11DeviceContext3 object that represents an immediate context.
The GetImmediateContext3 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a 2D texture.
-If the method succeeds, the return code is S_OK. See Direct3D 11 Return Codes for failing error codes.
CreateTexture2D1 creates a 2D texture resource, which can contain a number of 2D subresources. The number of subresources is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of D3D11_SUBRESOURCE_DATA structures pointed to by pInitialData.
For a 32 x 32 texture with a full mipmap chain, the pInitialData array has the following 6 elements, one per mip level: 32 x 32, 16 x 16, 8 x 8, 4 x 4, 2 x 2, and 1 x 1.
Creates a 3D texture.
-If the method succeeds, the return code is S_OK. See Direct3D 11 Return Codes for failing error codes.
CreateTexture3D1 creates a 3D texture resource, which can contain a number of 3D subresources. The number of textures is specified in the texture description. All textures in a resource must have the same format, size, and number of mipmap levels.
All resources are made up of one or more subresources. To load data into the texture, applications can supply the data initially as an array of D3D11_SUBRESOURCE_DATA structures pointed to by pInitialData.
Each element of pInitialData provides all of the slices that are defined for a given miplevel. For example, for a 32 x 32 x 4 volume texture with a full mipmap chain, the array has the following 6 elements: 32 x 32 x 4, 16 x 16 x 2, 8 x 8 x 1, 4 x 4 x 1, 2 x 2 x 1, and 1 x 1 x 1.
Creates a rasterizer state object that informs the rasterizer stage how to behave and forces the sample count while UAV rendering or rasterizing.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the rasterizer state object. See Direct3D 11 Return Codes for other possible return values.
Creates a shader-resource view for accessing data in a resource.
-Pointer to the resource that will serve as input to a shader. This resource must have been created with the D3D11_BIND_SHADER_RESOURCE flag.
A reference to a D3D11_SHADER_RESOURCE_VIEW_DESC1 structure that describes a shader-resource view. Set this parameter to NULL to create a view that accesses the entire resource, using the format the resource was created with.
A reference to a memory block that receives a reference to an ID3D11ShaderResourceView1 interface for the created shader-resource view.
This method returns E_OUTOFMEMORY if there is insufficient memory to create the shader-resource view. See Direct3D 11 Return Codes for other possible return values.
Creates a view for accessing an unordered access resource.
-This method returns E_OUTOFMEMORY if there is insufficient memory to create the unordered-access view. See Direct3D 11 Return Codes for other possible return values.
Creates a render-target view for accessing resource data.
-Pointer to an ID3D11Resource that represents a render target.
Pointer to a D3D11_RENDER_TARGET_VIEW_DESC1 structure that represents a render-target view description. Set this parameter to NULL to create a view that accesses all of the subresources in mipmap level 0.
A reference to a memory block that receives a reference to an ID3D11RenderTargetView1 interface for the created render-target view.
This method returns one of the Direct3D 11 Return Codes.
A render-target view can be bound to the output-merger stage by calling ID3D11DeviceContext::OMSetRenderTargets.
Creates a query object for querying information from the graphics processing unit (GPU).
-Pointer to a D3D11_QUERY_DESC1 structure that represents a query description.
A reference to a memory block that receives a reference to an ID3D11Query1 interface for the created query.
This method returns E_OUTOFMEMORY if there is insufficient memory to create the query object. See Direct3D 11 Return Codes for other possible return values.
Gets an immediate context, which can play back command lists.
- The GetImmediateContext3 method outputs an ID3D11DeviceContext3 object that represents an immediate context.
The GetImmediateContext3 method increments the reference count of the immediate context by one. Therefore, you must call Release on the returned interface reference when you are done with it to avoid a memory leak.
-Creates a deferred context, which can record command lists.
- Returns S_OK if successful; otherwise, returns one of the Direct3D 11 Return Codes.
Copies data into a D3D11_USAGE_DEFAULT texture.
The provided resource must be a D3D11_USAGE_DEFAULT texture.
This API is intended for calling at high frequency. Callers can reduce memory by making iterative calls that update progressive regions of the texture, while providing a small buffer during each call. It is most efficient to specify large enough regions, though, because this enables D3D to fill whole cache lines in the texture before returning.
For efficiency, ensure the bounds and alignment of the extents within the box are ( 64 / [bytes per pixel] ) pixels horizontally. Vertical bounds and alignment should be 2 rows, except when 1-byte-per-pixel formats are used, in which case 4 rows are recommended. Single depth slices per call are handled efficiently. It is recommended but not necessary to provide references and strides which are 128-byte aligned.
When writing to sub mipmap levels, it is recommended to use larger width and heights than described above. This is because small mipmap levels may actually be stored within a larger block of memory, with an opaque amount of offsetting which can interfere with alignment to cache lines.
- Copies data from a
The provided resource must be a
This API is intended for calling at high frequency. Callers can reduce memory by making iterative calls that update progressive regions of the texture, while providing a small buffer during each call. It is most efficient to specify large enough regions, though, because this enables D3D to fill whole cache lines in the texture before returning.
For efficiency, ensure the bounds and alignment of the extents within the box are ( 64 / [Bytes per pixel] ) pixels horizontally. Vertical bounds and alignment should be 2 rows, except when 1-byte-per-pixel formats are used, in which case 4 rows are recommended. Single depth slices per call are handled efficiently. It is recommended but not necessary to provide references and strides which are 128-byte aligned.
When reading from sub mipmap levels, it is recommended to use larger width and heights than described above. This is because small mipmap levels may actually be stored within a larger block of memory, with an opaque amount of offsetting which can interfere with alignment to cache lines.
-The device interface represents a virtual adapter; it is used to create resources.
Note: The latest version of this interface is ID3D11Device5. A device is created using D3D11CreateDevice.
Windows Phone 8: This API is supported.
-The device interface represents a virtual adapter; it is used to create resources.
A device-child interface accesses data used by a device.
-There are several types of device child interfaces, all of which inherit this interface. They include shaders, state objects, and input layouts.
Windows Phone 8: This API is supported.
-Get a reference to the device that created this interface.
-Any returned interfaces will have their reference count incremented by one, so be sure to call IUnknown::Release on the returned reference(s) when they are no longer needed, or else you will have a memory leak.
-Get a reference to the device that created this interface.
-Address of a reference to a device (see ID3D11Device).
Any returned interfaces will have their reference count incremented by one, so be sure to call IUnknown::Release on the returned reference(s) when they are no longer needed, or else you will have a memory leak.
-Get application-defined data from a device child.
-Guid associated with the data.
A reference to a variable that on input contains the size, in bytes, of the buffer that pData points to, and on output contains the size, in bytes, of the amount of data that GetPrivateData retrieved.
A reference to a buffer that GetPrivateData fills with data from the device child if pDataSize points to a value that specifies a buffer large enough to hold the data.
This method returns one of the Direct3D 11 Return Codes.
The data stored in the device child is set by calling ID3D11DeviceChild::SetPrivateData.
Windows Phone 8: This API is supported.
-Set application-defined data to a device child and associate that data with an application-defined guid.
-Guid associated with the data.
Size of the data.
Pointer to the data to be stored with this device child. If pData is NULL, DataSize must also be 0, and any data previously associated with the guid will be destroyed.
This method returns one of the following Direct3D 11 Return Codes.
The data stored in the device child with this method can be retrieved with ID3D11DeviceChild::GetPrivateData.
The debug layer reports memory leaks by outputting a list of object interface references along with their friendly names. The default friendly name is "<unnamed>". You can set the friendly name so that you can determine if the corresponding object interface reference caused the leak. To set the friendly name, use the SetPrivateData method and the WKPDID_D3DDebugObjectName GUID that is in D3Dcommon.h. For example:
static const char c_szName[] = "My name";
hr = pContext->SetPrivateData( WKPDID_D3DDebugObjectName, sizeof( c_szName ) - 1, c_szName );
Associate an IUnknown-derived interface with this device child and associate that interface with an application-defined guid.
Guid associated with the interface.
Pointer to an IUnknown-derived interface to be associated with the device child.
This method returns one of the following Direct3D 11 Return Codes.
When this method is called, IUnknown::AddRef will be called on the associated interface; when the device child is destroyed, IUnknown::Release will be called on it.
Bind an array of shader resources to the compute-shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT - 1).
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT - StartSlot).
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with NULL.
For information about creating shader-resource views, see ID3D11Device::CreateShaderResourceView.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets an array of views for an unordered resource.
-Index of the first element in the zero-based array to begin setting (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - 1). D3D11_1_UAV_SLOT_COUNT is defined as 64.
Number of views to set (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - StartSlot).
A reference to an array of
An array of append and consume buffer offsets. A value of -1 indicates to keep the current offset. Any other values set the hidden counter for that appendable and consumable UAV. pUAVInitialCounts is only relevant for UAVs that were created with either D3D11_BUFFER_UAV_FLAG_APPEND or D3D11_BUFFER_UAV_FLAG_COUNTER specified when the UAV was created; otherwise, the argument is ignored.
Windows Phone 8: This API is supported.
-Set a compute shader to the device.
-Pointer to a compute shader (see ID3D11ComputeShader). Passing in NULL disables the shader for this pipeline stage.
A reference to an array of class-instance interfaces (see ID3D11ClassInstance). Each interface used by a shader must have a corresponding class instance or the shader will get disabled.
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set an array of sampler states to the compute-shader stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to D3D11_COMMONSHADER_SAMPLER_SLOT_COUNT - 1).
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to D3D11_COMMONSHADER_SAMPLER_SLOT_COUNT - StartSlot).
Pointer to an array of sampler-state interfaces (see ID3D11SamplerState). See Remarks.
Any sampler may be set to NULL; this invokes the default state, which is defined to be the following.
//Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets the constant buffers used by the compute-shader stage.
-Index into the zero-based array to begin setting constant buffers to (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - 1).
Number of buffers to set (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - StartSlot).
Array of constant buffers (see ID3D11Buffer) being given to the device.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of ID3D11Buffer resources to the shader than the maximum constant buffer size that is supported by shaders (4096 constants).
If the application wants the shader to access other parts of the buffer, it must call the CSSetConstantBuffers1 method instead.
-Get the compute-shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT - 1).
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT - StartSlot).
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Gets an array of views for an unordered resource.
-Index of the first element in the zero-based array to return (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - 1).
Number of views to get (ranges from 0 to D3D11_1_UAV_SLOT_COUNT - StartSlot).
A reference to an array of interface references (see ID3D11UnorderedAccessView) to get.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the compute shader currently set on the device.
-Address of a reference to a compute shader (see ID3D11ComputeShader) to be returned by the method.
Pointer to an array of class instance interfaces (see ID3D11ClassInstance).
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the compute-shader stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to D3D11_COMMONSHADER_SAMPLER_SLOT_COUNT - 1).
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to D3D11_COMMONSHADER_SAMPLER_SLOT_COUNT - StartSlot).
Pointer to an array of sampler-state interfaces (see ID3D11SamplerState).
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the compute-shader stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - 1).
Number of buffers to retrieve (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - StartSlot).
Array of constant buffer interface references (see ID3D11Buffer) to be returned by the method.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-The following code snippet copies a box (located at (120,100),(200,220)) from a source texture into the region (10,20),(90,140) in a destination texture.
D3D11_BOX sourceRegion;
sourceRegion.left = 120;
sourceRegion.right = 200;
sourceRegion.top = 100;
sourceRegion.bottom = 220;
sourceRegion.front = 0;
sourceRegion.back = 1;
pd3dDeviceContext->CopySubresourceRegion( pDestTexture, 0, 10, 20, 0, pSourceTexture, 0, &sourceRegion );
Notice that, for a 2D texture, front and back are set to 0 and 1 respectively.
- Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
-A reference to an ID3D11Resource interface.
Index number of the subresource.
Specifies the CPU's read and write permissions for a resource. For possible values, see D3D11_MAP.
Flag that specifies what the CPU should do when the GPU is busy. This flag is optional.
A reference to the mapped subresource (see D3D11_MAPPED_SUBRESOURCE).
This method also throws an exception with the code DXGI_ERROR_WAS_STILL_DRAWING if MapFlags specifies D3D11_MAP_FLAG_DO_NOT_WAIT and the GPU is not yet finished with the resource, and with the code DXGI_ERROR_DEVICE_REMOVED if the device has been removed.
For more information about these error codes, see DXGI_ERROR.
If you call Map on a deferred context, you can only pass D3D11_MAP_WRITE_DISCARD, D3D11_MAP_WRITE_NO_OVERWRITE, or both to the MapType parameter. Other D3D11_MAP-typed values are not supported for a deferred context.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can map shader resource views (SRVs) of dynamic buffers with D3D11_MAP_WRITE_NO_OVERWRITE. The Direct3D 11 and earlier runtimes limited mapping of this type to vertex or index buffers.
Gets the type of device context.
-Gets the initialization flags associated with the current deferred context.
-The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of ID3D11Device::CreateDeferredContext; however, the context flag is reserved for future use.
Draw indexed, non-instanced primitives.
-Number of indices to draw.
The location of the first index read by the GPU from the index buffer.
A value added to each index before reading a vertex from the vertex buffer.
A draw API submits work to the rendering pipeline.
If the sum of both indices is negative, the result of the function call is undefined.
-Draw non-indexed, non-instanced primitives.
-Number of vertices to draw.
Index of the first vertex, which is usually an offset in a vertex buffer.
Draw submits work to the rendering pipeline.
The vertex data for a draw call normally comes from a vertex buffer that is bound to the pipeline.
Even without any vertex buffer bound to the pipeline, you can generate your own vertex data in your vertex shader by using the SV_VertexID system-value semantic to determine the current vertex that the runtime is processing.
-Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
-This method returns one of the Direct3D 11 Return Codes.
This method also returns DXGI_ERROR_WAS_STILL_DRAWING if MapFlags specifies D3D11_MAP_FLAG_DO_NOT_WAIT and the GPU is not yet finished with the resource.
This method also returns DXGI_ERROR_DEVICE_REMOVED if MapType allows any CPU read access and the video card has been removed.
For more information about these error codes, see DXGI_ERROR.
If you call Map on a deferred context, you can only pass D3D11_MAP_WRITE_DISCARD, D3D11_MAP_WRITE_NO_OVERWRITE, or both to the MapType parameter. Other D3D11_MAP-typed values are not supported for a deferred context.
For info about how to use Map, see How to: Use dynamic resources.
-Invalidate the reference to a resource and reenable the GPU's access to that resource.
- A reference to an ID3D11Resource interface.
A subresource to be unmapped.
For info about how to use Unmap, see How to: Use dynamic resources.
Windows Phone 8: This API is supported.
-Draw indexed, instanced primitives.
-Number of indices read from the index buffer for each instance.
Number of instances to draw.
The location of the first index read by the GPU from the index buffer.
A value added to each index before reading a vertex from the vertex buffer.
A value added to each index before reading per-instance data from a vertex buffer.
A draw API submits work to the rendering pipeline.
Instancing may extend performance by reusing the same geometry to draw multiple objects in a scene. One example of instancing could be to draw the same object with different positions and colors. Instancing requires multiple vertex buffers: at least one for per-vertex data and a second buffer for per-instance data.
-Draw non-indexed, instanced primitives.
-Number of vertices to draw.
Number of instances to draw.
Index of the first vertex.
A value added to each index before reading per-instance data from a vertex buffer.
A draw API submits work to the rendering pipeline.
Instancing may extend performance by reusing the same geometry to draw multiple objects in a scene. One example of instancing could be to draw the same object with different positions and colors.
The vertex data for an instanced draw call normally comes from a vertex buffer that is bound to the pipeline. However, you could also provide the vertex data from a shader that has instanced data identified with a system-value semantic (SV_InstanceID).
-Mark the beginning of a series of commands.
-A reference to an ID3D11Asynchronous interface.
Use End to mark the end of the series of commands.
Mark the end of a series of commands.
-A reference to an ID3D11Asynchronous interface.
Use Begin to mark the beginning of the series of commands.
Get data from the graphics processing unit (GPU) asynchronously.
-A reference to an
Address of memory that will receive the data. If
Size of the data to retrieve or 0. Must be 0 when pData is
Optional flags. Can be 0 or any combination of the flags enumerated by
This method returns one of the Direct3D 11 Return Codes. A return value of
Queries in a deferred context are limited to predicated drawing. That is, you cannot call
GetData retrieves the data that the runtime collected between calls to
If DataSize is 0, GetData is only used to check status.
An application gathers counter data by calling
Set a rendering predicate.
-A reference to the
If TRUE, rendering will be affected when the predicate's conditions are met. If
The predicate must be in the "issued" or "signaled" state to be used for predication. While the predicate is set for predication, calls to
Use this method to denote that subsequent rendering and resource manipulation commands are not actually performed if the resulting predicate data of the predicate is equal to the PredicateValue. However, some predicates are only hints, so they may not actually prevent operations from being performed.
The primary usefulness of predication is to allow an application to issue rendering and resource manipulation commands without taking the performance hit of spinning, waiting for
Rendering and resource manipulation commands for Direct3D 11 include these Draw, Dispatch, Copy, Update, Clear, Generate, and Resolve operations.
You can set a rendering predicate on an immediate or a deferred context. For info about immediate and deferred contexts, see Immediate and Deferred Rendering.
-Draw geometry of an unknown size.
-A draw API submits work to the rendering pipeline. This API submits work of an unknown size that was processed by the input assembler, vertex shader, and stream-output stages; the work may or may not have gone through the geometry-shader stage.
After data has been streamed out to stream-output stage buffers, those buffers can be again bound to the Input Assembler stage at input slot 0 and DrawAuto will draw them without the application needing to know the amount of data that was written to the buffers. A measurement of the amount of data written to the SO stage buffers is maintained internally when the data is streamed out. This means that the CPU does not need to fetch the measurement before re-binding the data that was streamed as input data. Although this amount is tracked internally, it is still the responsibility of applications to use input layouts to describe the format of the data in the SO stage buffers so that the layouts are available when the buffers are again bound to the input assembler.
The following diagram shows the DrawAuto process.
Calling DrawAuto does not change the state of the streaming-output buffers that were bound again as inputs.
DrawAuto only works when drawing with one input buffer bound as an input to the IA stage at slot 0. Applications must create the SO buffer resource with both binding flags,
This API does not support indexing or instancing.
If an application needs to retrieve the size of the streaming-output buffer, it can query for statistics on streaming output by using
Draw indexed, instanced, GPU-generated primitives.
- A reference to an
Offset in pBufferForArgs to the start of the GPU generated primitives.
When an application creates a buffer that is associated with the
Windows Phone 8: This API is supported.
-Draw instanced, GPU-generated primitives.
-A reference to an
Offset in pBufferForArgs to the start of the GPU generated primitives.
When an application creates a buffer that is associated with the
Execute a command list from a thread group.
-The number of groups dispatched in the x direction. ThreadGroupCountX must be less than or equal to
The number of groups dispatched in the y direction. ThreadGroupCountY must be less than or equal to
The number of groups dispatched in the z direction. ThreadGroupCountZ must be less than or equal to
You call the Dispatch method to execute commands in a compute shader. A compute shader can be run on many threads in parallel within a thread group. Index a particular thread within a thread group using a 3D vector given by (x,y,z).
In the following illustration, assume a thread group of 50 threads whose size is given by (5,5,2). A single thread within that group is identified using the vector (4,1,1).
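The (5,5,2) group above contains 5*5*2 = 50 threads. Flattening the 3D thread ID into a linear index (the same ordering HLSL exposes as SV_GroupIndex) can be sketched as:

```cpp
#include <cstdint>

// Flatten a 3D thread ID within a thread group into a linear index.
// The ordering matches HLSL's SV_GroupIndex convention:
//   index = z * dimX * dimY + y * dimX + x
uint32_t GroupFlatIndex(uint32_t x, uint32_t y, uint32_t z,
                        uint32_t dimX, uint32_t dimY) {
    return z * dimX * dimY + y * dimX + x;
}
```

For the (5,5,2) group, thread (4,1,1) flattens to 1*25 + 1*5 + 4 = 34, and the last thread (4,4,1) flattens to 49.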
The following illustration shows the relationship between the parameters passed to
Execute a command list over one or more thread groups.
-A reference to an
A byte-aligned offset between the start of the buffer and the arguments.
You call the DispatchIndirect method to execute commands in a compute shader.
When an application creates a buffer that is associated with the
Copy a region from a source resource to a destination resource.
-A reference to the destination resource (see
Destination subresource index.
The x-coordinate of the upper left corner of the destination region.
The y-coordinate of the upper left corner of the destination region. For a 1D subresource, this must be zero.
The z-coordinate of the upper left corner of the destination region. For a 1D or 2D subresource, this must be zero.
A reference to the source resource (see
Source subresource index.
A reference to a 3D box (see
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, CopySubresourceRegion doesn't perform a copy operation.
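The empty-box rule above can be expressed as a small predicate (a sketch using a hypothetical Box struct mirroring D3D11_BOX's UINT fields):

```cpp
// Mirror of D3D11_BOX's layout for illustration.
struct Box {
    unsigned left, top, front, right, bottom, back;
};

// A box is empty when any axis has zero or negative extent;
// CopySubresourceRegion treats an empty box as a no-op.
bool IsEmptyBox(const Box& b) {
    return b.top >= b.bottom || b.left >= b.right || b.front >= b.back;
}
```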
The source box must be within the size of the source resource. The destination offsets (x, y, and z) allow the source box to be offset when writing into the destination resource; however, the dimensions of the source box and the offsets must be within the size of the resource. If you try to copy outside the destination resource or specify a source box that is larger than the source resource, the behavior of CopySubresourceRegion is undefined. If you created a device that supports the debug layer, the debug output reports an error on this invalid CopySubresourceRegion call. Invalid parameters to CopySubresourceRegion cause undefined behavior and might result in incorrect rendering, clipping, no copy, or even the removal of the rendering device.
If the resources are buffers, all coordinates are in bytes; if the resources are textures, all coordinates are in texels. D3D11CalcSubresource is a helper function for calculating subresource indexes.
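The D3D11CalcSubresource helper mentioned above boils down to a single multiply-add; subresources are indexed mip-major within each array slice:

```cpp
#include <cstdint>

// Equivalent of the D3D11CalcSubresource helper:
//   subresource = mipSlice + arraySlice * mipLevels
uint32_t CalcSubresource(uint32_t mipSlice, uint32_t arraySlice,
                         uint32_t mipLevels) {
    return mipSlice + arraySlice * mipLevels;
}
```

For example, in a texture array with 4 mip levels, mip 2 of array slice 1 is subresource 6.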
CopySubresourceRegion performs the copy on the GPU (similar to a memcpy by the CPU). As a consequence, the source and destination resources:
CopySubresourceRegion only supports copy; it does not support any stretch, color key, or blend. CopySubresourceRegion can reinterpret the resource data between a few format types. For more info, see Format Conversion using Direct3D 10.1.
If your app needs to copy an entire resource, we recommend that you use
CopySubresourceRegion is an asynchronous call that may be added to the command-buffer queue; this attempts to remove pipeline stalls that may occur when copying data. For more information about pipeline stalls, see performance considerations.
Note: Applies only to feature level 9_x hardware. If you use
-Copy the entire contents of the source resource to the destination resource using the GPU.
-A reference to the
A reference to the
This method is unusual in that it causes the GPU to perform the copy operation (similar to a memcpy by the CPU). As a result, it has a few restrictions designed for improving performance. For instance, the source and destination resources:
CopyResource only supports copy; it doesn't support any stretch, color key, or blend. CopyResource can reinterpret the resource data between a few format types. For more info, see Format Conversion using Direct3D 10.1.
You can't use an Immutable resource as a destination. You can use a depth-stencil resource as either a source or a destination provided that the feature level is
The method is an asynchronous call, which may be added to the command-buffer queue. This attempts to remove pipeline stalls that may occur when copying data. For more info, see performance considerations.
We recommend that you use
The CPU copies data from memory to a subresource created in non-mappable memory.
-A reference to the destination resource (see
A zero-based index that identifies the destination subresource. See D3D11CalcSubresource for more details.
A reference to a box that defines the portion of the destination subresource to copy the resource data into. Coordinates are in bytes for buffers and in texels for textures. If
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, UpdateSubresource doesn't perform an update operation.
A reference to the source data in memory.
The size of one row of the source data.
The size of one depth slice of source data.
For a shader-constant buffer, set pDstBox to
A resource cannot be used as a destination if:
When UpdateSubresource returns, the application is free to change or even free the data pointed to by pSrcData because the method has already copied/snapped away the original contents.
The performance of UpdateSubresource depends on whether or not there is contention for the destination resource. For example, contention for a vertex buffer resource occurs when the application executes a Draw call and later calls UpdateSubresource on the same vertex buffer before the Draw call is actually executed by the GPU.
To better understand the source row pitch and source depth pitch parameters, the following illustration shows a 3D volume texture.
Each block in this visual represents an element of data, and the size of each element is dependent on the resource's format. For example, if the resource format is
To calculate the source row pitch and source depth pitch for a given resource, use the following formulas:
In the case of this example 3D volume texture where the size of each element is 16 bytes, the formulas are as follows:
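A sketch of those pitch formulas, assuming tightly packed rows and slices and using the 16-byte element size from the example:

```cpp
#include <cstddef>

// Pitch formulas for tightly packed UpdateSubresource source data:
//   SrcRowPitch   = bytesPerElement * width
//   SrcDepthPitch = bytesPerElement * width * height
size_t SrcRowPitch(size_t bytesPerElement, size_t width) {
    return bytesPerElement * width;
}

size_t SrcDepthPitch(size_t bytesPerElement, size_t width, size_t height) {
    return bytesPerElement * width * height;
}
```

With 16-byte elements and a 4-element-wide row, the row pitch is 64 bytes; a 4x4 slice has a depth pitch of 256 bytes.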
The following illustration shows the resource as it is laid out in memory.
For example, the following code snippet shows how to specify a destination region in a 2D texture. Assume the destination texture is 512x512 and the operation will copy the data pointed to by pData to [(120,100)..(200,220)] in the destination texture. Also assume that rowPitch has been initialized with the proper value (as explained above). front is set to 0 and back to 1; setting front equal to back would make the box technically empty.
D3D11_BOX destRegion;
destRegion.left = 120;
destRegion.right = 200;
destRegion.top = 100;
destRegion.bottom = 220;
destRegion.front = 0;
destRegion.back = 1;

pd3dDeviceContext->UpdateSubresource( pDestTexture, 0, &destRegion, pData, rowPitch, 0 );
The 1D case is similar. The following snippet shows how to specify a destination region in a 1D texture. Use the same assumptions as above, except that the texture is 512 in length.
D3D11_BOX destRegion;
destRegion.left = 120;
destRegion.right = 200;
destRegion.top = 0;
destRegion.bottom = 1;
destRegion.front = 0;
destRegion.back = 1;

pd3dDeviceContext->UpdateSubresource( pDestTexture, 0, &destRegion, pData, rowPitch, 0 );
For info about various resource types and how UpdateSubresource might work with each resource type, see Introduction to a Resource in Direct3D 11.
-Copies data from a buffer holding variable length data.
-Pointer to
Offset from the start of pDstBuffer to write 32-bit UINT structure (vertex) count from pSrcView.
Pointer to an
Set all the elements in a render target to one value.
-Pointer to the render target.
A 4-component array that represents the color to fill the render target with.
Applications that wish to clear a render target to a specific integer bit pattern should render a screen-aligned quad instead of using this method, because this method accepts a floating-point value as input, which may not have the same bit pattern as the original integer.
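The precision loss behind that advice is easy to demonstrate in plain C++: a float's 24-bit significand cannot represent every 32-bit integer, so an integer clear value does not survive the float conversion.

```cpp
#include <cstdint>

// ClearRenderTargetView accepts the clear color only as floats, so an
// integer clear value is converted to float and back by the pipeline.
// Values above 2^24 cannot all be represented exactly.
uint32_t RoundTripThroughFloat(uint32_t value) {
    float f = static_cast<float>(value);
    return static_cast<uint32_t>(f);
}
```

For example, 16777216 (2^24) round-trips exactly, but 16777217 (2^24 + 1) comes back as 16777216.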
Differences between Direct3D 9 and Direct3D 11/10: Unlike Direct3D 9, the full extent of the resource view is always cleared. Viewport and scissor settings are not applied.
When using D3D_FEATURE_LEVEL_9_x, ClearRenderTargetView only clears the first array slice in the render target view. This can impact (for example) cube map rendering scenarios. Applications should create a render target view for each face or array slice, then clear each view individually.
-Clears an unordered access resource with bit-precise values.
-This API copies the lower ni bits from each array element i to the corresponding channel, where ni is the number of bits in the ith channel of the resource format (for example, R8G8B8_FLOAT has 8 bits for the first 3 channels). This works on any UAV with no format conversion. For a raw or structured buffer view, only the first array element value is used.
-Clears an unordered access resource with a float value.
-This API works on FLOAT, UNORM, and SNORM unordered access views (UAVs), with format conversion from FLOAT to *NORM where appropriate. On other UAVs, the operation is invalid and the call will not reach the driver.
-Clears the depth-stencil resource.
-Pointer to the depth stencil to be cleared.
Identify the type of data to clear (see
Clear the depth buffer with this value. This value will be clamped between 0 and 1.
Clear the stencil buffer with this value.
Differences between Direct3D 9 and Direct3D 11/10: Unlike Direct3D 9, the full extent of the resource view is always cleared. Viewport and scissor settings are not applied.
-Generates mipmaps for the given shader resource.
-A reference to an
You can call GenerateMips on any shader-resource view to generate the lower mipmap levels for the shader resource. GenerateMips uses the largest mipmap level of the view to recursively generate the lower levels of the mip and stops with the smallest level that is specified by the view. If the base resource wasn't created with
Feature levels 9.1, 9.2, and 9.3 can't support automatic generation of mipmaps for 3D (volume) textures.
Video adapters that support feature level 9.1 and higher support generating mipmaps if you use any of these formats:
- - - - - - -
Video adapters that support feature level 9.2 and higher support generating mipmaps if you use any of these formats in addition to any of the formats for feature level 9.1:
- - - - -
Video adapters that support feature level 9.3 and higher support generating mipmaps if you use any of these formats in addition to any of the formats for feature levels 9.1 and 9.2:
- DXGI_FORMAT_B4G4R4A4_UNORM (optional) -
Video adapters that support feature level 10 and higher support generating mipmaps if you use any of these formats in addition to any of the formats for feature levels 9.1, 9.2, and 9.3:
(optional) - - - - - - - - - - - - - - - (optional) -
For all other unsupported formats, GenerateMips will silently fail.
-Sets the minimum level-of-detail (LOD) for a resource.
-A reference to an
The level-of-detail, which ranges between 0 and the maximum number of mipmap levels of the resource. For example, the maximum number of mipmap levels of a 1D texture is specified in the MipLevels member of the
To use a resource with SetResourceMinLOD, you must set the
For Direct3D 10 and Direct3D 10.1, when sampling from a texture resource in a shader, the sampler can define a minimum LOD clamp to force sampling from less detailed mip levels. For Direct3D 11, this functionality is extended from the sampler to the entire resource. Therefore, the application can specify the highest-resolution mip level of a resource that is available for access. This restricts the set of mip levels that are required to be resident in GPU memory, thereby saving memory.
The set of mip levels resident per-resource in GPU memory can be specified by the user.
Minimum LOD affects all of the resident mip levels. Therefore, only the resident mip levels can be updated and read from.
All methods that access texture resources must adhere to minimum LOD clamps.
Empty-set accesses are handled as out-of-bounds cases.
-Gets the minimum level-of-detail (LOD).
-A reference to an
Returns the minimum LOD.
Copy a multisampled resource into a non-multisampled resource.
-Destination resource. Must be created with the
A zero-based index that identifies the destination subresource. Use D3D11CalcSubresource to calculate the index.
Source resource. Must be multisampled.
The source subresource of the source resource.
A
This API is most useful when reusing the resulting render target of one render pass as an input to a second render pass.
The source and destination resources must be the same resource type and have the same dimensions. In addition, they must have compatible formats. There are three scenarios for this:
Scenario | Requirements |
---|---|
Source and destination are prestructured and typed | Both the source and destination must have identical formats and that format must be specified in the Format parameter. |
One resource is prestructured and typed and the other is prestructured and typeless | The typed resource must have a format that is compatible with the typeless resource (i.e. the typed resource is |
Source and destination are prestructured and typeless | Both the source and destination must have the same typeless format (i.e. both must have For example, given the |
-Queues commands from a command list onto a device.
- A reference to an
A Boolean flag that determines whether the target context state is saved prior to and restored after the execution of a command list. Use TRUE to indicate that the runtime needs to save and restore the state. Use
Use this method to play back a command list that was recorded by a deferred context on any thread.
A call to ExecuteCommandList of a command list from a deferred context onto the immediate context is required for the recorded commands to be executed on the graphics processing unit (GPU). A call to ExecuteCommandList of a command list from a deferred context onto another deferred context can be used to merge recorded lists. But to run the commands from the merged deferred command list on the GPU, you need to execute them on the immediate context.
This method performs some runtime validation related to queries. Queries that are begun in a device context cannot be manipulated indirectly by executing a command list (that is, Begin or End was invoked against the same query by the deferred context which generated the command list). If such a condition occurs, the ExecuteCommandList method does not execute the command list. However, the state of the device context is still maintained, as would be expected (
Windows Phone 8: This API is supported.
-Get the rendering predicate state.
-Address of a boolean to fill with the predicate comparison value.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Restore all default settings.
-This method resets any device context to the default settings. This sets all input/output resource slots, shaders, input layouts, predications, scissor rectangles, depth-stencil state, rasterizer state, blend state, sampler state, and viewports to
For a scenario where you would like to clear a list of commands recorded so far, call
Sends queued-up commands in the command buffer to the graphics processing unit (GPU).
-Most applications don't need to call this method. If an application calls this method when not necessary, it incurs a performance penalty. Each call to Flush incurs a significant amount of overhead.
When Microsoft Direct3D state-setting, present, or draw commands are called by an application, those commands are queued into an internal command buffer. Flush sends those commands to the GPU for processing. Typically, the Direct3D runtime sends these commands to the GPU automatically whenever the runtime determines that they need to be sent, such as when the command buffer is full or when an application maps a resource. Flush sends the commands manually.
We recommend that you use Flush when the CPU waits for an arbitrary amount of time (such as when you call the Sleep function).
Because Flush operates asynchronously, it can return either before or after the GPU finishes executing the queued graphics commands. However, the graphics commands eventually always complete. You can call the
Microsoft Direct3D 11 defers the destruction of objects. Therefore, an application can't rely upon objects immediately being destroyed. By calling Flush, you destroy any objects whose destruction was deferred. If an application requires synchronous destruction of an object, we recommend that the application release all its references, call
Gets the type of device context.
-A member of
Gets the initialization flags associated with the current deferred context.
-The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Create a command list and record graphics commands into it.
- A Boolean flag that determines whether the runtime saves deferred context state before it executes FinishCommandList and restores it afterwards. Use TRUE to indicate that the runtime needs to save and restore the state. Use
Upon completion of the method, the passed reference to an
Returns
Create a command list from a deferred context and record commands into it by calling FinishCommandList. Play back a command list with an immediate context by calling
Immediate context state is cleared before and after a command list is executed. A command list has no concept of inheritance. Each call to FinishCommandList will record only the state set since any previous call to FinishCommandList.
For example, the state of a device context is its render state or pipeline state. To retrieve device context state, an application can call
For more information about how to use FinishCommandList, see How to: Record a Command List.
Windows Phone 8: This API is supported.
-The
Bind a single vertex buffer to the input-assembler stage.
-The first input slot for binding. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
A
For information about creating vertex buffers, see Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of vertex buffers to the input-assembler stage.
-The first input slot for binding. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
A reference to an array of
For information about creating vertex buffers, see Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of vertex buffers to the input-assembler stage.
-The first input slot for binding. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
A reference to an array of vertex buffers (see
Pointer to an array of stride values; one stride value for each buffer in the vertex-buffer array. Each stride is the size (in bytes) of the elements that are to be used from that vertex buffer.
Pointer to an array of offset values; one offset value for each buffer in the vertex-buffer array. Each offset is the number of bytes between the first element of a vertex buffer and the first element that will be used.
For information about creating vertex buffers, see Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
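The stride and offset parameters described above determine where each vertex's data begins in a bound buffer; the arithmetic can be sketched as (a hypothetical helper for illustration):

```cpp
#include <cstddef>

// Byte offset of vertex i within a bound vertex buffer: the offset
// passed to IASetVertexBuffers skips to the first usable element, and
// each subsequent vertex advances by one stride.
size_t VertexByteOffset(size_t bufferOffset, size_t stride, size_t i) {
    return bufferOffset + stride * i;
}
```

For a 32-byte vertex format with a 64-byte buffer offset, vertex 3 begins at byte 160.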
-Gets or sets a reference to the input-layout object that is bound to the input-assembler stage.
-For information about creating an input-layout object, see Creating the Input-Layout Object.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Gets or sets information about the primitive type, and data order that describes input data for the input assembler stage.
-Bind an input-layout object to the input-assembler stage.
-A reference to the input-layout object (see
Input-layout objects describe how vertex buffer data is streamed into the IA pipeline stage. To create an input-layout object, call
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of vertex buffers to the input-assembler stage.
-For info about creating vertex buffers, see How to: Create a Vertex Buffer.
Calling this method using a buffer that is currently bound for writing (that is, bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
Windows Phone 8: This API is supported.
-Bind an index buffer to the input-assembler stage.
- A reference to an
A
Offset (in bytes) from the start of the index buffer to the first index to use.
For information about creating index buffers, see How to: Create an Index Buffer.
Calling this method using a buffer that is currently bound for writing (i.e. bound to the stream output pipeline stage) will effectively bind
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
Windows Phone 8: This API is supported.
-Bind information about the primitive type, and data order that describes input data for the input assembler stage.
-The type of primitive and ordering of the primitive data (see D3D11_PRIMITIVE_TOPOLOGY).
Windows Phone 8: This API is supported.
-Get a reference to the input-layout object that is bound to the input-assembler stage.
-A reference to the input-layout object (see
For information about creating an input-layout object, see Creating the Input-Layout Object.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the vertex buffers bound to the input-assembler stage.
-The input slot of the first vertex buffer to get. The first vertex buffer is explicitly bound to the start slot; this causes each additional vertex buffer in the array to be implicitly bound to each subsequent input slot. The maximum of 16 or 32 input slots (ranges from 0 to
The number of vertex buffers to get starting at the offset. The number of buffers (plus the starting slot) cannot exceed the total number of IA-stage input slots.
A reference to an array of vertex buffers returned by the method (see
Pointer to an array of stride values returned by the method; one stride value for each buffer in the vertex-buffer array. Each stride value is the size (in bytes) of the elements that are to be used from that vertex buffer.
Pointer to an array of offset values returned by the method; one offset value for each buffer in the vertex-buffer array. Each offset is the number of bytes between the first element of a vertex buffer and the first element that will be used.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get a reference to the index buffer that is bound to the input-assembler stage.
-A reference to an index buffer returned by the method (see
Specifies format of the data in the index buffer (see
Offset (in bytes) from the start of the index buffer, to the first index to use.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get information about the primitive type, and data order that describes input data for the input assembler stage.
-A reference to the type of primitive, and ordering of the primitive data (see D3D11_PRIMITIVE_TOPOLOGY).
The
Bind one or more render targets atomically and the depth-stencil buffer to the output-merger stage.
-The maximum number of active render targets a device can have active at any given time is set by a #define in D3D11.h called D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT. It is invalid to try to set the same subresource to multiple render target slots. Any render targets not defined by this call are set to
If any subresources are also currently bound for reading in a different stage or writing (perhaps in a different part of the pipeline), those bind points will be set to
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
If the render-target views were created from an array resource type, then all of the render-target views must have the same array size. This restriction also applies to the depth-stencil view; its array size must match that of the render-target views being bound.
The pixel shader must be able to simultaneously render to at least eight separate render targets. All of these render targets must access the same type of resource: Buffer, Texture1D, Texture1DArray, Texture2D, Texture2DArray, Texture3D, or TextureCube. All render targets must have the same size in all dimensions (width and height, and depth for 3D or array size for *Array types). If render targets use multisample anti-aliasing, all bound render targets and depth buffer must be the same form of multisample resource (that is, the sample counts must be the same). Each render target can have a different data format. These render target formats are not required to have identical bit-per-element counts.
Any combination of the eight slots for render targets can have a render target set or not set.
The same resource view cannot be bound to multiple render target slots simultaneously. However, you can set multiple non-overlapping resource views of a single resource as simultaneous multiple render targets.
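The size and sample-count restrictions above can be sketched as a small validation helper. This is plain C++ with illustrative type and function names (not the runtime's actual checks or D3D11 types): all bound targets must match in dimensions and sample count, while formats may differ per target.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-target description; names are illustrative, not D3D11 types.
struct TargetDesc {
    uint32_t width, height;
    uint32_t sampleCount; // multisample count; must match across all targets
    uint32_t format;      // may differ per target
};

// Returns true when the set of render targets satisfies the rules above:
// identical size and sample count in every slot, formats unconstrained.
bool TargetsAreCompatible(const std::vector<TargetDesc>& targets) {
    if (targets.empty()) return true;
    const TargetDesc& first = targets[0];
    for (const TargetDesc& t : targets) {
        if (t.width != first.width || t.height != first.height) return false;
        if (t.sampleCount != first.sampleCount) return false;
        // t.format deliberately not compared: formats may differ.
    }
    return true;
}
```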
-Binds resources to the output-merger stage.
-Number of render-target views (ppRenderTargetViews) and depth-stencil view (ppDepthStencilView) to bind. If you set NumViews to D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL (0xffffffff), this method does not modify the currently bound render-target views (RTVs) and also does not modify depth-stencil view (DSV).
Pointer to an array of
Pointer to a
Index into a zero-based array to begin setting unordered-access views (ranges from 0 to
For the Direct3D 11.1 runtime, which is available starting with Windows Developer Preview, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - 1. D3D11_1_UAV_SLOT_COUNT is defined as 64.
For pixel shaders, UAVStartSlot should be equal to the number of render-target views being bound.
Number of unordered-access views (UAVs) in ppUnorderedAccessView. If you set NumUAVs to D3D11_KEEP_UNORDERED_ACCESS_VIEWS (0xffffffff), this method does not modify the currently bound unordered-access views.
For the Direct3D 11.1 runtime, which is available starting with Windows Developer Preview, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - UAVStartSlot.
Pointer to an array of
An array of append and consume buffer offsets. A value of -1 indicates to keep the current offset. Any other values set the hidden counter for that appendable and consumable UAV. pUAVInitialCounts is relevant only for UAVs that were created with either
For pixel shaders, the render targets and unordered-access views share the same resource slots when being written out. This means that UAVs must be given an offset so that they are placed in the slots after the render target views that are being bound.
Note: RTVs, DSV, and UAVs cannot be set independently; they all need to be set at the same time.
Two RTVs conflict if they share a subresource (and therefore share the same resource).
Two UAVs conflict if they share a subresource (and therefore share the same resource).
An RTV conflicts with a UAV if they share a subresource or share a bind point.
OMSetRenderTargetsAndUnorderedAccessViews operates properly in the following situations:
NumViews != D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL and NumUAVs != D3D11_KEEP_UNORDERED_ACCESS_VIEWS
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews performs the following tasks:
NumViews == D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only UAVs.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppUnorderedAccessView.
OMSetRenderTargetsAndUnorderedAccessViews ignores ppDepthStencilView, and the current depth-stencil view remains bound.
NumUAVs == D3D11_KEEP_UNORDERED_ACCESS_VIEWS
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only RTVs and DSV.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppRenderTargetViews and ppDepthStencilView.
OMSetRenderTargetsAndUnorderedAccessViews ignores UAVStartSlot.
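The slot-sharing rule above (for pixel shaders, UAVs occupy the slots immediately after the bound render targets, within the 64-slot D3D11_1_UAV_SLOT_COUNT budget) can be sketched as a small check. This is an illustrative helper, not runtime code:

```cpp
#include <cstdint>

// Slot budget of the Direct3D 11.1 runtime (D3D11_1_UAV_SLOT_COUNT == 64).
constexpr uint32_t kUavSlotCount = 64;

// For pixel shaders, RTVs and UAVs share one slot array: UAVs must start
// right after the bound render targets. Returns true when the requested
// binding follows that rule and stays inside the slot budget.
bool UavBindingIsValid(uint32_t numRtvs, uint32_t uavStartSlot, uint32_t numUavs) {
    if (uavStartSlot != numRtvs) return false;                // UAVs must follow RTVs
    if (uavStartSlot + numUavs > kUavSlotCount) return false; // slot overflow
    return true;
}
```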
-Binds resources to the output-merger stage.
- Number of render targets to bind (ranges between 0 and
Pointer to an array of
Pointer to a
Index into a zero-based array to begin setting unordered-access views (ranges from 0 to
For the Direct3D 11.1 runtime, which is available starting with Windows 8, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - 1. D3D11_1_UAV_SLOT_COUNT is defined as 64.
For pixel shaders, UAVStartSlot should be equal to the number of render-target views being bound.
Number of unordered-access views (UAVs) in ppUnorderedAccessViews. If you set NumUAVs to D3D11_KEEP_UNORDERED_ACCESS_VIEWS (0xffffffff), this method does not modify the currently bound unordered-access views.
For the Direct3D 11.1 runtime, which is available starting with Windows 8, this value can range from 0 to D3D11_1_UAV_SLOT_COUNT - UAVStartSlot.
Pointer to an array of
An array of append and consume buffer offsets. A value of -1 indicates to keep the current offset. Any other values set the hidden counter for that appendable and consumable UAV. pUAVInitialCounts is relevant only for UAVs that were created with either
For pixel shaders, the render targets and unordered-access views share the same resource slots when being written out. This means that UAVs must be given an offset so that they are placed in the slots after the render target views that are being bound.
Note: RTVs, DSV, and UAVs cannot be set independently; they all need to be set at the same time.
Two RTVs conflict if they share a subresource (and therefore share the same resource).
Two UAVs conflict if they share a subresource (and therefore share the same resource).
An RTV conflicts with a UAV if they share a subresource or share a bind point.
OMSetRenderTargetsAndUnorderedAccessViews operates properly in the following situations:
NumRTVs != D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL and NumUAVs != D3D11_KEEP_UNORDERED_ACCESS_VIEWS
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews performs the following tasks:
NumRTVs == D3D11_KEEP_RENDER_TARGETS_AND_DEPTH_STENCIL
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only UAVs.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppUnorderedAccessViews.
OMSetRenderTargetsAndUnorderedAccessViews ignores ppDepthStencilView, and the current depth-stencil view remains bound.
NumUAVs == D3D11_KEEP_UNORDERED_ACCESS_VIEWS
In this situation, OMSetRenderTargetsAndUnorderedAccessViews binds only RTVs and DSV.
The following conditions must be true for OMSetRenderTargetsAndUnorderedAccessViews to succeed and for the runtime to pass the bind information to the driver:
OMSetRenderTargetsAndUnorderedAccessViews unbinds the following items:
OMSetRenderTargetsAndUnorderedAccessViews binds ppRenderTargetViews and ppDepthStencilView.
OMSetRenderTargetsAndUnorderedAccessViews ignores UAVStartSlot.
Windows Phone 8: This API is supported.
-Set the blend state of the output-merger stage.
-Pointer to a blend-state interface (see
Array of blend factors, one for each RGBA component. The blend factors modulate values for the pixel shader, render target, or both. If you created the blend-state object with
32-bit sample coverage. The default value is 0xffffffff. See remarks.
Blend state is used by the output-merger stage to determine how to blend together two RGB pixel values and two alpha values. The two RGB pixel values and two alpha values are the RGB pixel value and alpha value that the pixel shader outputs and the RGB pixel value and alpha value already in the output render target. The blend option controls the data source that the blending stage uses to modulate values for the pixel shader, render target, or both. The blend operation controls how the blending stage mathematically combines these modulated values.
To create a blend-state interface, call
Passing in
State | Default Value |
---|---|
AlphaToCoverageEnable | FALSE |
IndependentBlendEnable | FALSE |
RenderTarget[0].BlendEnable | FALSE |
RenderTarget[0].SrcBlend | D3D11_BLEND_ONE |
RenderTarget[0].DestBlend | D3D11_BLEND_ZERO |
RenderTarget[0].BlendOp | D3D11_BLEND_OP_ADD |
RenderTarget[0].SrcBlendAlpha | D3D11_BLEND_ONE |
RenderTarget[0].DestBlendAlpha | D3D11_BLEND_ZERO |
RenderTarget[0].BlendOpAlpha | D3D11_BLEND_OP_ADD |
RenderTarget[0].RenderTargetWriteMask | D3D11_COLOR_WRITE_ENABLE_ALL |
A sample mask determines which samples get updated in all the active render targets. The mapping of bits in a sample mask to samples in a multisample render target is the responsibility of an individual application. A sample mask is always applied; it is independent of whether multisampling is enabled, and does not depend on whether an application uses multisample render targets.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
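The modulate-then-combine arithmetic described above, and the per-bit meaning of the sample mask, can be illustrated with plain C++. This is a sketch of the common "alpha blend" configuration (SrcBlend = SRC_ALPHA, DestBlend = INV_SRC_ALPHA, BlendOp = ADD), not the runtime's code:

```cpp
#include <cstdint>

// One color channel under the common "alpha blend" configuration:
// the blend factors modulate the source (pixel-shader) and destination
// (render-target) values, and the blend op combines the results.
float BlendChannel(float src, float dst, float srcAlpha) {
    float srcFactor = srcAlpha;         // D3D11_BLEND_SRC_ALPHA
    float dstFactor = 1.0f - srcAlpha;  // D3D11_BLEND_INV_SRC_ALPHA
    return src * srcFactor + dst * dstFactor; // D3D11_BLEND_OP_ADD
}

// The 32-bit sample mask: bit i controls whether sample i is updated
// in every active render target.
bool SampleEnabled(uint32_t sampleMask, unsigned sampleIndex) {
    return ((sampleMask >> sampleIndex) & 1u) != 0u;
}
```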
-Sets the depth-stencil state of the output-merger stage.
-Pointer to a depth-stencil state interface (see
Reference value to perform against when doing a depth-stencil test. See remarks.
To create a depth-stencil state interface, call
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
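The stencil reference value passed here participates in the stencil test roughly as follows. This is a simplified sketch (stencil read masks and separate front/back-face operations are omitted), with an illustrative enum standing in for a few D3D11_COMPARISON_FUNC values:

```cpp
#include <cstdint>

// Illustrative comparison modes mirroring a few D3D11_COMPARISON_FUNC values.
enum class Cmp { Never, Less, Equal, LessEqual, Always };

// The test passes when "StencilRef <comparison> stored stencil value" holds
// for the comparison function configured in the depth-stencil state.
bool StencilTest(Cmp func, uint8_t stencilRef, uint8_t stored) {
    switch (func) {
        case Cmp::Never:     return false;
        case Cmp::Less:      return stencilRef < stored;
        case Cmp::Equal:     return stencilRef == stored;
        case Cmp::LessEqual: return stencilRef <= stored;
        case Cmp::Always:    return true;
    }
    return false;
}
```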
-Get references to the resources bound to the output-merger stage.
-Number of render targets to retrieve.
Pointer to an array of
Pointer to a
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get references to the resources bound to the output-merger stage.
-The number of render-target views to retrieve.
Pointer to an array of
Pointer to a
Index into a zero-based array to begin retrieving unordered-access views (ranges from 0 to
Number of unordered-access views to return in ppUnorderedAccessViews. This number ranges from 0 to
Pointer to an array of
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
-Gets the depth-stencil state of the output-merger stage.
- Address of a reference to a depth-stencil state interface (see
Pointer to the stencil reference value used in the depth-stencil test.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
-The
All scissor rects must be set atomically as one operation. Any scissor rects not defined by the call are disabled.
The scissor rectangles will only be used if ScissorEnable is set to true in the rasterizer state (see
Which scissor rectangle to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader (see shader semantic syntax). If a geometry shader does not make use of the SV_ViewportArrayIndex semantic then Direct3D will use the first scissor rectangle in the array.
Each scissor rectangle in the array corresponds to a viewport in an array of viewports (see
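As an illustration of scissor semantics, here is a minimal containment check, assuming the usual Windows RECT convention that left/top are inclusive and right/bottom exclusive (the struct name is illustrative, not D3D11_RECT itself):

```cpp
// Minimal scissor containment check mirroring D3D11_RECT-style semantics:
// left/top inclusive, right/bottom exclusive. Illustrative only.
struct Rect { long left, top, right, bottom; };

bool PixelPassesScissor(const Rect& r, long x, long y) {
    return x >= r.left && x < r.right && y >= r.top && y < r.bottom;
}
```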
All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
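To show how a bound viewport is consumed, here is a sketch of the standard NDC-to-screen viewport transform, where x and y in [-1, 1] map to pixels and screen y grows downward (the struct and function names are illustrative, not the D3D11_VIEWPORT type):

```cpp
// Illustrative viewport; MinDepth/MaxDepth omitted for brevity.
struct Viewport { float topLeftX, topLeftY, width, height; };

// Map normalized device coordinates to screen pixels.
float NdcToScreenX(const Viewport& vp, float xNdc) {
    return vp.topLeftX + (xNdc + 1.0f) * 0.5f * vp.width;
}
float NdcToScreenY(const Viewport& vp, float yNdc) {
    // +y in NDC points up; screen y grows downward.
    return vp.topLeftY + (1.0f - yNdc) * 0.5f * vp.height;
}
```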
Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
- If you call Map on a deferred context, you can only pass
For info about how to use Map, see How to: Use dynamic resources.
-Set the rasterizer state for the rasterizer stage of the pipeline.
-To create a rasterizer state interface, call
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Bind an array of viewports to the rasterizer stage of the pipeline.
-Number of viewports to bind.
An array of
All viewports must be set atomically as one operation. Any viewports not defined by the call are disabled.
Which viewport to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader; if a geometry shader does not specify the semantic, Direct3D will use the first viewport in the array.
Bind an array of scissor rectangles to the rasterizer stage.
Number of scissor rectangles to bind.
An array of scissor rectangles (see D3D11_RECT).
All scissor rects must be set atomically as one operation. Any scissor rects not defined by the call are disabled.
The scissor rectangles will only be used if ScissorEnable is set to true in the rasterizer state (see D3D11_RASTERIZER_DESC).
Which scissor rectangle to use is determined by the SV_ViewportArrayIndex semantic output by a geometry shader (see shader semantic syntax). If a geometry shader does not make use of the SV_ViewportArrayIndex semantic then Direct3D will use the first scissor rectangle in the array.
Each scissor rectangle in the array corresponds to a viewport in an array of viewports (see RSSetViewports).
Windows Phone 8: This API is supported.
Gets a reference to the data contained in a subresource, and denies the GPU access to that subresource.
If you call Map on a deferred context, you can only pass D3D11_MAP_WRITE_DISCARD, D3D11_MAP_WRITE_NO_OVERWRITE, or both to the MapType parameter.
For info about how to use Map, see How to: Use dynamic resources.
Gets the array of viewports bound to the rasterizer stage.
A reference to a variable that, on input, specifies the number of viewports (ranges from 0 to D3D11_VIEWPORT_AND_SCISSORRECT_OBJECT_COUNT_PER_PIPELINE) in the pViewports array; on output, the variable contains the actual number of viewports that are bound to the rasterizer stage. If pViewports is NULL, the variable is filled with the number of viewports currently bound.
An array of D3D11_VIEWPORT structures for the viewports that are bound to the rasterizer stage.
Windows Phone 8: This API is supported.
Get the array of scissor rectangles bound to the rasterizer stage.
The number of scissor rectangles (ranges between 0 and D3D11_VIEWPORT_AND_SCISSORRECT_OBJECT_COUNT_PER_PIPELINE) bound; set pRects to NULL to use pNumRects to see how many scissor rectangles are currently bound.
An array of scissor rectangles (see D3D11_RECT). If NumRects is greater than the number of scissor rects currently bound, then unused members of the array will contain 0.
Set the target output buffers for the stream-output stage of the pipeline.
The number of buffers to bind to the device. A maximum of four output buffers can be set. If less than four are defined by the call, the remaining buffer slots are set to NULL.
The array of output buffers (see ID3D11Buffer) to bind to the device.
Array of offsets to the output buffers from ppSOTargets, one offset for each buffer. The offset values must be in bytes.
An offset of -1 will cause the stream output buffer to be appended, continuing after the last location written to the buffer in a previous stream output pass.
Calling this method using a buffer that is currently bound for writing will effectively bind NULL instead, because a buffer cannot be bound as both an input and an output at the same time.
The debug layer will generate a warning whenever a resource is prevented from being bound simultaneously as an input and an output, but this will not prevent invalid data from being used by the runtime.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
Windows Phone 8: This API is supported.
Get the target output buffers for the stream-output stage of the pipeline.
Number of buffers to get.
An array of output buffers (see ID3D11Buffer) to be retrieved.
A maximum of four output buffers can be retrieved.
The offsets to the output buffers pointed to in the returned ppSOTargets array may be assumed to be -1 (append), as defined for use in SOSetTargets.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
Windows Phone 8: This API is supported.
The device context interface represents a device context; it is used to render commands.
Copies a region from a source resource to a destination resource.
A reference to the destination resource.
Destination subresource index.
The x-coordinate of the upper-left corner of the destination region.
The y-coordinate of the upper-left corner of the destination region. For a 1D subresource, this must be zero.
The z-coordinate of the upper-left corner of the destination region. For a 1D or 2D subresource, this must be zero.
A reference to the source resource.
Source subresource index.
A reference to a 3D box that defines the region of the source subresource that CopySubresourceRegion1 can copy. If NULL, CopySubresourceRegion1 copies the entire source subresource.
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, CopySubresourceRegion1 doesn't perform a copy operation.
A D3D11_COPY_FLAGS-typed value that specifies how to perform the copy operation.
If the display driver supports overlapping, the source and destination subresources can be identical, and the source and destination regions can overlap each other. For existing display drivers that don't support overlapping, the runtime drops calls with identical source and destination subresources, regardless of whether the regions overlap. To determine whether the display driver supports overlapping, check the CopyWithOverlap member of D3D11_FEATURE_DATA_D3D11_OPTIONS.
The CPU copies data from memory to a subresource created in non-mappable memory.
A reference to the destination resource.
A zero-based index that identifies the destination subresource. See D3D11CalcSubresource for more details.
A reference to a box that defines the portion of the destination subresource to copy the resource data into. Coordinates are in bytes for buffers and in texels for textures. If NULL, the data is written to the destination subresource with no offset.
An empty box results in a no-op. A box is empty if the top value is greater than or equal to the bottom value, or the left value is greater than or equal to the right value, or the front value is greater than or equal to the back value. When the box is empty, UpdateSubresource1 doesn't perform an update operation.
A reference to the source data in memory.
The size of one row of the source data.
The size of one depth slice of source data.
A D3D11_COPY_FLAGS-typed value that specifies how to perform the update operation.
If you call UpdateSubresource1 to update a constant buffer, pass any region, and the driver has not been implemented for Windows 8, the runtime drops the call (except feature level 9.1, 9.2, and 9.3 where the runtime emulates support). The runtime also drops the call if you update a constant buffer with a partial region whose extent is not aligned to 16-byte granularity (16 bytes being a full constant). When the runtime drops the call, the runtime doesn't call the corresponding device driver interface (DDI).
When you record a call to UpdateSubresource with an offset pDstBox in a software command list, the offset in pDstBox is incorrectly applied to pSrcData when you play back the command list. The new-for-Windows 8 UpdateSubresource1 fixes this issue. In a call to UpdateSubresource1, pDstBox does not affect pSrcData.
For info about various resource types and how UpdateSubresource1 might work with each resource type, see Introduction to a Resource in Direct3D 11.
Discards a resource from the device context.
A reference to the ID3D11Resource interface for the resource to discard.
DiscardResource informs the graphics processing unit (GPU) that the existing content in the resource that pResource points to is no longer needed.
Discards a resource view from the device context.
A reference to the ID3D11View interface for the resource view to discard.
DiscardView informs the graphics processing unit (GPU) that the existing content in the resource view that pResourceView points to is no longer needed. The view can be an SRV, RTV, UAV, or DSV. DiscardView is a variation on the DiscardResource method. DiscardView allows you to discard a subset of a resource that is in a view (such as a single miplevel). More importantly, DiscardView provides a convenience because often views are what are being bound and unbound at the pipeline. Some pipeline bindings do not have views, such as stream output. In that situation, DiscardResource can do the job for any resource.
Sets the constant buffers that the vertex shader pipeline stage uses.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - 1).
Number of buffers to set (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - StartSlot).
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to VSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to VSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are NULL, you get the same result as setting the full buffers.
If either pFirstConstant or pNumConstants is NULL, the other parameter must also be NULL.
Sets the constant buffers that the hull-shader stage of the pipeline uses.
The runtime drops the call to HSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to HSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are NULL, you get the same result as setting the full buffers.
If either pFirstConstant or pNumConstants is NULL, the other parameter must also be NULL.
Sets the constant buffers that the domain-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - 1).
Number of buffers to set (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - StartSlot).
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to DSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to DSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are NULL, you get the same result as setting the full buffers.
If either pFirstConstant or pNumConstants is NULL, the other parameter must also be NULL.
Sets the constant buffers that the geometry shader pipeline stage uses.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - 1).
Number of buffers to set (ranges from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - StartSlot).
Array of constant buffers (see ID3D11Buffer) being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to GSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to GSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If the pFirstConstant and pNumConstants arrays are NULL, you get the same result as setting the full buffers.
If either pFirstConstant or pNumConstants is NULL, the other parameter must also be NULL.
Sets the constant buffers that the geometry shader pipeline stage uses.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to GSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to GSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader?s point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the geometry shader pipeline stage uses.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to GSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to GSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the pixel shader pipeline stage uses, and enables the shader to access other parts of the buffer.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
The runtime drops the call to PSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders. The maximum constant buffer size that is supported by shaders holds 4096 constants, where each constant has four 32-bit components.
The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the following window (range):
[value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]
That is, the window is the range from (value in an element of pFirstConstant) to (value in an element of pFirstConstant + value in an element of pNumConstants).
The runtime also drops the call to PSSetConstantBuffers1 on existing drivers that do not support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the pixel shader pipeline stage uses, and enables the shader to access other parts of the buffer.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
The runtime drops the call to PSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders. The maximum constant buffer size that is supported by shaders holds 4096 constants, where each constant has four 32-bit components.
The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the following window (range):
[value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]
That is, the window is the range from (value in an element of pFirstConstant) to (value in an element of pFirstConstant + value in an element of pNumConstants).
The runtime also drops the call to PSSetConstantBuffers1 on existing drivers that do not support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the pixel shader pipeline stage uses, and enables the shader to access other parts of the buffer.
Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers being given to the device.
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
The runtime drops the call to PSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders. The maximum constant buffer size that is supported by shaders holds 4096 constants, where each constant has four 32-bit components.
The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the following window (range):
[value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]
That is, the window is the range from (value in an element of pFirstConstant) to (value in an element of pFirstConstant + value in an element of pNumConstants).
The runtime also drops the call to PSSetConstantBuffers1 on existing drivers that do not support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the compute-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to CSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to CSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the compute-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to CSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to CSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Sets the constant buffers that the compute-shader stage uses.
Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
An array that holds the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 16 indicates that the start of the associated constant buffer is 256 bytes into the constant buffer. Each offset must be a multiple of 16 constants.
An array that holds the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. Each number of constants must be a multiple of 16 constants, in the range [0..4096].
The runtime drops the call to CSSetConstantBuffers1 if the number of constants to which pNumConstants points is larger than the maximum constant buffer size that is supported by shaders (4096 constants). The values in the elements of the pFirstConstant and pFirstConstant + pNumConstants arrays can exceed the length of each buffer; from the shader's point of view, the constant buffer is the intersection of the actual memory allocation for the buffer and the window [value in an element of pFirstConstant, value in an element of pFirstConstant + value in an element of pNumConstants]. The runtime also drops the call to CSSetConstantBuffers1 on existing drivers that don't support this offsetting.
The runtime will emulate this feature for feature level 9.1, 9.2, and 9.3; therefore, this feature is supported for feature level 9.1, 9.2, and 9.3. This feature is always available on new drivers for feature level 10 and higher.
From the shader's point of view, element [0] in the constant buffers array is the constant at pFirstConstant.
Out of bounds access to the constant buffers from the shader to the range that is defined by pFirstConstant and pNumConstants returns 0.
If pFirstConstant and pNumConstants arrays are
If either pFirstConstant or pNumConstants is
Gets the constant buffers that the vertex shader pipeline stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the hull-shader stage uses.
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the domain-shader stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the geometry shader pipeline stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the pixel shader pipeline stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Gets the constant buffers that the compute-shader stage uses.
Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references to be returned by the method.
A reference to an array that receives the offsets into the buffers that ppConstantBuffers specifies. Each offset specifies where, from the shader's point of view, each constant buffer starts. Each offset is measured in shader constants, which are 16 bytes (4*32-bit components). Therefore, an offset of 2 indicates that the start of the associated constant buffer is 32 bytes into the constant buffer. The runtime sets pFirstConstant to
A reference to an array that receives the numbers of constants in the buffers that ppConstantBuffers specifies. Each number specifies the number of constants that are contained in the constant buffer that the shader uses. Each number of constants starts from its respective offset that is specified in the pFirstConstant array. The runtime sets pNumConstants to
If no buffer is bound at a slot, pFirstConstant and pNumConstants are
Activates the given context state object and changes the current device behavior to Direct3D 11, Direct3D 10.1, or Direct3D 10.
A reference to the
A reference to a variable that receives a reference to the
SwapDeviceContextState changes device behavior. This device behavior depends on the emulated interface that you passed to the EmulatedInterface parameter of the
SwapDeviceContextState is not supported on a deferred context.
SwapDeviceContextState disables the incompatible device interfaces ID3D10Device, ID3D10Device1, __uuidof(
or __uuidof(
turns off most of the Direct3D 10 device interfaces. A context state object that is created with __uuidof(ID3D10Device1)
or __uuidof(ID3D10Device)
turns off most of the
SwapDeviceContextState activates the context state object specified by pState. This means that the device behaviors that are associated with the context state object's feature level and compatible interface are activated on the Direct3D device until the next call to SwapDeviceContextState. In addition, any state that was saved when this context state object was last active is now reactivated, so that the previous state is replaced.
SwapDeviceContextState sets ppPreviousState to the most recently activated context state object. The object allows the caller to save and then later restore the previous device state. This behavior is useful in a plug-in architecture such as Direct2D that shares a Direct3D device with its plug-ins. A Direct2D interface can use context state objects to save and restore the application's state.
If the caller did not previously call the
The feature level that is specified by the application, and that is chosen by the context state object from the acceptable list that the application supplies to
The feature level of the context state object controls the functionality available from the immediate context. However, to maintain the free-threaded contract of the Direct3D 11 device methods (the resource-creation methods in particular), the upper-bound feature level of all created context state objects controls the set of resources that the device creates.
Because the context state object interface is published by the immediate context, the interface requires the same threading model as the immediate context. Specifically, SwapDeviceContextState is single-threaded with respect to the other immediate context methods and with respect to the equivalent methods of ID3D10Device.
Crucially, because only one of the Direct3D 10 or Direct3D 11 ref-count behaviors can be available at a time, one of the Direct3D 10 and Direct3D 11 interfaces must break its ref-count contract. To avoid this situation, the activation of a context state object turns off the incompatible version interface. Also, if you call a method of an incompatible version interface, the call silently fails if the method has return type void, returns an
When you switch from Direct3D 11 mode to either Direct3D 10 mode or Direct3D 10.1 mode, the binding behavior of the device changes. Specifically, the final release of a resource induces unbind in Direct3D 10 mode or Direct3D 10.1 mode. During final release an application releases all of the resource's references, including indirect references such as the linkage from view to resource, and the linkage from context state object to any of the context state object's bound resources. Any bound resource to which the application has no reference is unbound and destroyed, in order to maintain the Direct3D 10 behavior.
SwapDeviceContextState does not affect any state that
Command lists that are generated by deferred contexts do not hold a reference to context state objects and are not affected by future updates to context state objects.
No asynchronous objects are affected by SwapDeviceContextState. For example, if a query is active before a call to SwapDeviceContextState, it is still active after the call.
Sets all the elements in a resource view to one value.
A reference to the
A 4-component array that represents the color to use to clear the resource view.
An array of D3D11_RECT structures for the rectangles in the resource view to clear. If
Number of rectangles in the array that the pRect parameter specifies.
ClearView works only on render-target views (RTVs), depth/stencil views (DSVs) on depth-only resources (resources with no stencil component), unordered-access views (UAVs), or any video view of a Texture2D surface. The runtime drops invalid calls. Empty rectangles in the pRect array are a no-op. A rectangle is empty if the top value equals the bottom value or the left value equals the right value.
ClearView doesn't support 3D textures.
ClearView applies the same color value to all array slices in a view; all rectangles in the pRect array correspond to each array slice. The pRect array of rectangles is a set of areas to clear on a single surface. If the view is an array, ClearView clears all the rectangles on each array slice individually.
When you apply rectangles to buffers, set the top value to 0 and the bottom value to 1, and set the left and right values to describe the extent within the buffer. When the top value equals the bottom value or the left value equals the right value, the rectangle is empty and the clear is a no-op.
The driver converts and clamps color values to the destination format as appropriate per Direct3D conversion rules. For example, if the format of the view is
If the format is integer, such as
Here are the color mappings:
For video views with YUV or YCbCr formats, ClearView doesn't convert color values. In situations where the format name doesn't indicate _UNORM, _UINT, and so on, ClearView assumes _UINT. Therefore, 235.0f maps to 235 (rounding toward zero; out-of-range/INF values clamp to the target range, and NaN maps to 0).
Discards the specified elements in a resource view from the device context.
A reference to the
An array of D3D11_RECT structures for the rectangles in the resource view to discard. If
Number of rectangles in the array that the pRects parameter specifies.
DiscardView1 informs the graphics processing unit (GPU) that the existing content in the specified elements in the resource view that pResourceView points to is no longer needed. The view can be an SRV, RTV, UAV, or DSV. DiscardView1 is a variation on the DiscardResource method; it allows you to discard a subset of the elements of a resource that is in a view (such as the elements of a single mip level). More importantly, DiscardView1 is convenient because views are often what are bound to and unbound from the pipeline. Some pipeline bindings, such as stream output, do not have views; in that situation, DiscardResource can do the job for any resource.
The device context interface represents a device context; it is used to render commands.
Allows apps to determine when either a capture or profiling request is enabled.
Returns TRUE if the capture tool is present and capturing, or the app is being profiled such that SetMarkerInt or BeginEventInt will be logged to ETW. Otherwise, it returns
If apps detect that capture is being performed, they can prevent the Direct3D debugging tools, such as Microsoft Visual Studio 2013, from capturing them. The purpose of the
Updates mappings of tile locations in tiled resources to memory locations in a tile pool.
A reference to the tiled resource.
The number of tiled resource regions.
An array of
An array of
A reference to the tile pool.
The number of tile-pool ranges.
An array of
An array of offsets into the tile pool. These are 0-based tile offsets, counting in tiles (not bytes).
An array of tiles.
An array of values that specify the number of tiles in each tile-pool range. The NumRanges parameter specifies the number of values in the array.
A combination of D3D11_TILE_MAPPING_FLAGS values that are combined by using a bitwise OR operation.
Returns
The debug layer will emit an error.
If an out-of-memory error occurs when this is called in a command list and the command list is executed, the device will be removed. Apps can avoid this situation by only making update calls that change existing mappings from tiled resources within command lists (so drivers will not have to allocate page-table memory, only change the mapping).
In a single call to UpdateTileMappings, you can map one or more ranges of resource tiles to one or more ranges of tile-pool tiles.
You can organize the parameters of UpdateTileMappings in these ways to perform an update:
If pTiledResourceRegionStartCoordinates isn't
The updates are applied from first region to last; so, if regions overlap in a single call, the updates later in the list overwrite the areas that overlap with previous updates.
NumRanges specifies the number of tile ranges, where the total tiles identified across all ranges must match the total number of tiles in the tile regions from the previously described tiled resource. Mappings are defined by iterating through the tiles in the tile regions in sequential order - x then y then z order for box regions - while walking through the set of tile ranges in sequential order. The breakdown of tile regions doesn't have to line up with the breakdown of tile ranges, but the total number of tiles on both sides must be equal so that each tiled resource tile specified has a mapping specified.
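The region-to-range matching rule above can be sketched with a small helper. This is an illustrative one-dimensional model, not the D3D11 API; the function name and shape are assumptions:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical sketch: expand UpdateTileMappings-style tile ranges into one
// tile-pool offset per resource tile. Each range r contributes tileCounts[r]
// consecutive tiles starting at startOffsets[r]. The total tiles across all
// ranges must equal the number of tiles in the tile regions, or the mapping
// is ill-formed.
std::vector<uint32_t> ExpandTileRanges(uint32_t totalRegionTiles,
                                       const std::vector<uint32_t>& startOffsets,
                                       const std::vector<uint32_t>& tileCounts)
{
    std::vector<uint32_t> perTileOffset;
    for (std::size_t r = 0; r < tileCounts.size(); ++r)
        for (uint32_t t = 0; t < tileCounts[r]; ++t)
            perTileOffset.push_back(startOffsets[r] + t);

    if (perTileOffset.size() != totalRegionTiles)
        throw std::invalid_argument("range tile total must match region tile total");
    return perTileOffset;
}
```

For example, a 4-tile region served by the ranges (offset 0, count 3) and (offset 8, count 1) maps its tiles, in region-walk order, to pool tiles 0, 1, 2, 8.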
pRangeFlags, pTilePoolStartOffsets, and pRangeTileCounts are all arrays, of size NumRanges, that describe the tile ranges. If pRangeFlags is
If tile mappings have changed on a tiled resource that the app will render via a RenderTargetView or DepthStencilView, the app must clear, by using the fixed-function Clear APIs, the tiles that have changed within the area being rendered (mapped or not). If an app doesn't clear in these situations, the app receives undefined values when it reads from the tiled resource.
Note: In Direct3D 11.2, hardware can support ClearView on depth-only formats. If an app needs to preserve the existing memory contents of areas in a tiled resource where mappings have changed, the app can first save the contents where tile mappings have changed (by copying them to a temporary surface, for example using CopyTiles), issue the required Clear, and then copy the contents back.
Suppose a tile is mapped into multiple tiled resources at the same time and the tile's contents are manipulated by any means (render, copy, and so on) via one of the tiled resources. If the same tile is then to be rendered via any other tiled resource, the tile must first be cleared as previously described.
For more info about tiled resources, see Tiled resources.
Here are some examples of common UpdateTileMappings cases:
-Copies mappings from a source tiled resource to a destination tiled resource.
-A reference to the destination tiled resource.
A reference to a
A reference to the source tiled resource.
A reference to a
A reference to a
A combination of D3D11_TILE_MAPPING_FLAGS values that are combined by using a bitwise OR operation. The only valid value is
Returns
The destination and source regions must each fit entirely within their resource, or behavior is undefined (the debug layer will emit an error).
If an out-of-memory condition occurs when this method is called in a command list and the command list is being executed, the device will be removed. Applications can avoid this situation by issuing only update calls that change existing mappings from tiled resources within command lists (so drivers will not have to allocate page-table memory, only change the mapping).
CopyTileMappings helps with tasks such as shifting mappings around within and across tiled resources, for example, scrolling tiles. The source and destination regions can overlap; the result of the copy in this situation is as if the source was saved to a temp location and then from there written to the destination.
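The overlap semantics described above (copy as if through a temporary) can be modeled with a few lines. This is an illustrative model of the behavior, not the D3D API; the names are assumptions:

```cpp
#include <cstddef>
#include <vector>

// Illustrative model of CopyTileMappings overlap semantics: the source span
// is first captured to a temporary, then written to the destination, so
// overlapping source and destination regions behave as if the source had
// been saved elsewhere before the write.
void CopyMappingsOverlapSafe(std::vector<int>& mappings,
                             std::size_t srcStart, std::size_t dstStart,
                             std::size_t count)
{
    std::vector<int> temp(mappings.begin() + srcStart,
                          mappings.begin() + srcStart + count);
    for (std::size_t i = 0; i < count; ++i)
        mappings[dstStart + i] = temp[i];
}
```

This is the same design choice `memmove` makes relative to `memcpy`: correctness for overlapping regions at the cost of a potential intermediate copy.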
For more info about tiled resources, see Tiled resources.
-Copies tiles from buffer to tiled resource or vice versa.
-A reference to a tiled resource.
A reference to a
A reference to a
A reference to an
The offset in bytes into the buffer at pBuffer to start the operation.
A combination of
CopyTiles drops write operations to unmapped areas and handles read operations from unmapped areas (except on Tier_1 tiled resources, where reading and writing unmapped areas is invalid).
If a copy operation involves writing to the same memory location multiple times because multiple locations in the destination resource are mapped to the same tile memory, the resulting write operations to multi-mapped tiles are non-deterministic and non-repeatable; that is, accesses to the tile memory happen in whatever order the hardware happens to execute the copy operation.
The tiles involved in the copy operation can't include tiles that contain packed mipmaps or results of the copy operation are undefined. To transfer data to and from mipmaps that the hardware packs into one tile, you must use the standard (that is, non-tile specific) copy and update APIs (like
The memory layout of the tiles in the non-tiled buffer resource side of the copy operation is linear in memory within 64 KB tiles, which the hardware and driver swizzle and deswizzle per tile as appropriate when they transfer to and from a tiled resource. For multisample antialiasing (MSAA) surfaces, the hardware and driver traverse each pixel's samples in sample-index order before they move to the next pixel. For tiles that are partially filled on the right side (for a surface that has a width that is not a multiple of the tile width in pixels), the pitch and stride to move down a row is the full size in bytes of the number of pixels that would fit across the tile if the tile were full. So, there can be a gap between each row of pixels in memory. Mipmaps that are smaller than a tile are not packed together in the linear layout, which might seem to be a waste of memory space, but as mentioned you can't use CopyTiles or
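The partial-tile pitch rule above can be made concrete with a small sketch, assuming the 128 x 128-texel tile shape that 64 KB tiles have for 32-bit-per-texel formats (the constants and names here are illustrative, not API values):

```cpp
#include <cstdint>

// Assumed tile shape: 64 KB tiles hold 128 x 128 texels at 4 bytes/texel.
constexpr uint32_t kTileRowTexels = 128;

// The row pitch within a tile is always the full tile row in bytes,
// regardless of how many texels in the row actually carry data.
uint32_t TileRowPitchBytes(uint32_t bytesPerTexel)
{
    return kTileRowTexels * bytesPerTexel;
}

// For a partially filled tile, the unused tail of each row is a gap.
uint32_t TileRowGapBytes(uint32_t validTexelsInRow, uint32_t bytesPerTexel)
{
    return (kTileRowTexels - validTexelsInRow) * bytesPerTexel;
}
```

So a tile whose rows hold only 100 valid 4-byte texels still uses a 512-byte row pitch, leaving a 112-byte gap at the end of each row.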
For more info about tiled resources, see Tiled resources.
-Updates tiles by copying from app memory to the tiled resource.
-A reference to a tiled resource to update.
A reference to a
A reference to a
A reference to memory that contains the source tile data that UpdateTiles uses to update the tiled resource.
A combination of
UpdateTiles drops write operations to unmapped areas (except on Tier_1 tiled resources, where writing to unmapped areas is invalid).
If a copy operation involves writing to the same memory location multiple times because multiple locations in the destination resource are mapped to the same tile memory, the resulting write operations to multi-mapped tiles are non-deterministic and non-repeatable; that is, accesses to the tile memory happen in whatever order the hardware happens to execute the copy operation.
The tiles involved in the copy operation can't include tiles that contain packed mipmaps or results of the copy operation are undefined. To transfer data to and from mipmaps that the hardware packs into one tile, you must use the standard (that is, non-tile specific) copy and update APIs (like
The memory layout of the data on the source side of the copy operation is linear in memory within 64 KB tiles, which the hardware and driver swizzle and deswizzle per tile as appropriate when they transfer to and from a tiled resource. For multisample antialiasing (MSAA) surfaces, the hardware and driver traverse each pixel's samples in sample-index order before they move to the next pixel. For tiles that are partially filled on the right side (for a surface that has a width that is not a multiple of the tile width in pixels), the pitch and stride to move down a row is the full size in bytes of the number of pixels that would fit across the tile if the tile were full. So, there can be a gap between each row of pixels in memory. Mipmaps that are smaller than a tile are not packed together in the linear layout, which might seem to be a waste of memory space, but as mentioned you can't use
For more info about tiled resources, see Tiled resources.
-Resizes a tile pool.
-A reference to an
The new size in bytes of the tile pool. The size must be a multiple of 64 KB or 0.
Returns
For E_INVALIDARG or E_OUTOFMEMORY, the existing tile pool remains unchanged, which includes existing mappings.
ResizeTilePool increases or decreases the size of the tile pool depending on whether the app needs more or less working set for the tiled resources that are mapped into it. An app can allocate additional tile pools for new tiled resources, but if any single tiled resource needs more space than initially available in its tile pool, the app can increase the size of the resource's tile pool. A tiled resource can't have mappings into multiple tile pools simultaneously.
When you increase the size of a tile pool, additional tiles are added to the end of the tile pool via one or more new allocations by the driver; your app can't detect the breakdown into the new allocations. Existing memory in the tile pool is left untouched, and existing tiled resource mappings into that memory remain intact.
When you decrease the size of a tile pool, tiles are removed from the end (this is allowed even below the initial allocation size, down to 0). This means that new mappings can't be made past the new size. But, existing mappings past the end of the new size remain intact and useable. The memory is kept active as long as mappings to any part of the allocations that are being used for the tile pool memory remains. If after decreasing, some memory has been kept active because tile mappings are pointing to it and the tile pool is increased again (by any amount), the existing memory is reused first before any additional allocations occur to service the size of the increase.
To be able to save memory, an app has to not only decrease a tile pool but also remove and remap existing mappings past the end of the new smaller tile pool size.
The act of decreasing (and removing mappings) doesn't necessarily produce immediate memory savings. Freeing of memory depends on how granular the driver's underlying allocations for the tile pool are. When a decrease in the size of a tile pool happens to be enough to make a driver allocation unused, the driver can free the allocation. If a tile pool was increased and if you then decrease to previous sizes (and remove and remap tile mappings correspondingly), you will most likely yield memory savings. But, this scenario isn't guaranteed in the case that the sizes don't exactly align with the underlying allocation sizes chosen by the driver.
For more info about tiled resources, see Tiled resources.
-Specifies a data access ordering constraint between multiple tiled resources. For more info about this constraint, see Remarks.
-A reference to an
A reference to an
Apps can use tiled resources to reuse tiles in different resources. But, a device and driver might not be able to determine whether some memory in a tile pool that was just rendered to is now being used for reading.
For example, an app can render to some tiles in a tile pool with one tiled resource but then read from the same tiles by using a different tiled resource. These tiled-resource operations are different from using one resource and then just switching from writing with
When an app transitions from accessing (reading or writing) some location in a tile pool with one resource to accessing the same memory (read or write) via another tiled resource (with mappings to the same memory), the app must call TiledResourceBarrier after the first use of the resource and before the second. The parameters are pTiledResourceOrViewAccessBeforeBarrier for accesses before the barrier (via rendering, copying) and pTiledResourceOrViewAccessAfterBarrier for accesses after the barrier by using the same tile-pool memory. If the resources are identical, the app doesn't need to call TiledResourceBarrier because this kind of hazard is already tracked and handled.
The barrier call informs the driver that operations issued to the resource before the call must complete before any accesses that occur after the call via a different tiled resource that shares the same memory.
Either or both of the parameters (before or after the barrier) can be
An app can pass a view reference, a resource, or
For more info about tiled resources, see Tiled resources.
-Allows apps to determine when either a capture or profiling request is enabled.
-Returns TRUE if capture or profiling is enabled and
Returns TRUE if the capture tool is present and capturing or the app is being profiled such that SetMarkerInt or BeginEventInt will be logged to ETW. Otherwise, it returns
If apps detect that capture is being performed, they can prevent the Direct3D debugging tools, such as Microsoft Visual Studio 2013, from capturing them. The purpose of the
Allows applications to annotate graphics commands.
-An optional string that will be logged to ETW when ETW logging is active. If '#d' appears in the string, it will be replaced by the value of the Data parameter, similar to the way printf works.
A signed data value that will be logged to ETW when ETW logging is active.
SetMarkerInt allows applications to annotate graphics commands, in order to provide more context for what the GPU is executing. When ETW logging (or a supported tool) is enabled, an additional marker is correlated between the CPU and GPU timelines. The pLabel and Data values are logged to ETW. When the appropriate ETW logging is not enabled, this method does nothing.
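The '#d' substitution described for these labels behaves like a simple printf-style replacement. The following is a hypothetical illustration of that rule, not code from the runtime:

```cpp
#include <string>

// Illustrative sketch of the '#d' substitution described for
// SetMarkerInt/BeginEventInt: every "#d" in the label is replaced
// with the decimal value of Data.
std::string ExpandMarkerLabel(std::string label, int data)
{
    const std::string token = "#d";
    const std::string value = std::to_string(data);
    std::string::size_type pos = 0;
    while ((pos = label.find(token, pos)) != std::string::npos) {
        label.replace(pos, token.size(), value);
        pos += value.size();
    }
    return label;
}
```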
-Allows applications to annotate the beginning of a range of graphics commands.
-An optional string that will be logged to ETW when ETW logging is active. If ?#d? appears in the string, it will be replaced by the value of the Data parameter similar to the way printf works.
A signed data value that will be logged to ETW when ETW logging is active.
BeginEventInt allows applications to annotate the beginning of a range of graphics commands, in order to provide more context to what the GPU is executing. When ETW logging (or a supported tool) is enabled, an additional marker is correlated between the CPU and GPU timelines. The pLabel and Data value are logged to ETW. When the appropriate ETW logging is not enabled, this method does nothing.
-Allows applications to annotate the end of a range of graphics commands.
-EndEvent allows applications to annotate the end of a range of graphics commands, in order to provide more context to what the GPU is executing. When the appropriate ETW logging is not enabled, this method does nothing. When ETW logging is enabled, an additional marker is correlated between the CPU and GPU timelines.
- The device context interface represents a device context; it is used to render commands.
Gets or sets whether hardware protection is enabled.
-Sends queued-up commands in the command buffer to the graphics processing unit (GPU), with a specified context type and an optional event handle to create an event query.
- A
An optional event handle. When specified, this method creates an event query.
Flush1 operates asynchronously, therefore it can return either before or after the GPU finishes executing the queued graphics commands, which will eventually complete. To create an event query, you can call
Flush1 has parameters. For more information, see
Sets the hardware protection state.
-Specifies whether to enable hardware protection.
Gets whether hardware protection is enabled.
- After this method returns, points to a
A debug interface controls debug settings, validates pipeline state and can only be used if the debug layer is turned on.
- This interface is obtained by querying it from the
For more information about the debug layer, see Debug Layer.
Windows Phone 8: This API is supported.
-Gets or sets the number of milliseconds to sleep after
Value is set with
Gets or sets the swap chain that the runtime will use for automatically calling
The swap chain retrieved by this method will only be used if
Set a bit field of flags that will turn debug features on and off.
-A combination of feature-mask flags that are combined by using a bitwise OR operation. If a flag is present, that feature will be set to on, otherwise the feature will be set to off. For descriptions of the feature-mask flags, see Remarks.
This method returns one of the Direct3D 11 Return Codes.
Setting one of the following feature-mask flags will cause a rendering-operation method (listed below) to do some extra task when called.
Application will wait for the GPU to finish processing the rendering operation before continuing.
Runtime will additionally call
Runtime will call
These feature-mask flags apply to the following rendering-operation methods:
By setting one of the following feature-mask flags, you can control the behavior of the
When you call
When you call
The behavior of the
The following flag is supported by the Direct3D 11.1 runtime.
Disables the following default debugging behavior.
When the debug layer is enabled, it performs certain actions to reveal application problems. By setting the
The following flag is supported by the Direct3D 11.2 runtime.
Disables the following default debugging behavior.
By default (that is, without
If
Get a bitfield of flags that indicates which debug features are on or off.
-Mask of feature-mask flags bitwise ORed together. If a flag is present, then that feature will be set to on, otherwise the feature will be set to off. See
Set the number of milliseconds to sleep after
This method returns one of the following Direct3D 11 Return Codes.
The application will only sleep if
Get the number of milliseconds to sleep after
Number of milliseconds to sleep after Present is called.
Value is set with
Sets a swap chain that the runtime will use for automatically calling
This method returns one of the following Direct3D 11 Return Codes.
The swap chain set by this method will only be used if
Get the swap chain that the runtime will use for automatically calling
This method returns one of the following Direct3D 11 Return Codes.
The swap chain retrieved by this method will only be used if
Check to see if the draw pipeline state is valid.
-A reference to the
This method returns one of the following Direct3D 11 Return Codes.
Use validate prior to calling a draw method (for example,
Report information about a device object's lifetime.
-A value from the
This method returns one of the following Direct3D 11 Return Codes.
ReportLiveDeviceObjects uses the value in Flags to determine the amount of information to report about a device object's lifetime.
-Verifies whether the dispatch pipeline state is valid.
-A reference to the
This method returns one of the return codes described in the topic Direct3D 11 Return Codes.
Use this method before you call a dispatch method (for example,
A domain-shader interface manages an executable program (a domain shader) that controls the domain-shader stage.
-The domain-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a domain-shader interface, call
This interface is defined in D3D11.h.
-The device context interface represents a device context; it is used to render commands.
Optional flags that control the behavior of
Specifies the type of Microsoft Direct3D authenticated channel.
-Direct3D 11 channel. This channel provides communication with the Direct3D runtime.
Software driver channel. This channel provides communication with a driver that implements content protection mechanisms in software.
Hardware driver channel. This channel provides communication with a driver that implements content protection mechanisms in the GPU hardware.
Specifies the type of process that is identified in the
Identifies how to bind a resource to the pipeline.
-In general, binding flags can be combined using a logical OR (except the constant-buffer flag); however, you should use a single flag to allow the device to optimize the resource usage.
This enumeration is used by a:
A shader-resource buffer is NOT a constant buffer; rather, it is a texture or buffer resource that is bound to a shader and contains texture or buffer data (it is not limited to a single element type in the buffer). A shader-resource buffer is created with the
Bind a buffer as a vertex buffer to the input-assembler stage.
Bind a buffer as an index buffer to the input-assembler stage.
Bind a buffer as a constant buffer to a shader stage; this flag may NOT be combined with any other bind flag.
Bind a buffer or texture to a shader stage; this flag cannot be used with the
Bind an output buffer for the stream-output stage.
Bind a texture as a render target for the output-merger stage.
Bind a texture as a depth-stencil target for the output-merger stage.
Bind an unordered access resource.
Set this flag to indicate that a 2D texture is used to receive output from the decoder API. The common way to create resources for a decoder output is by calling the
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that a 2D texture is used to receive input from the video encoder API. The common way to create resources for a video encoder is by calling the
Direct3D 11: This value is not supported until Direct3D 11.1.
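The rule that bind flags combine with OR except for the constant-buffer flag can be sketched as a small validity check. The enumerator values below are the D3D11_BIND_FLAG values from d3d11.h; the enum and function names are illustrative:

```cpp
#include <cstdint>

// D3D11_BIND_FLAG values (from d3d11.h), renamed here for illustration.
enum BindFlag : uint32_t {
    BindVertexBuffer    = 0x1,
    BindIndexBuffer     = 0x2,
    BindConstantBuffer  = 0x4,
    BindShaderResource  = 0x8,
    BindStreamOutput    = 0x10,
    BindRenderTarget    = 0x20,
    BindDepthStencil    = 0x40,
    BindUnorderedAccess = 0x80,
};

// Sketch of the rule stated above: bind flags may be OR'd together,
// except that the constant-buffer flag must appear alone.
bool IsValidBindCombination(uint32_t flags)
{
    if (flags & BindConstantBuffer)
        return flags == BindConstantBuffer;
    return true;
}
```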
RGB or alpha blending operation.
-The runtime implements RGB blending and alpha blending separately. Therefore, blend state requires separate blend operations for RGB data and alpha data. These blend operations are specified in a blend description. The two sources (source 1 and source 2) are shown in the blending block diagram.
Blend state is used by the output-merger stage to determine how to blend together two RGB pixel values and two alpha values. The two RGB pixel values and two alpha values are the RGB pixel value and alpha value that the pixel shader outputs and the RGB pixel value and alpha value already in the output render target. The blend option controls the data source that the blending stage uses to modulate values for the pixel shader, render target, or both. The blend operation controls how the blending stage mathematically combines these modulated values.
-Add source 1 and source 2.
Subtract source 1 from source 2.
Subtract source 2 from source 1.
Find the minimum of source 1 and source 2.
Find the maximum of source 1 and source 2.
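The five operations above can be written out directly. This sketch applies one operation per channel to the two pre-modulated sources; it is illustrative (names assumed), not the runtime's implementation:

```cpp
#include <algorithm>

// D3D11_BLEND_OP semantics as listed above, applied per channel to the
// two sources after their blend factors have been applied.
enum class BlendOp { Add, Subtract, RevSubtract, Min, Max };

float ApplyBlendOp(BlendOp op, float src1, float src2)
{
    switch (op) {
    case BlendOp::Add:         return src1 + src2;
    case BlendOp::Subtract:    return src2 - src1; // subtract source 1 from source 2
    case BlendOp::RevSubtract: return src1 - src2; // subtract source 2 from source 1
    case BlendOp::Min:         return std::min(src1, src2);
    case BlendOp::Max:         return std::max(src1, src2);
    }
    return 0.0f;
}
```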
Blend factors, which modulate values for the pixel shader and render target.
-Blend operations are specified in a blend description.
-The blend factor is (0, 0, 0, 0). No pre-blend operation.
The blend factor is (1, 1, 1, 1). No pre-blend operation.
The blend factor is (Rs, Gs, Bs, As), that is, color data (RGB) from a pixel shader. No pre-blend operation.
The blend factor is (1 - Rs, 1 - Gs, 1 - Bs, 1 - As), that is, color data (RGB) from a pixel shader. The pre-blend operation inverts the data, generating 1 - RGB.
The blend factor is (As, As, As, As), that is, alpha data (A) from a pixel shader. No pre-blend operation.
The blend factor is (1 - As, 1 - As, 1 - As, 1 - As), that is, alpha data (A) from a pixel shader. The pre-blend operation inverts the data, generating 1 - A.
The blend factor is (Ad, Ad, Ad, Ad), that is, alpha data from a render target. No pre-blend operation.
The blend factor is (1 - Ad, 1 - Ad, 1 - Ad, 1 - Ad), that is, alpha data from a render target. The pre-blend operation inverts the data, generating 1 - A.
The blend factor is (Rd, Gd, Bd, Ad), that is, color data from a render target. No pre-blend operation.
The blend factor is (1 - Rd, 1 - Gd, 1 - Bd, 1 - Ad), that is, color data from a render target. The pre-blend operation inverts the data, generating 1 - RGB.
The blend factor is (f, f, f, 1), where f = min(As, 1 - Ad). The pre-blend operation clamps the data to 1 or less.
The blend factor is the blend factor set with
The blend factor is the blend factor set with
The blend factor is data sources both as color data output by a pixel shader. There is no pre-blend operation. This blend factor supports dual-source color blending.
The blend factor is data sources both as color data output by a pixel shader. The pre-blend operation inverts the data, generating 1 - RGB. This blend factor supports dual-source color blending.
The blend factor is data sources as alpha data output by a pixel shader. There is no pre-blend operation. This blend factor supports dual-source color blending.
The blend factor is data sources as alpha data output by a pixel shader. The pre-blend operation inverts the data, generating 1 - A. This blend factor supports dual-source color blending.
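A few of the factors above can be written out, computed from the source color (Rs, Gs, Bs, As) and the destination alpha (Ad). The names are illustrative, not API identifiers:

```cpp
#include <algorithm>
#include <array>

using RGBA = std::array<float, 4>;

// Blend factor (Rs, Gs, Bs, As): source color, no pre-blend operation.
RGBA FactorSrcColor(const RGBA& src) { return src; }

// Blend factor (1 - Rs, 1 - Gs, 1 - Bs, 1 - As): inverted source color.
RGBA FactorInvSrcColor(const RGBA& src)
{
    return {1.0f - src[0], 1.0f - src[1], 1.0f - src[2], 1.0f - src[3]};
}

// Blend factor (f, f, f, 1) where f = min(As, 1 - Ad): the pre-blend
// operation clamps the data to 1 or less.
RGBA FactorSrcAlphaSat(const RGBA& src, float destAlpha)
{
    const float f = std::min(src[3], 1.0f - destAlpha);
    return {f, f, f, 1.0f};
}
```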
Specifies the type of I/O bus that is used by the graphics adapter.
-Indicates a type of bus other than the types listed here.
PCI bus.
PCI-X bus.
PCI Express bus.
Accelerated Graphics Port (AGP) bus.
The implementation for the graphics adapter is in a motherboard chipset's north bridge. This flag implies that data never goes over an expansion bus (such as PCI or AGP) when it is transferred from main memory to the graphics adapter.
Indicates that the graphics adapter is connected to a motherboard chipset's north bridge by tracks on the motherboard, and all of the graphics adapter's chips are soldered to the motherboard. This flag implies that data never goes over an expansion bus (such as PCI or AGP) when it is transferred from main memory to the graphics adapter.
The graphics adapter is connected to a motherboard chipset's north bridge by tracks on the motherboard, and all of the graphics adapter's chips are connected through sockets to the motherboard.
The graphics adapter is connected to the motherboard through a daughterboard connector.
The graphics adapter is connected to the motherboard through a daughterboard connector, and the graphics adapter is inside an enclosure that is not user accessible.
One of the D3D11_BUS_IMPL_MODIFIER_Xxx flags is set.
Identifies how to check multisample quality levels.
-Indicates to check the multisample quality levels of a tiled resource.
Identify which components of each pixel of a render target are writable during blending.
-These flags can be combined with a bitwise OR.
-Allow data to be stored in the red component.
Allow data to be stored in the green component.
Allow data to be stored in the blue component.
Allow data to be stored in the alpha component.
Allow data to be stored in all components.
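These flags combine with bitwise OR, and "all components" is simply the OR of the four channel bits. The values below are the D3D11_COLOR_WRITE_ENABLE values from d3d11.h; the names and helper are illustrative:

```cpp
#include <cstdint>

// D3D11_COLOR_WRITE_ENABLE values (from d3d11.h).
enum ColorWriteEnable : uint8_t {
    WriteRed   = 1,
    WriteGreen = 2,
    WriteBlue  = 4,
    WriteAlpha = 8,
    WriteAll   = WriteRed | WriteGreen | WriteBlue | WriteAlpha, // == 15
};

// True if the given channel bit is set in the write mask.
bool ChannelWritable(uint8_t mask, ColorWriteEnable channel)
{
    return (mask & channel) != 0;
}
```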
Comparison options.
-A comparison option determines how the runtime compares source (new) data against destination (existing) data before storing the new data. The comparison option is declared in a description before an object is created. The API allows you to set a comparison option for a depth-stencil buffer (see
Never pass the comparison.
If the source data is less than the destination data, the comparison passes.
If the source data is equal to the destination data, the comparison passes.
If the source data is less than or equal to the destination data, the comparison passes.
If the source data is greater than the destination data, the comparison passes.
If the source data is not equal to the destination data, the comparison passes.
If the source data is greater than or equal to the destination data, the comparison passes.
Always pass the comparison.
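The eight options above map directly onto relational operators applied to the source (new) and destination (existing) values. This sketch is illustrative; the enum and function names are assumptions:

```cpp
// D3D11_COMPARISON_FUNC semantics as listed above: the source (new)
// value is tested against the destination (existing) value.
enum class ComparisonFunc {
    Never, Less, Equal, LessEqual, Greater, NotEqual, GreaterEqual, Always
};

bool Compare(ComparisonFunc func, float src, float dst)
{
    switch (func) {
    case ComparisonFunc::Never:        return false;
    case ComparisonFunc::Less:         return src <  dst;
    case ComparisonFunc::Equal:        return src == dst;
    case ComparisonFunc::LessEqual:    return src <= dst;
    case ComparisonFunc::Greater:      return src >  dst;
    case ComparisonFunc::NotEqual:     return src != dst;
    case ComparisonFunc::GreaterEqual: return src >= dst;
    case ComparisonFunc::Always:       return true;
    }
    return false;
}
```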
Unordered resource support options for a compute shader resource (see
Identifies whether conservative rasterization is on or off.
-Conservative rasterization is off.
Conservative rasterization is on.
Specifies if the hardware and driver support conservative rasterization and at what tier level.
-Conservative rasterization isn't supported.
Tier_1 conservative rasterization is supported.
Tier_2 conservative rasterization is supported.
Tier_3 conservative rasterization is supported.
Contains flags that describe content-protection capabilities.
-The content protection is implemented in software by the driver.
The content protection is implemented in hardware by the GPU.
Content protection is always applied to a protected surface, regardless of whether the application explicitly enables protection.
The driver can use partially encrypted buffers. If this capability is not present, the entire buffer must be either encrypted or clear.
The driver can encrypt data using a separate content key that is encrypted using the session key.
The driver can refresh the session key without renegotiating the key.
The driver can read back encrypted data from a protected surface. For more information, see
The driver requires a separate key to read encrypted data from a protected surface.
If the encryption type is D3DCRYPTOTYPE_AES128_CTR, the application must use a sequential count in the
The driver supports encrypted slice data, but does not support any other encrypted data in the compressed buffer. The caller should not encrypt any data within the buffer other than the slice data.
Note: The driver should only report this flag for the specific profiles that have this limitation.
The driver can copy encrypted data from one resource to another, decrypting the data as part of the process.
The hardware supports the protection of specific resources. This means that:
Note: This enumeration value is supported starting with Windows 10.
Physical pages of a protected resource can be evicted and potentially paged to disk in low memory conditions without losing the contents of the resource when paged back in.
Note: This enumeration value is supported starting with Windows 10.
The hardware supports an automatic teardown mechanism that could trigger hardware keys or protected content to become lost in some conditions. The application can register to be notified when these events occur.
Note: This enumeration value is supported starting with Windows 10.
The secure environment is tightly coupled with the GPU and an
Note: This enumeration value is supported starting with Windows 10.
Specifies the context in which a query occurs.
-This enum is used by the following:
The query can occur in all contexts.
The query occurs in the context of a 3D command queue.
The query occurs in the context of a 3D compute queue.
The query occurs in the context of a 3D copy queue.
The query occurs in the context of video.
Specifies how to handle the existing contents of a resource during a copy or update operation of a region within that resource.
-The existing contents of the resource cannot be overwritten.
The existing contents of the resource are undefined and can be discarded.
Options for performance counters.
-Independent hardware vendors may define their own set of performance counters for their devices, by giving the enumeration value a number that is greater than the value for
This enumeration is used by
Define a performance counter that is dependent on the hardware device.
Data type of a performance counter.
-These flags are an output parameter in
32-bit floating point.
16-bit unsigned integer.
32-bit unsigned integer.
64-bit unsigned integer.
Specifies the types of CPU access allowed for a resource.
-This enumeration is used in
Applications may combine one or more of these flags with a logical OR. When possible, create resources with no CPU access flags, as this enables better resource optimization.
The
The resource is to be mappable so that the CPU can change its contents. Resources created with this flag cannot be set as outputs of the pipeline and must be created with either dynamic or staging usage (see
The resource is to be mappable so that the CPU can read its contents. Resources created with this flag cannot be set as either inputs or outputs to the pipeline and must be created with staging usage (see
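The usage constraints stated above (write access requires dynamic or staging usage; read access requires staging usage) can be sketched as a validity check. The flag values are the D3D11_CPU_ACCESS_FLAG values from d3d11.h; the enum and function names are illustrative:

```cpp
#include <cstdint>

// D3D11_CPU_ACCESS_FLAG values (from d3d11.h).
enum CpuAccessFlag : uint32_t {
    CpuAccessWrite = 0x10000, // D3D11_CPU_ACCESS_WRITE
    CpuAccessRead  = 0x20000, // D3D11_CPU_ACCESS_READ
};

enum class Usage { Default, Immutable, Dynamic, Staging };

// Sketch of the constraints stated above: CPU read access requires
// staging usage; CPU write access requires dynamic or staging usage.
bool IsValidCpuAccess(Usage usage, uint32_t cpuAccess)
{
    if (cpuAccess & CpuAccessRead)
        return usage == Usage::Staging;
    if (cpuAccess & CpuAccessWrite)
        return usage == Usage::Dynamic || usage == Usage::Staging;
    return true; // no CPU access: best for resource optimization
}
```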
Describes flags that are used to create a device context state object (
Represents the status of an
Indicates triangles facing a particular direction are not drawn.
-This enumeration is part of a rasterizer-state object description (see
Always draw all triangles.
Do not draw triangles that are front-facing.
Do not draw triangles that are back-facing.
Specifies the parts of the depth stencil to clear.
- These flags are used when calling
Clear the depth buffer, using fast clear if possible, then place the resource in a compressed state.
Clear the stencil buffer, using fast clear if possible, then place the resource in a compressed state.
Specifies how to access a resource used in a depth-stencil view.
-This enumeration is used in
The resource will be accessed as a 1D texture.
The resource will be accessed as an array of 1D textures.
The resource will be accessed as a 2D texture.
The resource will be accessed as an array of 2D textures.
The resource will be accessed as a 2D texture with multisampling.
The resource will be accessed as an array of 2D textures with multisampling.
Depth-stencil view options.
-This enumeration is used by
Limiting a depth-stencil buffer to read-only access allows more than one depth-stencil view to be bound to the pipeline simultaneously, since it is not possible to have a read/write conflicts between separate views.
-Indicates that depth values are read only.
Indicates that stencil values are read only.
Identify the portion of a depth-stencil buffer for writing depth data.
-Turn off writes to the depth-stencil buffer.
Turn on writes to the depth-stencil buffer.
Device context options.
-This enumeration is used by
The device context is an immediate context.
The device context is a deferred context.
Describes parameters that are used to create a device.
-Device creation flags are used by
An application might dynamically create (and destroy) threads to improve performance especially on a machine with multiple CPU cores. There may be cases, however, when an application needs to prevent extra threads from being created. This can happen when you want to simplify debugging, profile code or develop a tool for instance. For these cases, use
Use this flag if your application will only call methods of Direct3D 11 interfaces from a single thread. By default, the
Creates a device that supports the debug layer.
To use this flag, you must have D3D11_1SDKLayers.dll installed; otherwise, device creation fails. To get D3D11_1SDKLayers.dll, install the SDK for Windows 8.
Prevents multiple threads from being created. When this flag is used with a Windows Advanced Rasterization Platform (WARP) device, no additional threads will be created by WARP and all rasterization will occur on the calling thread. This flag is not recommended for general use. See remarks.
Creates a device that supports BGRA formats (
Causes the device and driver to keep information that you can use for shader debugging. The exact impact from this flag will vary from driver to driver.
To use this flag, you must have D3D11_1SDKLayers.dll installed; otherwise, device creation fails. The created device supports the debug layer. To get D3D11_1SDKLayers.dll, install the SDK for Windows 8.
If you use this flag and the current driver does not support shader debugging, device creation fails. Shader debugging requires a driver that is implemented to the WDDM for Windows 8 (WDDM 1.2).
Direct3D 11: This value is not supported until Direct3D 11.1.
Causes the Direct3D runtime to ignore registry settings that turn on the debug layer. You can turn on the debug layer by using the DirectX Control Panel that was included as part of the DirectX SDK. We shipped the last version of the DirectX SDK in June 2010; you can download it from the Microsoft Download Center. You can set this flag in your app, typically in release builds only, to prevent end users from using the DirectX Control Panel to monitor how the app uses Direct3D.
Note: You can also set this flag in your app to prevent Direct3D debugging tools, such as Visual Studio Ultimate 2012, from hooking your app. Windows 8.1: This flag doesn't prevent Visual Studio 2013 and later running on Windows 8.1 and later from hooking your app; instead use
Direct3D 11: This value is not supported until Direct3D 11.1.
Use this flag if the device will produce GPU workloads that take more than two seconds to complete, and you want the operating system to allow them to successfully finish. If this flag is not set, the operating system performs timeout detection and recovery when it detects a GPU packet that took more than two seconds to execute. If this flag is set, the operating system allows such a long running packet to execute without resetting the GPU. We recommend not to set this flag if your device needs to be highly responsive so that the operating system can detect and recover from GPU timeouts. We recommend to set this flag if your device needs to perform time consuming background tasks such as compute, image recognition, and video encoding to allow such tasks to successfully finish.
Direct3D 11: This value is not supported until Direct3D 11.1.
Forces the creation of the Direct3D device to fail if the display driver is not implemented to the WDDM for Windows 8 (WDDM 1.2). When the display driver is not implemented to WDDM 1.2, only a Direct3D device that is created with feature level 9.1, 9.2, or 9.3 supports video; therefore, if this flag is set, the runtime creates the Direct3D device only for feature level 9.1, 9.2, or 9.3. We recommend not to specify this flag for applications that want to favor Direct3D capability over video. If feature level 10 and higher is available, the runtime will use that feature level regardless of video support.
If this flag is set, device creation on the Basic Render Device (BRD) will succeed regardless of the BRD's missing support for video decode. This is because the Media Foundation video stack operates in software mode on BRD. In this situation, if you force the video stack to create the Direct3D device twice (create the device once with this flag, next discover BRD, then again create the device without the flag), you actually degrade performance.
If you attempt to create a Direct3D device with driver type
Direct3D 11: This value is not supported until Direct3D 11.1.
Direct3D 11 feature options.
- This enumeration is used when querying a driver about support for these features by calling
The driver supports multithreading. To see an example of testing a driver for multithread support, see How To: Check for Driver Support. Refer to
Supports the use of the double-precision shaders in HLSL. Refer to
Supports the formats in
Supports the formats in
Supports compute shaders and raw and structured buffers. Refer to
Supports Direct3D 11.1 feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports specific adapter architecture. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports Direct3D 9 feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports minimum precision of shaders. For more info about HLSL minimum precision, see using HLSL minimum precision. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports the Direct3D 9 shadowing feature. Refer to
Direct3D 11: This value is not supported until Direct3D 11.1.
Supports Direct3D 11.2 feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 11.2 instancing options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 11.2 marker options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 9 feature options, which include the Direct3D 9 shadowing feature and instancing support. Refer to
Direct3D 11: This value is not supported until Direct3D 11.2.
Supports Direct3D 11.3 conservative rasterization feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.3.
Supports Direct3D 11.4 conservative rasterization feature options. Refer to
Direct3D 11: This value is not supported until Direct3D 11.4.
Supports GPU virtual addresses. Refer to
Supports a single boolean for NV12 shared textures. Refer to
Direct3D 11: This value is not supported until Direct3D 11.4.
Device context options.
-This enumeration is used by
The device context is an immediate context.
The device context is a deferred context.
Determines the fill mode to use when rendering triangles.
-This enumeration is part of a rasterizer-state object description (see
Draw lines connecting the vertices. Adjacent vertices are not drawn.
Fill the triangles formed by the vertices. Adjacent vertices are not drawn.
Filtering options during texture sampling.
-During texture sampling, one or more texels are read and combined (this is called filtering) to produce a single value. Point sampling reads a single texel while linear sampling reads two texels (endpoints) and linearly interpolates a third value between the endpoints.
HLSL texture-sampling functions also support comparison filtering during texture sampling. Comparison filtering compares each sampled texel against a comparison value. The boolean result is blended the same way that normal texture filtering is blended.
You can use HLSL intrinsic texture-sampling functions that implement texture filtering only or companion functions that use texture filtering with comparison filtering.
Texture Sampling Function | Texture Sampling Function with Comparison Filtering |
---|---|
sample | samplecmp or samplecmplevelzero |
Comparison filters only work with textures that have the following DXGI formats: R32_FLOAT_X8X24_TYPELESS, R32_FLOAT, R24_UNORM_X8_TYPELESS, R16_UNORM.
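The compare-then-blend order described above can be sketched in portable C (illustrative helper names, not a D3D11 API): each texel of a 2x2 footprint is compared against the comparison value, and the boolean 0/1 results are blended with the same bilinear weights that ordinary linear filtering would use.

```c
#include <assert.h>

/* Compare one texel against the reference value; the sampled result
 * is a boolean expressed as 0.0 or 1.0. The less-equal test here is
 * an assumption: the actual comparison function is configurable. */
static float cmp_less_equal(float texel, float reference)
{
    return reference <= texel ? 1.0f : 0.0f;
}

/* t00..t11: the 2x2 texel footprint; fx, fy: fractional position
 * inside the footprint; reference: the comparison value. */
float sample_cmp_linear(float t00, float t10, float t01, float t11,
                        float fx, float fy, float reference)
{
    float r00 = cmp_less_equal(t00, reference);
    float r10 = cmp_less_equal(t10, reference);
    float r01 = cmp_less_equal(t01, reference);
    float r11 = cmp_less_equal(t11, reference);
    float top    = r00 + (r10 - r00) * fx; /* blend along u */
    float bottom = r01 + (r11 - r01) * fx;
    return top + (bottom - top) * fy;      /* blend along v */
}
```

Note the comparisons happen before the blend, so the result is a fractional pass/fail coverage value rather than a filtered depth.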
-Use point sampling for minification, magnification, and mip-level sampling.
Use point sampling for minification and magnification; use linear interpolation for mip-level sampling.
Use point sampling for minification; use linear interpolation for magnification; use point sampling for mip-level sampling.
Use point sampling for minification; use linear interpolation for magnification and mip-level sampling.
Use linear interpolation for minification; use point sampling for magnification and mip-level sampling.
Use linear interpolation for minification; use point sampling for magnification; use linear interpolation for mip-level sampling.
Use linear interpolation for minification and magnification; use point sampling for mip-level sampling.
Use linear interpolation for minification, magnification, and mip-level sampling.
Use anisotropic interpolation for minification, magnification, and mip-level sampling.
Use point sampling for minification, magnification, and mip-level sampling. Compare the result to the comparison value.
Use point sampling for minification and magnification; use linear interpolation for mip-level sampling. Compare the result to the comparison value.
Use point sampling for minification; use linear interpolation for magnification; use point sampling for mip-level sampling. Compare the result to the comparison value.
Use point sampling for minification; use linear interpolation for magnification and mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification; use point sampling for magnification and mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification; use point sampling for magnification; use linear interpolation for mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification and magnification; use point sampling for mip-level sampling. Compare the result to the comparison value.
Use linear interpolation for minification, magnification, and mip-level sampling. Compare the result to the comparison value.
Use anisotropic interpolation for minification, magnification, and mip-level sampling. Compare the result to the comparison value.
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Fetch the same set of texels as
Specifies the type of sampler filter reduction.
- This enum is used by the
Indicates standard (default) filter reduction.
Indicates a comparison filter reduction.
Indicates minimum filter reduction.
Indicates maximum filter reduction.
Types of magnification or minification sampler filters.
-Point filtering used as a texture magnification or minification filter. The texel with coordinates nearest to the desired pixel value is used. The texture filter to be used between mipmap levels is nearest-point mipmap filtering. The rasterizer uses the color from the texel of the nearest mipmap texture.
Bilinear interpolation filtering used as a texture magnification or minification filter. A weighted average of a 2 x 2 area of texels surrounding the desired pixel is used. The texture filter to use between mipmap levels is trilinear mipmap interpolation. The rasterizer linearly interpolates pixel color, using the texels of the two nearest mipmap textures.
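The weighted average over a 2 x 2 texel area can be sketched as two lerps along u followed by one along v (illustrative helper names, not part of the API):

```c
#include <assert.h>

/* Linear interpolation between two values. */
static float lerp1(float a, float b, float t) { return a + (b - a) * t; }

/* Bilinear filter of a 2x2 texel neighborhood; fx, fy are the
 * fractional sample position inside the footprint. */
float bilinear(float t00, float t10, float t01, float t11,
               float fx, float fy)
{
    return lerp1(lerp1(t00, t10, fx),  /* top row along u */
                 lerp1(t01, t11, fx),  /* bottom row along u */
                 fy);                  /* between rows along v */
}
```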
Which resources are supported for a given format and given device (see
Type of data contained in an input slot.
-Use these values to specify the type of data for a particular input element (see
Input data is per-vertex data.
Input data is per-instance data.
Specifies logical operations to configure for a render target.
-Clears the render target.
Sets the render target.
Copies the render target.
Performs an inverted-copy of the render target.
No operation is performed on the render target.
Inverts the render target.
Performs a logical AND operation on the render target.
Performs a logical NAND operation on the render target.
Performs a logical OR operation on the render target.
Performs a logical NOR operation on the render target.
Performs a logical XOR operation on the render target.
Performs a logical equal operation on the render target.
Performs a logical AND and reverse operation on the render target.
Performs a logical AND and invert operation on the render target.
Performs a logical OR and reverse operation on the render target.
Performs a logical OR and invert operation on the render target.
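The bitwise semantics of the operations above can be summarized in a sketch (enumerant names are illustrative C identifiers mirroring the order of the descriptions; s = source, d = destination, following the standard raster-op definitions):

```c
#include <assert.h>
#include <stdint.h>

typedef enum {
    LOGIC_OP_CLEAR, LOGIC_OP_SET, LOGIC_OP_COPY, LOGIC_OP_COPY_INVERTED,
    LOGIC_OP_NOOP, LOGIC_OP_INVERT, LOGIC_OP_AND, LOGIC_OP_NAND,
    LOGIC_OP_OR, LOGIC_OP_NOR, LOGIC_OP_XOR, LOGIC_OP_EQUIV,
    LOGIC_OP_AND_REVERSE, LOGIC_OP_AND_INVERTED,
    LOGIC_OP_OR_REVERSE, LOGIC_OP_OR_INVERTED
} logic_op;

/* Apply a render-target logic op to source s and destination d. */
uint32_t apply_logic_op(logic_op op, uint32_t s, uint32_t d)
{
    switch (op) {
    case LOGIC_OP_CLEAR:         return 0;          /* all zeros */
    case LOGIC_OP_SET:           return ~0u;        /* all ones */
    case LOGIC_OP_COPY:          return s;
    case LOGIC_OP_COPY_INVERTED: return ~s;
    case LOGIC_OP_NOOP:          return d;          /* leave target alone */
    case LOGIC_OP_INVERT:        return ~d;
    case LOGIC_OP_AND:           return s & d;
    case LOGIC_OP_NAND:          return ~(s & d);
    case LOGIC_OP_OR:            return s | d;
    case LOGIC_OP_NOR:           return ~(s | d);
    case LOGIC_OP_XOR:           return s ^ d;
    case LOGIC_OP_EQUIV:         return ~(s ^ d);   /* logical equal */
    case LOGIC_OP_AND_REVERSE:   return s & ~d;
    case LOGIC_OP_AND_INVERTED:  return ~s & d;
    case LOGIC_OP_OR_REVERSE:    return s | ~d;
    case LOGIC_OP_OR_INVERTED:   return ~s | d;
    }
    return d;
}
```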
Specifies how the CPU should respond when an application calls the
This enumeration is used by
Identifies a resource to be accessed for reading and writing by the CPU. Applications may combine one or more of these flags.
-This enumeration is used in
These remarks are divided into the following topics:
Resource is mapped for reading. The resource must have been created with read access (see
Resource is mapped for writing. The resource must have been created with write access (see
Resource is mapped for reading and writing. The resource must have been created with read and write access (see
Resource is mapped for writing; the previous contents of the resource will be undefined. The resource must have been created with write access and dynamic usage (See
Resource is mapped for writing; the existing contents of the resource cannot be overwritten (see Remarks). This flag is only valid on vertex and index buffers. The resource must have been created with write access (see
Categories of debug messages. This will identify the category of a message when retrieving a message with
This is part of the Information Queue feature. See
Debug message severity levels for an information queue.
-Use these values to allow or deny message categories to pass through the storage and retrieval filters for an information queue (see
Defines some type of corruption which has occurred.
Defines an error message.
Defines a warning message.
Defines an information message.
Defines a message other than corruption, error, warning, or information.
Direct3D 11: This value is not supported until Direct3D 11.1.
Flags that describe miscellaneous query behavior.
-This flag is part of a query description (see
Tell the hardware that if it is not yet sure if something is hidden or not to draw it anyway. This is only used with an occlusion predicate. Predication data cannot be returned to your application via
Query types.
- Create a query with
Determines whether or not the GPU is finished processing commands. When the GPU is finished processing commands
Get the number of samples that passed the depth and stencil tests in between
Get a timestamp value where
Determines whether or not a
Get pipeline statistics, such as the number of pixel shader invocations in between
Similar to
Get streaming output statistics, such as the number of primitives streamed out in between
Determines whether or not any of the streaming output buffers overflowed in between
Get streaming output statistics for stream 0, such as the number of primitives streamed out in between
Determines whether or not the stream 0 output buffers overflowed in between
Get streaming output statistics for stream 1, such as the number of primitives streamed out in between
Determines whether or not the stream 1 output buffers overflowed in between
Get streaming output statistics for stream 2, such as the number of primitives streamed out in between
Determines whether or not the stream 2 output buffers overflowed in between
Get streaming output statistics for stream 3, such as the number of primitives streamed out in between
Determines whether or not the stream 3 output buffers overflowed in between
These flags identify the type of resource that will be viewed as a render target.
-This enumeration is used in
Do not use this value, as it will cause
The resource will be accessed as a buffer.
The resource will be accessed as a 1D texture.
The resource will be accessed as an array of 1D textures.
The resource will be accessed as a 2D texture.
The resource will be accessed as an array of 2D textures.
The resource will be accessed as a 2D texture with multisampling.
The resource will be accessed as an array of 2D textures with multisampling.
The resource will be accessed as a 3D texture.
Options for the amount of information to report about a device object's lifetime.
- This enumeration is used by
Several inline functions exist to combine the options using operators, see the D3D11SDKLayers.h header file for details.
-Specifies to obtain a summary about a device object's lifetime.
Specifies to obtain detailed information about a device object's lifetime.
Do not use this enumeration constant. It is for internal use only.
Identifies the type of resource being used.
-This enumeration is used in
Resource is of unknown type.
Resource is a buffer.
Resource is a 1D texture.
Resource is a 2D texture.
Resource is a 3D texture.
Identifies options for resources.
- This enumeration is used in
These flags can be combined by bitwise OR.
The
Enables MIP map generation by using
Enables resource data sharing between two or more Direct3D devices. The only resources that can be shared are 2D non-mipmapped textures.
WARP and REF devices do not support shared resources. If you try to create a resource with this flag on either a WARP or REF device, the create method will return an E_OUTOFMEMORY error code.
Note: Starting with Windows 8, WARP devices fully support shared resources. Note: Starting with Windows 8, we recommend that you enable resource data sharing between two or more Direct3D devices by using a combination of the
Sets a resource to be a cube texture created from a Texture2DArray that contains 6 textures.
Enables instancing of GPU-generated content.
Enables a resource as a byte address buffer.
Enables a resource as a structured buffer.
Enables a resource with MIP map clamping for use with
Enables the resource to be synchronized by using the
If you call any of these methods with the
WARP and REF devices do not support shared resources. If you try to create a resource with this flag on either a WARP or REF device, the create method will return an E_OUTOFMEMORY error code.
Note: Starting with Windows 8, WARP devices fully support shared resources.
Enables a resource compatible with GDI. You must set the
Consider the following programming tips for using
You must set the texture format to one of the following types.
Set this flag to enable the use of NT HANDLE values when you create a shared resource. By enabling this flag, you deprecate the use of existing HANDLE values.
When you use this flag, you must combine it with the
Without this flag set, the runtime does not strictly validate shared resource parameters (that is, formats, flags, usage, and so on). When the runtime does not validate shared resource parameters, behavior of much of the Direct3D API might be undefined and might vary from driver to driver.
Direct3D 11 and earlier: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the resource might contain protected content; therefore, the operating system should use the resource only when the driver and hardware support content protection. If the driver and hardware do not support content protection and you try to create a resource with this flag, the resource creation fails.
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the operating system restricts access to the shared surface. You can use this flag together with the
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the driver restricts access to the shared surface. You can use this flag in conjunction with the
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the resource is guarded. Such a resource is returned by the
A guarded resource automatically restricts all writes to the region that is related to one of the preceding APIs. Additionally, the resource enforces access to the ROI with these restrictions:
Direct3D 11: This value is not supported until Direct3D 11.1.
Set this flag to indicate that the resource is a tile pool.
Direct3D 11: This value is not supported until Direct3D 11.2.
Set this flag to indicate that the resource is a tiled resource.
Direct3D 11: This value is not supported until Direct3D 11.2.
Set this flag to indicate that the resource should be created such that it will be protected by the hardware. Resource creation will fail if hardware content protection is not supported.
This flag has the following restrictions:
Creating a texture using this flag does not automatically guarantee that hardware protection will be enabled for the underlying allocation. Some implementations require that the DRM components are first initialized prior to any guarantees of protection.
Note: This enumeration value is supported starting with Windows 10.
Identifies expected resource use during rendering. The usage directly reflects whether a resource is accessible by the CPU and/or the graphics processing unit (GPU).
-An application identifies the way a resource is intended to be used (its usage) in a resource description. There are several structures for creating resources including:
Differences between Direct3D 9 and Direct3D 10/11: In Direct3D 9, you specify the type of memory a resource should be created in at resource creation time (using D3DPOOL). It was an application's job to decide what memory pool would provide the best combination of functionality and performance. In Direct3D 10/11, an application no longer specifies what type of memory (the pool) to create a resource in. Instead, you specify the intended usage of the resource, and let the runtime (in concert with the driver and a memory manager) choose the type of memory that will achieve the best performance.
-A resource that requires read and write access by the GPU. This is likely to be the most common usage choice.
A resource that can only be read by the GPU. It cannot be written by the GPU, and cannot be accessed at all by the CPU. This type of resource must be initialized when it is created, since it cannot be changed after creation.
A resource that is accessible by both the GPU (read only) and the CPU (write only). A dynamic resource is a good choice for a resource that will be updated by the CPU at least once per frame. To update a dynamic resource, use a Map method.
For info about how to use dynamic resources, see How to: Use dynamic resources.
A resource that supports data transfer (copy) from the GPU to the CPU.
Describes the level of support for shader caching in the current graphics driver.
-This enum is used by the D3D_FEATURE_DATA_SHADER_CACHE structure.
-Indicates that the driver does not support shader caching.
Indicates that the driver supports an OS-managed shader cache that stores compiled shaders in memory during the current run of the application.
Indicates that the driver supports an OS-managed shader cache that stores compiled shaders on disk to accelerate future runs of the application.
Values that specify minimum precision levels at shader stages.
-Minimum precision level is 10-bit.
Minimum precision level is 16-bit.
Identifies how to view a buffer resource.
-This enumeration is used by
View the buffer as raw. For more info about raw viewing of buffers, see Raw Views of Buffers.
Options that specify how to perform shader debug tracking.
-This enumeration is used by the following methods:
No debug tracking is performed.
Track the reading of uninitialized data.
Track read-after-write hazards.
Track write-after-read hazards.
Track write-after-write hazards.
Track that hazards are allowed in which data is written but the value does not change.
Track that only one type of atomic operation is used on an address.
Track read-after-write hazards across thread groups.
Track write-after-read hazards across thread groups.
Track write-after-write hazards across thread groups.
Track that only one type of atomic operation is used on an address across thread groups.
Track hazards that are specific to unordered access views (UAVs).
Track all hazards.
Track all hazards and track that hazards are allowed in which data is written but the value does not change.
All of the preceding tracking options are set except
Indicates which resource types to track.
-The
No resource types are tracked.
Track device memory that is created with unordered access view (UAV) bind flags.
Track device memory that is created without UAV bind flags.
Track all device memory.
Track all shaders that use group shared memory.
Track all device memory except device memory that is created without UAV bind flags.
Track all device memory except device memory that is created with UAV bind flags.
Track all memory on the device.
Specifies a multi-sample pattern type.
-An app calls
The runtime defines the following standard sample patterns for 1 (trivial), 2, 4, 8, and 16 sample counts. Hardware must support 1, 4, and 8 sample counts. Hardware vendors can expose more sample counts beyond these. However, if vendors support 2, 4 (required), 8 (required), or 16, they must also support the corresponding standard pattern or center pattern for each of those sample counts.
-Pre-defined multi-sample patterns required for Direct3D 11 and Direct3D 10.1 hardware.
Pattern where all of the samples are located at the pixel center.
The stencil operations that can be performed during depth-stencil testing.
-Keep the existing stencil data.
Set the stencil data to 0.
Set the stencil data to the reference value set by calling
Increment the stencil value by 1, and clamp the result.
Decrement the stencil value by 1, and clamp the result.
Invert the stencil data.
Increment the stencil value by 1, and wrap the result if necessary.
Decrement the stencil value by 1, and wrap the result if necessary.
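The difference between the clamping and wrapping increment/decrement operations can be modeled on an 8-bit stencil value (illustrative helper names, not an API):

```c
#include <assert.h>
#include <stdint.h>

/* Saturating variants clamp at the ends of the 8-bit range. */
uint8_t stencil_incr_sat(uint8_t v) { return v == 0xFF ? 0xFF : (uint8_t)(v + 1); }
uint8_t stencil_decr_sat(uint8_t v) { return v == 0x00 ? 0x00 : (uint8_t)(v - 1); }

/* Plain variants wrap around: 0xFF -> 0x00 and 0x00 -> 0xFF. */
uint8_t stencil_incr(uint8_t v) { return (uint8_t)(v + 1); }
uint8_t stencil_decr(uint8_t v) { return (uint8_t)(v - 1); }
```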
Identify a technique for resolving texture coordinates that are outside of the boundaries of a texture.
-Tile the texture at every (u,v) integer junction. For example, for u values between 0 and 3, the texture is repeated three times.
Flip the texture at every (u,v) integer junction. For u values between 0 and 1, for example, the texture is addressed normally; between 1 and 2, the texture is flipped (mirrored); between 2 and 3, the texture is normal again; and so on.
Texture coordinates outside the range [0.0, 1.0] are set to the texture color at 0.0 or 1.0, respectively.
Texture coordinates outside the range [0.0, 1.0] are set to the border color specified in
Similar to
The different faces of a cube texture.
-Positive X face.
Negative X face.
Positive Y face.
Negative Y face.
Positive Z face.
Negative Z face.
Specifies texture layout options.
-This enumeration controls the swizzle pattern of default textures and enables map support on default textures. Callers must query
The standard swizzle formats applies within each page-sized chunk, and pages are laid out in linear order with respect to one another. A 16-bit interleave pattern defines the conversion from pre-swizzled intra-page location to the post-swizzled location.
To demonstrate, consider the 32bpp swizzle format above. This is represented by the following interleave masks, where bits on the left are most-significant.
UINT xBytesMask = 1010 1010 1000 1111
UINT yBytesMask = 0101 0101 0111 0000
To compute the swizzled address, the following code could be used (where the _pdep_u32 instruction is supported):
UINT swizzledOffset = resourceBaseOffset + _pdep_u32(xOffset, xBytesMask) + _pdep_u32(yOffset, yBytesMask);
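Where a hardware _pdep_u32 is unavailable, the bit-deposit step can be emulated in software. The sketch below uses the interleave masks quoted above; it is a portable illustration, not production code:

```c
#include <assert.h>
#include <stdint.h>

/* Software equivalent of _pdep_u32: scatter the low-order bits of
 * `value` into the positions of the set bits of `mask`, low to high. */
uint32_t pdep_u32(uint32_t value, uint32_t mask)
{
    uint32_t result = 0;
    for (uint32_t bit = 1; mask != 0; bit <<= 1) {
        uint32_t lowest = mask & (0u - mask); /* lowest set bit of mask */
        if (value & bit)
            result |= lowest;
        mask &= mask - 1;                     /* clear that bit */
    }
    return result;
}

/* The 32bpp interleave masks from the text:
 * 1010 1010 1000 1111 and 0101 0101 0111 0000. */
#define X_BYTES_MASK 0xAA8Fu
#define Y_BYTES_MASK 0x5570u

/* Interleave the x/y byte offsets into a swizzled texel address. */
uint32_t swizzled_offset(uint32_t base, uint32_t x, uint32_t y)
{
    return base + pdep_u32(x, X_BYTES_MASK) + pdep_u32(y, Y_BYTES_MASK);
}
```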
- The texture layout is undefined, and is selected by the driver.
Data for the texture is stored in row major (sometimes called pitch-linear) order.
A default texture uses the standardized swizzle pattern.
Identifies how to copy a tile.
-Indicates that the GPU isn't currently referencing any of the portions of destination memory being written. -
Indicates that the
Indicates that the
Indicates the tier level at which tiled resources are supported.
-Tiled resources are not supported.
Tier_1 tiled resources are supported.
The device supports calls to CreateTexture2D and so on with the
The device supports calls to CreateBuffer with the
If you access tiles (read or write) that are
Tier_2 tiled resources are supported.
Superset of Tier_1 functionality, which includes this additional support:
Tier_3 tiled resources are supported.
Superset of Tier_2 functionality; Tier 3 is essentially Tier 2 with the additional support of Texture3D for tiled resources.
Identifies how to perform a tile-mapping operation.
-Indicates that no overwriting of tiles occurs in the tile-mapping operation.
Specifies a range of tile mappings to use with
Identifies unordered-access view options for a buffer resource.
-Resource contains raw, unstructured data. Requires the UAV format to be
Allow data to be appended to the end of the buffer.
Adds a counter to the unordered-access-view buffer.
Unordered-access view options.
- This enumeration is used by an unordered-access-view description (see
The view type is unknown.
View the resource as a buffer.
View the resource as a 1D texture.
View the resource as a 1D texture array.
View the resource as a 2D texture.
View the resource as a 2D texture array.
View the resource as a 3D texture array.
Specifies how to access a resource that is used in a video decoding output view.
-This enumeration is used with the
Not a valid value.
The resource will be accessed as a 2D texture. -
Specifies a type of compressed buffer for decoding.
-Picture decoding parameter buffer. -
Macroblock control command buffer. -
Residual difference block data buffer. -
Deblocking filter control command buffer. -
Inverse quantization matrix buffer. -
Slice-control buffer. -
Bitstream data buffer. -
Motion vector buffer. -
Film grain synthesis data buffer. -
Specifies capabilities of the video decoder.
-Indicates that the graphics driver supports at least a subset of downsampling operations.
Indicates that the decoding hardware cannot support the decode operation in real-time. Decoding is still supported for transcoding scenarios. With this capability, it is possible that decoding can occur in real-time if downsampling is enabled. -
Indicates that the driver supports changing down sample parameters after the initial down sample parameters have been applied. For more information, see
Describes how a video stream is interlaced.
-Frames are progressive.
Frames are interlaced. The top field of each frame is displayed first.
Frames are interlaced. The bottom field of each frame is displayed first.
Specifies the alpha fill mode for video processing.
-Alpha values inside the target rectangle are set to opaque.
Alpha values inside the target rectangle are set to the alpha value specified in the background color. To set the background color, call the
Existing alpha values remain unchanged in the output surface.
Alpha values are taken from an input stream, scaled, and copied to the corresponding destination rectangle for that stream. The input stream is specified in the StreamIndex parameter of the
If the input stream does not have alpha data, the video processor sets the alpha values in the target rectangle to opaque. If the input stream is disabled or the source rectangle is empty, the alpha values in the target rectangle are not modified.
Specifies the automatic image processing capabilities of the video processor.
-Denoise.
Deringing.
Edge enhancement.
Color correction.
Flesh-tone mapping.
Image stabilization.
Enhanced image resolution.
Anamorphic scaling.
Specifies flags that indicate the most efficient methods for performing video processing operations.
-Multi-plane overlay hardware can perform the rotation operation more efficiently than the
Multi-plane overlay hardware can perform the scaling operation more efficiently than the
Multi-plane overlay hardware can perform the colorspace conversion operation more efficiently than the
The video processor output data should be at least triple buffered for optimal performance.
Defines video processing capabilities for a Microsoft Direct3D 11 video processor.
-The video processor can blend video content in linear color space. Most video content is gamma corrected, resulting in nonlinear values. This capability flag means that the video processor converts colors to linear space before blending, which produces better results.
The video processor supports the xvYCC color space for YCbCr data.
The video processor can perform range conversion when the input and output are both RGB but use different color ranges (0-255 or 16-235, for 8-bit RGB).
The video processor can apply a matrix conversion to YCbCr values when the input and output are both YCbCr. For example, the driver can convert colors from BT.601 to BT.709.
The video processor supports YUV nominal range.
Supported in Windows 8.1 and later.
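The benefit of the linear-space blending capability described above can be illustrated with a short sketch (hypothetical helper names, not part of the Direct3D API; a simple 2.2 power curve stands in for the exact sRGB transfer function):

```python
def to_linear(c):
    """Approximate gamma decode: map a gamma-encoded value in [0, 1] to linear light."""
    return c ** 2.2

def to_gamma(c):
    """Approximate gamma encode: map a linear-light value in [0, 1] back to gamma space."""
    return c ** (1.0 / 2.2)

def blend_gamma(a, b, alpha):
    """Naive blend performed directly on gamma-encoded values."""
    return a * (1.0 - alpha) + b * alpha

def blend_linear(a, b, alpha):
    """Blend in linear light, as a processor with the linear-space capability does."""
    return to_gamma(to_linear(a) * (1.0 - alpha) + to_linear(b) * alpha)

# Blending 50% black over white: the linear-space result (~0.73) is perceptually
# brighter and closer to what the eye expects than the naive result (0.5).
naive = blend_gamma(1.0, 0.0, 0.5)
correct = blend_linear(1.0, 0.0, 0.5)
```

This is why the capability flag matters: gamma-encoded values are nonlinear, so arithmetic on them does not correspond to physical light mixing.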
Defines features that a Microsoft Direct3D 11 video processor can support.
-The video processor can set alpha values on the output pixels. For more information, see
The video processor can downsample the video output. For more information, see
The video processor can perform luma keying. For more information, see
The video processor can apply alpha values from color palette entries.
The driver does not support full video processing capabilities. If this capability flag is set, the video processor has the following limitations:
The video processor can support 3D stereo video. For more information, see
All drivers that set this capability must support the following stereo formats:
The driver can rotate the input data either 90, 180, or 270 degrees clockwise as part of the video processing operation.
The driver supports the VideoProcessorSetStreamAlpha call.
The driver supports the VideoProcessorSetStreamPixelAspectRatio call.
Identifies a video processor filter.
-Brightness filter.
Contrast filter.
Hue filter.
Saturation filter.
Noise reduction filter.
Edge enhancement filter.
Anamorphic scaling filter.
Stereo adjustment filter. When stereo 3D video is enabled, this filter adjusts the offset between the left and right views, allowing the user to reduce potential eye strain.
The filter value indicates the amount by which the left and right views are adjusted. A positive value shifts the images away from each other: the left image toward the left, and the right image toward the right. A negative value shifts the images in the opposite directions, closer to each other.
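The stereo adjustment described above can be sketched per scanline (a hypothetical helper, not a Direct3D API; positive offsets move the views apart, negative offsets move them together):

```python
def adjust_stereo_offset(left_row, right_row, offset, fill=0):
    """Shift the left view left and the right view right by `offset` pixels.

    A positive offset moves the two views away from each other (left image
    toward the left, right image toward the right); a negative offset moves
    them closer together. Vacated pixels are padded with `fill`.
    """
    def shift(row, amount):
        if amount == 0:
            return list(row)
        if amount > 0:                     # shift right, pad on the left
            return [fill] * amount + list(row[:-amount])
        amount = -amount                   # shift left, pad on the right
        return list(row[amount:]) + [fill] * amount
    return shift(left_row, -offset), shift(right_row, offset)

# offset=+1: left row moves left, right row moves right.
left, right = adjust_stereo_offset([1, 2, 3], [4, 5, 6], 1)
```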
Defines image filter capabilities for a Microsoft Direct3D 11 video processor.
-These capability flags indicate support for the image filters defined by the
The video processor can adjust the brightness level.
The video processor can adjust the contrast level.
The video processor can adjust hue.
The video processor can adjust the saturation level.
The video processor can perform noise reduction.
The video processor can perform edge enhancement.
The video processor can perform anamorphic scaling. Anamorphic scaling can be used to stretch 4:3 content to a widescreen 16:9 aspect ratio.
For stereo 3D video, the video processor can adjust the offset between the left and right views, allowing the user to reduce potential eye strain.
Defines capabilities related to input formats for a Microsoft Direct3D 11 video processor.
-These flags define video processing capabilities that usually are not needed, and that video devices are therefore not required to support.
The first three flags relate to RGB support for functions that are normally applied to YCbCr video: deinterlacing, color adjustment, and luma keying. A device that supports these functions for YCbCr is not required to support them for RGB input. Supporting RGB input for these functions is an additional capability, reflected by these constants. Note that the driver might convert the input to another color space, perform the indicated function, and then convert the result back to RGB.
Similarly, a device that supports deinterlacing is not required to support deinterlacing of palettized formats. This capability is indicated by the
The video processor can deinterlace an input stream that contains interlaced RGB video.
The video processor can perform color adjustment on RGB video.
The video processor can perform luma keying on RGB video.
The video processor can deinterlace input streams with palettized color formats.
Specifies how a video format can be used for video processing.
-The format can be used as the input to the video processor.
The format can be used as the output from the video processor.
Specifies the inverse telecine (IVTC) capabilities of a video processor.
-The video processor can reverse 3:2 pulldown.
The video processor can reverse 2:2 pulldown.
The video processor can reverse 2:2:2:4 pulldown.
The video processor can reverse 2:3:3:2 pulldown.
The video processor can reverse 3:2:3:2:2 pulldown.
The video processor can reverse 5:5 pulldown.
The video processor can reverse 6:4 pulldown.
The video processor can reverse 8:7 pulldown.
The video processor can reverse 2:2:2:2:2:2:2:2:2:2:2:3 pulldown.
The video processor can reverse other telecine modes not listed here.
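As an illustration of the 3:2 cadence that IVTC reverses, here is a minimal sketch (hypothetical helpers operating on frame labels rather than real fields; real drivers detect the cadence from the field data itself):

```python
def pulldown_32(frames):
    """Apply 3:2 pulldown: repeat film frames in a 3,2,3,2,... field cadence.

    Every 4 progressive film frames become 10 fields, which is how
    24 fps film is fitted into ~60 fields/s interlaced video.
    """
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

def ivtc_32(fields):
    """Reverse 3:2 pulldown by skipping the repeated fields,
    recovering the original progressive frames."""
    frames, i = [], 0
    while i < len(fields):
        frames.append(fields[i])
        # Advance past 3 fields after an odd-numbered recovered frame,
        # 2 fields after an even-numbered one, matching the cadence.
        i += 3 if len(frames) % 2 == 1 else 2
    return frames

fields = pulldown_32(["A", "B", "C", "D"])   # 4 frames -> 10 fields
recovered = ivtc_32(fields)                  # back to ["A", "B", "C", "D"]
```

The other pulldown patterns listed above (2:2, 2:2:2:4, and so on) follow the same idea with different repeat cadences.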
Specifies values for the luminance range of YUV data.
-Driver defaults are used, which should be Studio luminance range [16-235].
Studio luminance range [16-235].
Full luminance range [0-255].
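The two nominal ranges differ only by an affine remapping of the luma values; a minimal sketch (hypothetical helpers, 8-bit luma only):

```python
def studio_to_full(y):
    """Expand 8-bit studio-range luma [16, 235] to full range [0, 255]."""
    y = min(max(y, 16), 235)              # clamp into the studio range first
    return round((y - 16) * 255 / 219)    # 219 = 235 - 16 studio code values

def full_to_studio(y):
    """Compress 8-bit full-range luma [0, 255] into studio range [16, 235]."""
    return round(16 + y * 219 / 255)

# Studio black (16) maps to 0; studio white (235) maps to 255.
```

Mismatching the two ranges is a classic video bug: studio-range content shown as full range looks washed out, and the reverse crushes shadows and clips highlights.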
Specifies the rate at which the video processor produces output frames from an input stream.
-The output is the normal frame rate.
The output is half the frame rate.
The output is a custom frame rate.
Specifies video processing capabilities that relate to deinterlacing, inverse telecine (IVTC), and frame-rate conversion.
-The video processor can perform blend deinterlacing.
In blend deinterlacing, the two fields from an interlaced frame are blended into a single progressive frame. A video processor uses blend deinterlacing when it deinterlaces at half rate, as when converting 60i to 30p. Blend deinterlacing does not require reference frames.
The video processor can perform bob deinterlacing.
In bob deinterlacing, missing field lines are interpolated from the lines above and below. Bob deinterlacing does not require reference frames.
The video processor can perform adaptive deinterlacing.
Adaptive deinterlacing uses spatial or temporal interpolation, and switches between the two on a field-by-field basis, depending on the amount of motion. If the video processor does not receive enough reference frames to perform adaptive deinterlacing, it falls back to bob deinterlacing.
The video processor can perform motion-compensated deinterlacing.
Motion-compensated deinterlacing uses motion vectors to recreate missing lines. If the video processor does not receive enough reference frames to perform motion-compensated deinterlacing, it falls back to bob deinterlacing.
The video processor can perform inverse telecine (IVTC).
If the video processor supports this capability, the ITelecineCaps member of the
The video processor can convert the frame rate by interpolating frames.
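Bob deinterlacing as described above can be sketched in a few lines (a hypothetical helper working on rows of numeric samples; real hardware operates per pixel and may use more sophisticated filters than a two-tap average):

```python
def bob_deinterlace(field_lines, top_field=True):
    """Reconstruct a full frame from a single field by interpolating each
    missing line as the average of the lines above and below it (simple bob)."""
    height = len(field_lines) * 2
    frame = [None] * height
    # Place the known lines: even rows for a top field, odd rows for a bottom field.
    for i, line in enumerate(field_lines):
        frame[2 * i + (0 if top_field else 1)] = line
    # Interpolate the missing lines; edge rows reuse their single neighbor.
    for y in range(height):
        if frame[y] is None:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y < height - 1 else frame[y - 1]
            frame[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return frame
```

Blend deinterlacing, by contrast, averages the two fields of a frame together (halving the field rate), and the adaptive and motion-compensated modes above fall back to this kind of bob interpolation when reference frames are unavailable.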
Specifies the video rotation states.
-The video is not rotated.
The video is rotated 90 degrees clockwise.
The video is rotated 180 degrees clockwise.
The video is rotated 270 degrees clockwise.
Defines stereo 3D capabilities for a Microsoft Direct3D 11 video processor.
-The video processor supports the
The video processor supports the
The video processor supports the
The video processor supports the
The video processor can flip one or both views. For more information, see
For stereo 3D video, specifies whether the data in frame 0 or frame 1 is flipped, either horizontally or vertically.
-Neither frame is flipped.
The data in frame 0 is flipped.
The data in frame 1 is flipped.
Specifies the layout in memory of a stereo 3D video frame.
-This enumeration designates the two stereo views as "frame 0" and "frame 1". The LeftViewFrame0 parameter of the VideoProcessorSetStreamStereoFormat method specifies which view is the left view, and which is the right view.
For packed formats, if the source rectangle clips part of the surface, the driver interprets the rectangle in logical coordinates relative to the stereo view, rather than absolute pixel coordinates. The result is that frame 0 and frame 1 are clipped proportionately.
To query whether the device supports stereo 3D video, call
The sample does not contain stereo data. If the stereo format is not specified, this value is the default.
Frame 0 and frame 1 are packed side-by-side, as shown in the following diagram.
All drivers that support stereo video must support this format.
Frame 0 and frame 1 are packed top-to-bottom, as shown in the following diagram.
All drivers that support stereo video must support this format.
Frame 0 and frame 1 are placed in separate resources or in separate texture array elements within the same resource.
All drivers that support stereo video must support this format.
The sample contains non-stereo data. However, the driver should create a left/right output of this sample using a specified offset. The offset is specified in the MonoOffset parameter of the
This format is primarily intended for subtitles and other subpicture data, where the entire sample is presented on the same plane.
Support for this stereo format is optional.
Frame 0 and frame 1 are packed into interleaved rows, as shown in the following diagram.
Support for this stereo format is optional.
Frame 0 and frame 1 are packed into interleaved columns, as shown in the following diagram.
Support for this stereo format is optional.
Frame 0 and frame 1 are packed in a checkerboard format, as shown in the following diagram.
Support for this stereo format is optional.
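As an illustration of the side-by-side packing described above, a minimal sketch of splitting a packed frame into its two views (hypothetical helper; a real processor additionally interprets source rectangles in logical per-view coordinates, as noted earlier):

```python
def split_side_by_side(frame):
    """Split a side-by-side packed frame (a list of pixel rows) into
    frame 0 (left half) and frame 1 (right half)."""
    half = len(frame[0]) // 2
    frame0 = [row[:half] for row in frame]
    frame1 = [row[half:] for row in frame]
    return frame0, frame1

# A 2x4 packed frame: the left two columns are frame 0, the right two frame 1.
packed = [
    ["L00", "L01", "R00", "R01"],
    ["L10", "L11", "R10", "R11"],
]
f0, f1 = split_side_by_side(packed)
```

Top-to-bottom packing is the same idea applied to rows instead of columns, and the interleaved-row/column and checkerboard formats alternate views at line or pixel granularity.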
Specifies the intended use for a video processor.
-Normal video playback. The graphics driver should expose a set of capabilities that are appropriate for real-time video playback.
Optimal speed. The graphics driver should expose a minimal set of capabilities that are optimized for performance.
Use this setting if you want better performance and can accept some reduction in video quality. For example, you might use this setting in power-saving mode or to play video thumbnails.
Optimal quality. The graphics driver should expose its maximum set of capabilities.
Specify this setting to get the best video quality possible. It is appropriate for tasks such as video editing, when quality is more important than speed. It is not appropriate for real-time playback.
Specifies how to access a resource that is used in a video processor input view.
-This enumeration is used with the
Not a valid value.
The resource will be accessed as a 2D texture.
Specifies how to access a resource that is used in a video processor output view.
-This enumeration is used with the
Not a valid value.
The resource will be accessed as a 2D texture.
The resource will be accessed as an array of 2D textures.
Creates a device that represents the display adapter.
- A reference to the video adapter to use when creating a device. Pass
The
A handle to a DLL that implements a software rasterizer. If DriverType is
The runtime layers to enable (see
A reference to an array of
Note: If the Direct3D 11.1 runtime is present on the computer and pFeatureLevels is set to , , , , , ,
The number of elements in pFeatureLevels.
The SDK version; use
Returns the address of a reference to an
If successful, returns the first
Returns the address of a reference to an
This method can return one of the Direct3D 11 Return Codes.
This method returns E_INVALIDARG if you set the pAdapter parameter to a non-
This method returns
This entry-point is supported by the Direct3D 11 runtime, which is available on Windows 7, Windows Server 2008 R2, and as an update to Windows Vista (KB971644).
To create a Direct3D 11.1 device (
To create a Direct3D 11.2 device (
Set ppDevice and ppImmediateContext to
For an example, see How To: Create a Device and Immediate Context; to create a device and a swap chain at the same time, use D3D11CreateDeviceAndSwapChain.
If you set the pAdapter parameter to a non-
Differences between Direct3D 10 and Direct3D 11: In Direct3D 10, the presence of pAdapter dictated which adapter to use and the DriverType could mismatch what the adapter was. In Direct3D 11, if you are trying to create a hardware or a software device, set pAdapter !=
On the other hand, if pAdapter ==
The function signature PFN_D3D11_CREATE_DEVICE is provided as a typedef, so that you can use dynamic linking techniques (GetProcAddress) instead of statically linking.
Windows Phone 8: This API is supported.
Windows Phone 8.1: This API is supported.
-Creates a device that uses Direct3D 11 functionality in Direct3D 12, specifying a pre-existing D3D12 device to use for D3D11 interop.
- Specifies a pre-existing D3D12 device to use for D3D11 interop. May not be
One or more bitwise OR'ed flags from
An array of any of the following:
The first feature level which is less than or equal to the D3D12 device's feature level will be used to perform D3D11 validation. Creation will fail if no acceptable feature levels are provided. Providing
The size of the feature levels array, in bytes.
An array of unique queues for D3D11On12 to use. Valid queue types: 3D command queue.
The size of the command queue array, in bytes.
Which node of the D3D12 device to use. Only 1 bit may be set.
Pointer to the returned
A reference to the returned
A reference to the returned feature level. May be
This method returns one of the Direct3D 12 Return Codes that are documented for
This method returns
The function signature PFN_D3D11ON12_CREATE_DEVICE is provided as a typedef, so that you can use dynamic linking techniques (GetProcAddress) instead of statically linking.
-This interface encapsulates methods for retrieving data from the GPU asynchronously.
-There are three types of asynchronous interfaces, all of which inherit this interface:
Get the size of the data (in bytes) that is output when calling
Get the size of the data (in bytes) that is output when calling
Size of the data (in bytes) that is output when calling GetData.
Provides a communication channel with the graphics driver or the Microsoft Direct3D runtime.
-To get a reference to this interface, call
Gets the size of the driver's certificate chain.
-Gets a handle to the authenticated channel.
-Gets the size of the driver's certificate chain.
-Receives the size of the certificate chain, in bytes.
If this method succeeds, it returns
Gets the driver's certificate chain.
-The size of the pCertificate array, in bytes. To get the size of the certificate chain, call
A reference to a byte array that receives the driver's certificate chain. The caller must allocate the array.
If this method succeeds, it returns
Gets a handle to the authenticated channel.
-Receives a handle to the channel.
The
There is no explicit creation method; simply declare an
Gets the initialization flags associated with the deferred context that created the command list.
-The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Gets the initialization flags associated with the deferred context that created the command list.
-The context flag is reserved for future use and is always 0.
The GetContextFlags method gets the flags that were supplied to the ContextFlags parameter of
Represents a cryptographic session.
-To get a reference to this interface, call
Gets the type of encryption that is supported by this session.
-The application specifies the encryption type when it creates the session.
-Gets the decoding profile of the session.
-The application specifies the profile when it creates the session.
-Gets the size of the driver's certificate chain.
-To get the certificate, call
Gets a handle to the cryptographic session.
-You can use this handle to associate the session with a decoder. This enables the decoder to decrypt data that is encrypted using this session.
-Gets the type of encryption that is supported by this session.
-Receives a
Value | Meaning
---|---
 | 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher.
The application specifies the encryption type when it creates the session.
-Gets the decoding profile of the session.
-Receives the decoding profile. For a list of possible values, see
The application specifies the profile when it creates the session.
-Gets the size of the driver's certificate chain.
-Receives the size of the certificate chain, in bytes.
If this method succeeds, it returns
To get the certificate, call
Gets the driver's certificate chain.
-The size of the pCertificate array, in bytes. To get the size of the certificate chain, call
A reference to a byte array that receives the driver's certificate chain. The caller must allocate the array.
If this method succeeds, it returns
Gets a handle to the cryptographic session.
-Receives a handle to the session.
You can use this handle to associate the session with a decoder. This enables the decoder to decrypt data that is encrypted using this session.
-Handles the creation, wrapping, and releasing of D3D11 resources for Direct3D 11on12.
-This method creates D3D11 resources for use with D3D 11on12.
-A reference to an already-created D3D12 resource or heap.
A
The use of the resource on input, as a bitwise-OR'd combination of
The use of the resource on output, as a bitwise-OR'd combination of
The globally unique identifier (
After the method returns, points to the newly created wrapped D3D11 resource or heap.
This method returns one of the Direct3D 12 Return Codes.
Releases D3D11 resources that were wrapped for D3D 11on12.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
Call this method prior to calling Flush, to insert resource barriers to the appropriate "out" state, and to mark that they should then be expected to be in the "in" state. If no resource list is provided, all wrapped resources are transitioned. These resources will be marked as "not acquired" in hazard tracking until
Keyed mutex resources cannot be provided to this method; use
Releases D3D11 resources that were wrapped for D3D 11on12.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
Call this method prior to calling Flush, to insert resource barriers to the appropriate "out" state, and to mark that they should then be expected to be in the "in" state. If no resource list is provided, all wrapped resources are transitioned. These resources will be marked as "not acquired" in hazard tracking until
Keyed mutex resources cannot be provided to this method; use
Releases D3D11 resources that were wrapped for D3D 11on12.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
Call this method prior to calling Flush, to insert resource barriers to the appropriate "out" state, and to mark that they should then be expected to be in the "in" state. If no resource list is provided, all wrapped resources are transitioned. These resources will be marked as "not acquired" in hazard tracking until
Keyed mutex resources cannot be provided to this method; use
Acquires D3D11 resources for use with D3D 11on12. Indicates that rendering to the wrapped resources can begin again.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
This method marks the resources as "acquired" in hazard tracking.
Keyed mutex resources cannot be provided to this method; use
Acquires D3D11 resources for use with D3D 11on12. Indicates that rendering to the wrapped resources can begin again.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
This method marks the resources as "acquired" in hazard tracking.
Keyed mutex resources cannot be provided to this method; use
Acquires D3D11 resources for use with D3D 11on12. Indicates that rendering to the wrapped resources can begin again.
- Specifies a reference to a set of D3D11 resources, defined by
Count of the number of resources.
This method marks the resources as "acquired" in hazard tracking.
Keyed mutex resources cannot be provided to this method; use
The device interface represents a virtual adapter; it is used to create resources.
Registers the "device removed" event and indicates when a Direct3D device has become removed for any reason, using an asynchronous notification mechanism.
-The handle to the "device removed" event.
A reference to information about the "device removed" event, which can be used in UnregisterDeviceRemoved to unregister the event.
Indicates when a Direct3D device has become removed for any reason, using an asynchronous notification mechanism, rather than as an
Applications register and unregister a Win32 event handle with a particular device. That event handle will be signaled when the device becomes removed. A poll into the device's
ISignalableNotifier or SetThreadpoolWait can be used by UWP apps.
When the graphics device is lost, the app or title will receive the graphics event, so that the app or title knows that its graphics device is no longer valid and it is safe for the app or title to re-create its DirectX devices. In response to this event, the app or title needs to re-create its rendering device and pass it into a SetRenderingDevice call on the composition graphics device objects.
After setting this new rendering device, the app or title needs to redraw content of all the pre-existing surfaces after the composition graphics device's OnRenderingDeviceReplaced event is fired.
This method supports Composition for device loss.
The event is not necessarily signaled at the ideal time to re-create the device. So, instead, we recommend iterating through the adapter ordinals and creating the first ordinal that will succeed.
The application can register an event with the device. The application will be signaled when the device becomes removed.
If the device is already removed, calls to RegisterDeviceRemovedEvent will signal the event immediately. No device-removed error code will be returned from RegisterDeviceRemovedEvent.
Each "device removed" event is never signaled, or is signaled only once. These events are not signaled during device destruction. These events are unregistered during destruction.
The semantics of RegisterDeviceRemovedEvent are similar to
Unregisters the "device removed" event.
-Information about the "device removed" event, retrieved during a successful RegisterDeviceRemovedEvent call.
See RegisterDeviceRemovedEvent.
-The
The
The
Bind an array of shader resources to the domain-shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set a domain shader to the device.
- Pointer to a domain shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Windows Phone 8: This API is supported.
-Set a domain shader to the device.
- Pointer to a domain shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Windows Phone 8: This API is supported.
-Set a domain shader to the device.
- Pointer to a domain shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Windows Phone 8: This API is supported.
-Set an array of sampler states to the domain-shader stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
//Default sampler state:
SamplerDesc;
SamplerDesc.Filter = ;
SamplerDesc.AddressU = ;
SamplerDesc.AddressV = ;
SamplerDesc.AddressW = ;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = ;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets the constant buffers used by the domain-shader stage.
- Index into the zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the DSSetConstantBuffers1 method instead.
Windows Phone 8: This API is supported.
-Get the domain-shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the domain shader currently set on the device.
-Address of a reference to a domain shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the domain-shader stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the domain-shader stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-A geometry-shader interface manages an executable program (a geometry shader) that controls the geometry-shader stage.
-The geometry-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a geometry shader interface, call either
This interface is defined in D3D11.h.
-The
Sets the constant buffers used by the geometry shader pipeline stage.
-Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
You can't use the
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the GSSetConstantBuffers1 method instead.
-Set a geometry shader to the device.
-Pointer to a geometry shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a geometry shader to the device.
-Pointer to a geometry shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a geometry shader to the device.
-Pointer to a geometry shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Bind an array of shader resources to the geometry shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set an array of sampler states to the geometry shader pipeline stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
//Default sampler state:
SamplerDesc;
SamplerDesc.Filter = ;
SamplerDesc.AddressU = ;
SamplerDesc.AddressV = ;
SamplerDesc.AddressW = ;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = ;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Get the constant buffers used by the geometry shader pipeline stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
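The AddRef/Release discipline described above can be sketched with a minimal stand-in class (hypothetical; real D3D11 interfaces derive from IUnknown and free themselves when the count reaches zero). Each Get*-style call hands back a pointer with the reference count already incremented, and the caller must balance it with Release.

```cpp
#include <cassert>

// Minimal stand-in for a COM-style reference-counted interface (hypothetical,
// for illustration only; not an actual D3D11 type).
struct RefCounted {
    int refs = 1;                        // creation gives the owner one reference
    int AddRef()  { return ++refs; }
    int Release() { return --refs; }     // real COM destroys the object at zero
};

// Mimics a Get* call: hands out the pointer with the count already bumped,
// as GSGetConstantBuffers does for each returned interface.
RefCounted* GetResource(RefCounted* r) {
    r->AddRef();
    return r;
}
```

Every pointer obtained this way needs exactly one matching Release; forgetting it keeps the object alive forever, which is the leak the note above warns about.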
-Get the geometry shader currently set on the device.
-Address of a reference to a geometry shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the geometry shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the geometry shader pipeline stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-A hull-shader interface manages an executable program (a hull shader) that controls the hull-shader stage.
-The hull-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a hull-shader interface, call
This interface is defined in D3D11.h.
-The
Bind an array of shader resources to the hull-shader stage.
-If an overlapping resource view is already bound to an output slot, such as a render target, then the method will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set a hull shader to the device.
-Pointer to a hull shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a hull shader to the device.
-Pointer to a hull shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a hull shader to the device.
-Pointer to a hull shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set an array of sampler states to the hull-shader stage.
-Any sampler may be set to
// Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set the constant buffers used by the hull-shader stage.
-The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the HSSetConstantBuffers1 method instead.
-Get the hull-shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the hull shader currently set on the device.
-Address of a reference to a hull shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler state interfaces from the hull-shader stage.
-Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the hull-shader stage.
-Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-An information-queue interface stores, retrieves, and filters debug messages. The queue consists of a message queue, an optional storage-filter stack, and an optional retrieval-filter stack.
- To get this interface, turn on the debug layer and use IUnknown::QueryInterface from the
Windows Phone 8: This API is supported.
-Gets or sets the maximum number of messages that can be added to the message queue.
-When the number of messages in the message queue has reached the maximum limit, new messages coming in will push old messages out.
-Get the number of messages that were allowed to pass through a storage filter.
-Get the number of messages that were denied passage through a storage filter.
-Get the number of messages currently stored in the message queue.
-Get the number of messages that are able to pass through a retrieval filter.
-Get the number of messages that were discarded due to the message count limit.
-Get and set the message count limit with
Get the size of the storage-filter stack in bytes.
-Get the size of the retrieval-filter stack in bytes.
-Gets or sets a boolean that turns the debug output on or off.
-Set the maximum number of messages that can be added to the message queue.
-Maximum number of messages that can be added to the message queue. -1 means no limit.
This method returns one of the following Direct3D 11 Return Codes.
When the number of messages in the message queue has reached the maximum limit, new messages coming in will push old messages out.
-Clear all messages from the message queue.
-Get a message from the message queue.
-Index into message queue after an optional retrieval filter has been applied. This can be between 0 and the number of messages in the message queue that pass through the retrieval filter (which can be obtained with
Returned message (see
Size of pMessage in bytes, including the size of the message string that pMessage points to.
This method returns one of the following Direct3D 11 Return Codes.
This method does not remove any messages from the message queue.
This method gets messages from the message queue after an optional retrieval filter has been applied.
Applications should call this method twice to retrieve a message - first to obtain the size of the message and second to get the message. Here is a typical example:
// Get the size of the message
SIZE_T messageLength = 0;
HRESULT hr = pInfoQueue->GetMessage(0, NULL, &messageLength);

// Allocate space and get the message
D3D11_MESSAGE* pMessage = (D3D11_MESSAGE*)malloc(messageLength);
hr = pInfoQueue->GetMessage(0, pMessage, &messageLength);
For an overview see Information Queue Overview.
-Get the number of messages that were allowed to pass through a storage filter.
-Number of messages allowed by a storage filter.
Get the number of messages that were denied passage through a storage filter.
-Number of messages denied by a storage filter.
Get the number of messages currently stored in the message queue.
-Number of messages currently stored in the message queue.
Get the number of messages that are able to pass through a retrieval filter.
-Number of messages allowed by a retrieval filter.
Get the number of messages that were discarded due to the message count limit.
-Number of messages discarded.
Get and set the message count limit with
Get the maximum number of messages that can be added to the message queue.
-Maximum number of messages that can be added to the queue. -1 means no limit.
When the number of messages in the message queue has reached the maximum limit, new messages coming in will push old messages out.
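The limit semantics above (a full queue evicts its oldest messages, and -1 disables the limit) can be sketched generically; this is an illustrative model, not the runtime's actual implementation.

```cpp
#include <deque>
#include <string>

// Illustrative model of a message queue with a storage limit.
struct MessageQueue {
    std::deque<std::string> messages;
    long long limit = -1;                // -1 means no limit

    void Add(const std::string& msg) {
        messages.push_back(msg);
        // When over the limit, new messages push the oldest ones out.
        while (limit >= 0 && (long long)messages.size() > limit)
            messages.pop_front();
    }
};
```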
-Add storage filters to the top of the storage-filter stack.
-Array of storage filters (see
This method returns one of the following Direct3D 11 Return Codes.
Get the storage filter at the top of the storage-filter stack.
-Storage filter at the top of the storage-filter stack.
Size of the storage filter in bytes. If pFilter is
This method returns one of the following Direct3D 11 Return Codes.
Remove a storage filter from the top of the storage-filter stack.
-Push an empty storage filter onto the storage-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
An empty storage filter allows all messages to pass through.
-Push a copy of the storage filter currently on top of the storage-filter stack onto the storage-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
Push a storage filter onto the storage-filter stack.
-Pointer to a storage filter (see
This method returns one of the following Direct3D 11 Return Codes.
Pop a storage filter from the top of the storage-filter stack.
-Get the size of the storage-filter stack in bytes.
-Size of the storage-filter stack in bytes.
Add storage filters to the top of the retrieval-filter stack.
-Array of retrieval filters (see
This method returns one of the following Direct3D 11 Return Codes.
The following code example shows how to use
D3D11_MESSAGE_CATEGORY cats[] = { ..., ..., ... };
D3D11_MESSAGE_SEVERITY sevs[] = { ..., ..., ... };
UINT ids[] = { ..., ..., ... };

D3D11_INFO_QUEUE_FILTER filter;
memset( &filter, 0, sizeof(filter) );

// To set the type of messages to allow,
// set filter.AllowList as follows:
filter.AllowList.NumCategories = sizeof(cats) / sizeof(D3D11_MESSAGE_CATEGORY);
filter.AllowList.pCategoryList = cats;
filter.AllowList.NumSeverities = sizeof(sevs) / sizeof(D3D11_MESSAGE_SEVERITY);
filter.AllowList.pSeverityList = sevs;
filter.AllowList.NumIDs = sizeof(ids) / sizeof(UINT);
filter.AllowList.pIDList = ids;

// To set the type of messages to deny, set filter.DenyList
// similarly to the preceding filter.AllowList.

// The following single call sets all of the preceding information.
hr = infoQueue->AddRetrievalFilterEntries( &filter );
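The allow/deny interaction can be sketched over message IDs alone (an assumption for brevity; real D3D11 filters also match categories and severities). The assumed semantics here: an empty allow list passes everything, and the deny list then removes matches.

```cpp
#include <vector>
#include <algorithm>

// Illustrative filter over message IDs only (hypothetical simplification of
// the AllowList/DenyList pair described above).
struct IdFilter {
    std::vector<int> allow;   // empty => allow everything
    std::vector<int> deny;

    bool Passes(int id) const {
        bool allowed = allow.empty() ||
            std::find(allow.begin(), allow.end(), id) != allow.end();
        bool denied =
            std::find(deny.begin(), deny.end(), id) != deny.end();
        return allowed && !denied;
    }
};
```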
Get the retrieval filter at the top of the retrieval-filter stack.
-Retrieval filter at the top of the retrieval-filter stack.
Size of the retrieval filter in bytes. If pFilter is
This method returns one of the following Direct3D 11 Return Codes.
Remove a retrieval filter from the top of the retrieval-filter stack.
-Push an empty retrieval filter onto the retrieval-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
An empty retrieval filter allows all messages to pass through.
-Push a copy of the retrieval filter currently on top of the retrieval-filter stack onto the retrieval-filter stack.
-This method returns one of the following Direct3D 11 Return Codes.
Push a retrieval filter onto the retrieval-filter stack.
-Pointer to a retrieval filter (see
This method returns one of the following Direct3D 11 Return Codes.
Pop a retrieval filter from the top of the retrieval-filter stack.
-Get the size of the retrieval-filter stack in bytes.
-Size of the retrieval-filter stack in bytes.
Add a debug message to the message queue and send that message to debug output.
-Category of a message (see
Severity of a message (see
Unique identifier of a message (see
User-defined message.
This method returns one of the following Direct3D 11 Return Codes.
This method is used by the runtime's internal mechanisms to add debug messages to the message queue and send them to debug output. For applications to add their own custom messages to the message queue and send them to debug output, call
Add a user-defined message to the message queue and send that message to debug output.
-Severity of a message (see
Message string.
This method returns one of the following Direct3D 11 Return Codes.
Set a message category to break on when a message with that category passes through the storage filter.
-Message category to break on (see
Turns this breaking condition on or off (true for on, false for off).
This method returns one of the following Direct3D 11 Return Codes.
Set a message severity level to break on when a message with that severity level passes through the storage filter.
-A
Turns this breaking condition on or off (true for on, false for off).
This method returns one of the following Direct3D 11 Return Codes.
Set a message identifier to break on when a message with that identifier passes through the storage filter.
-Message identifier to break on (see
Turns this breaking condition on or off (true for on, false for off).
This method returns one of the following Direct3D 11 Return Codes.
Get a message category to break on when a message with that category passes through the storage filter.
-Message category to break on (see
Whether this breaking condition is turned on or off (true for on, false for off).
Get a message severity level to break on when a message with that severity level passes through the storage filter.
-Message severity level to break on (see
Whether this breaking condition is turned on or off (true for on, false for off).
Get a message identifier to break on when a message with that identifier passes through the storage filter.
-Message identifier to break on (see
Whether this breaking condition is turned on or off (true for on, false for off).
Set a boolean that turns the debug output on or off.
-Disable/Enable the debug output (TRUE to disable or mute the output, FALSE to enable it).
This will stop messages that pass the storage filter from being printed out in the debug output; however, those messages will still be added to the message queue.
-Get a boolean that turns the debug output on or off.
-Whether the debug output is on or off (true for on, false for off).
Get a message from the message queue.
-Index into message queue after an optional retrieval filter has been applied. This can be between 0 and the number of messages in the message queue that pass through the retrieval filter (which can be obtained with
Get the storage filter at the top of the storage-filter stack.
-Get the retrieval filter at the top of the retrieval-filter stack.
-An input-layout interface holds a definition of how to feed vertex data that is laid out in memory into the input-assembler stage of the graphics pipeline.
-To create an input-layout object, call
Provides threading protection for critical sections of a multi-threaded application.
-This interface is obtained by querying it from an immediate device context created with the
Unlike D3D10, there is no multithreaded layer in D3D11. By default, multithread protection is turned off. Use SetMultithreadProtected to turn it on, then Enter and Leave to encapsulate graphics commands that must be executed in a specific order.
By default in D3D11, applications can only use one thread with the immediate context at a time. However, applications can use this interface to change that restriction. The interface can turn on threading protection for the immediate context, which will increase the overhead of each immediate context call in order to share one context with multiple threads.
-Find out if multithread protection is turned on or not.
-Enter a device's critical section.
-If SetMultithreadProtected is set to true, then entering a device's critical section prevents other threads from simultaneously calling that device's methods, calling DXGI methods, and calling the methods of all resource, view, shader, state, and asynchronous interfaces.
This function should be used in multithreaded applications when there is a series of graphics commands that must happen in order. This function is typically called at the beginning of the series of graphics commands, and Leave is typically called after those graphics commands.
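The Enter/Leave pairing above maps onto an ordinary re-entrant lock; this sketch uses std::recursive_mutex as a stand-in (the real device critical section is Windows-specific, and the depth counter here is added only for illustration).

```cpp
#include <mutex>

// Stand-in for the device's critical section: Enter/Leave bracket a series of
// graphics commands that must execute without interleaving from other threads.
struct DeviceCriticalSection {
    std::recursive_mutex m;   // re-entrant, like a Win32 CRITICAL_SECTION
    int depth = 0;            // how many times the owning thread has entered

    void Enter() { m.lock(); ++depth; }
    void Leave() { --depth; m.unlock(); }
};
```

A thread typically calls Enter once before the ordered command sequence and Leave once after it; because the lock is re-entrant, nested Enter/Leave pairs on the same thread are also safe.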
-Leave a device's critical section.
-This function is typically used in multithreaded applications when there is a series of graphics commands that must happen in order. Enter is typically called at the beginning of a series of graphics commands, and this function is typically called after those graphics commands.
-Turns multithread protection on or off.
-Set to true to turn multithread protection on, false to turn it off.
True if multithread protection was already turned on prior to calling this method, false otherwise.
Find out if multithread protection is turned on or not.
-Returns true if multithread protection is turned on, false otherwise.
A pixel-shader interface manages an executable program (a pixel shader) that controls the pixel-shader stage.
-The pixel-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a pixel shader interface, call
This interface is defined in D3D11.h.
-The
Bind an array of shader resources to the pixel shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then this API will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets a pixel shader to the device.
- Pointer to a pixel shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Set ppClassInstances to
Windows Phone 8: This API is supported.
-Sets a pixel shader to the device.
- Pointer to a pixel shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Set ppClassInstances to
Windows Phone 8: This API is supported.
-Sets a pixel shader to the device.
- Pointer to a pixel shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
Set ppClassInstances to
Windows Phone 8: This API is supported.
-Set an array of sampler states to the pixel shader pipeline stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
State | Default Value |
---|---|
Filter | D3D11_FILTER_MIN_MAG_MIP_LINEAR |
AddressU | D3D11_TEXTURE_ADDRESS_CLAMP |
AddressV | D3D11_TEXTURE_ADDRESS_CLAMP |
AddressW | D3D11_TEXTURE_ADDRESS_CLAMP |
MipLODBias | 0 |
MaxAnisotropy | 1 |
ComparisonFunc | D3D11_COMPARISON_NEVER |
BorderColor[0] | 1.0f |
BorderColor[1] | 1.0f |
BorderColor[2] | 1.0f |
BorderColor[3] | 1.0f |
MinLOD | -FLT_MAX |
MaxLOD | FLT_MAX |
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Sets the constant buffers used by the pixel shader pipeline stage.
- Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available on Windows 8 and later operating systems, can bind a larger number of
To enable the shader to access other parts of the buffer, call PSSetConstantBuffers1 instead of PSSetConstantBuffers. PSSetConstantBuffers1 has additional parameters pFirstConstant and pNumConstants.
-Bind an array of shader resources to the pixel shader stage.
-Index into the device's zero-based array to begin setting shader resources to (ranges from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then this API will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Get the pixel shader currently set on the device.
- Address of a reference to a pixel shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed, to avoid memory leaks.
Windows Phone 8: This API is supported.
-Get an array of sampler states from the pixel shader pipeline stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Array of sampler-state interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the constant buffers used by the pixel shader pipeline stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-A predicate interface determines whether geometry should be processed depending on the results of a previous draw call.
-To create a predicate object, call
There are two types of predicates: stream-output-overflow predicates and occlusion predicates. Stream-output-overflow predicates prevent processing of any geometry residing in stream-output buffers that overflowed. Occlusion predicates prevent processing of any geometry for which not a single sample passed the depth/stencil tests.
-A query interface queries information from the GPU.
-A query can be created with
Query data is typically gathered by issuing an
There are, however, some queries that do not require calls to Begin. For a list of possible queries see
A query is typically executed as shown in the following code:
D3D11_QUERY_DESC queryDesc;
... // Fill out queryDesc structure
ID3D11Query* pQuery;
pDevice->CreateQuery(&queryDesc, &pQuery);
pDeviceContext->Begin(pQuery);

... // Issue graphics commands

pDeviceContext->End(pQuery);

UINT64 queryData; // This data type is different depending on the query type

while (S_OK != pDeviceContext->GetData(pQuery, &queryData, sizeof(UINT64), 0))
{
}
When using a query that does not require a call to Begin, it still requires a call to End. The call to End causes the data returned by GetData to be accurate up until the last call to End.
-Get a query description.
-Get a query description.
-Pointer to a query description (see
Represents a query object for querying information from the graphics processing unit (GPU).
-A query can be created with
Query data is typically gathered by issuing an
There are, however, some queries that do not require calls to Begin. For a list of possible queries see
When using a query that does not require a call to Begin, it still requires a call to End. The call to End causes the data returned by GetData to be accurate up until the last call to End.
-Gets a query description.
-Gets a query description.
-A reference to a
The rasterizer-state interface holds a description for rasterizer state that you can bind to the rasterizer stage.
-To create a rasterizer-state object, call
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-You use the description for rasterizer state in a call to the
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-A reference to a
You use the description for rasterizer state in a call to the
Create a rasterizer state object that tells the rasterizer stage how to behave.
-Up to 4096 unique rasterizer state objects can be created on a device at a time.
If an application attempts to create a rasterizer-state interface with the same state as an existing interface, the same interface will be returned and the total number of unique rasterizer state objects will stay the same.
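The deduplication rule above (creating a state with an already-seen description returns the existing object) is essentially hash-consing. A generic sketch, with a hypothetical two-field description standing in for D3D11_RASTERIZER_DESC; this is not the driver's implementation.

```cpp
#include <map>
#include <memory>
#include <tuple>

// Hypothetical stand-in for a rasterizer-state description.
struct RastDesc {
    int fillMode = 0;
    int cullMode = 0;
    bool operator<(const RastDesc& o) const {
        return std::tie(fillMode, cullMode) < std::tie(o.fillMode, o.cullMode);
    }
};

struct RastState { RastDesc desc; };

// Creating a state with an identical description returns the existing object,
// so the number of unique state objects does not grow.
struct Device {
    std::map<RastDesc, std::shared_ptr<RastState>> cache;

    std::shared_ptr<RastState> CreateRasterizerState(const RastDesc& d) {
        auto it = cache.find(d);
        if (it != cache.end()) return it->second;
        auto s = std::make_shared<RastState>(RastState{d});
        cache[d] = s;
        return s;
    }
};
```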
-The rasterizer-state interface holds a description for rasterizer state that you can bind to the rasterizer stage. This rasterizer-state interface supports forced sample count.
-To create a rasterizer-state object, call
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-You use the description for rasterizer state in a call to the
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-A reference to a
You use the description for rasterizer state in a call to the
The rasterizer-state interface holds a description for rasterizer state that you can bind to the rasterizer stage. This rasterizer-state interface supports forced sample count and conservative rasterization mode.
-To create a rasterizer-state object, call
Gets the description for rasterizer state that you used to create the rasterizer-state object.
-You use the description for rasterizer state in a call to the
Gets the description for rasterizer state that you used to create the rasterizer-state object.
- A reference to a
You use the description for rasterizer state in a call to the
Sets graphics processing unit (GPU) debug reference default tracking options for specific resource types.
-This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference default tracking options for specific resource types.
- A
A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on"; otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference tracking options.
-This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference tracking options.
-This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets graphics processing unit (GPU) debug reference tracking options.
-A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on"; otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
This API requires the Windows Software Development Kit (SDK) for Windows 8.
-A render-target-view interface identifies the render-target subresources that can be accessed during rendering.
-To create a render-target view, call
A render target is a resource that can be written by the output-merger stage at the end of a render pass. Each render target should also have a corresponding depth-stencil view.
-Get the properties of a render target view.
-Get the properties of a render target view.
-Pointer to the description of a render target view (see
A render-target-view interface represents the render-target subresources that can be accessed during rendering.
-To create a render-target view, call
A render target is a resource that can be written by the output-merger stage at the end of a render pass. Each render target can also have a corresponding depth-stencil view.
-Gets the properties of a render-target view.
-Gets the properties of a render-target view.
-A reference to a
A resource interface provides common actions on all resources.
-You don't directly create a resource interface; instead, you create buffers and textures that inherit from a resource interface. For more info, see How to: Create a Vertex Buffer, How to: Create an Index Buffer, How to: Create a Constant Buffer, and How to: Create a Texture.
-Get the type of the resource.
-Windows Phone 8: This API is supported.
-Gets or sets the eviction priority of a resource.
-Get the type of the resource.
- Pointer to the resource type (see
Windows Phone 8: This API is supported.
-Set the eviction priority of a resource.
-Eviction priority for the resource, which is one of the following values:
Resource priorities determine which resource to evict from video memory when the system has run out of video memory. The resource will not be lost; it will be removed from video memory and placed into system memory, or possibly placed onto the hard drive. The resource will be loaded back into video memory when it is required.
A resource that is set to the maximum priority,
Changing the priorities of resources should be done carefully. The wrong eviction priorities could be a detriment to performance rather than an improvement.
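The eviction rule described above can be sketched generically: when video memory is exhausted, the resident resource with the lowest eviction priority is demoted first. This is an illustrative model with hypothetical names, not the actual memory manager.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Illustrative model of a resident resource and its eviction priority.
// In the real runtime an evicted resource is not lost; it is moved from
// video memory to system memory (or disk) and reloaded when needed.
struct Resource {
    std::string name;
    unsigned priority;   // higher value = keep in video memory longer
};

// Returns the name of the resource chosen for eviction: the lowest priority.
std::string PickEvictionVictim(const std::vector<Resource>& residents) {
    auto it = std::min_element(residents.begin(), residents.end(),
        [](const Resource& a, const Resource& b) {
            return a.priority < b.priority;
        });
    return it->name;
}
```

This also illustrates why careless priority changes hurt: raising the priority of a rarely used resource just forces a more frequently used one to become the victim.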
-Get the eviction priority of a resource.
-One of the following values, which specifies the eviction priority for the resource:
A view interface specifies the parts of a resource the pipeline can access during rendering.
-A view interface is the base interface for all views. There are four types of views; a depth-stencil view, a render-target view, a shader-resource view, and an unordered-access view.
All resources must be bound to the pipeline before they can be accessed.
Get the resource that is accessed through this view.
-Address of a reference to the resource that is accessed through this view. (See
This function increments the reference count of the resource by one, so it is necessary to call Release on the returned reference when the application is done with it. Destroying (or losing) the returned reference before Release is called will result in a memory leak.
-Get the resource that is accessed through this view.
-This function increments the reference count of the resource by one, so it is necessary to call Release on the returned reference when the application is done with it. Destroying (or losing) the returned reference before Release is called will result in a memory leak.
-Get the resource that is accessed through this view.
-This function increments the reference count of the resource by one, so it is necessary to call Dispose on the returned reference when the application is done with it. Destroying (or losing) the returned reference before Dispose is called will result in a memory leak.
-The sampler-state interface holds a description for sampler state that you can bind to any shader stage of the pipeline for reference by texture sample operations.
-To create a sampler-state object, call
To bind a sampler-state object to any pipeline shader stage, call the following methods:
You can bind the same sampler-state object to multiple shader stages simultaneously.
-Gets the description for sampler state that you used to create the sampler-state object.
-You use the description for sampler state in a call to the
Gets the description for sampler state that you used to create the sampler-state object.
-A reference to a
You use the description for sampler state in a call to the
A shader-resource-view interface specifies the subresources a shader can access during rendering. Examples of shader resources include a constant buffer, a texture buffer, and a texture.
-To create a shader-resource view, call
A shader-resource view is required when binding a resource to a shader stage; the binding occurs by calling
Get the shader resource view's description.
-Get the shader resource view's description.
-A reference to a
A shader-resource-view interface represents the subresources a shader can access during rendering. Examples of shader resources include a constant buffer, a texture buffer, and a texture.
-To create a shader-resource view, call
A shader-resource view is required when binding a resource to a shader stage; the binding occurs by calling
Gets the shader-resource view's description.
-Gets the shader-resource view's description.
-A reference to a
Reserved.
Reserved.
A 1D texture interface accesses texel data, which is structured memory.
-To create an empty 1D texture, call
Textures cannot be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render target or depth-stencil resource, call
Get the properties of the texture resource.
-Get the properties of the texture resource.
-Pointer to a resource description (see
A 2D texture interface manages texel data, which is structured memory.
-To create an empty Texture2D resource, call
Textures cannot be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render target or depth-stencil resource, call
Get the properties of the texture resource.
-Get the properties of the texture resource.
-Pointer to a resource description (see
A 2D texture interface represents texel data, which is structured memory.
-To create an empty Texture2D resource, call
Textures can't be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render-target or depth-stencil resource, call
Gets the properties of the texture resource.
-Gets the properties of the texture resource.
-A reference to a
A 3D texture interface accesses texel data, which is structured memory.
-To create an empty Texture3D resource, call
Textures cannot be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render target or depth-stencil resource, call
Get the properties of the texture resource.
-Get the properties of the texture resource.
-Pointer to a resource description (see
A 3D texture interface represents texel data, which is structured memory.
-To create an empty Texture3D resource, call
Textures can't be bound directly to the pipeline; instead, a view must be created and bound. Using a view, texture data can be interpreted at run time within certain restrictions. To use the texture as a render-target or depth-stencil resource, call
Gets the properties of the texture resource.
-Gets the properties of the texture resource.
-A reference to a
The tracing device interface sets shader tracking information, which enables accurate logging and playback of shader execution.
-To get this interface, turn on the debug layer and use IUnknown::QueryInterface from the
Sets the reference rasterizer's default race-condition tracking options for the specified resource types.
-A
A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on," otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
This API requires the Windows Software Development Kit (SDK) for Windows 8.
-Sets the reference rasterizer's race-condition tracking options for a specific shader.
-A reference to the
A combination of D3D11_SHADER_TRACKING_OPTIONS-typed flags that are combined by using a bitwise OR operation. The resulting value identifies tracking options. If a flag is present, the tracking option that the flag represents is set to "on"; otherwise the tracking option is set to "off."
This method returns one of the Direct3D 11 return codes.
A view interface specifies the parts of a resource the pipeline can access during rendering.
-To create a view for an unordered access resource, call
All resources must be bound to the pipeline before they can be accessed. Call
Get a description of the resource.
-Get a description of the resource.
-Pointer to a resource description (see
An unordered-access-view interface represents the parts of a resource the pipeline can access during rendering.
-To create a view for an unordered access resource, call
All resources must be bound to the pipeline before they can be accessed. Call
Gets a description of the resource.
-Gets a description of the resource.
-A reference to a
The
The methods of
The
The
You must call the BeginEvent and EndEvent methods in pairs; pairs of calls to these methods can nest within pairs of calls to these methods at a higher level in the application's call stack. In other words, a "Draw World" section can entirely contain another section named "Draw Trees," which can in turn entirely contain a section called "Draw Oaks." You can only associate an EndEvent method with the most recent BeginEvent method, that is, pairs cannot overlap. You cannot call an EndEvent for any BeginEvent that preceded the most recent BeginEvent. In fact, the runtime interprets the first EndEvent as ending the second BeginEvent.
-Determines whether the calling application is running under a Microsoft Direct3D profiling tool.
-You can call GetStatus to determine whether your application is running under a Direct3D profiling tool before you make further calls to other methods of the
Marks the beginning of a section of event code.
-A
Returns the number of previous calls to BeginEvent that have not yet been finalized by calls to the
The return value is -1 if the calling application is not running under a Direct3D profiling tool.
You call the EndEvent method to mark the end of the section of event code.
A user can visualize the event when the calling application is running under an enabled Direct3D profiling tool such as Microsoft Visual Studio Ultimate 2012.
BeginEvent has no effect if the calling application is not running under an enabled Direct3D profiling tool.
-Marks the end of a section of event code.
-Returns the number of previous calls to the
The return value is -1 if the calling application is not running under a Direct3D profiling tool.
You call the BeginEvent method to mark the beginning of the section of event code.
A user can visualize the event when the calling application is running under an enabled Direct3D profiling tool such as Microsoft Visual Studio Ultimate 2012.
EndEvent has no effect if the calling application is not running under an enabled Direct3D profiling tool.
-Marks a single point of execution in code.
-A
A user can visualize the marker when the calling application is running under an enabled Direct3D profiling tool such as Microsoft Visual Studio Ultimate 2012.
SetMarker has no effect if the calling application is not running under an enabled Direct3D profiling tool.
-Determines whether the calling application is running under a Microsoft Direct3D profiling tool.
-The return value is nonzero if the calling application is running under a Direct3D profiling tool such as Visual Studio Ultimate 2012, and zero otherwise.
You can call GetStatus to determine whether your application is running under a Direct3D profiling tool before you make further calls to other methods of the
A vertex-shader interface manages an executable program (a vertex shader) that controls the vertex-shader stage.
-The vertex-shader interface has no methods; use HLSL to implement your shader functionality. All shaders are implemented from a common set of features referred to as the common-shader core.
To create a vertex shader interface, call
This interface is defined in D3D11.h.
-The
Sets the constant buffers used by the vertex shader pipeline stage.
- Index into the device's zero-based array to begin setting constant buffers to (ranges from 0 to
Number of buffers to set (ranges from 0 to
Array of constant buffers (see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The Direct3D 11.1 runtime, which is available starting with Windows 8, can bind a larger number of
If the application wants the shader to access other parts of the buffer, it must call the VSSetConstantBuffers1 method instead.
Windows Phone 8: This API is supported.
-Set a vertex shader to the device.
-Pointer to a vertex shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a vertex shader to the device.
-Pointer to a vertex shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Set a vertex shader to the device.
-Pointer to a vertex shader (see
A reference to an array of class-instance interfaces (see
The number of class-instance interfaces in the array.
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
The maximum number of instances a shader can have is 256.
-Bind an array of shader resources to the vertex-shader stage.
-Index into the device's zero-based array to begin setting shader resources to (range is from 0 to
Number of shader resources to set. Up to a maximum of 128 slots are available for shader resources (range is from 0 to
Array of shader resource view interfaces to set to the device.
If an overlapping resource view is already bound to an output slot, such as a render target, then this API will fill the destination shader resource slot with
For information about creating shader-resource views, see
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Set an array of sampler states to the vertex shader pipeline stage.
-Index into the device's zero-based array to begin setting samplers to (ranges from 0 to
Number of samplers in the array. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Pointer to an array of sampler-state interfaces (see
Any sampler may be set to
//Default sampler state:
D3D11_SAMPLER_DESC SamplerDesc;
SamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
SamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
SamplerDesc.MipLODBias = 0;
SamplerDesc.MaxAnisotropy = 1;
SamplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
SamplerDesc.BorderColor[0] = 1.0f;
SamplerDesc.BorderColor[1] = 1.0f;
SamplerDesc.BorderColor[2] = 1.0f;
SamplerDesc.BorderColor[3] = 1.0f;
SamplerDesc.MinLOD = -FLT_MAX;
SamplerDesc.MaxLOD = FLT_MAX;
The method will hold a reference to the interfaces passed in. This differs from the device state behavior in Direct3D 10.
-Get the constant buffers used by the vertex shader pipeline stage.
-Index into the device's zero-based array to begin retrieving constant buffers from (ranges from 0 to
Number of buffers to retrieve (ranges from 0 to
Array of constant buffer interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the vertex shader currently set on the device.
-Address of a reference to a vertex shader (see
Pointer to an array of class instance interfaces (see
The number of class-instance elements in the array.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get the vertex shader resources.
-Index into the device's zero-based array to begin getting shader resources from (ranges from 0 to
The number of resources to get from the device. Up to a maximum of 128 slots are available for shader resources (ranges from 0 to
Array of shader resource view interfaces to be returned by the device.
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Get an array of sampler states from the vertex shader pipeline stage.
-Index into a zero-based array to begin getting samplers from (ranges from 0 to
Number of samplers to get from a device context. Each pipeline stage has a total of 16 sampler slots available (ranges from 0 to
Array of sampler-state interface references (see
Any returned interfaces will have their reference count incremented by one. Applications should call IUnknown::Release on the returned interfaces when they are no longer needed to avoid memory leaks.
-Provides the video functionality of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call QueryInterface with an
This interface provides access to several areas of Microsoft Direct3D video functionality:
In Microsoft Direct3D 9, the equivalent functions were distributed across several interfaces:
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Gets a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
-The graphics driver allocates the buffers that are used for DXVA decoding. This method locks the Microsoft Direct3D surface that contains the buffer. When you are done using the buffer, call
Gets a reference to a decoder buffer.
-A reference to the
The type of buffer to retrieve, specified as a member of the
Receives the size of the buffer, in bytes.
Receives a reference to the start of the memory buffer.
If this method succeeds, it returns
The graphics driver allocates the buffers that are used for decoding. This method locks the Microsoft Direct3D surface that contains the buffer. When you are done using the buffer, call
Releases a buffer that was obtained by calling the
If this method succeeds, it returns
Starts a decoding operation to decode a video frame.
-A reference to the
A reference to the
The size of the content key that is specified in pContentKey. If pContentKey is
An optional reference to a content key that was used to encrypt the frame data. If no content key was used, set this parameter to
If this method succeeds, it returns
After this method is called, call
Each call to DecoderBeginFrame must have a matching call to DecoderEndFrame. In most cases you cannot nest DecoderBeginFrame calls, but some codecs, such as VC-1, can have nested DecoderBeginFrame calls for special operations like post processing.
The following encryption scenarios are supported through the content key:
Signals the end of a decoding operation.
-A reference to the
If this method succeeds, it returns
Submits one or more buffers for decoding.
-A reference to the
The number of buffers submitted for decoding.
A reference to an array of
If this method succeeds, it returns
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
When using feature levels 9_x, all partially encrypted buffers must use the same EncryptedBlockInfo, and partial encryption cannot be turned off on a per frame basis.
-Performs an extended function for decoding. This method enables extensions to the basic decoder functionality.
-A reference to the
A reference to a
If this method succeeds, it returns
Sets the target rectangle for the video processor.
-A reference to the
Specifies whether to apply the target rectangle.
A reference to a
The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface.
If this method is never called, or if the Enable parameter is
Sets the background color for the video processor.
-A reference to the
If TRUE, the color is specified as a YCbCr value. Otherwise, the color is specified as an RGB value.
A reference to a
The video processor uses the background color to fill areas of the target rectangle that do not contain a video image. Areas outside the target rectangle are not affected.
-Sets the output color space for the video processor.
-A reference to the
A reference to a
Sets the alpha fill mode for data that the video processor writes to the render target.
-A reference to the
The alpha fill mode, specified as a
The zero-based index of an input stream. This parameter is used if AlphaFillMode is
To find out which fill modes the device supports, call the
The default fill mode is
Sets the amount of downsampling to perform on the output.
-A reference to the
If TRUE, downsampling is enabled. Otherwise, downsampling is disabled and the Size member is ignored.
The sampling size.
Downsampling is sometimes used to reduce the quality of premium content when other forms of content protection are not available. By default, downsampling is disabled.
If the Enable parameter is TRUE, the driver downsamples the composed image to the specified size, and then scales it back to the size of the target rectangle.
The width and height of Size must be greater than zero. If the size is larger than the target rectangle, downsampling does not occur.
To use this feature, the driver must support downsampling, indicated by the
Specifies whether the video processor produces stereo video frames.
-A reference to the
If TRUE, stereo output is enabled. Otherwise, the video processor produces mono video frames.
By default, the video processor produces mono video frames.
To use this feature, the driver must support stereo video, indicated by the
Sets a driver-specific video processing state.
-A reference to the
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that contains private state data. The method passes this buffer directly to the driver without validation. It is the responsibility of the driver to validate the data.
If this method succeeds, it returns
Gets the current target rectangle for the video processor.
-A reference to the
Receives the value TRUE if the target rectangle was explicitly set using the
If Enabled receives the value TRUE, this parameter receives the target rectangle. Otherwise, this parameter is ignored.
Gets the current background color for the video processor.
-A reference to the
Receives the value TRUE if the background color is a YCbCr color, or
A reference to a
Gets the current output color space for the video processor.
-A reference to the
A reference to a
Gets the current alpha fill mode for the video processor.
-A reference to the
Receives the alpha fill mode, as a
If the alpha fill mode is
Gets the current level of downsampling that is performed by the video processor.
-A reference to the
Receives the value TRUE if downsampling was explicitly enabled using the
If Enabled receives the value TRUE, this parameter receives the downsampling size. Otherwise, this parameter is ignored.
Queries whether the video processor produces stereo video frames.
-A reference to the
Receives the value TRUE if stereo output is enabled, or
Gets private state data from the video processor.
-A reference to the
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that receives the private state data.
If this method succeeds, it returns
Specifies whether an input stream on the video processor contains interlaced or progressive frames.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A
Sets the color space for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A reference to a
Sets the rate at which the video processor produces output frames for an input stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The output rate, specified as a
Specifies how the driver performs frame-rate conversion, if required.
Value | Meaning
---|---
TRUE | Repeat frames.
FALSE | Interpolate frames.
A reference to a
The standard output rates are normal frame-rate (
Depending on the output rate, the driver might need to convert the frame rate. If so, the value of RepeatFrame controls whether the driver creates interpolated frames or simply repeats input frames.
-Sets the source rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether to apply the source rectangle.
A reference to a
The source rectangle is the portion of the input surface that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface.
If this method is never called, or if the Enable parameter is
Sets the destination rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether to apply the destination rectangle.
A reference to a
The destination rectangle is the portion of the output surface that receives the blit for this stream. The destination rectangle is given in pixel coordinates, relative to the output surface.
The default destination rectangle is an empty rectangle (0, 0, 0, 0). If this method is never called, or if the Enable parameter is
Sets the planar alpha for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether alpha blending is enabled.
The planar alpha value. The value can range from 0.0 (transparent) to 1.0 (opaque). If Enable is
To use this feature, the driver must support stereo video, indicated by the D3D11_VIDEO_PROCESSOR_FEATURE_CAPS_ALPHA_STREAM capability flag. To query for this capability, call
Alpha blending is disabled by default.
For each pixel, the destination color value is computed as follows:
Cd = Cs * (As * Ap * Ae) + Cd * (1.0 - As * Ap * Ae)
where:

- Cd = the color value of the destination pixel
- Cs = the color value of the source pixel
- As = the per-pixel source alpha
- Ap = the planar alpha value
- Ae = the palette-entry alpha value, or 1.0 (see Note)

The destination alpha value is computed according to the alpha fill mode. For more information, see
Sets the color-palette entries for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The number of elements in the pEntries array.
A reference to an array of palette entries. For RGB streams, the palette entries use the DXGI_FORMAT_B8G8R8A8 representation. For YCbCr streams, the palette entries use the
This method applies only to input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use the first 16 entries in the list. Formats with 8 bpp use the first 256 entries.
If a pixel has a palette index greater than the number of entries, the device treats the pixel as white with opaque alpha. For full-range RGB, this value is (255, 255, 255, 255); for YCbCr the value is (255, 235, 128, 128).
If the driver does not report the
Sets the pixel aspect ratio for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether the pSourceAspectRatio and pDestinationAspectRatio parameters contain valid values. Otherwise, the pixel aspect ratios are unspecified.
A reference to a
A reference to a
This function can only be called if the driver reports the
Pixel aspect ratios of the form 0/n and n/0 are not valid.
The default pixel aspect ratio is 1:1 (square pixels).
-Sets the luma key for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether luma keying is enabled.
The lower bound for the luma key. The valid range is [0...1]. If Enable is
The upper bound for the luma key. The valid range is [0...1]. If Enable is
To use this feature, the driver must support luma keying, indicated by the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
-Enables or disables stereo 3D video for an input stream on the video processor. In addition, this method specifies the layout of the video frames in memory.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies whether stereo 3D is enabled for this stream. If the value is
Specifies the layout of the two stereo views in memory, as a
If TRUE, frame 0 contains the left view. Otherwise, frame 0 contains the right view.
This parameter is ignored for the following stereo formats:
If TRUE, frame 0 contains the base view. Otherwise, frame 1 contains the base view.
This parameter is ignored for the following stereo formats:
A flag from the
For
If Format is not
Enables or disables automatic processing features on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
If TRUE, automatic processing features are enabled. If
By default, the driver might perform certain processing tasks automatically during the video processor blit. This method enables the application to disable these extra video processing features. For example, if you provide your own pixel shader for the video processor, you might want to disable the driver's automatic processing.
-Enables or disables an image filter for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The filter, specified as a
To query which filters the driver supports, call
Specifies whether to enable the filter.
The filter level. If Enable is
To find the valid range of levels for a specified filter, call
Sets a driver-specific state on a video processing stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that contains private state data. The method passes this buffer directly to the driver without validation. It is the responsibility of the driver to validate the data.
If this method succeeds, it returns
Gets the format of an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives a
Gets the color space for an input stream of the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives a
Gets the rate at which the video processor produces output frames for an input stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives a
Receives a Boolean value that specifies how the driver performs frame-rate conversion, if required.
Value | Meaning
---|---
TRUE | Repeat frames.
FALSE | Interpolate frames.
A reference to a
Gets the source rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if the source rectangle is enabled, or
A reference to a
Gets the destination rectangle for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if the destination rectangle is enabled, or
A reference to a
Gets the planar alpha for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if planar alpha is enabled, or
Receives the planar alpha value. The value can range from 0.0 (transparent) to 1.0 (opaque).
Gets the color-palette entries for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The number of entries in the pEntries array.
A reference to a UINT array allocated by the caller. The method fills the array with the palette entries. For RGB streams, the palette entries use the DXGI_FORMAT_B8G8R8A8 representation. For YCbCr streams, the palette entries use the
This method applies only to input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use 16 palette entries. Formats with 8 bpp use 256 entries.
-Gets the pixel aspect ratio for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if the pixel aspect ratio is specified. Otherwise, receives the value
A reference to a
A reference to a
When the method returns, if *pEnabled is TRUE, the pSourceAspectRatio and pDestinationAspectRatio parameters contain the pixel aspect ratios. Otherwise, the default pixel aspect ratio is 1:1 (square pixels).
-Gets the luma key for an input stream of the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if luma keying is enabled, or
Receives the lower bound for the luma key. The valid range is [0, 1].
Receives the upper bound for the luma key. The valid range is [0, 1].
Gets the stereo 3D format for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if stereo 3D is enabled for this stream, or
Receives a
Receives a Boolean value.
Value | Meaning |
---|---|
| Frame 0 contains the left view. |
Frame 0 contains the right view. |
Receives a Boolean value.
Value | Meaning |
---|---|
| Frame 0 contains the base view. |
Frame 1 contains the base view. |
Receives a
Receives the pixel offset used for
Queries whether automatic processing features of the video processor are enabled.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Receives the value TRUE if automatic processing features are enabled, or
Automatic processing refers to additional image processing that drivers might have performed on the image data prior to the application receiving the data.
-Gets the image filter settings for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
The filter to query, specified as a
Receives the value TRUE if the image filter is enabled, or
Receives the filter level.
Gets a driver-specific state for a video processing stream.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
A reference to a
The size of the pData buffer, in bytes.
A reference to a buffer that receives the private state data.
If this method succeeds, it returns
Performs a video processing operation on one or more input samples and writes the result to a Direct3D surface.
-A reference to the
A reference to the
The frame number of the output video frame, indexed from zero.
The number of input streams to process.
A reference to an array of
If this method succeeds, it returns
The maximum value of StreamCount is given in the MaxStreamStates member of the
If the output stereo mode is TRUE:
Otherwise:
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
Establishes the session key for a cryptographic session.
-A reference to the
The size of the pData byte array, in bytes.
A reference to a byte array that contains the encrypted session key.
If this method succeeds, it returns
The key exchange mechanism depends on the type of cryptographic session.
For RSA Encryption Scheme - Optimal Asymmetric Encryption Padding (RSAES-OAEP), the software decoder generates the secret key, encrypts the secret key by using the public key with RSAES-OAEP, and places the cipher text in the pData parameter. The actual size of the buffer for RSAES-OAEP is 256 bytes.
-Reads encrypted data from a protected surface.
-A reference to the
A reference to the
A reference to the
The size of the pIV buffer, in bytes.
A reference to a buffer that receives the initialization vector (IV). The caller allocates this buffer, but the driver generates the IV.
For 128-bit AES-CTR encryption, pIV points to a
Not all drivers support this method. To query the driver capabilities, call
Some drivers might require a separate key to decrypt the data that is read back. To check for this requirement, call GetContentProtectionCaps and check for the
This method has the following limitations:
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
Writes encrypted data to a protected surface.
-A reference to the
A reference to the surface that contains the source data.
A reference to the protected surface where the encrypted data is written.
A reference to a
If the driver supports partially encrypted buffers, pEncryptedBlockInfo indicates which portions of the buffer are encrypted. If the entire surface is encrypted, set this parameter to
To check whether the driver supports partially encrypted buffers, call
The size of the encrypted content key, in bytes.
A reference to a buffer that contains a content encryption key, or
If the driver supports content keys, use the content key to encrypt the surface. Encrypt the content key using the session key, and place the resulting cipher text in pContentKey. If the driver does not support content keys, use the session key to encrypt the surface and set pContentKey to
The size of the pIV buffer, in bytes.
A reference to a buffer that contains the initialization vector (IV).
For 128-bit AES-CTR encryption, pIV points to a
For other encryption types, a different structure might be used, or the encryption might not use an IV.
Not all hardware or drivers support this functionality for all cryptographic types. This function can only be called when the
This method does not support writing to sub-rectangles of the surface.
If the hardware and driver support a content key:
Otherwise, the data is encrypted by the caller using the session key and
If the driver and hardware support partially encrypted buffers, pEncryptedBlockInfo indicates which portions of the buffer are encrypted and which are not. If the entire buffer is encrypted, pEncryptedBlockInfo should be
The
This function does not honor a D3D11 predicate that may have been set.
If the application uses D3D11 queries, this function may not be accounted for with
Gets a random number that can be used to refresh the session key.
-A reference to the
The size of the pRandomNumber array, in bytes. The size should match the size of the session key.
A reference to a byte array that receives a random number.
To generate a new session key, perform a bitwise XOR between the previous session key and the random number. The new session key does not take effect until the application calls
To query whether the driver supports this method, call
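The key-refresh derivation described above (XOR of the previous session key with the driver-supplied random number) can be sketched as below. `RefreshSessionKey` is a hypothetical helper, not part of the API; it assumes both buffers have the same length, per the requirement that the random number match the session key size:

```cpp
#include <cstdint>
#include <vector>

// Derives the next session key: nextKey[i] = previousKey[i] XOR randomNumber[i].
// Hypothetical sketch; real key material comes from the driver, and the new
// key does not take effect until the application switches to it.
static std::vector<std::uint8_t> RefreshSessionKey(
    const std::vector<std::uint8_t>& previousKey,
    const std::vector<std::uint8_t>& randomNumber)
{
    // Assumption: randomNumber.size() == previousKey.size().
    std::vector<std::uint8_t> nextKey(previousKey.size());
    for (std::size_t i = 0; i < previousKey.size(); ++i)
        nextKey[i] = previousKey[i] ^ randomNumber[i]; // byte-wise XOR
    return nextKey;
}
```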
Switches to a new session key.
-A reference to the
This function can only be called when the
Before calling this method, call
Gets the cryptographic key to decrypt the data returned by the
If this method succeeds, it returns
This method applies only when the driver requires a separate content key for the EncryptionBlt method. For more information, see the Remarks for EncryptionBlt.
Each time this method is called, the driver generates a new key.
The KeySize should match the size of the session key.
The read back key is encrypted by the driver/hardware using the session key.
-Establishes a session key for an authenticated channel.
-A reference to the
The size of the data in the pData array, in bytes.
A reference to a byte array that contains the encrypted session key. The buffer must contain 256 bytes of data, encrypted using RSA Encryption Scheme - Optimal Asymmetric Encryption Padding (RSAES-OAEP).
If this method succeeds, it returns
This method will fail if the channel type is
Sends a query to an authenticated channel.
-A reference to the
The size of the pInput array, in bytes.
A reference to a byte array that contains input data for the query. This array always starts with a
The size of the pOutput array, in bytes.
A reference to a byte array that receives the result of the query. This array always starts with a
If this method succeeds, it returns
Sends a configuration command to an authenticated channel.
-A reference to the
The size of the pInput array, in bytes.
A reference to a byte array that contains input data for the command. This buffer always starts with a
A reference to a
If this method succeeds, it returns
Sets the stream rotation for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies if the stream is to be rotated in a clockwise orientation.
Specifies the rotation of the stream.
This is an optional state and the application should only use it if
The stream source rectangle will be specified in the pre-rotation coordinates (typically landscape) and the stream destination rectangle will be specified in the post-rotation coordinates (typically portrait). The application must update the stream destination rectangle correctly when using a rotation value other than 0° and 180°.
-Gets the stream rotation for an input stream on the video processor.
-A reference to the
The zero-based index of the input stream. To get the maximum number of streams, call
Specifies if the stream is rotated.
Specifies the rotation of the stream in a clockwise orientation.
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Gets a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
-A reference to the
The type of buffer to retrieve, specified as a member of the
The graphics driver allocates the buffers that are used for DXVA decoding. This method locks the Microsoft Direct3D surface that contains the buffer. When you are done using the buffer, call
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the video functionality of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call QueryInterface with an
Submits one or more buffers for decoding.
-A reference to the
The number of buffers submitted for decoding.
A reference to an array of
If this method succeeds, it returns
This function does not honor any D3D11 predicate that may have been set.
Allows the driver to return IHV-specific information used when initializing the new hardware key.
-A reference to the
The size of the memory referenced by the pPrivateInputData parameter.
The private input data. The contents of this parameter are defined by the implementation of the secure execution environment. It may contain data about the license or about the stream properties.
A reference to the private output data. The return data is defined by the implementation of the secure execution environment. It may contain graphics-specific data to be associated with the underlying hardware key.
This method returns one of the following error codes.
The operation completed successfully. | |
E_OUTOFMEMORY | There is insufficient memory to complete the operation. |
Checks the status of a crypto session.
-Specifies a
A
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
E_OUTOFMEMORY | There is insufficient memory to complete the operation. |
Indicates that decoder downsampling will be used and that the driver should allocate the appropriate reference frames.
-A reference to the
The color space information of the reference frame data.
The resolution, format, and colorspace of the output/display frames. This is the destination resolution and format of the downsample operation.
The number of reference frames to be used in the operation.
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
E_OUTOFMEMORY | There is insufficient memory to complete the operation. |
This function can only be called once for a specific
Updates the decoder downsampling parameters.
-A reference to the
The resolution, format, and colorspace of the output/display frames. This is the destination resolution and format of the downsample operation.
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
E_OUTOFMEMORY | There is insufficient memory to complete the operation. |
This method can only be called after decode downsampling is enabled by calling DecoderEnableDownsampling. This method is only supported if the
Sets the color space information for the video processor output surface.
-A reference to the
A
Sets a value indicating whether the output surface from a call to
Gets the color space information for the video processor output surface.
-A reference to the
A reference to a
Gets a value indicating whether the output surface from a call to
Sets the color space information for the video processor input stream.
-A reference to the
An index identifying the input stream.
A
Specifies whether the video processor input stream should be flipped vertically or horizontally.
-A reference to the
An index identifying the input stream.
True if mirroring should be enabled; otherwise, false.
True if the stream should be flipped horizontally; otherwise, false.
True if the stream should be flipped vertically; otherwise, false.
When used in combination, transformations on the processor input stream should be applied in the following order:
Gets the color space information for the video processor input stream.
-A reference to the
An index identifying the input stream.
A reference to a
Gets values that indicate whether the video processor input stream is being flipped vertically or horizontally.
-A reference to the
An index identifying the input stream.
A reference to a boolean value indicating whether mirroring is enabled. True if mirroring is enabled; otherwise, false.
A reference to a boolean value indicating whether the stream is being flipped horizontally. True if the stream is being flipped horizontally; otherwise, false.
A reference to a boolean value indicating whether the stream is being flipped vertically. True if the stream is being flipped vertically; otherwise, false.
Returns driver hints that indicate which of the video processor operations are best performed using multi-plane overlay hardware rather than
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
E_OUTOFMEMORY | There is insufficient memory to complete the operation. |
This method computes the behavior hints using the current state of the video processor as set by the "SetOutput" and "SetStream" methods of
Provides the video functionality of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call QueryInterface with an
This interface provides access to several areas of Microsoft Direct3D video functionality:
In Microsoft Direct3D 9, the equivalent functions were distributed across several interfaces:
Represents a hardware-accelerated video decoder for Microsoft Direct3D 11.
-To get a reference to this interface, call
Gets a handle to the driver.
-The driver handle can be used to configure content protection.
-Gets the parameters that were used to create the decoder.
-A reference to a
A reference to a
If this method succeeds, it returns
Gets a handle to the driver.
-Receives a handle to the driver.
If this method succeeds, it returns
The driver handle can be used to configure content protection.
-Identifies the output surfaces that can be accessed during video decoding.
-To get a reference to this interface, call
Gets the properties of the video decoder output view.
-Gets the properties of the video decoder output view.
-A reference to a
Provides the video decoding and video processing capabilities of a Microsoft Direct3D 11 device.
-The Direct3D 11 device supports this interface. To get a reference to this interface, call QueryInterface with an
If you query an
Gets the number of profiles that are supported by the driver.
-To enumerate the profiles, call
Creates a video decoder device for Microsoft Direct3D 11.
-A reference to a
A reference to a
Receives a reference to the
If this method succeeds, it returns
This method allocates the necessary decoder buffers.
The
Creates a video processor device for Microsoft Direct3D 11.
-A reference to the
Specifies the frame-rate conversion capabilities for the video processor. The value is a zero-based index that corresponds to the TypeIndex parameter of the
Receives a reference to the
If this method succeeds, it returns
The
Creates a channel to communicate with the Microsoft Direct3D device or the graphics driver. The channel can be used to send commands and queries for content protection.
-Specifies the type of channel, as a member of the
Receives a reference to the
If this method succeeds, it returns
If the ChannelType parameter is
If ChannelType is
Creates a cryptographic session to encrypt video content that is sent to the graphics driver.
-A reference to a
Value | Meaning |
---|---|
| 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher. |
A reference to a
A reference to a
Value | Meaning |
---|---|
| The caller will create the session key, encrypt it with RSA Encryption Scheme - Optimal Asymmetric Encryption Padding (RSAES-OAEP) by using the driver's public key, and pass the session key to the driver. |
Receives a reference to the
If this method succeeds, it returns
The
Creates a resource view for a video decoder, describing the output sample for the decoding operation.
-A reference to the
A reference to a
Receives a reference to the
If this method succeeds, it returns
Set the ppVDOVView parameter to
Creates a resource view for a video processor, describing the input sample for the video processing operation.
-A reference to the
A reference to the
A reference to a
Receives a reference to the
If this method succeeds, it returns
Set the ppVPIView parameter to
The surface format is given in the FourCC member of the
Resources used for video processor input views must use the following bind flag combinations:
Creates a resource view for a video processor, describing the output sample for the video processing operation.
-A reference to the
A reference to the
A reference to a
Receives a reference to the
If this method succeeds, it returns
Set the ppVPOView parameter to
Resources used for video processor output views must use the following
If stereo output is enabled, the output view must have 2 array elements. Otherwise, it must only have a single array element.
-Enumerates the video processor capabilities of the driver.
-A reference to a
Receives a reference to the
If this method succeeds, it returns
To create the video processor device, pass the
Gets the number of profiles that are supported by the driver.
-Returns the number of profiles.
To enumerate the profiles, call
Gets a profile that is supported by the driver.
-The zero-based index of the profile. To get the number of profiles that the driver supports, call
Receives a
If this method succeeds, it returns
Given a profile, checks whether the driver supports a specified output format.
-A reference to a
A
Receives the value TRUE if the format is supported, or
If this method succeeds, it returns
If the driver does not support the profile given in pDecoderProfile, the method returns E_INVALIDARG. If the driver supports the profile, but the DXGI format is not compatible with the profile, the method succeeds but returns the value
Gets the number of decoder configurations that the driver supports for a specified video description.
-A reference to a
Receives the number of decoder configurations.
If this method succeeds, it returns
To enumerate the decoder configurations, call
Gets a decoder configuration that is supported by the driver.
-A reference to a
The zero-based index of the decoder configuration. To get the number of configurations that the driver supports, call
A reference to a
If this method succeeds, it returns
Queries the driver for its content protection capabilities.
-A reference to a
Value | Meaning |
---|---|
| 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher. |
If no encryption will be used, set this parameter to
A reference to a
The driver might disallow some combinations of encryption type and profile.
A reference to a
If this method succeeds, it returns
Gets a cryptographic key-exchange mechanism that is supported by the driver.
-A reference to a
Value | Meaning |
---|---|
| 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher. |
A reference to a
The zero-based index of the key-exchange type. The driver reports the number of types in the KeyExchangeTypeCount member of the
Receives a
If this method succeeds, it returns
Sets private data on the video device and associates that data with a
The
The size of the data, in bytes.
A reference to the data.
If this method succeeds, it returns
Sets a private
If this method succeeds, it returns
Provides the video decoding and video processing capabilities of a Microsoft Direct3D 11 device.
-The Direct3D 11 device supports this interface. To get a reference to this interface, call QueryInterface with an
Retrieves optional sizes for private driver data.
-Indicates the crypto type for which the private input and output size is queried.
Indicates the decoder profile for which the private input and output size is queried.
Indicates the key exchange type for which the private input and output size is queried.
Returns the size of private data that the driver needs for input commands.
Returns the size of private data that the driver needs for output commands.
If this method succeeds, it returns
When pKeyExchangeType is D3D11_KEY_EXCHANGE_HW_PROTECTION, the following behavior is expected in the
Retrieves capabilities and limitations of the video decoder.
-The decode profile for which the capabilities are queried.
The video width for which the capabilities are queried.
The video height for which the capabilities are queried.
The frame rate of the video content. This information is used by the driver to determine whether the video can be decoded in real-time.
The bit rate of the video stream. A value of zero indicates that the bit rate can be ignored.
The type of cryptography used to encrypt the video stream. A value of
A reference to a bitwise OR combination of
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
Indicates whether the video decoder supports downsampling with the specified input format, and whether real-time downsampling is supported.
-An object describing the decoding profile, the resolution, and format of the input stream. This is the resolution and format to be downsampled.
A
The configuration data associated with the decode profile.
The frame rate of the video content. This is used by the driver to determine whether the video can be decoded in real-time.
An object describing the resolution, format, and colorspace of the output frames. This is the destination resolution and format of the downsample operation.
Pointer to a boolean value set by the driver that indicates if downsampling is supported with the specified input data. True if the driver supports the requested downsampling; otherwise, false.
Pointer to a boolean value set by the driver that indicates if real-time decoding is supported with the specified input data. True if the driver supports the requested real-time decoding; otherwise, false. Note that the returned value is based on the current configuration of the video decoder and does not guarantee that real-time decoding will be supported for future downsampling operations.
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
You should call GetVideoDecoderCaps to determine whether decoder downsampling is supported before checking support for a specific configuration.
-Allows the driver to recommend optimal output downsample parameters from the input parameters.
-A
A
The configuration data associated with the decode profile.
The frame rate of the video content. This is used by the driver to determine whether the video can be decoded in real-time.
Pointer to a
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
You should call GetVideoDecoderCaps to determine whether decoder downsampling is supported before checking support for a specific configuration.
-Represents a video processor for Microsoft Direct3D 11.
-To get a reference to this interface, call
Gets the content description that was used to create the video processor.
-Gets the rate conversion capabilities of the video processor.
-Gets the content description that was used to create the video processor.
-A reference to a
Gets the rate conversion capabilities of the video processor.
-A reference to a
Gets the content description that was used to create this enumerator.
-Gets the content description that was used to create this enumerator.
-Gets the capabilities of the video processor.
-Gets the content description that was used to create this enumerator.
-A reference to a
If this method succeeds, it returns
Queries whether the video processor supports a specified video format.
-The video format to query, specified as a
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets the capabilities of the video processor.
-A reference to a
If this method succeeds, it returns
Returns a group of video processor capabilities that are associated with frame-rate conversion, including deinterlacing and inverse telecine.
-The zero-based index of the group to retrieve. To get the maximum index, call
A reference to a
If this method succeeds, it returns
The capabilities defined in the
Gets a list of custom frame rates that a video processor supports.
-The zero-based index of the frame-rate capability group. To get the maximum index, call
The zero-based index of the custom rate to retrieve. To get the maximum index, call
This index value is always relative to the capability group specified in the TypeIndex parameter.
A reference to a
If this method succeeds, it returns
Gets the range of values for an image filter.
-The type of image filter, specified as a
A reference to a
If this method succeeds, it returns
Enumerates the video processor capabilities of a Microsoft Direct3D 11 device.
-To get a reference to this interface, call
Indicates whether the driver supports the specified combination of format and colorspace conversions.
-The format of the video processor input.
The colorspace of the video processor input.
The format of the video processor output.
The colorspace of the video processor output.
Pointer to a boolean that is set by the driver to indicate if the specified combination of format and colorspace conversions is supported. True if the conversion is supported; otherwise, false.
This method returns one of the following error codes.
The operation completed successfully. | |
E_INVALIDARG | An invalid parameter was passed or this function was called using an invalid calling pattern. |
Identifies the input surfaces that can be accessed during video processing.
-To get a reference to this interface, call
Gets the properties of the video processor input view.
-Gets the properties of the video processor input view.
-A reference to a
Identifies the output surfaces that can be accessed during video processing.
-To get a reference to this interface, call
Gets the properties of the video processor output view.
-Gets the properties of the video processor output view.
-A reference to a
Contains an initialization vector (IV) for 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher encryption.
-The IV, in big-endian format.
The block count, in big-endian format.
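Because both fields of the IV structure are stored big-endian regardless of host byte order, writing a 64-bit value into them can be sketched as below. `ToBigEndian` is a hypothetical helper shown only to illustrate the byte layout (most significant byte first):

```cpp
#include <array>
#include <cstdint>

// Serializes a 64-bit value in big-endian byte order, as required for the
// IV and block-count fields described above. Hypothetical illustration only;
// production code on Windows might use _byteswap_uint64 instead.
static std::array<std::uint8_t, 8> ToBigEndian(std::uint64_t value)
{
    std::array<std::uint8_t, 8> bytes{};
    for (int i = 0; i < 8; ++i)
        bytes[i] = static_cast<std::uint8_t>(value >> (8 * (7 - i))); // MSB first
    return bytes;
}
```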
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_ENCRYPTION_WHEN_ACCESSIBLE command.
-A
A
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_CRYPTO_SESSION command.
-A
A handle to the decoder device. Get this from
A handle to the cryptographic session. Get this from
A handle to the Direct3D device. Get this from D3D11VideoContext::QueryAuthenticatedChannel using D3D11_AUTHENTICATED_QUERY_DEVICE_HANDLE. -
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_INITIALIZE command.
-A
The initial sequence number for queries.
The initial sequence number for commands.
Contains input data for the
Contains the response from the
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_PROTECTION command.
-A
A
Contains input data for a D3D11_AUTHENTICATED_CONFIGURE_SHARED_RESOURCE command.
-A
A
A process handle. If the ProcessType member equals
If TRUE, the specified process has access to restricted shared resources.
Specifies the protection level for video content.
-If 1, video content protection is enabled.
If 1, the application requires video to be displayed using either a hardware overlay or full-screen exclusive mode.
Reserved. Set all bits to zero.
Use this member to access all of the bits in the union.
Contains the response to a D3D11_AUTHENTICATED_QUERY_ENCRYPTION_WHEN_ACCESSIBLE_GUID_COUNT query.
-A
The number of encryption GUIDs.
Contains input data for a D3D11_AUTHENTICATED_QUERY_ENCRYPTION_WHEN_ACCESSIBLE_GUID query.
-A
The index of the encryption
Contains the response to a D3D11_AUTHENTICATED_QUERY_ENCRYPTION_WHEN_ACCESSIBLE_GUID query.
-A
The index of the encryption
A
Contains the response to a D3D11_AUTHENTICATED_QUERY_CHANNEL_TYPE query.
-A
A
Contains input data for a D3D11_AUTHENTICATED_QUERY_CRYPTO_SESSION query.
-A
A handle to a decoder device.
Contains the response to a D3D11_AUTHENTICATED_QUERY_CRYPTO_SESSION query.
-A
A handle to a decoder device.
A handle to the cryptographic session that is associated with the decoder device.
A handle to the Direct3D device that is associated with the decoder device.
Contains the response to a D3D11_AUTHENTICATED_QUERY_CURRENT_ENCRYPTION_WHEN_ACCESSIBLE query.
-A
A
Contains the response to a D3D11_AUTHENTICATED_QUERY_DEVICE_HANDLE query.
-A
A handle to the device.
Contains input data for the
Contains a response from the
Contains input data for a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID_COUNT query.
-A
A handle to the device.
A handle to the cryptographic session.
Contains the response to a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID_COUNT query.
-A
A handle to the device.
A handle to the cryptographic session.
The number of output IDs associated with the specified device and cryptographic session.
Contains input data for a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID query.
-A
A handle to the device.
A handle to the cryptographic session.
The index of the output ID.
Contains the response to a D3D11_AUTHENTICATED_QUERY_OUTPUT_ID query.
-A
A handle to the device.
A handle to the cryptographic session.
The index of the output ID.
An output ID that is associated with the specified device and cryptographic session.
Contains the response to a D3D11_AUTHENTICATED_QUERY_PROTECTION query.
-A
A
Contains the response to a D3D11_AUTHENTICATED_QUERY_RESTRICTED_SHARED_RESOURCE_PROCESS_COUNT query.
-A
The number of processes that are allowed to open shared resources that have restricted access. A process cannot open such a resource unless the process has been granted access.
Contains input data for a D3D11_AUTHENTICATED_QUERY_RESTRICTED_SHARED_RESOURCE_PROCESS query.
-A
The index of the process.
Contains the response to a D3D11_AUTHENTICATED_QUERY_RESTRICTED_SHARED_RESOURCE_PROCESS query.
-The Desktop Window Manager (DWM) process is identified by setting ProcessIdentifier equal to
A
The index of the process in the list of processes.
A
A process handle. If the ProcessIdentifier member equals
Contains the response to a D3D11_AUTHENTICATED_QUERY_UNRESTRICTED_PROTECTED_SHARED_RESOURCE_COUNT query.
-A
The number of protected, shared resources that can be opened by any process without restrictions.
Describes an HLSL class instance.
-The
The members of this structure except InstanceIndex are valid (non-default values) if they describe a class instance acquired using
The instance ID of an HLSL class; the default value is 0.
The instance index of an HLSL class; the default value is 0.
The type ID of an HLSL class; the default value is 0.
Describes the constant buffer associated with an HLSL class; the default value is 0.
The base constant buffer offset associated with an HLSL class; the default value is 0.
The base texture associated with an HLSL class; the default value is 127.
The base sampler associated with an HLSL class; the default value is 15.
True if the class was created; the default value is false.
Information about the video card's performance counter capabilities.
-This structure is returned by
Largest device-dependent counter ID that the device supports. If none are supported, this value will be 0. Otherwise it will be greater than or equal to
Number of counters that can be simultaneously supported.
Number of detectable parallel units that the counter is able to discern. Values are 1 ~ 4. Use NumDetectableParallelUnits to interpret the values of the VERTEX_PROCESSING, GEOMETRY_PROCESSING, PIXEL_PROCESSING, and OTHER_GPU_PROCESSING counters.
Describes a counter.
-This structure is used by
Type of counter (see
Reserved.
Used with
Use this structure with CreateWrappedResource.
-Stencil operations that can be performed based on the results of the stencil test.
-All stencil operations are specified as a
This structure is a member of a depth-stencil description.
-The stencil operation to perform when stencil testing fails.
The stencil operation to perform when stencil testing passes and depth testing fails.
The stencil operation to perform when stencil testing and depth testing both pass.
A function that compares stencil data against existing stencil data. The function options are listed in
Specifies the subresources of a texture that are accessible from a depth-stencil view.
-These are valid formats for a depth-stencil view:
A depth-stencil view cannot use a typeless format. If the format chosen is
A depth-stencil-view description is needed when calling
Specifies the subresource from a 1D texture that is accessible to a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
Specifies the subresources from an array of 1D textures to use in a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresource from a 2D texture that is accessible to a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
Specifies the subresources from an array of 2D textures that are accessible to a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first mipmap level to use.
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresource from a multisampled 2D texture that is accessible to a depth-stencil view.
-Because a multisampled 2D texture contains a single subtexture, there is nothing to specify; this unused member is included so that this structure will compile in C.
-Unused.
Specifies the subresources from an array of multisampled 2D textures for a depth-stencil view.
-This structure is one member of a depth-stencil-view description (see
The index of the first texture to use in an array of textures.
Number of textures to use.
Resource data format (see
Type of resource (see
A value that describes whether the texture is read only. Pass 0 to specify that it is not read only; otherwise, pass one of the members of the
Specifies a 1D texture subresource (see
Specifies an array of 1D texture subresources (see
Specifies a 2D texture subresource (see
Specifies an array of 2D texture subresources (see
Specifies a multisampled 2D texture (see
Specifies an array of multisampled 2D textures (see
Arguments for draw indexed instanced indirect.
- The members of this structure serve the same purpose as the parameters of
The number of indices read from the index buffer for each instance.
The number of instances to draw.
The location of the first index read by the GPU from the index buffer.
A value added to each index before reading a vertex from the vertex buffer.
A value added to each index before reading per-instance data from a vertex buffer.
Arguments for draw instanced indirect.
- The members of this structure serve the same purpose as the parameters of
The number of vertices to draw.
The number of instances to draw.
The index of the first vertex.
A value added to each index before reading per-instance data from a vertex buffer.
Specifies which bytes in a video surface are encrypted.
-The number of bytes that are encrypted at the start of the buffer.
The number of bytes that are skipped after the first NumEncryptedBytesAtBeginning bytes, and then after each block of NumBytesInEncryptPattern bytes. Skipped bytes are not encrypted.
The number of bytes that are encrypted after each block of skipped bytes.
Describes information about Direct3D 11.1 adapter architecture.
-Specifies whether a rendering device batches rendering commands and performs multipass rendering into tiles or bins over a render area. Certain API usage patterns that are fine for TileBasedDeferredRenderers (TBDRs) can perform worse on non-TBDRs and vice versa. Applications that are careful about rendering can be friendly to both TBDR and non-TBDR architectures. TRUE if the rendering device batches rendering commands and
Describes compute shader and raw and structured buffer support in the current graphics driver.
-Direct3D 11 devices (
TRUE if compute shaders and raw and structured buffers are supported; otherwise
Describes Direct3D 11.1 feature options in the current graphics driver.
-If a Microsoft Direct3D device supports feature level 11.1 (
Feature level 11.1 provides the following additional features:
The runtime always sets the following groupings of members identically. That is, all the values in a grouping are TRUE or
Specifies whether logic operations are available in blend state. The runtime sets this member to TRUE if logic operations are available in blend state and
Specifies whether the driver can render with no render target views (RTVs) or depth stencil views (DSVs), and only unordered access views (UAVs) bound. The runtime sets this member to TRUE if the driver can render with no RTVs or DSVs and only UAVs bound and
Specifies whether the driver supports the
Specifies whether the driver supports new semantics for copy and update that are exposed by the
Specifies whether the driver supports the
Specifies whether you can call
Specifies whether the driver supports partial updates of constant buffers. The runtime sets this member to TRUE if the driver supports partial updates of constant buffers and
Specifies whether the driver supports new semantics for setting offsets in constant buffers for a shader. The runtime sets this member to TRUE if the driver supports allowing you to specify offsets when you call new methods like the
Specifies whether you can call
Specifies whether you can call
Specifies whether the driver supports multisample rendering when you render with RTVs bound. If TRUE, you can set the ForcedSampleCount member of
Specifies whether the hardware and driver support the msad4 intrinsic function in shaders. The runtime sets this member to TRUE if the hardware and driver support calls to msad4 intrinsic functions in shaders. If
Specifies whether the hardware and driver support the fma intrinsic function and other extended doubles instructions (DDIV and DRCP) in shaders. The fma intrinsic function emits an extended doubles DFMA instruction. The runtime sets this member to TRUE if the hardware and driver support extended doubles instructions in shaders (shader model 5 and higher). Support of this option implies support of basic double-precision shader instructions as well. You can use the
Specifies whether the hardware and driver support sharing a greater variety of Texture2D resource types and formats. The runtime sets this member to TRUE if the hardware and driver support extended Texture2D resource sharing.
Describes Direct3D 11.2 feature options in the current graphics driver.
- If the Direct3D API is the Direct3D 11.2 runtime and can support 11.2 features,
Specifies whether the hardware and driver support tiled resources. The runtime sets this member to a
Specifies whether the hardware and driver support the filtering options (
Specifies whether the hardware and driver also support the
Specifies support for creating
Describes Direct3D 11.3 feature options in the current graphics driver.
-Whether to use the VP and RT array index from any shader feeding the rasterizer.
Describes Direct3D 11.4 feature options in the current graphics driver.
-Use this structure with the
Refer to the section on NV12 in Direct3D 11.4 Features.
-Specifies a
Describes Direct3D 9 feature options in the current graphics driver.
-Specifies whether the driver supports the nonpowers-of-2-unconditionally feature. For more information about this feature, see feature level. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to
Describes Direct3D 9 feature options in the current graphics driver.
-You can use the
Specifies whether the driver supports the nonpowers-of-2-unconditionally feature. For more info about this feature, see feature level. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to
Specifies whether the driver supports the shadowing feature with the comparison-filtering mode set to less than or equal to. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to TRUE only if the hardware and driver support the shadowing feature; otherwise
Specifies whether the hardware and driver support simple instancing. The runtime sets this member to TRUE if the hardware and driver support simple instancing.
Specifies whether the hardware and driver support setting a single face of a TextureCube as a render target while the depth stencil surface that is bound alongside can be a Texture2D (as opposed to TextureCube). The runtime sets this member to TRUE if the hardware and driver support this feature; otherwise
If the hardware and driver don't support this feature, the app must match the render target surface type with the depth stencil surface type. Because hardware at Direct3D 9.3 and lower feature levels doesn't allow TextureCube depth surfaces, the only way to render a scene into a TextureCube while having depth buffering enabled is to render each TextureCube face separately to a Texture2D render target first (because that can be matched with a Texture2D depth), and then copy the results into the TextureCube. If the hardware and driver support this feature, the app can just render to the TextureCube faces directly while getting depth buffering out of a Texture2D depth buffer.
You only need to query this feature from hardware at Direct3D 9.3 and lower feature levels because hardware at Direct3D 10.0 and higher feature levels allow TextureCube depth surfaces.
Describes Direct3D 9 shadow support in the current graphics driver.
-Shadows are an important element in realistic 3D scenes. You can use the shadow buffer technique to render shadows. The basic principle of the technique is to use a depth buffer to store the scene depth info from the perspective of the light source, and then compare each point rendered in the scene with that buffer to determine if it is in shadow.
To render objects into the scene with shadows on them, you create sampler state objects with comparison filtering set and the comparison mode (ComparisonFunc) to LessEqual. You can also set BorderColor addressing on this depth sampler, even though BorderColor isn't typically allowed on feature levels 9.1 and 9.2. By using the border color and picking 0.0 or 1.0 as the border color value, you can control whether the regions off the edge of the shadow map appear to be always in shadow or never in shadow, respectively. You can control the shadow filter quality by the Mag and Min filter settings in the comparison sampler. Point sampling will produce shadows with non-anti-aliased edges. Linear filter sampler settings will result in higher quality shadow edges, but might affect performance on some power-optimized devices.
Note: If you use a separate setting for Mag versus Min filter options, you produce an undefined result. Anisotropic filtering is not supported. The Mip filter choice is not relevant because feature level 9.x does not allow mipmapped depth buffers.
Note: On feature level 9.x, you can't compile a shader with the SampleCmp and SampleCmpLevelZero intrinsic functions by using older versions of the compiler. For example, you can't use the fxc.exe compiler that ships with the DirectX SDK or use the
Specifies whether the driver supports the shadowing feature with the comparison-filtering mode set to less than or equal to. The runtime sets this member to TRUE for hardware at Direct3D 10 and higher feature levels. For hardware at Direct3D 9.3 and lower feature levels, the runtime sets this member to TRUE only if the hardware and driver support the shadowing feature; otherwise
Describes whether simple instancing is supported.
- If the Direct3D API is the Direct3D 11.2 runtime and can support 11.2 features,
Simple instancing means that instancing is supported with the caveat that the InstanceDataStepRate member of the
Specifies whether the hardware and driver support simple instancing. The runtime sets this member to TRUE if the hardware and driver support simple instancing.
Describes double data type support in the current graphics driver.
-If the runtime sets DoublePrecisionFloatShaderOps to TRUE, the hardware and driver support the following Shader Model 5 instructions:
Specifies whether double types are allowed. If TRUE, double types are allowed; otherwise
Describes which resources are supported by the current graphics driver for a given format.
-
Combination of
Describes which unordered resource options are supported by the current graphics driver for a given format.
-
Combination of
Describes feature data GPU virtual address support, including maximum address bits per resource and per process.
- See
The maximum GPU virtual address bits per resource.
The maximum GPU virtual address bits per process.
Describes whether a GPU profiling technique is supported.
-If the Direct3D API is the Direct3D 11.2 runtime and can support 11.2 features,
Specifies whether the hardware and driver support a GPU profiling technique that can be used with development tools. The runtime sets this member to TRUE if the hardware and driver support data marking.
Stencil operations that can be performed based on the results of the stencil test.
-All stencil operations are specified as a
This structure is a member of a depth-stencil description.
-The stencil operation to perform when stencil testing fails.
Describes precision support options for shaders in the current graphics driver.
-For hardware at Direct3D 10 and higher feature levels, the runtime sets both members identically. For hardware at Direct3D 9.3 and lower feature levels, the runtime can set a lower precision support in the PixelShaderMinPrecision member than the AllOtherShaderStagesMinPrecision member; for 9.3 and lower, all other shader stages represent only the vertex shader.
For more info about HLSL minimum precision, see using HLSL minimum precision.
-A combination of
A combination of
Describes the multi-threading features that are supported by the current graphics driver.
-Use the
TRUE means resources can be created concurrently on multiple threads while drawing;
TRUE means command lists are supported by the current driver;
Allow or deny certain types of messages to pass through a filter.
-Number of message categories to allow or deny.
Array of message categories to allow or deny. Array must have at least NumCategories members (see
Allow or deny certain types of messages to pass through a filter.
-Number of message categories to allow or deny.
Array of message categories to allow or deny. Array must have at least NumCategories members (see
Number of message severity levels to allow or deny.
Array of message severity levels to allow or deny. Array must have at least NumSeverities members (see
Number of message IDs to allow or deny.
Array of message IDs to allow or deny. Array must have at least NumIDs members (see
A description of a single element for the input-assembler stage.
-An input-layout object contains an array of structures, each structure defines one element being read from an input slot. Create an input-layout object by calling
The HLSL semantic associated with this element in a shader input-signature.
The semantic index for the element. A semantic index modifies a semantic with an integer index number. A semantic index is only needed where there is more than one element with the same semantic. For example, a 4x4 matrix would have four components, each with the semantic name matrix; however, each of the four components would have a different semantic index (0, 1, 2, and 3).
The data type of the element data. See
An integer value that identifies the input-assembler (see input slot). Valid values are between 0 and 15, defined in D3D11.h.
Optional. Offset (in bytes) from the start of the vertex. Use D3D11_APPEND_ALIGNED_ELEMENT for convenience to define the current element directly after the previous one, including any packing if necessary.
Identifies the input data class for a single input slot (see
The number of instances to draw using the same per-instance data before advancing in the buffer by one element. This value must be 0 for an element that contains per-vertex data (the slot class is set to
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents key exchange data for hardware content protection.
-A reference to this structure is passed in the pData parameter of
The function ID of the DRM command. The values and meanings of the function ID are defined by the DRM specification.
Pointer to a buffer containing a
Pointer to a buffer containing a
The result of the hardware DRM command.
Represents key exchange input data for hardware content protection.
-The size of the private data reserved for IHV usage. This size is determined from the pPrivateInputSize parameter returned by the
The size of the DRM command data.
If PrivateDataSize is greater than 0, pbInput[0] through pbInput[PrivateDataSize - 1] is reserved for IHV use.
pbInput[PrivateDataSize] through pbInput[HWProtectionDataSize + PrivateDataSize - 1] contains the input data for the DRM command. The format and size of the DRM command is defined by the DRM specification.
Represents key exchange output data for hardware content protection.
-The size of the private data reserved for IHV usage. This size is determined from the pPrivateOutputSize parameter returned by the
The maximum size of data that the driver can return in the output buffer. The last byte that it can write to is pbOutput[PrivateDataSize + MaxHWProtectionDataSize - 1].
The size of the output data written by the driver.
The number of 100 nanosecond units spent transporting the data.
The number of 100 nanosecond units spent executing the content protection command.
If PrivateDataSize is greater than 0, pbOutput[0] through pbOutput[PrivateDataSize - 1] is reserved for IHV use.
pbOutput[PrivateDataSize] through pbOutput[HWProtectionDataSize + PrivateDataSize - 1] contains the output data for the DRM command. The format and size of the DRM command is defined by the DRM specification.
A debug message in the Information Queue.
-This structure is returned from
The category of the message. See
The severity of the message. See
The ID of the message. See
The message string.
The length of pDescription in bytes.
Contains a Message Authentication Code (MAC).
-A byte array that contains the cryptographic MAC value of the message.
Describes the tile structure of a tiled resource with mipmaps.
-Number of standard mipmaps in the tiled resource.
Number of packed mipmaps in the tiled resource.
This number starts from the least detailed mipmap (either sharing tiles or using non-standard tile layout). This number is 0 if no such packing is in the resource. For array surfaces, this value is the number of mipmaps that are packed for a given array slice, where each array slice repeats the same packing.
On Tier_2 tiled resources hardware, mipmaps that fill at least one standard shaped tile in all dimensions are not allowed to be included in the set of packed mipmaps. On Tier_1 hardware, mipmaps that are an integer multiple of one standard shaped tile in all dimensions are not allowed to be included in the set of packed mipmaps. Mipmaps with at least one dimension less than the standard tile shape may or may not be packed. When a given mipmap needs to be packed, all coarser mipmaps for a given array slice are considered packed as well.
Number of tiles for the packed mipmaps in the tiled resource.
If there is no packing, this value is meaningless and is set to 0. Otherwise, it is set to the number of tiles that are needed to represent the set of packed mipmaps. The pixel layout within the packed mipmaps is hardware specific. If apps define only partial mappings for the set of tiles in packed mipmaps, read and write behavior is vendor specific and undefined. For arrays, this value is only the count of packed mipmaps within the subresources for each array slice.
Offset of the first packed tile for the resource in the overall range of tiles. If NumPackedMips is 0, this value is meaningless and is 0. Otherwise, it is the offset of the first packed tile for the resource in the overall range of tiles for the resource. A value of 0 for StartTileIndexInOverallResource means the entire resource is packed. For array surfaces, this is the offset for the tiles that contain the packed mipmaps for the first array slice. Packed mipmaps for each array slice in arrayed surfaces are at this offset past the beginning of the tiles for each array slice.
Note: The number of overall tiles, packed or not, for a given array slice is simply the total number of tiles for the resource divided by the resource's array size, so it is easy to locate the range of tiles for any given array slice, out of which StartTileIndexInOverallResource identifies which of those are packed.
Query information about graphics-pipeline activity in between calls to
Query information about the reliability of a timestamp query.
-For a list of query types see
How frequently the GPU counter increments in Hz.
If this is TRUE, something occurred in between the query's
Describes a query.
-Type of query (see
Miscellaneous flags (see
Describes a query.
-A
A combination of
A
Describes rasterizer state.
-Rasterizer state defines the behavior of the rasterizer stage. To create a rasterizer-state object, call
If you do not specify some rasterizer state, the Direct3D runtime uses the following default values for rasterizer state.
State | Default Value |
---|---|
FillMode | Solid |
CullMode | Back |
FrontCounterClockwise | |
DepthBias | 0 |
SlopeScaledDepthBias | 0.0f |
DepthBiasClamp | 0.0f |
DepthClipEnable | TRUE |
ScissorEnable | |
MultisampleEnable | |
AntialiasedLineEnable |
Note: For feature levels 9.1, 9.2, 9.3, and 10.0, if you set MultisampleEnable to
Line-rendering algorithm | MultisampleEnable | AntialiasedLineEnable |
---|---|---|
Aliased | ||
Alpha antialiased | TRUE | |
Quadrilateral | TRUE | |
Quadrilateral | TRUE | TRUE |
The settings of the MultisampleEnable and AntialiasedLineEnable members apply only to multisample antialiasing (MSAA) render targets (that is, render targets with sample counts greater than 1). Because of the differences in feature-level behavior and as long as you aren't performing any line drawing or don't mind that lines render as quadrilaterals, we recommend that you always set MultisampleEnable to TRUE whenever you render on MSAA render targets.
-Determines the fill mode to use when rendering (see
Indicates triangles facing the specified direction are not drawn (see
Determines if a triangle is front- or back-facing. If this parameter is TRUE, a triangle will be considered front-facing if its vertices are counter-clockwise on the render target and considered back-facing if they are clockwise. If this parameter is
Depth value added to a given pixel. For info about depth bias, see Depth Bias.
Maximum depth bias of a pixel. For info about depth bias, see Depth Bias.
Scalar on a given pixel's slope. For info about depth bias, see Depth Bias.
Enable clipping based on distance.
The hardware always performs x and y clipping of rasterized coordinates. When DepthClipEnable is set to the default, TRUE, the hardware also clips the z value (that is, the hardware performs the last step of the following algorithm):
0 < w
-w <= x <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
-w <= y <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
0 <= z <= w
When you set DepthClipEnable to
Enable scissor-rectangle culling. All pixels outside an active scissor rectangle are culled.
Specifies whether to use the quadrilateral or alpha line anti-aliasing algorithm on multisample antialiasing (MSAA) render targets. Set to TRUE to use the quadrilateral line anti-aliasing algorithm and to
Specifies whether to enable line antialiasing; only applies if doing line drawing and MultisampleEnable is
Describes rasterizer state.
-Rasterizer state defines the behavior of the rasterizer stage. To create a rasterizer-state object, call
If you do not specify some rasterizer state, the Direct3D runtime uses the following default values for rasterizer state.
State | Default Value |
---|---|
FillMode | Solid |
CullMode | Back |
FrontCounterClockwise | |
DepthBias | 0 |
SlopeScaledDepthBias | 0.0f |
DepthBiasClamp | 0.0f |
DepthClipEnable | TRUE |
ScissorEnable | |
MultisampleEnable | |
AntialiasedLineEnable | |
ForcedSampleCount | 0 |
Note: For feature levels 9.1, 9.2, 9.3, and 10.0, if you set MultisampleEnable to
Line-rendering algorithm | MultisampleEnable | AntialiasedLineEnable |
---|---|---|
Aliased | ||
Alpha antialiased | TRUE | |
Quadrilateral | TRUE | |
Quadrilateral | TRUE | TRUE |
The settings of the MultisampleEnable and AntialiasedLineEnable members apply only to multisample antialiasing (MSAA) render targets (that is, render targets with sample counts greater than 1). Because of the differences in feature-level behavior and as long as you aren't performing any line drawing or don't mind that lines render as quadrilaterals, we recommend that you always set MultisampleEnable to TRUE whenever you render on MSAA render targets.
-Determines the fill mode to use when rendering.
Indicates that triangles facing the specified direction are not drawn.
Specifies whether a triangle is front- or back-facing. If TRUE, a triangle will be considered front-facing if its vertices are counter-clockwise on the render target and considered back-facing if they are clockwise. If
Depth value added to a given pixel. For info about depth bias, see Depth Bias.
Maximum depth bias of a pixel. For info about depth bias, see Depth Bias.
Scalar on a given pixel's slope. For info about depth bias, see Depth Bias.
Specifies whether to enable clipping based on distance.
The hardware always performs x and y clipping of rasterized coordinates. When DepthClipEnable is set to the default, TRUE, the hardware also clips the z value (that is, the hardware performs the last step of the following algorithm):
0 < w
-w <= x <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
-w <= y <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
0 <= z <= w
When you set DepthClipEnable to
Specifies whether to enable scissor-rectangle culling. All pixels outside an active scissor rectangle are culled.
Specifies whether to use the quadrilateral or alpha line anti-aliasing algorithm on multisample antialiasing (MSAA) render targets. Set to TRUE to use the quadrilateral line anti-aliasing algorithm and to
Specifies whether to enable line antialiasing; only applies if doing line drawing and MultisampleEnable is
The sample count that is forced while UAV rendering or rasterizing. Valid values are 0, 1, 2, 4, 8, and optionally 16. 0 indicates that the sample count is not forced.
Note: If you want to render with ForcedSampleCount set to 1 or greater, you must follow these guidelines:
Describes rasterizer state.
-Rasterizer state defines the behavior of the rasterizer stage. To create a rasterizer-state object, call
If you do not specify some rasterizer state, the Direct3D runtime uses the following default values for rasterizer state.
State | Default Value |
---|---|
FillMode | Solid |
CullMode | Back |
FrontCounterClockwise | |
DepthBias | 0 |
SlopeScaledDepthBias | 0.0f |
DepthBiasClamp | 0.0f |
DepthClipEnable | TRUE |
ScissorEnable | |
MultisampleEnable | |
AntialiasedLineEnable | |
ForcedSampleCount | 0 |
ConservativeRaster |
Note: For feature levels 9.1, 9.2, 9.3, and 10.0, if you set MultisampleEnable to
Line-rendering algorithm | MultisampleEnable | AntialiasedLineEnable |
---|---|---|
Aliased | ||
Alpha antialiased | TRUE | |
Quadrilateral | TRUE | |
Quadrilateral | TRUE | TRUE |
The settings of the MultisampleEnable and AntialiasedLineEnable members apply only to multisample antialiasing (MSAA) render targets (that is, render targets with sample counts greater than 1). Because of the differences in feature-level behavior and as long as you aren't performing any line drawing or don't mind that lines render as quadrilaterals, we recommend that you always set MultisampleEnable to TRUE whenever you render on MSAA render targets.
-A
A
Specifies whether a triangle is front- or back-facing. If TRUE, a triangle will be considered front-facing if its vertices are counter-clockwise on the render target and considered back-facing if they are clockwise. If
Depth value added to a given pixel. For info about depth bias, see Depth Bias.
Maximum depth bias of a pixel. For info about depth bias, see Depth Bias.
Scalar on a given pixel's slope. For info about depth bias, see Depth Bias.
Specifies whether to enable clipping based on distance.
The hardware always performs x and y clipping of rasterized coordinates. When DepthClipEnable is set to the default (TRUE), the hardware also clips the z value (that is, the hardware performs the last step of the following algorithm). -
0 < w
- -w <= x <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- -w <= y <= w (or arbitrarily wider range if implementation uses a guard band to reduce clipping burden)
- 0 <= z <= w
-
When you set DepthClipEnable to FALSE, the hardware skips the z clipping (that is, the last step in the preceding algorithm).
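The clip tests above can be sketched as a single predicate on homogeneous coordinates (guard bands ignored; function name hypothetical):

```python
def inside_clip_volume(x: float, y: float, z: float, w: float,
                       depth_clip_enable: bool = True) -> bool:
    """Homogeneous clip test per the steps above (no guard band widening)."""
    if not 0 < w:                                  # 0 < w
        return False
    if not -w <= x <= w:                           # x clip
        return False
    if not -w <= y <= w:                           # y clip
        return False
    if depth_clip_enable and not 0 <= z <= w:      # z clip, skipped when disabled
        return False
    return True
```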
Specifies whether to enable scissor-rectangle culling. All pixels outside an active scissor rectangle are culled.
Specifies whether to use the quadrilateral or alpha line anti-aliasing algorithm on multisample antialiasing (MSAA) render targets. Set to TRUE to use the quadrilateral line anti-aliasing algorithm and to
Specifies whether to enable line antialiasing; only applies if doing line drawing and MultisampleEnable is
The sample count that is forced while UAV rendering or rasterizing. Valid values are 0, 1, 2, 4, 8, and optionally 16. 0 indicates that the sample count is not forced.
Note: If you want to render with ForcedSampleCount set to 1 or greater, you must follow these guidelines:
A
Describes the blend state for a render target.
-You specify an array of
For info about how blending is done, see the output-merger stage.
Here are the default values for blend state.
State | Default Value |
---|---|
BlendEnable | FALSE |
SrcBlend | D3D11_BLEND_ONE |
DestBlend | D3D11_BLEND_ZERO |
BlendOp | D3D11_BLEND_OP_ADD |
SrcBlendAlpha | D3D11_BLEND_ONE |
DestBlendAlpha | D3D11_BLEND_ZERO |
BlendOpAlpha | D3D11_BLEND_OP_ADD |
RenderTargetWriteMask | D3D11_COLOR_WRITE_ENABLE_ALL |
-Enable (or disable) blending.
This blend option specifies the operation to perform on the RGB value that the pixel shader outputs. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the current RGB value in the render target. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend operation defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the alpha value that the pixel shader outputs. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend option specifies the operation to perform on the current alpha value in the render target. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend operation defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
A write mask.
Describes the blend state for a render target.
-You specify an array of
For info about how blending is done, see the output-merger stage.
Here are the default values for blend state.
State | Default Value |
---|---|
BlendEnable | FALSE |
LogicOpEnable | FALSE |
SrcBlend | D3D11_BLEND_ONE |
DestBlend | D3D11_BLEND_ZERO |
BlendOp | D3D11_BLEND_OP_ADD |
SrcBlendAlpha | D3D11_BLEND_ONE |
DestBlendAlpha | D3D11_BLEND_ZERO |
BlendOpAlpha | D3D11_BLEND_OP_ADD |
LogicOp | D3D11_LOGIC_OP_NOOP |
RenderTargetWriteMask | D3D11_COLOR_WRITE_ENABLE_ALL |
-Enable (or disable) blending.
Enable (or disable) a logical operation.
This blend option specifies the operation to perform on the RGB value that the pixel shader outputs. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the current RGB value in the render target. The BlendOp member defines how to combine the SrcBlend and DestBlend operations.
This blend operation defines how to combine the SrcBlend and DestBlend operations.
This blend option specifies the operation to perform on the alpha value that the pixel shader outputs. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend option specifies the operation to perform on the current alpha value in the render target. Blend options that end in _COLOR are not allowed. The BlendOpAlpha member defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
This blend operation defines how to combine the SrcBlendAlpha and DestBlendAlpha operations.
A
A write mask.
Specifies the subresources from a resource that are accessible using a render-target view.
-A render-target-view description is passed into
A render-target-view cannot use the following formats:
If the format is set to
Specifies the elements in a buffer resource to use in a render-target view.
- A render-target view is a member of a render-target-view description (see
Number of bytes between the beginning of the buffer and the first element to access.
The offset of the first element in the view to access, relative to element 0.
The total number of elements in the view.
The width of each element (in bytes). This can be determined from the format stored in the render-target-view description.
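A minimal sketch of how the buffer-view fields above locate data, assuming the element offset is counted in elements and the element width in bytes (the function name and this interpretation are illustrative, not from the source):

```python
def buffer_view_byte_range(element_offset: int, element_width: int,
                           num_elements: int) -> tuple:
    """Byte range covered by a buffer render-target view: the view begins
    element_offset elements into the buffer and spans num_elements elements,
    each element_width bytes wide."""
    first_byte = element_offset * element_width
    return first_byte, first_byte + num_elements * element_width
```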
Specifies the subresource from a 1D texture to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (the mip slice).
Specifies the subresources from an array of 1D textures to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (the mip slice).
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresource from a 2D texture to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (the mip slice).
Specifies the subresource from a multisampled 2D texture to use in a render-target view.
-Since a multisampled 2D texture contains a single subresource, there is actually nothing to specify in
Integer of any value. See remarks.
Specifies the subresources from an array of 2D textures to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the mipmap level to use (the mip slice).
The index of the first texture to use in an array of textures.
Number of textures in the array to use in the render target view, starting from FirstArraySlice.
Specifies the subresources from an array of multisampled 2D textures to use in a render-target view.
-This structure is one member of a render-target-view description (see
The index of the first texture to use in an array of textures.
Number of textures to use.
Specifies the subresources from a 3D texture to use in a render-target view.
-This structure is one member of a render target view. See
The index of the mipmap level to use (the mip slice).
First depth level to use.
Number of depth levels to use in the render-target view, starting from FirstWSlice. A value of -1 indicates all of the slices along the w axis, starting from FirstWSlice.
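The WSize = -1 convention above can be sketched as follows (function name hypothetical):

```python
def resolve_wsize(first_w_slice: int, w_size: int, texture_depth: int) -> int:
    """Number of depth slices a Texture3D render-target view covers.
    A WSize of -1 means every slice from FirstWSlice to the end of the w axis."""
    return texture_depth - first_w_slice if w_size == -1 else w_size
```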
The data format (see
The resource type (see
Specifies which buffer elements can be accessed (see
Specifies the subresources in a 1D texture that can be accessed (see
Specifies the subresources in a 1D texture array that can be accessed (see
Specifies the subresources in a 2D texture that can be accessed (see
Specifies the subresources in a 2D texture array that can be accessed (see
Specifies a single subresource because a multisampled 2D texture only contains one subresource (see
Specifies the subresources in a multisampled 2D texture array that can be accessed (see
Specifies subresources in a 3D texture that can be accessed (see
Describes the subresources from a resource that are accessible using a render-target view.
-A render-target-view description is passed into
A render-target-view can't use the following formats:
If the format is set to
Describes the subresource from a 2D texture to use in a render-target view.
-The index of the mipmap level to use (the mip slice).
The index (plane slice number) of the plane to use in the texture.
Describes the subresources from an array of 2D textures to use in a render-target view.
-The index of the mipmap level to use (the mip slice).
The index of the first texture to use in an array of textures.
Number of textures in the array to use in the render-target view, starting from FirstArraySlice.
The index (plane slice number) of the plane to use in an array of textures.
A
A
A
A
A
A
A
A
A
A
Defines a 3D box.
-The following diagram shows a 3D box, where the origin is the left, front, top corner.
The values for right, bottom, and back are each one pixel past the end of the pixels that are included in the box region. That is, the values for left, top, and front are included in the box region while the values for right, bottom, and back are excluded from the box region. For example, for a box that is one pixel wide, (right - left) == 1; the box region includes the left pixel but not the right pixel.
Coordinates of a box are in bytes for buffers and in texels for textures.
-The x position of the left hand side of the box.
The y position of the top of the box.
The z position of the front of the box.
The x position of the right hand side of the box.
The y position of the bottom of the box.
The z position of the back of the box.
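The half-open box semantics described above can be sketched like this (function name hypothetical):

```python
def box_size(left: int, top: int, front: int,
             right: int, bottom: int, back: int) -> tuple:
    """Extent of a 3D box: right/bottom/back are one past the last included
    coordinate, so a one-pixel-wide box has right - left == 1."""
    return right - left, bottom - top, back - front
```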
Describes a sampler state.
-These are the default values for sampler state.
State | Default Value |
---|---|
Filter | D3D11_FILTER_MIN_MAG_MIP_LINEAR |
AddressU | D3D11_TEXTURE_ADDRESS_CLAMP |
AddressV | D3D11_TEXTURE_ADDRESS_CLAMP |
AddressW | D3D11_TEXTURE_ADDRESS_CLAMP |
MinLOD | -3.402823466e+38F (-FLT_MAX) |
MaxLOD | 3.402823466e+38F (FLT_MAX) |
MipLODBias | 0.0f |
MaxAnisotropy | 1 |
ComparisonFunc | D3D11_COMPARISON_NEVER |
BorderColor | float4(1.0f, 1.0f, 1.0f, 1.0f) |
Texture | N/A |
- Filtering method to use when sampling a texture (see
Method to use for resolving a u texture coordinate that is outside the 0 to 1 range (see
Method to use for resolving a v texture coordinate that is outside the 0 to 1 range.
Method to use for resolving a w texture coordinate that is outside the 0 to 1 range.
Offset from the calculated mipmap level. For example, if Direct3D calculates that a texture should be sampled at mipmap level 3 and MipLODBias is 2, then the texture will be sampled at mipmap level 5.
Clamping value used if
A function that compares sampled data against existing sampled data. The function options are listed in
Border color to use if
Lower end of the mipmap range to clamp access to, where 0 is the largest and most detailed mipmap level and any level higher than that is less detailed.
Upper end of the mipmap range to clamp access to, where 0 is the largest and most detailed mipmap level and any level higher than that is less detailed. This value must be greater than or equal to MinLOD. To have no upper limit on LOD set this to a large value such as D3D11_FLOAT32_MAX.
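Putting MipLODBias, MinLOD, and MaxLOD together, the sampled mip level can be sketched as (function name hypothetical; the bias is applied before the clamp, matching the MipLODBias example above):

```python
def sampled_mip_level(calculated_lod: float, mip_lod_bias: float,
                      min_lod: float, max_lod: float) -> float:
    """LOD actually sampled: add the bias to the calculated level,
    then clamp the result to the [MinLOD, MaxLOD] range."""
    return max(min_lod, min(max_lod, calculated_lod + mip_lod_bias))
```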
Describes a shader-resource view.
-A view is a format-specific way to look at the data in a resource. The view determines what data to look at, and how it is cast when read.
When viewing a resource, the resource-view description must specify a typed format that is compatible with the resource format. That means you cannot create a resource-view description using any format with _TYPELESS in the name. You can, however, view a typeless resource by specifying a typed format for the view. For example, a
Create a shader-resource-view description by calling
Specifies the elements in a buffer resource to use in a shader-resource view.
- The
Number of bytes between the beginning of the buffer and the first element to access.
The offset of the first element in the view to access, relative to element 0.
The total number of elements in the view.
The width of each element (in bytes). This can be determined from the format stored in the shader-resource-view description.
Describes the elements in a raw buffer resource to use in a shader-resource view.
-This structure is used by
The index of the first element to be accessed by the view.
The number of elements in the resource.
A
Specifies the subresource from a 1D texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
As an example, assuming MostDetailedMip = 6 and MipLevels = 2, the view will have access to 2 mipmap levels, 6 and 7, of the original texture for which
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture1D for which
The maximum number of mipmap levels for the view of the texture. See the remarks.
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
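The MostDetailedMip/MipLevels semantics above, including the -1 convention, can be sketched as (function name hypothetical):

```python
def view_mip_range(most_detailed_mip: int, mip_levels: int,
                   resource_mip_count: int) -> list:
    """Mipmap levels visible through a shader-resource view; MipLevels of -1
    means everything from MostDetailedMip down to the least detailed level."""
    if mip_levels == -1:
        mip_levels = resource_mip_count - most_detailed_mip
    return list(range(most_detailed_mip, most_detailed_mip + mip_levels))
```

With MostDetailedMip = 6 and MipLevels = 2 this yields levels 6 and 7, matching the example above.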
Specifies the subresources from an array of 1D textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture1D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index of the first texture to use in an array of textures.
Number of textures in the array.
Specifies the subresource from a 2D texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Specifies the subresources from an array of 2D textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index of the first texture to use in an array of textures.
Number of textures in the array.
Specifies the subresources from a 3D texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture3D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Specifies the subresource from a cube texture to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original TextureCube for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Specifies the subresources from an array of cube textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original TextureCube for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
Index of the first 2D texture to use.
Number of cube textures in the array.
Specifies the subresources from a multisampled 2D texture to use in a shader-resource view.
-Since a multisampled 2D texture contains a single subresource, there is actually nothing to specify in
Integer of any value. See remarks.
Specifies the subresources from an array of multisampled 2D textures to use in a shader-resource view.
-This structure is one member of a shader-resource-view description (see
The index of the first texture to use in an array of textures.
Number of textures to use.
A
The resource type of the view. See D3D11_SRV_DIMENSION. This should be the same as the resource type of the underlying resource. This parameter also determines which _SRV to use in the union below.
View the resource as a buffer using information from a shader-resource view (see
View the resource as a 1D texture using information from a shader-resource view (see
View the resource as a 1D-texture array using information from a shader-resource view (see
View the resource as a 2D-texture using information from a shader-resource view (see
View the resource as a 2D-texture array using information from a shader-resource view (see
View the resource as a 2D-multisampled texture using information from a shader-resource view (see
View the resource as a 2D-multisampled-texture array using information from a shader-resource view (see
View the resource as a 3D texture using information from a shader-resource view (see
View the resource as a 3D-cube texture using information from a shader-resource view (see
View the resource as a 3D-cube-texture array using information from a shader-resource view (see
View the resource as a raw buffer using information from a shader-resource view (see
Describes a shader-resource view.
-A view is a format-specific way to look at the data in a resource. The view determines what data to look at, and how it is cast when read.
When viewing a resource, the resource-view description must specify a typed format, that is compatible with the resource format. So that means that you cannot create a resource-view description using any format with _TYPELESS in the name. You can however view a typeless resource by specifying a typed format for the view. For example, a
Create a shader-resource-view description by calling
Describes the subresource from a 2D texture to use in a shader-resource view.
- Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index (plane slice number) of the plane to use in the texture.
Describes the subresources from an array of 2D textures to use in a shader-resource view.
- Index of the most detailed mipmap level to use; this number is between 0 and MipLevels (from the original Texture2D for which
The maximum number of mipmap levels for the view of the texture. See the remarks in
Set to -1 to indicate all the mipmap levels from MostDetailedMip on down to least detailed.
The index of the first texture to use in an array of textures.
Number of textures in the array.
The index (plane slice number) of the plane to use in an array of textures.
A
A D3D11_SRV_DIMENSION-typed value that specifies the resource type of the view. This type is the same as the resource type of the underlying resource. This member also determines which _SRV to use in the union below.
A
A
A
A
A
A
A
A
A
A
A
Description of a vertex element in a vertex buffer in an output slot.
-Zero-based, stream number.
Type of output element; possible values include: "POSITION", "NORMAL", or "TEXCOORD0". Note that if SemanticName is
Output element's zero-based index. Should be used if, for example, you have more than one texture coordinate stored in each vertex.
Which component of the entry to begin writing out to. Valid values are 0 to 3. For example, if you only wish to output to the y and z components of a position, then StartComponent should be 1 and ComponentCount should be 2.
The number of components of the entry to write out to. Valid values are 1 to 4. For example, if you only wish to output to the y and z components of a position, then StartComponent should be 1 and ComponentCount should be 2. Note that if SemanticName is
The associated stream output buffer that is bound to the pipeline (see
Query information about the amount of data streamed out to the stream-output buffers in between
Describes a tiled subresource volume.
-Each packed mipmap is individually reported as 0 for WidthInTiles, HeightInTiles and DepthInTiles. -
The total number of tiles in subresources is WidthInTiles*HeightInTiles*DepthInTiles.
-The width in tiles of the subresource.
The height in tiles of the subresource.
The depth in tiles of the subresource.
The index of the tile in the overall tiled subresource to start with.
GetResourceTiling sets StartTileIndexInOverallResource to D3D11_PACKED_TILE (0xffffffff) to indicate that the whole
-
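The relationship above can be sketched as (function name hypothetical):

```python
def subresource_tile_count(width_in_tiles: int, height_in_tiles: int,
                           depth_in_tiles: int) -> int:
    """Total tiles in a subresource: WidthInTiles * HeightInTiles * DepthInTiles.
    Packed mipmaps report 0 for all three dimensions and so contribute 0 here."""
    return width_in_tiles * height_in_tiles * depth_in_tiles
```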
Describes a 1D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE1D_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a texture description.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Number of textures in the array. The range is from 1 to
Texture format (see
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
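As a sketch of the "maximum number of mipmap levels" mentioned above: a full mip chain halves the largest dimension until it reaches 1, so its length is floor(log2(largest dimension)) + 1 (the function name is hypothetical):

```python
def full_mip_count(*dimensions: int) -> int:
    """Length of a full mip chain: floor(log2(max dimension)) + 1.
    int.bit_length() computes exactly this for positive integers."""
    return max(dimensions).bit_length()
```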
Identifies a texture resource for a video processor output view.
-The zero-based index into the array of subtextures.
The index of the first texture to use.
The number of textures in the array.
Describes a 2D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE2D_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a texture description.
The device places some size restrictions (must be multiples of a minimum size) for a subsampled, block compressed, or bit-format resource.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Number of textures in the texture array. The range is from 1 to
Texture format (see
Structure that specifies multisampling parameters for the texture. See
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
Describes a 2D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE2D_DESC1 derived structure, which is defined in D3D11_3.h and behaves like an inherited class, to help create a texture description.
The device places some size restrictions (must be multiples of a minimum size) for a subsampled, block compressed, or bit-format resource.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Number of textures in the texture array. The range is from 1 to
Texture format (see
Structure that specifies multisampling parameters for the texture. See
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
A
The TextureLayout parameter selects both the actual layout of the texture in memory and the layout visible to the application while the texture is mapped. These flags cannot be requested unless CPU access is also requested.
It is illegal to set CPU access flags on default textures without also setting TextureLayout to a value other than
Identifies the texture resource for a video decoder output view.
-The zero-based index of the texture.
Identifies the texture resource for a video processor input view.
-The zero-based index into the array of subtextures.
The zero-based index of the texture.
Identifies a texture resource for a video processor output view.
-The zero-based index into the array of subtextures.
Describes a 3D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE3D_DESC derived structure, which is defined in D3D11.h and behaves like an inherited class, to help create a texture description.
The device restricts the size of subsampled, block compressed, and bit format resources to be multiples of sizes specific to each format.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
Texture depth (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Texture format (see
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
Describes a 3D texture.
-This structure is used in a call to
In addition to this structure, you can also use the CD3D11_TEXTURE3D_DESC1 derived structure, which is defined in D3D11_3.h and behaves like an inherited class, to help create a texture description.
The device restricts the size of subsampled, block compressed, and bit format resources to be multiples of sizes specific to each format.
The texture size range is determined by the feature level at which you create the device and not the Microsoft Direct3D interface version. For example, if you use Microsoft Direct3D 10 hardware at feature level 10 (
Texture width (in texels). The range is from 1 to
Texture height (in texels). The range is from 1 to
Texture depth (in texels). The range is from 1 to
The maximum number of mipmap levels in the texture. See the remarks in
Texture format (see
Value that identifies how the texture is to be read from and written to. The most common value is
Flags (see
Flags (see
Flags (see
A
The TextureLayout parameter selects both the actual layout of the texture in memory and the layout visible to the application while the texture is mapped. These flags cannot be requested unless CPU access is also requested.
It is illegal to set CPU access flags on default textures without also setting Layout to a value other than
Describes the coordinates of a tiled resource.
-The x position of a tiled resource. Used for buffer and 1D, 2D, and 3D textures.
The y position of a tiled resource. Used for 2D and 3D textures.
The z position of a tiled resource. Used for 3D textures.
A subresource index value into mipmaps and arrays. Used for 1D, 2D, and 3D textures.
For mipmaps that use nonstandard tiling, or are packed, or both use nonstandard tiling and are packed, any subresource value that indicates any of the packed mipmaps all refer to the same tile.
Describes the size of a tiled region.
-The number of tiles in the tiled region.
Specifies whether the runtime uses the Width, Height, and Depth members to define the region.
If TRUE, the runtime uses the Width, Height, and Depth members to define the region.
If
Regardless of whether you specify TRUE or
When the region includes mipmaps that are packed with nonstandard tiling, bUseBox must be FALSE.
The width of the tiled region, in tiles. Used for buffer and 1D, 2D, and 3D textures.
The height of the tiled region, in tiles. Used for 2D and 3D textures.
The depth of the tiled region, in tiles. Used for 3D textures or arrays. For arrays, used for advancing in depth jumps to next slice of same mipmap size, which isn't contiguous in the subresource counting space if there are multiple mipmaps.
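A sketch of the bUseBox relationship described above (function name hypothetical; when the box is used, its dimensions must account for exactly NumTiles tiles):

```python
def tile_region_count(use_box: bool, num_tiles: int,
                      width: int = 0, height: int = 0, depth: int = 0) -> int:
    """Tiles covered by a tiled region. With bUseBox TRUE, the box must cover
    exactly NumTiles tiles; otherwise tiles advance linearly and only
    NumTiles matters (Width/Height/Depth are ignored)."""
    if use_box and num_tiles != width * height * depth:
        raise ValueError("NumTiles must equal Width * Height * Depth")
    return num_tiles
```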
Describes the shape of a tile by specifying its dimensions.
-Texels are equivalent to pixels. For untyped buffer resources, a texel is just a byte. For multisample antialiasing (MSAA) surfaces, the numbers are still in terms of pixels/texels. - The values here are independent of the surface dimensions. Even if the surface is smaller than what would fit in a tile, the full tile dimensions are reported here. -
-The width in texels of the tile.
The height in texels of the tile.
The depth in texels of the tile.
Specifies the subresources from a resource that are accessible using an unordered-access view.
-An unordered-access-view description is passed into
Describes the elements in a buffer to use in an unordered-access view.
-This structure is used by a
The zero-based index of the first element to be accessed.
The number of elements in the resource. For structured buffers, this is the number of structures in the buffer.
View options for the resource (see
Describes an unordered-access 1D texture resource.
-This structure is used by a
The mipmap slice index.
Describes an array of unordered-access 1D texture resources.
-This structure is used by a
The mipmap slice index.
The zero-based index of the first array slice to be accessed.
The number of slices in the array.
Describes an unordered-access 2D texture resource.
-This structure is used by a
The mipmap slice index.
Describes an array of unordered-access 2D texture resources.
-This structure is used by a
The mipmap slice index.
The zero-based index of the first array slice to be accessed.
The number of slices in the array.
Describes an unordered-access 3D texture resource.
-This structure is used by a
The mipmap slice index.
The zero-based index of the first depth slice to be accessed.
The number of depth slices.
The data format (see
The resource type (see
Specifies which buffer elements can be accessed (see
Specifies the subresources in a 1D texture that can be accessed (see
Specifies the subresources in a 1D texture array that can be accessed (see
Specifies the subresources in a 2D texture that can be accessed (see
Specifies the subresources in a 2D texture array that can be accessed (see
Specifies subresources in a 3D texture that can be accessed (see
Describes the subresources from a resource that are accessible using an unordered-access view.
-An unordered-access-view description is passed into
Describes an unordered-access 2D texture resource.
-The mipmap slice index.
The index (plane slice number) of the plane to use in the texture.
Describes an array of unordered-access 2D texture resources.
-The mipmap slice index.
The zero-based index of the first array slice to be accessed.
The number of slices in the array.
The index (plane slice number) of the plane to use in an array of textures.
A
A
A
A
A
A
A
A
Defines a color value for Microsoft Direct3D 11 video.
-The anonymous union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
-A
A
Specifies an RGB color value.
-The RGB values have a nominal range of [0...1]. For an RGB format with n bits per channel, the value of each color component is calculated as follows:
val = f * ((1 << n) - 1)
For example, for RGB-32 (8 bits per channel), val = BYTE(f * 255.0).
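The quantization formula above can be sketched directly (function name hypothetical):

```python
def quantize_channel(f: float, n: int) -> int:
    """val = f * ((1 << n) - 1), truncated to an integer, for a nominal
    [0...1] channel value stored in n bits (e.g. BYTE(f * 255.0) for n = 8)."""
    return int(f * ((1 << n) - 1))
```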
The red value.
The green value.
The blue value.
The alpha value. Values range from 0 (transparent) to 1 (opaque). -
Describes the content-protection capabilities of a graphics driver.
-A bitwise OR of zero or more flags from the
The number of cryptographic key-exchange types that are supported by the driver. To get the list of key-exchange types, call the
The encryption block size, in bytes. The size of data to be encrypted must be a multiple of this value.
The total amount of memory, in bytes, that can be used to hold protected surfaces.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides data to the
This structure is passed in the pContentKey parameter of the
Describes a compressed buffer for decoding.
-The type of buffer, specified as a member of the
Reserved.
The offset of the relevant data from the beginning of the buffer, in bytes. This value must be zero.
The macroblock address of the first macroblock in the buffer. The macroblock address is given in raster scan order.
The macroblock address of the first macroblock in the buffer. The macroblock address is given in raster scan order.
The number of macroblocks of data in the buffer. This count includes skipped macroblocks.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
A reference to a buffer that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
The size of the buffer specified in the pIV parameter. If pIV is
If TRUE, the video surfaces are partially encrypted.
A
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes a compressed buffer for decoding.
-The type of buffer.
The offset of the relevant data from the beginning of the buffer, in bytes. This value must be zero.
Size of the relevant data.
A reference to a buffer that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
The size of the buffer specified in the pIV parameter. If pIV is
A reference to an array of
Values in the sub sample mapping blocks are relative to the start of the decode buffer.
The number of
Describes the configuration of a Microsoft Direct3D 11 decoder device for DirectX Video Acceleration (DXVA).
-If the bitstream data buffers are encrypted using the D3D11CryptoSession mechanism, this
If the macroblock control data buffers are encrypted using the D3D11CryptoSession mechanism, this
If the residual difference decoding data buffers are encrypted using the D3D11CryptoSession mechanism, this
Indicates whether the host-decoder sends raw bit-stream data. If the value is 1, the data for the pictures will be sent in bit-stream buffers as raw bit-stream content. If the value is 0, picture data will be sent using macroblock control command buffers. If either ConfigResidDiffHost or ConfigResidDiffAccelerator is 1, the value must be 0.
Specifies whether macroblock control commands are in raster scan order or in arbitrary order. If the value is 1, the macroblock control commands within each macroblock control command buffer are in raster-scan order. If the value is 0, the order is arbitrary. For some types of bit streams, forcing raster order either greatly increases the number of required macroblock control buffers that must be processed, or requires host reordering of the control information. Therefore, supporting arbitrary order can be more efficient.
Contains the host residual difference configuration. If the value is 1, some residual difference decoding data may be sent as blocks in the spatial domain from the host. If the value is 0, spatial domain data will not be sent.
Indicates the word size used to represent residual difference spatial-domain blocks for predicted (non-intra) pictures when using host-based residual difference decoding.
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 1, the host will send residual difference spatial-domain blocks for non-intra macroblocks using 8-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 0, the host will send residual difference spatial-domain blocks of data for non-intra macroblocks using 16-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 0, ConfigSpatialResid8 must be 0.
For intra pictures, spatial-domain blocks must be sent using 8-bit samples if bits-per-pixel (BPP) is 8, and using 16-bit samples if BPP > 8. If ConfigIntraResidUnsigned is 0, these samples are sent as signed integer values relative to a constant reference value of 2^(BPP−1), and if ConfigIntraResidUnsigned is 1, these samples are sent as unsigned integer values relative to a constant reference value of 0.
If the value is 1, 8-bit difference overflow blocks are subtracted rather than added. The value must be 0 unless ConfigSpatialResid8 is 1.
The ability to subtract differences rather than add them enables 8-bit difference decoding to be fully compliant with the full ±255 range of values required in video decoder specifications, because +255 cannot be represented as the addition of two signed 8-bit numbers, but any number in the range ±255 can be represented as the difference between two signed 8-bit numbers (+255 = +127 minus −128).
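The range argument above can be checked numerically. This Python sketch (illustrative only, not part of the API) shows why subtraction covers the full ±255 range while addition cannot reach +255:

```python
INT8_MIN, INT8_MAX = -128, 127

def representable_as_sum(v: int) -> bool:
    # a + b with signed 8-bit a, b spans [-256, +254]
    return 2 * INT8_MIN <= v <= 2 * INT8_MAX

def representable_as_difference(v: int) -> bool:
    # a - b with signed 8-bit a, b spans [-255, +255]
    return INT8_MIN - INT8_MAX <= v <= INT8_MAX - INT8_MIN

print(representable_as_sum(255))         # False: 127 + 127 == 254 at most
print(representable_as_difference(255))  # True: +127 - (-128) == 255
```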
If the value is 1, spatial-domain blocks for intra macroblocks must be clipped to an 8-bit range on the host and spatial-domain blocks for non-intra macroblocks must be clipped to a 9-bit range on the host. If the value is 0, no such clipping is necessary by the host.
The value must be 0 unless ConfigSpatialResid8 is 0 and ConfigResidDiffHost is 1.
If the value is 1, any spatial-domain residual difference data must be sent in a chrominance-interleaved form matching the YUV format chrominance interleaving pattern. The value must be 0 unless ConfigResidDiffHost is 1 and the YUV format is NV12 or NV21.
Indicates the method of representation of spatial-domain blocks of residual difference data for intra blocks when using host-based difference decoding.
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 0, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 1, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
The value of the member must be 0 unless ConfigResidDiffHost is 1.
If the value is 1, transform-domain blocks of coefficient data may be sent from the host for accelerator-based IDCT. If the value is 0, accelerator-based IDCT will not be used. If both ConfigResidDiffHost and ConfigResidDiffAccelerator are 1, this indicates that some residual difference decoding will be done on the host and some on the accelerator, as indicated by macroblock-level control commands.
The value must be 0 if ConfigBitstreamRaw is 1.
If the value is 1, the inverse scan for transform-domain block processing will be performed on the host, and absolute indices will be sent instead for any transform coefficients. If the value is 0, the inverse scan will be performed on the accelerator.
The value must be 0 if ConfigResidDiffAccelerator is 0 or if Config4GroupedCoefs is 1.
If the value is 1, the IDCT specified in Annex W of ITU-T Recommendation H.263 is used. If the value is 0, any compliant IDCT can be used for off-host IDCT.
The H.263 annex does not comply with the IDCT requirements of MPEG-2 corrigendum 2, so the value must not be 1 for use with MPEG-2 video.
The value must be 0 if ConfigResidDiffAccelerator is 0, indicating purely host-based residual difference decoding.
If the value is 1, transform coefficients for off-host IDCT will be sent using the DXVA_TCoef4Group structure. If the value is 0, the DXVA_TCoefSingle structure is used. The value must be 0 if ConfigResidDiffAccelerator is 0 or if ConfigHostInverseScan is 1.
Specifies how many frames the decoder device processes at any one time.
Contains decoder-specific configuration information.
Describes a video stream for a Microsoft Direct3D 11 video decoder or video processor.
-The decoding profile. To get the list of profiles supported by the device, call the
The width of the video frame, in pixels.
The height of the video frame, in pixels.
The output surface format, specified as a
Contains driver-specific data for the
The exact meaning of each structure member depends on the value of Function.
-Describes a video decoder output view.
-The decoding profile. To get the list of profiles supported by the device, call the
The resource type of the view, specified as a member of the
A
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes a sub sample mapping block.
-Values in the sub sample mapping blocks are relative to the start of the decode buffer.
-The number of clear (non-encrypted) bytes at the start of the block.
The number of encrypted bytes following the clear bytes.
Describes the capabilities of a Microsoft Direct3D 11 video processor.
-The video processor stores state information for each input stream. These states persist between blits. With each blit, the application selects which streams to enable or disable. Disabling a stream does not affect the state information for that stream.
The MaxStreamStates member gives the maximum number of stream states that can be saved. The MaxInputStreams member gives the maximum number of streams that can be enabled during a blit. These two values can differ.
-A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the D3D11_VIDEO_PROCESSOR_FILTER_CAPS enumeration.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of frame-rate conversion capabilities. To enumerate the frame-rate conversion capabilities, call the
The maximum number of input streams that can be enabled at the same time.
The maximum number of input streams for which the device can store state data.
Specifies the color space for video processing.
-The RGB_Range member applies to RGB output, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr output. If the driver performs color-space conversion on the background color, it uses the values that apply to both color spaces.
If the driver supports extended YCbCr (xvYCC), it returns the
If extended YCbCr is supported, it can be used with either transfer matrix. Extended YCbCr does not change the black point or white point: the black point is still 16 and the white point is still 235. However, extended YCbCr explicitly allows blacker-than-black values in the range 1–15, and whiter-than-white values in the range 236–254. When extended YCbCr is used, the driver should not clip the luma values to the nominal 16–235 range.
-Specifies whether the output is intended for playback or video processing (such as editing or authoring). The device can optimize the processing based on the type. The default state value is 0 (playback).
Value | Meaning |
---|---|
| Playback |
| Video processing |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255) |
| Limited range (16-235) |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601 |
| ITU-R BT.709 |
Specifies whether the output uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is zero (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr |
| Extended YCbCr (xvYCC) |
Specifies the
Introduced in Windows 8.1.
Reserved. Set to zero.
Describes a video stream for a video processor.
-A member of the
The frame rate of the input video stream, specified as a
The width of the input frames, in pixels.
The height of the input frames, in pixels.
The frame rate of the output video stream, specified as a
The width of the output frames, in pixels.
The height of the output frames, in pixels.
A member of the
Specifies a custom rate for frame-rate conversion or inverse telecine (IVTC).
-The CustomRate member gives the rate conversion factor, while the remaining members define the pattern of input and output samples.
-The ratio of the output frame rate to the input frame rate, expressed as a
The number of output frames that will be generated for every N input samples, where N = InputFramesOrFields.
If TRUE, the input stream must be interlaced. Otherwise, the input stream must be progressive.
The number of input fields or frames for every N output frames that will be generated, where N = OutputFrames.
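As a worked example, consider inverse telecine that recovers 4 progressive frames from every 10 interlaced fields. The values below are hypothetical, chosen only to illustrate how the members relate to one another:

```python
from fractions import Fraction

# Hypothetical IVTC descriptor: 4 output frames per 10 input fields.
custom_rate = Fraction(4, 5)   # ratio of output frame rate to input frame rate
output_frames = 4              # frames generated per group of input samples
input_interlaced = True        # IVTC consumes interlaced input
input_frames_or_fields = 10    # N input samples per group

def frames_generated(total_input_samples: int) -> int:
    """Output frames produced from whole groups of input samples."""
    groups = total_input_samples // input_frames_or_fields
    return groups * output_frames

print(frames_generated(10))  # 4
print(frames_generated(30))  # 12
```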
Defines the range of supported values for an image filter.
-The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [−180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (or 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
-The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
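Putting the range and multiplier together, the hue-filter example can be sketched as follows (the reported Minimum/Maximum of ±720 are derived from the stated ±180.0 range and 0.25 step; the helper name is ours):

```python
def actual_filter_value(set_value: int, multiplier: float) -> float:
    """Actual Value = Set Value * Multiplier."""
    return set_value * multiplier

# Hue filter: actual range [-180.0, +180.0], step 0.25, so the device
# would report Minimum = -720, Maximum = 720, Multiplier = 0.25.
minimum, maximum, multiplier = -720, 720, 0.25

print(actual_filter_value(2, multiplier))        # 0.5
print(actual_filter_value(minimum, multiplier))  # -180.0
print(actual_filter_value(maximum, multiplier))  # 180.0
```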
Describes a video processor input view.
-The surface format. If zero, the driver uses the DXGI format that was used to create the resource. If you are using feature level 9, the value must be zero.
The resource type of the view, specified as a member of the
A
Describes a video processor output view.
-The resource type of the view, specified as a member of the
A
Use this member of the union when ViewDimension equals
A
Use this member of the union when ViewDimension equals
Defines a group of video processor capabilities that are associated with frame-rate conversion, including deinterlacing and inverse telecine.
-The number of past reference frames required to perform the optimal video processing.
The number of future reference frames required to perform the optimal video processing.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of custom frame rates that the driver supports. To get the list of custom frame rates, call the
Contains stream-level data for the
If the stereo 3D format is
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides information about the input streams passed into the ID3D11VideoContext1::VideoProcessorGetBehaviorHints method.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes a video sample.
-The width of the video sample.
The height of the video sample.
The format of the video sample.
The colorspace of the sample.
Enables the application to defer the creation of an object. This interface is exposed by activation objects.
-Typically, the application calls some function that returns an
The class identifier that is associated with the activatable runtime class.
An optional friendly name for the activation object. The friendly name is stored in the object's
To create the Windows Runtime object, call
Creates the object associated with this activation object.
-Interface identifier (IID) of the requested interface.
A reference to the requested interface. The caller must release the interface.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
The
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or
Creates the object associated with this activation object. Riid is provided via reflection on the COM object type
-A reference to the requested interface. The caller must release the interface.
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
The
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or
Creates the object associated with this activation object.
-Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
Some Microsoft Media Foundation objects must be shut down before being released. If so, the caller is responsible for shutting down the object that is returned in ppv. To shut down the object, do one of the following:
The
After the first call to ActivateObject, subsequent calls return a reference to the same instance, until the client calls either ShutdownObject or
Shuts down the created object.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If you create an object by calling
The component that calls ActivateObject (not the component that creates the activation object) is responsible for calling ShutdownObject. For example, in a typical playback application, the application creates activation objects for the media sinks, but the Media Session calls ActivateObject. Therefore the Media Session, not the application, calls ShutdownObject.
After ShutdownObject is called, the activation object releases all of its internal references to the created object. If you call ActivateObject again, the activation object will create a new instance of the other object.
-
Detaches the created object from the activation object.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
The activation object releases all of its internal references to the created object. If you call ActivateObject again, the activation object will create a new instance of the other object.
The DetachObject method does not shut down the created object. If the DetachObject method succeeds, the client must shut down the created object. This rule applies only to objects that have a shutdown method or that support the
Implementation of this method is optional. If the activation object does not support this method, the method returns E_NOTIMPL.
-Provides information about the result of an asynchronous operation.
-Use this interface to complete an asynchronous operation. You get a reference to this interface when your callback object's
If you are implementing an asynchronous method, call
Any custom implementation of this interface must inherit the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The caller of the asynchronous method specifies the state object, and can use it for any caller-defined purpose. The state object can be
If you are implementing an asynchronous method, set the state object through the punkState parameter of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets or sets the status of the asynchronous operation.
-The method returns an
Return code | Description |
---|---|
| The operation completed successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Returns an object associated with the asynchronous operation. The type of object, if any, depends on the asynchronous method that was called.
-Receives a reference to the object's
Typically, this object is used by the component that implements the asynchronous method. It provides a way for the function that invokes the callback to pass information to the asynchronous End... method that completes the operation.
If you are implementing an asynchronous method, you can set the object through the punkObject parameter of the
If the asynchronous result object's internal
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the state object specified by the caller in the asynchronous Begin method.
-Receives a reference to the state object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no state object associated with this asynchronous result. |
The caller of the asynchronous method specifies the state object, and can use it for any caller-defined purpose. The state object can be
If you are implementing an asynchronous method, set the state object through the punkState parameter of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the status of the asynchronous operation.
-The method returns an
Return code | Description |
---|---|
| The operation completed successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the status of the asynchronous operation.
-The status of the asynchronous operation.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If you implement an asynchronous method, call SetStatus to set the status code for the operation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns an object associated with the asynchronous operation. The type of object, if any, depends on the asynchronous method that was called.
-Receives a reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no object associated with this asynchronous result. |
Typically, this object is used by the component that implements the asynchronous method. It provides a way for the function that invokes the callback to pass information to the asynchronous End... method that completes the operation.
If you are implementing an asynchronous method, you can set the object through the punkObject parameter of the
If the asynchronous result object's internal
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Returns the state object specified by the caller in the asynchronous Begin method, without incrementing the object's reference count.
-Returns a reference to the state object's
This method cannot be called remotely.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the characteristics of the byte stream.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the length of the stream.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Retrieves the current read or write position in the stream.
-The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Queries whether the current position has reached the end of the stream.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Reads data from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
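Because Read reports end-of-stream through the returned byte count rather than an error code, callers typically loop until the count comes back zero. A Python sketch of that pattern (using an ordinary file-like object as a stand-in for the byte stream; this is not the COM API itself):

```python
import io

def read_to_end(stream, chunk_size: int = 4096) -> bytes:
    """Drain a stream, treating a zero-byte read as end-of-stream,
    mirroring how Read's pcbRead output should be checked."""
    data = bytearray()
    while True:
        chunk = stream.read(chunk_size)  # stand-in for the byte stream's Read
        if not chunk:                    # a count of 0 means end of stream
            break
        data.extend(chunk)
    return bytes(data)

payload = b"sample" * 1000
print(read_to_end(io.BytesIO(payload)) == payload)  # True
```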
Applies to: desktop apps | Metro style apps
Begins an asynchronous read operation from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous read operation.
- Pointer to the
Call this method after the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Writes data to the stream.
-Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Begins an asynchronous write operation to the stream.
-Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Completes an asynchronous write operation.
-Pointer to the
Call this method when the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Moves the current position in the stream by a specified offset.
- Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
-If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
-If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the characteristics of the byte stream.
-Receives a bitwise OR of zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| The byte stream can be read. |
| The byte stream can be written to. |
| The byte stream can be seeked. |
| The byte stream is from a remote source, such as a network. |
| The byte stream represents a file directory. |
| Seeking within this stream might be slow. For example, the byte stream might download from a network. |
| The byte stream is currently downloading data to a local cache. Read operations on the byte stream might take longer until the data is completely downloaded. This flag is cleared after all of the data has been downloaded. If the MFBYTESTREAM_HAS_SLOW_SEEK flag is also set, it means the byte stream must download the entire file sequentially. Otherwise, the byte stream can respond to seek requests by restarting the download from a new point in the stream. |
| Another thread or process can open this byte stream for writing. If this flag is present, the length of the byte stream could change while it is being read. This flag can affect the behavior of byte-stream handlers. For more information, see |
| The byte stream is not currently using the network to receive the content. Networking hardware may enter a power-saving state when this bit is set. Note: Requires Windows 8 or later. |
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the stream.
-Receives the length of the stream, in bytes. If the length is unknown, this value is -1.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the length of the stream.
-Length of the stream in bytes.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current read or write position in the stream.
-Receives the current position, in bytes.
If this method succeeds, it returns
The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the current read or write position.
-New position in the stream, as a byte offset from the start of the stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
If the new position is larger than the length of the stream, the method returns E_INVALIDARG.
Implementation notes: This method should update the current position in the stream by setting the current position to the value passed in to the qwPosition parameter. Other methods that can update the current position are Read, BeginRead, Write, BeginWrite, and Seek.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the current position has reached the end of the stream.
- Receives the value TRUE if the end of the stream has been reached, or
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Reads data from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Receives the number of bytes that are copied into the buffer. This parameter cannot be
If this method succeeds, it returns
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that were read, which is specified by the value returned in the pcbRead parameter, to the current position. Other methods that can update the current position are BeginRead, Write, BeginWrite, Seek, and SetCurrentPosition. -
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous read operation from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that will be read, which is specified by the value returned in the pcbRead parameter, to the current position. Other methods that can update the current position are Read, Write, BeginWrite, Seek, and SetCurrentPosition. -
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous read operation.
- Pointer to the
Receives the number of bytes that were read.
If this method succeeds, it returns
Call this method after the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Writes data to the stream.
-Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
Receives the number of bytes that are written.
If this method succeeds, it returns
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that were written to the stream, which is specified by the value returned in the pcbWritten parameter, to the current position.
Other methods that can update the current position are Read, BeginRead, BeginWrite, Seek, and SetCurrentPosition. -
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous write operation to the stream.
-Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
Implementation notes: This method should update the current position in the stream by adding the number of bytes that will be written to the stream, which is specified by the value returned in the pcbWritten parameter, to the current position. Other methods that can update the current position are Read, BeginRead, Write, Seek, and SetCurrentPosition. -
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Completes an asynchronous write operation.
-Pointer to the
Receives the number of bytes that were written.
If this method succeeds, it returns
Call this method when the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Moves the current position in the stream by a specified offset.
- Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
Receives the new position after the seek.
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Implementation notes: This method should update the current position in the stream by adding qwSeekOffset to the position specified by the SeekOrigin parameter. This should be the same value passed back in the pqwCurrentPosition parameter. - Other methods that can update the current position are Read, BeginRead, Write, BeginWrite, and SetCurrentPosition. -
-Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
-If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
-If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Controls one or more capture devices. The capture engine implements this interface. To get a reference to this interface, call either MFCreateCaptureEngine or
Creates an instance of the capture engine.
-The CLSID of the object to create. Currently, this parameter must equal
The IID of the requested interface. The capture engine supports the
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
Before calling this method, call the
Initializes the capture engine.
-A reference to the
A reference to the
You can use this parameter to configure the capture engine. Call
An
If you set the
Otherwise, if pAudioSource is
To override the default audio device, set pAudioSource to an
An
If you set the
Otherwise, if pVideoSource is
To override the default video device, set pVideoSource to an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The Initialize method was already called. |
| No capture devices are available. |
You must call this method once before using the capture engine. Calling the method a second time returns
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_INITIALIZED event through the
Gets a reference to the capture source object. Use the capture source to configure the capture devices.
-Initializes the capture engine.
-A reference to the
A reference to the
You can use this parameter to configure the capture engine. Call
An
If you set the
Otherwise, if pAudioSource is
To override the default audio device, set pAudioSource to an
An
If you set the
Otherwise, if pVideoSource is
To override the default video device, set pVideoSource to an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The Initialize method was already called. |
| No capture devices are available. |
You must call this method once before using the capture engine. Calling the method a second time returns
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_INITIALIZED event through the
Starts preview.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The preview sink was not initialized. |
Before calling this method, configure the preview sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PREVIEW_STARTED event through the
After the preview sink is configured, you can stop and start preview by calling
Stops preview.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The capture engine is not currently previewing. |
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PREVIEW_STOPPED event through the
Starts recording audio and/or video to a file.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The recording sink was not initialized. |
Before calling this method, configure the recording sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_RECORD_STARTED event through the
To stop recording, call
Stops recording.
-A Boolean value that specifies whether to finalize the output file. To create a valid output file, specify TRUE. Specify
A Boolean value that specifies whether unprocessed samples waiting to be encoded should be flushed.
If this method succeeds, it returns
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_RECORD_STOPPED event through the
Captures a still image from the video stream.
-If this method succeeds, it returns
Before calling this method, configure the photo sink by calling
This method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_ENGINE_PHOTO_TAKEN event through the
Gets a reference to one of the capture sink objects. You can use the capture sinks to configure preview, recording, or still-image capture.
-An
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid argument. |
Gets a reference to the capture source object. Use the capture source to configure the capture devices.
-Receives a reference to the
If this method succeeds, it returns
Creates an instance of the capture engine.
-To get a reference to this interface, call the CoCreateInstance function and specify the CLSID equal to
Calling the MFCreateCaptureEngine function is equivalent to calling
Creates an instance of the capture engine.
-The CLSID of the object to create. Currently, this parameter must equal
The IID of the requested interface. The capture engine supports the
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
Before calling this method, call the
Callback interface for receiving events from the capture engine.
-To set the callback interface on the capture engine, call the
Callback interface to receive data from the capture engine.
-To set the callback interface, call one of the following methods.
Extensions for the
Controls the photo sink. The photo sink captures still images from the video stream.
-The photo sink can deliver samples to one of the following destinations:
The application must specify a single destination. Multiple destinations are not supported.
To capture an image, call
Specifies a byte stream that will receive the still image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the still-image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the still image.
-Calling this method overrides any previous call to
Specifies the name of the output file for the still image.
-A null-terminated string that contains the URL of the output file.
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the still-image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies a byte stream that will receive the still image data.
-A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Controls the preview sink. The preview sink enables the application to preview audio and video from the camera.
-To start preview, call
Sets a callback to receive the preview data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies a window for preview.
-Calling this method overrides any previous call to
Specifies a Microsoft DirectComposition visual for preview.
-Gets or sets the current mirroring state of the video preview stream.
-Sets a custom media sink for preview.
-This method overrides the default selection of the media sink for preview.
-Specifies a window for preview.
-A handle to the window. The preview sink draws the video frames inside this window.
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies a Microsoft DirectComposition visual for preview.
-A reference to a DirectComposition visual that implements the
If this method succeeds, it returns
Updates the video frame. Call this method when the preview window receives a WM_PAINT or WM_SIZE message.
-If this method succeeds, it returns
Sets a callback to receive the preview data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Gets the current mirroring state of the video preview stream.
-Receives the value TRUE if mirroring is enabled, or
If this method succeeds, it returns
Enables or disables mirroring of the video preview stream.
-If TRUE, mirroring is enabled. If
If this method succeeds, it returns
Gets the rotation of the video preview stream.
-The zero-based index of the stream. You must specify a video stream.
Receives the image rotation, in degrees.
If this method succeeds, it returns
Rotates the video preview stream.
-The zero-based index of the stream to rotate. You must specify a video stream.
The amount to rotate the video, in degrees. Valid values are 0, 90, 180, and 270. The value zero restores the video to its original orientation.
If this method succeeds, it returns
Sets a custom media sink for preview.
-A reference to the
If this method succeeds, it returns
This method overrides the default selection of the media sink for preview.
-Controls the recording sink. The recording sink creates compressed audio/video files or compressed audio/video streams.
-The recording sink can deliver samples to one of the following destinations:
The application must specify a single destination. Multiple destinations are not supported. (However, if a callback is used, you can provide a separate callback for each stream.)
If the destination is a byte stream or an output file, the application specifies a container type, such as MP4 or ASF. The capture engine then multiplexes the audio and video to produce the format defined by the container type. If the destination is a callback interface, however, the capture engine does not multiplex or otherwise interleave the samples. The callback option gives you the most control over the recorded output, but requires more work by the application.
To start the recording, call
Specifies a byte stream that will receive the data for the recording.
-A reference to the
A
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a callback to receive the recording data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the recording.
-The capture engine uses the file name extension to select the container type for the output file. For example, if the file name extension is ".mp4", the capture engine creates an MP4 file.
Calling this method overrides any previous call to
Sets a custom media sink for recording.
-This method overrides the default selection of the media sink for recording.
-Specifies a byte stream that will receive the data for the recording.
-A reference to the
A
If this method succeeds, it returns
Calling this method overrides any previous call to
Specifies the name of the output file for the recording.
-A null-terminated string that contains the URL of the output file.
If this method succeeds, it returns
The capture engine uses the file name extension to select the container type for the output file. For example, if the file name extension is ".mp4", the capture engine creates an MP4 file.
Calling this method overrides any previous call to
Sets a callback to receive the recording data for one stream.
-The zero-based index of the stream. The index is returned in the pdwSinkStreamIndex parameter of the
A reference to the
If this method succeeds, it returns
Calling this method overrides any previous call to
Sets a custom media sink for recording.
-A reference to the
If this method succeeds, it returns
This method overrides the default selection of the media sink for recording.
-Gets the rotation that is currently being applied to the recorded video stream.
-The zero-based index of the stream. You must specify a video stream.
Receives the image rotation, in degrees.
If this method succeeds, it returns
Rotates the recorded video stream.
-The zero-based index of the stream to rotate. You must specify a video stream.
The amount to rotate the video, in degrees. Valid values are 0, 90, 180, and 270. The value zero restores the video to its original orientation.
If this method succeeds, it returns
Controls a capture sink, which is an object that receives one or more streams from a capture device.
-The capture engine creates the following capture sinks.
To get a reference to a capture sink, call
Sink | Interface |
---|---|
Photo sink | |
Preview sink | |
Recording sink | |
Applications cannot directly create the capture sinks.
If an image stream native media type is set to JPEG, the photo sink should be configured with a format identical to the native source format. The JPEG native type is passthrough only.
If an image stream native type is set to JPEG, to add an effect, change the native type on the image stream to an uncompressed video media type (such as NV12 or RGB32) and then add the effect.
If the native type is H.264 for the record stream, the record sink should be configured with the same media type. H.264 native type is passthrough only and cannot be decoded.
Record streams that expose H.264 do not expose any other type. H.264 record streams cannot be used in conjunction with effects. To add effects, instead connect the preview stream to the record sink using AddStream.
-Queries the underlying Sink Writer object for an interface.
-Gets the output format for a stream on this capture sink.
-The zero-based index of the stream to query. The index is returned in the pdwSinkStreamIndex parameter of the
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSinkStreamIndex parameter is invalid. |
Queries the underlying Sink Writer object for an interface.
-Connects a stream from the capture source to this capture sink.
-The source stream to connect. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
An
A reference to the
Receives the index of the new stream on the capture sink. Note that this index will not necessarily match the value of dwSourceStreamIndex.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The format specified in pMediaType is not valid for this capture sink. |
| The dwSourceStreamIndex parameter is invalid, or the specified source stream was already connected to this sink. |
Prepares the capture sink by loading any required pipeline components, such as encoders, video processors, and media sinks.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
Calling this method is optional. This method gives the application an opportunity to configure the pipeline components before they are used. The method is asynchronous. If the method returns a success code, the caller will receive an MF_CAPTURE_SINK_PREPARED event through the
Before calling this method, configure the capture sink by adding at least one stream. To add a stream, call
The Prepare method fails if the capture sink is currently in use. For example, calling Prepare on the preview sink fails if the capture engine is currently previewing.
-Removes all streams from the capture sink.
-If this method succeeds, it returns
You can use this method to reconfigure the sink.
-Receives state-change notifications from the presentation clock.
-To receive state-change notifications from the presentation clock, implement this interface and call
This interface must be implemented by:
Presentation time sources. The presentation clock uses this interface to request state changes from the time source.
Media sinks. Media sinks use this interface to get notifications when the presentation clock changes.
Other objects that need to be notified can implement this interface.
-Applies to: desktop apps only
Enables two threads to share the same Direct3D 9 device, and provides access to the DirectX Video Acceleration (DXVA) features of the device.
-This interface is exposed by the Direct3D Device Manager. To create the Direct3D device manager, call
To get this interface from the Enhanced Video Renderer (EVR), call
The Direct3D Device Manager supports Direct3D 9 devices only. It does not support DXGI devices.
-Enables two threads to share the same Direct3D 9 device, and provides access to the DirectX Video Acceleration (DXVA) features of the device.
-This interface is exposed by the Direct3D Device Manager. To create the Direct3D device manager, call
To get this interface from the Enhanced Video Renderer (EVR), call
The Direct3D Device Manager supports Direct3D 9 devices only. It does not support DXGI devices.
Windows Store apps must use IMFDXGIDeviceManager and Direct3D 11 Video APIs.
-Applies to: desktop apps only
Creates an instance of the Direct3D Device Manager.
-If this function succeeds, it returns
Sets the Direct3D device or notifies the device manager that the Direct3D device was reset.
-Pointer to the
Token received in the pResetToken parameter of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid token |
| Direct3D device error. |
When you first create the Direct3D device manager, call this method with a reference to the Direct3D device. The device manager does not create the device; the caller must provide the device reference initially.
Also call this method if the Direct3D device becomes lost and you need to reset the device or create a new device. This occurs if
The resetToken parameter ensures that only the component which originally created the device manager can invalidate the current device.
If this method succeeds, all open device handles become invalid.
-Gets a handle to the Direct3D device.
-Receives the device handle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Direct3D device manager was not initialized. The owner of the device must call |
To get the Direct3D device's
To test whether a device handle is still valid, call
Closes a Direct3D device handle. Call this method to release a device handle retrieved by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid handle. |
Tests whether a Direct3D device handle is valid.
-Handle to a Direct3D device. To get a device handle, call
The method returns an
Return code | Description |
---|---|
| The device handle is valid. |
| The specified handle is not a Direct3D device handle. |
| The device handle is invalid. |
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
Gives the caller exclusive access to the Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
Receives a reference to the device's
Specifies whether to wait for the device lock. If the device is already locked and this parameter is TRUE, the method blocks until the device is unlocked. Otherwise, if the device is locked and this parameter is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The device handle is invalid. |
| The Direct3D device manager was not initialized. The owner of the device must call |
| The device is locked and fBlock is |
| The specified handle is not a Direct3D device handle. |
When you are done using the Direct3D device, call
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
If fBlock is TRUE, this method can potentially deadlock. For example, it will deadlock if a thread calls LockDevice and then waits on another thread that calls LockDevice. It will also deadlock if a thread calls LockDevice twice without calling UnlockDevice in between.
-Unlocks the Direct3D device. Call this method to release the device after calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified device handle is not locked, or is not a valid handle. |
Gets a DirectX Video Acceleration (DXVA) service interface.
- A handle to a Direct3D device. To get a device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device might support the following DXVA service interfaces:
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The device handle is invalid. |
| The Direct3D device does not support video acceleration. |
| The Direct3D device manager was not initialized. The owner of the device must call |
| The specified handle is not a Direct3D device handle. |
If the method returns DXVA2_E_NEW_VIDEO_DEVICE, call
Specifies how the output alpha values are calculated for Microsoft DirectX Video Acceleration High Definition (DXVA-HD) blit operations.
-The Mode member of the
To find out which modes the device supports, call the
Alpha values inside the target rectangle are set to opaque.
Alpha values inside the target rectangle are set to the alpha value specified in the background color. See
Existing alpha values remain unchanged in the output surface.
Alpha values from the input stream are scaled and copied to the corresponding destination rectangle for that stream. If the input stream does not have alpha data, the DXVA-HD device sets the alpha values in the target rectangle to an opaque value. If the input stream is disabled or the source rectangle is empty, the alpha values in the target rectangle are not modified.
Specifies state parameters for blit operations when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
To set a state parameter, call the
Defines video processing capabilities for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The device can blend video content in linear color space. Most video content is gamma corrected, resulting in nonlinear values. If the DXVA-HD device sets this flag, it means the device converts colors to linear space before blending, which produces better results. -
The device supports the xvYCC color space for YCbCr data.
The device can perform range conversion when the input and output are both RGB but use different color ranges (0-255 or 16-235, for 8-bit RGB).
The device can apply a matrix conversion to YCbCr values when the input and output are both YCbCr. For example, the driver can convert colors from BT.601 to BT.709.
Specifies the type of Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-Hardware device. Video processing is performed in the GPU by the driver.
Software device. Video processing is performed in the CPU by a software plug-in.
Reference device. Video processing is performed in the CPU by a software plug-in.
Other. The device is neither a hardware device nor a software plug-in.
Specifies the intended use for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The graphics driver uses one of these enumeration constants as a hint when it creates the DXVA-HD device.
-Normal video playback. The graphics driver should expose a set of capabilities that are appropriate for real-time video playback.
Optimal speed. The graphics driver should expose a minimal set of capabilities that are optimized for performance.
Use this setting if you want better performance and can accept some reduction in video quality. For example, you might use this setting in power-saving mode or to play video thumbnails.
Optimal quality. The graphics driver should expose its maximum set of capabilities.
Specify this setting to get the best video quality possible. It is appropriate for tasks such as video editing, when quality is more important than speed. It is not appropriate for real-time playback.
Defines features that a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device can support.
-The device can set the alpha values on the video output. See
The device can downsample the video output. See
The device can perform luma keying. See
The device can apply alpha values from color palette entries. See
Defines the range of supported values for an image filter.
-The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [-180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (that is, 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
-The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
Defines capabilities related to image adjustment and filtering for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The device can adjust the brightness level.
The device can adjust the contrast level.
The device can adjust hue.
The device can adjust the saturation level.
The device can perform noise reduction.
The device can perform edge enhancement.
The device can perform anamorphic scaling. Anamorphic scaling can be used to stretch 4:3 content to a widescreen 16:9 aspect ratio.
Describes how a video stream is interlaced.
-Frames are progressive.
Frames are interlaced. The top field of each frame is displayed first.
Frames are interlaced. The bottom field of each frame is displayed first.
Defines capabilities related to input formats for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-These flags define video processing capabilities that are usually not needed, and therefore are not required for DXVA-HD devices to support.
The first three flags relate to RGB support for functions that are normally applied to YCbCr video: deinterlacing, color adjustment, and luma keying. A DXVA-HD device that supports these functions for YCbCr is not required to support them for RGB input. Supporting RGB input for these functions is an additional capability, reflected by these constants. The driver might convert the input to another color space, perform the indicated function, and then convert the result back to RGB.
Similarly, a device that supports deinterlacing is not required to support deinterlacing of palettized formats. This capability is indicated by the
The device can deinterlace an input stream that contains interlaced RGB video.
The device can perform color adjustment on RGB video.
The device can perform luma keying on RGB video.
The device can deinterlace input streams with palettized color formats.
Specifies the inverse telecine (IVTC) capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-The video processor can reverse 3:2 pulldown.
The video processor can reverse 2:2 pulldown.
The video processor can reverse 2:2:2:4 pulldown.
The video processor can reverse 2:3:3:2 pulldown.
The video processor can reverse 3:2:3:2:2 pulldown.
The video processor can reverse 5:5 pulldown.
The video processor can reverse 6:4 pulldown.
The video processor can reverse 8:7 pulldown.
The video processor can reverse 2:2:2:2:2:2:2:2:2:2:2:3 pulldown.
The video processor can reverse other telecine modes not listed here.
Describes how to map color data to a normalized [0...1] range.
These flags are used in the
For YUV colors, these flags specify how to convert between Y'CbCr and Y'PbPr. The Y'PbPr color space has a range of [0...1] for Y' (luma) and [-0.5...0.5] for Pb/Pr (chroma).
Value | Description |
---|---|
Should not be used for YUV data. | |
For 8-bit Y'CbCr components:
For samples with n bits of precision, the general equations are:
The inverse equations to convert from Y'CbCr to Y'PbPr are:
| |
For 8-bit Y'CbCr values, Y' range of [0...1] maps to [48...208]. |
For RGB colors, the flags differentiate various RGB spaces.
Value | Description |
---|---|
sRGB | |
Studio RGB; ITU-R BT.709 | |
ITU-R BT.1361 RGB |
Video data might contain values above or below the nominal range.
Note: The values named
This enumeration is equivalent to the DXVA_NominalRange enumeration used in DXVA 1.0, although it defines additional values.
If you are using the
Specifies the output frame rates for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
This enumeration type is used in the
Specifies the processing capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-The video processor can perform blend deinterlacing.
In blend deinterlacing, the two fields from an interlaced frame are blended into a single progressive frame. A video processor uses blend deinterlacing when it deinterlaces at half rate, as when converting 60i to 30p. Blend deinterlacing does not require reference frames.
The video processor can perform bob deinterlacing.
In bob deinterlacing, missing field lines are interpolated from the lines above and below. Bob deinterlacing does not require reference frames.
The video processor can perform adaptive deinterlacing.
Adaptive deinterlacing uses spatial or temporal interpolation, and switches between the two on a field-by-field basis, depending on the amount of motion. If the video processor does not receive enough reference frames to perform adaptive deinterlacing, it falls back to bob deinterlacing.
The video processor can perform motion-compensated deinterlacing.
Motion-compensated deinterlacing uses motion vectors to recreate missing lines. If the video processor does not receive enough reference frames to perform motion-compensated deinterlacing, it falls back to bob deinterlacing.
The video processor can perform inverse telecine (IVTC).
If the video processor supports this capability, the ITelecineCaps member of the
The video processor can convert the frame rate by interpolating frames.
Describes the content of a video sample. These flags are used in the
This enumeration is equivalent to the DXVA_SampleFormat enumeration used in DXVA 1.0.
The following table shows the mapping from
No exact match. Use |
With the exception of
The value
Specifies the luma key for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-To use this state, the device must support luma keying, indicated by the
If the device does not support luma keying, the
If the input format is RGB, the device must also support the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
- If TRUE, luma keying is enabled. Otherwise, luma keying is disabled. The default value is
The lower bound for the luma key. The range is [0...1]. The default state value is 0.0.
The upper bound for the luma key. The range is [0...1]. The default state value is 0.0.
Describes a DirectX surface type for DirectX Video Acceleration (DXVA).
-The surface is a decoder render target.
The surface is a video processor render target.
The surface is a Direct3D texture render target.
Specifies the type of video surface created by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-If the DXVA-HD device is a software plug-in and the surface type is
A surface for an input stream. This surface type is equivalent to an off-screen plain surface in Microsoft Direct3D. The application can use the surface in Direct3D calls.
A private surface for an input stream. This surface type is equivalent to an off-screen plain surface, except that the application cannot use the surface in Direct3D calls.
A surface for an output stream. This surface type is equivalent to an off-screen plain surface in Direct3D. The application can use the surface in Direct3D calls.
This surface type is recommended for video processing applications that need to lock the surface and access the surface memory. For video playback with optimal performance, a render-target surface or swap chain is recommended instead.
Describes how chroma values are positioned relative to the luma samples in a YUV video frame. These flags are used in the
The following diagrams show the most common arrangements.
-Describes the intended lighting conditions for viewing video content. These flags are used in the
This enumeration is equivalent to the DXVA_VideoLighting enumeration used in DXVA 1.0.
If you are using the
Specifies the color primaries of a video source. These flags are used in the
Color primaries define how to convert RGB colors into the CIE XYZ color space, and can be used to translate colors between different RGB color spaces. An RGB color space is defined by the chromaticity coordinates (x,y) of the RGB primaries plus the white point, as listed in the following table.
Color space | (Rx, Ry) | (Gx, Gy) | (Bx, By) | White point (Wx, Wy) |
---|---|---|---|---|
BT.709 | (0.64, 0.33) | (0.30, 0.60) | (0.15, 0.06) | D65 (0.3127, 0.3290) |
BT.470-2 System M; EBU 3212 | (0.64, 0.33) | (0.29, 0.60) | (0.15, 0.06) | D65 (0.3127, 0.3290) |
BT.470-4 System B,G | (0.67, 0.33) | (0.21, 0.71) | (0.14, 0.08) | CIE III.C (0.310, 0.316) |
SMPTE 170M; SMPTE 240M; SMPTE C | (0.63, 0.34) | (0.31, 0.595) | (0.155, 0.07) | D65 (0.3127, 0.3291) |
The z coordinates can be derived from x and y as follows: z = 1 - x - y. To convert RGB colors to CIE XYZ tristimulus values, compute a matrix T as follows:
Given T, you can use the following formulas to convert between an RGB color value and a CIE XYZ tristimulus value. These formulas assume that the RGB components are linear (not gamma corrected) and are normalized to the range [0...1].
To convert colors directly from one RGB color space to another, use the following formula, where T1 is the matrix for color space RGB1, and T2 is the matrix for color space RGB2.
For a derivation of these formulas, refer to Charles Poynton, Digital Video and HDTV Algorithms and Interfaces (Morgan Kaufmann, 2003).
This enumeration is equivalent to the DXVA_VideoPrimaries enumeration used in DXVA 1.0.
If you are using the
Specifies the conversion function from linear RGB to non-linear RGB (R'G'B'). These flags are used in the
The following table shows the formulas for the most common transfer functions. In these formulas, L is the linear value and L' is the non-linear (gamma corrected) value. These values are relative to a normalized range [0...1].
Color space | Transfer function |
---|---|
sRGB (8-bit) | L' = 12.92L, for L < 0.0031308 L' = 1.055L^(1/2.4) - 0.055, for L >= 0.0031308 |
BT.470-2 System B, G | L' = L^0.36 |
BT.470-2 System M | L' = L^0.45 |
BT.709 | L' = 4.50L, for L < 0.018 L' = 1.099L^0.45 - 0.099, for L >= 0.018 |
scRGB | L' = L |
SMPTE 240M | L' = 4.0L, for L < 0.0228 L' = 1.1115L^0.45 - 0.1115, for L >= 0.0228 |
The following table shows the inverse formulas, which recover the original linear values from the gamma-corrected values:
Color space | Transfer function |
---|---|
sRGB (8-bit) | L = L'/12.92, for L' < 0.04045 L = ((L' + 0.055)/1.055)^2.4, for L' >= 0.04045 |
BT.470-2 System B, G | L = L'^(1/0.36) |
BT.470-2 System M | L = L'^(1/0.45) |
BT.709 | L = L'/4.50, for L' < 0.081 L = ((L' + 0.099)/1.099)^(1/0.45), for L' >= 0.081 |
scRGB | L = L' |
SMPTE 240M | L = L'/4.0, for L' < 0.0913 L = ((L' + 0.1115)/1.1115)^(1/0.45), for L' >= 0.0913 |
This enumeration is equivalent to the DXVA_VideoTransferFunction enumeration used in DXVA 1.0.
If you are using the
Bitmask to validate flag values. This value is not a valid flag.
Unknown. Treat as
Linear RGB (gamma = 1.0).
True 1.8 gamma, L' = L^(1/1.8).
True 2.0 gamma, L' = L^(1/2.0).
True 2.2 gamma, L' = L^(1/2.2). This transfer function is used in ITU-R BT.470-2 System M (NTSC).
ITU-R BT.709 transfer function. Gamma 2.2 curve with a linear segment in the lower range. This transfer function is used in BT.709, BT.601, SMPTE 296M, SMPTE 170M, BT.470, and SMPTE 274M. In addition, BT.1361 uses this function within the range [0...1].
SMPTE 240M transfer function. Gamma 2.2 curve with a linear segment in the lower range.
sRGB transfer function. Gamma 2.4 curve with a linear segment in the lower range.
True 2.8 gamma, L' = L^(1/2.8). This transfer function is used in ITU-R BT.470-2 System B, G (PAL).
Describes the conversion matrices between Y'PbPr (component video) and studio R'G'B'. These flags are used in the
The transfer matrices are defined as follows.
BT.709 transfer matrices:
| Y' |   |  0.212600  0.715200  0.072200 |   | R' |
| Pb | = | -0.114572 -0.385428  0.500000 | x | G' |
| Pr |   |  0.500000 -0.454153 -0.045847 |   | B' |

| R' |   | 1.000000  0.000000  1.574800 |   | Y' |
| G' | = | 1.000000 -0.187324 -0.468124 | x | Pb |
| B' |   | 1.000000  1.855600  0.000000 |   | Pr |
-
BT.601 transfer matrices:
| Y' |   |  0.299000  0.587000  0.114000 |   | R' |
| Pb | = | -0.168736 -0.331264  0.500000 | x | G' |
| Pr |   |  0.500000 -0.418688 -0.081312 |   | B' |

| R' |   | 1.000000  0.000000  1.402000 |   | Y' |
| G' | = | 1.000000 -0.344136 -0.714136 | x | Pb |
| B' |   | 1.000000  1.772000  0.000000 |   | Pr |
-
SMPTE 240M (SMPTE RP 145) transfer matrices:
| Y' |   |  0.212000  0.701000  0.087000 |   | R' |
| Pb | = | -0.116000 -0.384000  0.500000 | x | G' |
| Pr |   |  0.500000 -0.445000 -0.055000 |   | B' |

| R' |   | 1.000000  0.000000  1.576000 |   | Y' |
| G' | = | 1.000000 -0.227000 -0.477000 | x | Pb |
| B' |   | 1.000000  1.826000  0.000000 |   | Pr |
-
This enumeration is equivalent to the DXVA_VideoTransferMatrix enumeration used in DXVA 1.0.
If you are using the
Creates an instance of the Direct3D Device Manager.
-If this function succeeds, it returns
Windows Store apps must use IMFDXGIDeviceManager and Direct3D 11 Video APIs.
-Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
The interface identifier (IID) of the requested interface. Any of the following interfaces might be supported by the Direct3D device:
Receives a reference to the interface. The caller must release the interface.
If this function succeeds, it returns
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-A reference to the
A reference to a
A member of the
A reference to an initialization function for a software device. Set this reference if you are using a software plug-in device. Otherwise, set this parameter to
The function reference type is PDXVAHDSW_Plugin.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Direct3D device does not support DXVA-HD. |
Use the
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
-To find out which image filters the device supports, check the FilterCaps member of the
Applies to: desktop apps only
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
-To find out which image filters the device supports, check the FilterCaps member of the
Gets the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-Creates one or more Microsoft Direct3D video surfaces.
-The width of each surface, in pixels.
The height of each surface, in pixels.
The pixel format, specified as a
The memory pool in which the surface is created. This parameter must equal the InputPool member of the
Reserved. Set to 0.
The type of surface to create, specified as a member of the
The number of surfaces to create.
A reference to an array of
Reserved. Set to
If this method succeeds, it returns
Gets the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-A reference to a
If this method succeeds, it returns
Gets a list of the output formats supported by the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The number of formats to retrieve. This parameter must equal the OutputFormatCount member of the
A reference to an array of
If this method succeeds, it returns
The list of formats can include both
Gets a list of the input formats supported by the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The number of formats to retrieve. This parameter must equal the InputFormatCount member of the
A reference to an array of
If this method succeeds, it returns
The list of formats can include both
Gets the capabilities of one or more Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processors.
-The number of elements in the pCaps array. This parameter must equal the VideoProcessorCount member of the
A reference to an array of
If this method succeeds, it returns
Gets a list of custom rates that a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor supports. Custom rates are used for frame-rate conversion and inverse telecine (IVTC).
-A
The number of rates to retrieve. This parameter must equal the CustomRateCount member of the
A reference to an array of
If this method succeeds, it returns
Gets the range of values for an image filter that the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device supports.
-The type of image filter, specified as a member of the
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Filter parameter is invalid or the device does not support the specified filter. |
To find out which image filters the device supports, check the FilterCaps member of the
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-A
Receives a reference to the
If this method succeeds, it returns
Applies to: desktop apps only
Creates a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-A reference to the
A reference to a
A member of the
Use the
Represents a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
To get a reference to this interface, call the
Sets a state parameter for a blit operation by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The state parameter to set, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer that contains the state data. The meaning of the data depends on the State parameter. Each state has a corresponding data structure; for more information, see
If this method succeeds, it returns
Gets the value of a state parameter for blit operations performed by a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The state parameter to query, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer allocated by the caller. The method copies the state data into the buffer. The buffer must be large enough to hold the data structure that corresponds to the state parameter. For more information, see
If this method succeeds, it returns
Sets a state parameter for an input stream on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The zero-based index of the input stream. To get the maximum number of streams, call
The state parameter to set, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer that contains the state data. The meaning of the data depends on the State parameter. Each state has a corresponding data structure; for more information, see
If this method succeeds, it returns
Call this method to set state parameters that apply to individual input streams.
-Gets the value of a state parameter for an input stream on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-The zero-based index of the input stream. To get the maximum number of streams, call
The state parameter to query, specified as a member of the
The size, in bytes, of the buffer pointed to by pData.
A reference to a buffer allocated by the caller. The method copies the state data into the buffer. The buffer must be large enough to hold the data structure that corresponds to the state parameter. For more information, see
If this method succeeds, it returns
Performs a video processing blit on one or more input samples and writes the result to a Microsoft Direct3D surface.
-A reference to the
Frame number of the output video frame, indexed from zero.
Number of input streams to process.
Pointer to an array of
If this method succeeds, it returns
The maximum value of StreamCount is given in the MaxStreamStates member of the
Provides DirectX Video Acceleration (DXVA) services from a Direct3D device. To get a reference to this interface, call
This is the base interface for DXVA services. The Direct3D device can support any of the following DXVA services, which derive from
Applies to: desktop apps only
Provides DirectX Video Acceleration (DXVA) services from a Direct3D device. To get a reference to this interface, call
This is the base interface for DXVA services. The Direct3D device can support any of the following DXVA services, which derive from
Creates a DirectX Video Acceleration (DXVA) video processor or DXVA decoder render target.
-The width of the surface, in pixels.
The height of the surface, in pixels.
The number of back buffers. The method creates BackBuffers + 1 surfaces.
The pixel format, specified as a
The memory pool in which to create the surface, specified as a
Reserved. Set this value to zero.
The type of surface to create. Use one of the following values.
Value | Meaning |
---|---|
Video decoder render target. | |
Video processor render target. Used for | |
Software render target. This surface type is for use with software DXVA devices. |
The address of an array of
A reference to a handle that is used to share the surfaces between Direct3D devices. Set this parameter to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid parameter |
| The DirectX Video Acceleration Manager is not initialized. |
| |
If the method returns E_FAIL, try calling
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
If this function succeeds, it returns
Represents a DirectX Video Acceleration (DXVA) video decoder device.
To get a reference to this interface, call
The
Retrieves the DirectX Video Acceleration (DXVA) decoder service that created this decoder device.
-Retrieves the DirectX Video Acceleration (DXVA) decoder service that created this decoder device.
-Receives a reference to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the parameters that were used to create this device.
-Receives the device
Pointer to a
Pointer to a
Receives an array of
Receives the number of elements in the pppDecoderRenderTargets array. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. At least one parameter must be non- |
You can set any parameter to
If you specify a non-
Retrieves a reference to a DirectX Video Acceleration (DXVA) decoder buffer.
-Type of buffer to retrieve. Use one of the following values.
Value | Meaning |
---|---|
Picture decoding parameter buffer. | |
Macroblock control command buffer. | |
Residual difference block data buffer. | |
Deblocking filter control command buffer. | |
Inverse quantization matrix buffer. | |
Slice-control buffer. | |
Bitstream data buffer. | |
Motion vector buffer. | |
Film grain synthesis data buffer. |
Receives a reference to the start of the memory buffer.
Receives the size of the buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The method locks the Direct3D surface that contains the buffer. When you are done using the buffer, call
This method might block if too many operations have been queued on the GPU. The method unblocks when a free buffer becomes available.
- Releases a buffer that was obtained by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Starts the decoding operation.
-Pointer to the
Reserved; set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid surface type. See Remarks. |
After this method is called, call
Each call to BeginFrame must have a matching call to EndFrame, and BeginFrame calls cannot be nested.
DXVA 1.0 migration note: Unlike the IAMVideoAccelerator::BeginFrame method, which specifies the buffer as an index, this method takes a reference directly to the uncompressed buffer.
The surface pointed to by pRenderTarget must be created by calling
Signals the end of the decoding operation.
-Reserved.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Executes a decoding operation on the current frame.
-Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You must call
Provides access to DirectX Video Acceleration (DXVA) decoder services. Use this interface to query which hardware-accelerated decoding operations are available and to create DXVA video decoder devices.
To get a reference to this interface, call
Applies to: desktop apps only
Provides access to DirectX Video Acceleration (DXVA) decoder services. Use this interface to query which hardware-accelerated decoding operations are available and to create DXVA video decoder devices.
To get a reference to this interface, call
Retrieves an array of GUIDs that identifies the decoder devices supported by the graphics hardware.
-Receives the number of GUIDs.
Receives an array of GUIDs. The size of the array is retrieved in the Count parameter. The method allocates the memory for the array. The caller must free the memory by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Error from the Direct3D device. |
| The Microsoft Basic Display Adapter is being used, or the Direct3D 11 device type is the reference rasterizer. These devices do not support video decoders. |
The following decoder GUIDs are defined. Some of these GUIDs have alternate names, shown in parentheses.
Description | |
---|---|
DXVA2_ModeH264_A (DXVA2_ModeH264_MoComp_NoFGT) | H.264 motion compensation (MoComp), no film grain technology (FGT). |
DXVA2_ModeH264_B (DXVA2_ModeH264_MoComp_FGT) | H.264 MoComp, FGT. |
DXVA2_ModeH264_C (DXVA2_ModeH264_IDCT_NoFGT) | H.264 inverse discrete cosine transform (IDCT), no FGT. |
DXVA2_ModeH264_D (DXVA2_ModeH264_IDCT_FGT) | H.264 IDCT, FGT. |
DXVA2_ModeH264_E (DXVA2_ModeH264_VLD_NoFGT) | H.264 VLD, no FGT. |
DXVA2_ModeH264_F (DXVA2_ModeH264_VLD_FGT) | H.264 variable-length decoder (VLD), FGT. |
DXVA2_ModeMPEG2_IDCT | MPEG-2 IDCT. |
DXVA2_ModeMPEG2_MoComp | MPEG-2 MoComp. |
DXVA2_ModeMPEG2_VLD | MPEG-2 VLD. |
DXVA2_ModeVC1_A (DXVA2_ModeVC1_PostProc) | VC-1 post processing. |
DXVA2_ModeVC1_B (DXVA2_ModeVC1_MoComp) | VC-1 MoComp. |
DXVA2_ModeVC1_C (DXVA2_ModeVC1_IDCT) | VC-1 IDCT. |
DXVA2_ModeVC1_D (DXVA2_ModeVC1_VLD) | VC-1 VLD. |
DXVA2_ModeWMV8_A (DXVA2_ModeWMV8_PostProc) | Windows Media Video 8 post processing. |
DXVA2_ModeWMV8_B (DXVA2_ModeWMV8_MoComp) | Windows Media Video 8 MoComp. |
DXVA2_ModeWMV9_A (DXVA2_ModeWMV9_PostProc) | Windows Media Video 9 post processing. |
DXVA2_ModeWMV9_B (DXVA2_ModeWMV9_MoComp) | Windows Media Video 9 MoComp. |
DXVA2_ModeWMV9_C (DXVA2_ModeWMV9_IDCT) | Windows Media Video 9 IDCT. |
-
Retrieves the supported render targets for a specified decoder device.
-Receives the number of formats.
Receives an array of formats, specified as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets the configurations that are available for a decoder device.
-A
A reference to a
Reserved. Set to
Receives the number of configurations.
Receives an array of
If this method succeeds, it returns
Creates a video decoder device.
-Pointer to a
Pointer to a
Pointer to an array of
Size of the ppDecoderRenderTargets array. This value cannot be zero.
Receives a reference to the decoder's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a video decoder device.
-Pointer to a
Pointer to a
Pointer to an array of
Size of the ppDecoderRenderTargets array. This value cannot be zero.
Receives a reference to the decoder's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
If this function succeeds, it returns
Sets the type of video memory for uncompressed video surfaces. This interface is used by video decoders and transforms.
The DirectShow enhanced video renderer (EVR) filter exposes this interface as a service on the filter's input pins. To obtain a reference to this interface, call
A video decoder can use this interface to enumerate the EVR filter's preferred surface types and then select the surface type. The decoder should then create surfaces of that type to hold the results of the decoding operation.
This interface does not define a way to clear the surface type. In the case of DirectShow, disconnecting two filters invalidates the surface type.
-
Sets the video surface type that a decoder will use for DirectX Video Acceleration (DXVA) 2.0.
-By calling this method, the caller agrees to create surfaces of the type specified in the dwType parameter.
In DirectShow, during pin connection, a video decoder that supports DXVA 2.0 should call SetSurface with the value
The only way to undo the setting is to break the pin connection.
-
Retrieves a supported video surface type.
-Zero-based index of the surface type to retrieve. Surface types are indexed in order of preference, starting with the most preferred type.
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index was out of range. |
Sets the video surface type that a decoder will use for DirectX Video Acceleration (DXVA) 2.0.
-Member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The renderer does not support the specified surface type. |
By calling this method, the caller agrees to create surfaces of the type specified in the dwType parameter.
In DirectShow, during pin connection, a video decoder that supports DXVA 2.0 should call SetSurface with the value
The only way to undo the setting is to break the pin connection.
-
Retrieves the parameters that were used to create this device.
-You can set any parameter to
Retrieves the DirectX Video Acceleration (DXVA) video processor service that created this video processor device.
-
Retrieves the capabilities of the video processor device.
-
Retrieves the DirectX Video Acceleration (DXVA) video processor service that created this video processor device.
-Receives a reference to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the parameters that were used to create this device.
-Receives the device
Pointer to a
Receives the render target format, specified as a
Receives the maximum number of streams supported by the device. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. At least one parameter must be non- |
You can set any parameter to
Retrieves the capabilities of the video processor device.
-Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the range of values for a video processor (ProcAmp) setting on this video processor device.
-The ProcAmp setting to query. See ProcAmp Settings.
Pointer to a
If this method succeeds, it returns
Retrieves the range of values for an image filter supported by this device.
-Filter setting to query. For more information, see DXVA Image Filter Settings.
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Performs a video process operation on one or more input samples and writes the result to a Direct3D9 surface.
- A reference to the
A reference to a
A reference to an array of
The maximum number of input samples is given by the constant MAX_DEINTERLACE_SURFACES, defined in the header file dxva2api.h.
The number of elements in the pSamples array.
Reserved; set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Internal driver error. |
| Invalid arguments. |
When the method returns, the operation might not be complete.
If the method returns E_INVALIDARG, check for the following:
Provides access to DirectX Video Acceleration (DXVA) video processing services.
Use this interface to query which hardware-accelerated video processing operations are available and to create DXVA video processor devices. To obtain a reference to this interface, call
Applies to: desktop apps only
Registers a software video processing device.
-Pointer to an initialization function.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Gets an array of GUIDs which identify the video processors supported by the graphics hardware.
- Pointer to a
Receives the number of GUIDs.
Receives an array of GUIDs. The size of the array is retrieved in the pCount parameter. The method allocates the memory for the array. The caller must free the memory by calling CoTaskMemFree.
If this method succeeds, it returns
The following video processor GUIDs are predefined.
Description | |
---|---|
DXVA2_VideoProcBobDevice | Bob deinterlace device. This device uses a "bob" algorithm to deinterlace the video. Bob algorithms create missing field lines by interpolating the lines in a single field. |
DXVA2_VideoProcProgressiveDevice | Progressive video device. This device is available for progressive video, which does not require a deinterlace algorithm. |
DXVA2_VideoProcSoftwareDevice | Reference (software) device. |
The graphics device may define additional vendor-specific GUIDs. The driver provides the list of GUIDs in descending quality order. The mode with the highest quality is first in the list. To get the capabilities of each mode, call
Gets the render target formats that a video processor device supports. The list may include RGB and YUV formats.
- A
A reference to a
Receives the number of formats.
Receives an array of formats, specified as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets a list of substream formats supported by a specified video processor device.
- A
A reference to a
The format of the render target surface, specified as a
Receives the number of elements returned in the ppFormats array.
Receives an array of
If this method succeeds, it returns
Gets the capabilities of a specified video processor device.
- A
A reference to a
The format of the render target surface, specified as a
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets the range of values for a video processor (ProcAmp) setting.
-A
A reference to a
The format of the render target surface, specified as a
The ProcAmp setting to query. See ProcAmp Settings.
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the range of values for an image filter supported by a video processor device.
- A
A reference to a
The format of the render target surface, specified as a
The filter setting to query. See DXVA Image Filter Settings.
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a video processor device.
-A
A reference to a
The format of the render target surface, specified as a
The maximum number of substreams that will be used with this device.
Receives a reference to the video processor's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Applies to: desktop apps only
Creates a DirectX Video Acceleration (DXVA) services object. Call this function if your application uses DXVA directly, without using DirectShow or Media Foundation.
- A reference to the
If this function succeeds, it returns
Contains an initialization vector (IV) for 128-bit Advanced Encryption Standard CTR mode (AES-CTR) block cipher encryption.
-For AES-CTR encryption, the pvPVPState member of the
The D3DAES_CTR_IV structure and the
The IV, in big-endian format.
The block count, in big-endian format.
Defines a 16-bit AYUV pixel value.
-Contains the Cr chroma value (also called V).
Contains the Cb chroma value (also called U).
Contains the luma value.
Contains the alpha value.
Defines an 8-bit AYUV pixel value.
-Contains the Cr chroma value (also called V).
Contains the Cb chroma value (also called U).
Contains the luma value.
Contains the alpha value.
Specifies how the output alpha values are calculated for blit operations when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Specifies the alpha fill mode, as a member of the
If the FeatureCaps member of the
The default state value is
Zero-based index of the input stream to use for the alpha values. This member is used when the alpha fill mode is
To get the maximum number of streams, call
Specifies the background color for blit operations, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The background color is used to fill the target rectangle wherever no video image appears. Areas outside the target rectangle are not affected. See
The color space of the background color is determined by the color space of the output. See
The alpha value of the background color is used only when the alpha fill mode is
The default background color is full-range RGB black, with opaque alpha.
- If TRUE, the BackgroundColor member specifies a YCbCr color. Otherwise, it specifies an RGB color. The default device state is
A
Specifies whether the output is downsampled in a blit operation, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-If the Enable member is TRUE, the device downsamples the composed target rectangle to the size given in the Size member, and then scales it back to the size of the target rectangle.
The width and height of Size must be greater than zero. If the size is larger than the target rectangle, downsampling does not occur.
To use this state, the device must support downsampling, indicated by the
If the device does not support downsampling, the
Downsampling is sometimes used to reduce the quality of premium content when other forms of content protection are not available.
-If TRUE, downsampling is enabled. Otherwise, downsampling is disabled and the Size member is ignored. The default state value is
The sampling size. The default value is (1,1).
Specifies the output color space for blit operations, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The RGB_Range member applies to RGB output, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr (YUV) output. If the device performs color-space conversion on the background color, it uses the values that apply to both color spaces.
Extended YCbCr can be used with either transfer matrix. Extended YCbCr does not change the black point or white point: the black point is still 16 and the white point is still 235. However, extended YCbCr explicitly allows blacker-than-black values in the range 1-15, and whiter-than-white values in the range 236-254. When extended YCbCr is used, the driver should not clip the luma values to the nominal 16-235 range.
If the device supports extended YCbCr, it sets the
If the output format is a wide-gamut RGB format, output might fall outside the nominal [0...1] range of sRGB. This is particularly true if one or more input streams use extended YCbCr.
-Specifies whether the output is intended for playback or video processing (such as editing or authoring). The device can optimize the processing based on the type. The default state value is 0 (playback).
Value | Meaning |
---|---|
| Playback. |
| Video processing. |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255). |
| Limited range (16-235). |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601. |
| ITU-R BT.709. |
Specifies whether the output uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is zero (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr. |
| Extended YCbCr (xvYCC). |
Contains data for a private blit state for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Use this structure for proprietary or device-specific state parameters.
The caller allocates the pData array. Set the DataSize member to the size of the array in bytes. When retrieving the state data, you can set pData to
A
The size, in bytes, of the buffer pointed to by the pData member.
A reference to a buffer that contains the private state data. The DXVA-HD runtime passes this buffer directly to the device without validation.
Specifies the target rectangle for blitting, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Specifies whether to use the target rectangle. The default state value is
Value | Meaning |
---|---|
| Use the target rectangle specified by the TargetRect member. |
Use the entire destination surface as the target rectangle. Ignore the TargetRect member. |
Specifies the target rectangle. The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface. The default state value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Defines a color value for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-This union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
-A
A
Specifies an RGB color value.
-The RGB values have a nominal range of [0...1]. For an RGB format with n bits per channel, the value of each color component is calculated as follows:
val = f * ((1 << n)-1)
For example, for RGB-32 (8 bits per channel), val = BYTE(f * 255.0)
.
For full-range RGB, reference black is (0.0, 0.0, 0.0), which corresponds to (0, 0, 0) in an 8-bit representation. For limited-range RGB, reference black is (0.0625, 0.0625, 0.0625), which corresponds to (16, 16, 16) in an 8-bit representation. For wide-gamut formats, the values might fall outside of the [0...1] range.
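The channel conversion described above can be sketched in a few lines. This is illustrative Python, not the native DXVA API; the helper name is made up.

```python
# Hypothetical helper illustrating val = f * ((1 << n) - 1), the
# conversion from a normalized [0...1] channel value to an n-bit integer.
def channel_to_int(f: float, bits: int) -> int:
    """Scale a normalized channel value f to an n-bit integer value."""
    return int(f * ((1 << bits) - 1))

# For RGB-32 (8 bits per channel), 1.0 maps to 255, as in BYTE(f * 255.0).
full_white = channel_to_int(1.0, 8)   # 255
mid_gray   = channel_to_int(0.5, 8)   # 127 (truncated)
```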
-The red value.
The green value.
The blue value.
The alpha value. Values range from 0 (transparent) to 1 (opaque).
Defines a color value for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-This union can represent both RGB and YCbCr colors. The interpretation of the union depends on the context.
-A
A
Describes the configuration of a DXVA decoder device.
-Defines the encryption protocol type for bit-stream data buffers. If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 0, the value must be DXVA_NoEncrypt.
Defines the encryption protocol type for macroblock control data buffers. If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 1, the value must be DXVA_NoEncrypt.
Defines the encryption protocol type for residual difference decoding data buffers (buffers containing spatial-domain data or sets of transform-domain coefficients for accelerator-based IDCT). If no encryption is applied, the value is DXVA_NoEncrypt. If ConfigBitstreamRaw is 1, the value must be DXVA_NoEncrypt.
Indicates whether the host-decoder sends raw bit-stream data. If the value is 1, the data for the pictures will be sent in bit-stream buffers as raw bit-stream content. If the value is 0, picture data will be sent using macroblock control command buffers. If either ConfigResidDiffHost or ConfigResidDiffAccelerator is 1, the value must be 0.
Specifies whether macroblock control commands are in raster scan order or in arbitrary order. If the value is 1, the macroblock control commands within each macroblock control command buffer are in raster-scan order. If the value is 0, the order is arbitrary. For some types of bit streams, forcing raster order either greatly increases the number of required macroblock control buffers that must be processed, or requires host reordering of the control information. Therefore, supporting arbitrary order can be more efficient.
Contains the host residual difference configuration. If the value is 1, some residual difference decoding data may be sent as blocks in the spatial domain from the host. If the value is 0, spatial domain data will not be sent.
Indicates the word size used to represent residual difference spatial-domain blocks for predicted (non-intra) pictures when using host-based residual difference decoding.
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 1, the host will send residual difference spatial-domain blocks for non-intra macroblocks using 8-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 1 and ConfigSpatialResid8 is 0, the host will send residual difference spatial-domain blocks of data for non-intra macroblocks using 16-bit signed samples and for intra macroblocks in predicted (non-intra) pictures in a format that depends on the value of ConfigIntraResidUnsigned:
If ConfigResidDiffHost is 0, ConfigSpatialResid8 must be 0.
For intra pictures, spatial-domain blocks must be sent using 8-bit samples if bits-per-pixel (BPP) is 8, and using 16-bit samples if BPP > 8. If ConfigIntraResidUnsigned is 0, these samples are sent as signed integer values relative to a constant reference value of 2^(BPP-1), and if ConfigIntraResidUnsigned is 1, these samples are sent as unsigned integer values relative to a constant reference value of 0.
If the value is 1, 8-bit difference overflow blocks are subtracted rather than added. The value must be 0 unless ConfigSpatialResid8 is 1.
The ability to subtract differences rather than add them enables 8-bit difference decoding to be fully compliant with the full ±255 range of values required in video decoder specifications, because +255 cannot be represented as the addition of two signed 8-bit numbers, but any number in the range ±255 can be represented as the difference between two signed 8-bit numbers (+255 = +127 minus -128).
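The range claim above is easy to verify by brute force. This is a quick illustrative check, not part of the API:

```python
# Sums of two signed 8-bit values cover [-256, 254], so +255 is
# unreachable by addition, while differences cover all of [-255, 255]
# (for example, +255 = 127 - (-128)).
int8 = range(-128, 128)
sums  = {a + b for a in int8 for b in int8}
diffs = {a - b for a in int8 for b in int8}
```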
If the value is 1, spatial-domain blocks for intra macroblocks must be clipped to an 8-bit range on the host and spatial-domain blocks for non-intra macroblocks must be clipped to a 9-bit range on the host. If the value is 0, no such clipping is necessary by the host.
The value must be 0 unless ConfigSpatialResid8 is 0 and ConfigResidDiffHost is 1.
If the value is 1, any spatial-domain residual difference data must be sent in a chrominance-interleaved form matching the YUV format chrominance interleaving pattern. The value must be 0 unless ConfigResidDiffHost is 1 and the YUV format is NV12 or NV21.
Indicates the method of representation of spatial-domain blocks of residual difference data for intra blocks when using host-based difference decoding.
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 0, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
If ConfigResidDiffHost is 1 and ConfigIntraResidUnsigned is 1, spatial-domain residual difference data blocks for intra macroblocks must be sent as follows:
The value of the member must be 0 unless ConfigResidDiffHost is 1.
If the value is 1, transform-domain blocks of coefficient data may be sent from the host for accelerator-based IDCT. If the value is 0, accelerator-based IDCT will not be used. If both ConfigResidDiffHost and ConfigResidDiffAccelerator are 1, this indicates that some residual difference decoding will be done on the host and some on the accelerator, as indicated by macroblock-level control commands.
The value must be 0 if ConfigBitstreamRaw is 1.
If the value is 1, the inverse scan for transform-domain block processing will be performed on the host, and absolute indices will be sent instead for any transform coefficients. If the value is 0, the inverse scan will be performed on the accelerator.
The value must be 0 if ConfigResidDiffAccelerator is 0 or if Config4GroupedCoefs is 1.
If the value is 1, the IDCT specified in Annex W of ITU-T Recommendation H.263 is used. If the value is 0, any compliant IDCT can be used for off-host IDCT.
The H.263 annex does not comply with the IDCT requirements of MPEG-2 corrigendum 2, so the value must not be 1 for use with MPEG-2 video.
The value must be 0 if ConfigResidDiffAccelerator is 0, indicating purely host-based residual difference decoding.
If the value is 1, transform coefficients for off-host IDCT will be sent using the DXVA_TCoef4Group structure. If the value is 0, the DXVA_TCoefSingle structure is used. The value must be 0 if ConfigResidDiffAccelerator is 0 or if ConfigHostInverseScan is 1.
Specifies how many frames the decoder device processes at any one time.
Contains decoder-specific configuration information.
Describes a video stream for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
The display driver can use the information in this structure to optimize the capabilities of the video processor. For example, some capabilities might not be exposed for high-definition (HD) content, for performance reasons.
-Frame rates are expressed as ratios. For example, 30 frames per second (fps) is expressed as 30:1, and 29.97 fps is expressed as 30000:1001. For interlaced content, a frame consists of two fields, so that the frame rate is half the field rate.
If the application will composite two or more input streams, use the largest stream for the values of InputWidth and InputHeight.
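The frame-rate conventions above can be sketched with exact rational arithmetic; this Python is illustrative only and is not the native structure:

```python
from fractions import Fraction

# Frame rates as ratios, per the remarks above.
fps_30   = Fraction(30, 1)        # 30 fps is expressed as 30:1
fps_2997 = Fraction(30000, 1001)  # 29.97 fps is expressed as 30000:1001

# For interlaced content a frame is two fields, so the frame rate is
# half the field rate: a 60 fields/s stream has a 30 fps frame rate.
field_rate = Fraction(60, 1)
frame_rate = field_rate / 2
```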
-A member of the
The frame rate of the input video stream, specified as a
The width of the input frames, in pixels.
The height of the input frames, in pixels.
The frame rate of the output video stream, specified as a
The width of the output frames, in pixels.
The height of the output frames, in pixels.
Specifies a custom rate for frame-rate conversion or inverse telecine (IVTC).
-The CustomRate member gives the rate conversion factor, while the remaining members define the pattern of input and output samples.
Here are some example uses for this structure:
Frame rate conversion from 60p to 120p (doubling the frame rate).
Reverse 2:3 pulldown (IVTC) from 60i to 24p.
(Ten interlaced fields are converted into four progressive frames.)
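The two examples above can be written out numerically. The dictionary keys mirror the members documented below (CustomRate, OutputFrames, InputInterlaced, InputFramesOrFields), but the dictionaries themselves are purely illustrative, not the native structure:

```python
from fractions import Fraction

# 60p -> 120p: the output rate is twice the input rate.
double_rate = {
    "CustomRate": Fraction(2, 1),
    "OutputFrames": 2,           # 2 output frames per input frame
    "InputInterlaced": False,
    "InputFramesOrFields": 1,
}

# Reverse 2:3 pulldown (IVTC), 60i -> 24p: ten interlaced fields become
# four progressive frames. The input frame rate of interlaced 60i is
# half the field rate (30), so the conversion factor is 24/30 = 4/5.
ivtc = {
    "CustomRate": Fraction(24000, 30000),  # reduces to 4/5
    "OutputFrames": 4,
    "InputInterlaced": True,
    "InputFramesOrFields": 10,
}
```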
The ratio of the output frame rate to the input frame rate, expressed as a
The number of output frames that will be generated for every N input samples, where N = InputFramesOrFields.
If TRUE, the input stream must be interlaced. Otherwise, the input stream must be progressive.
The number of input fields or frames for every N output frames that will be generated, where N = OutputFrames.
Describes a buffer sent from a decoder to a DirectX Video Acceleration (DXVA) device.
-This structure corresponds closely to the DXVA_BufferDescription structure in DXVA 1, but some of the fields are no longer used in DXVA 2.
-Identifies the type of buffer passed to the accelerator. Must be one of the following values.
Value | Meaning |
---|---|
Picture decoding parameter buffer. | |
Macroblock control command buffer. | |
Residual difference block data buffer. | |
Deblocking filter control command buffer. | |
Inverse quantization matrix buffer. | |
Slice-control buffer. | |
Bitstream data buffer. | |
Motion vector buffer. | |
Film grain synthesis data buffer. |
Reserved. Set to zero.
Specifies the offset of the relevant data from the beginning of the buffer, in bytes. Currently this value must be zero.
Specifies the amount of relevant data in the buffer, in bytes. The location of the last byte of content in the buffer is DataOffset + DataSize - 1.
Specifies the macroblock address of the first macroblock in the buffer. The macroblock address is given in raster scan order.
Specifies the number of macroblocks of data in the buffer. This count includes skipped macroblocks. This value must be zero if the data buffer type is one of the following: picture decoding parameters, inverse-quantization matrix, AYUV, IA44/AI44, DPXD, Highlight, or DCCMD.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Reserved. Set to zero.
Pointer to a byte array that contains an initialization vector (IV) for encrypted data. If the decode buffer does not contain encrypted data, set this member to
Contains parameters for the
Contains private data for the
This structure corresponds to parameters of the IAMVideoAccelerator::Execute method in DirectX Video Acceleration (DXVA) version 1.
-Describes the format of a video stream.
-Most of the values in this structure can be translated directly to and from
Describes the interlacing of the video frames. Contains a value from the
Describes the chroma siting. Contains a value from the
Describes the nominal range of the Y'CbCr or RGB color data. Contains a value from the
Describes the transform from Y'PbPr (component video) to studio R'G'B'. Contains a value from the
Describes the intended viewing conditions. Contains a value from the
Describes the color primaries. Contains a value from the
Describes the gamma correction transfer function. Contains a value from the
Use this member to access all of the bits in the union.
Defines the range of supported values for an image filter.
-The multiplier enables the filter range to have a fractional step value.
For example, a hue filter might have an actual range of [-180.0 ... +180.0] with a step size of 0.25. The device would report the following range and multiplier:
In this case, a filter value of 2 would be interpreted by the device as 0.50 (or 2 × 0.25).
The device should use a multiplier that can be represented exactly as a base-2 fraction.
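The hue-filter example above can be made concrete. The values and helper below are illustrative (computed from the stated range and step, not quoted from the API):

```python
# One consistent way a device could report a hue filter with an actual
# range of [-180.0 ... +180.0] and a step size of 0.25:
hue_range = {"Min": -720, "Max": 720, "Default": 0, "Multiplier": 0.25}

def actual_value(set_value: int, multiplier: float) -> float:
    """Actual Value = Set Value * Multiplier."""
    return set_value * multiplier

# A filter value of 2 is interpreted by the device as 0.50.
# 0.25 is exactly representable as a base-2 fraction, so no rounding occurs.
```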
-The minimum value of the filter.
The maximum value of the filter.
The default value of the filter.
A multiplier. Use the following formula to translate the filter setting into the actual filter value: Actual Value = Set Value × Multiplier.
Contains parameters for a DirectX Video Acceleration (DXVA) image filter.
-Filter level.
Filter threshold.
Filter radius.
Returns a
You can use this function for DirectX Video Acceleration (DXVA) operations that require alpha values expressed as fixed-point numbers.
-
Defines a video frequency.
-The value 0/0 indicates an unknown frequency. Values of the form n/0, where n is not zero, are invalid. Values of the form 0/n, where n is not zero, indicate a frequency of zero.
-Numerator of the frequency.
Denominator of the frequency.
Contains values for DirectX Video Acceleration (DXVA) video processing operations.
-Brightness value.
Contrast value.
Hue value.
Saturation value.
Contains a rational number (ratio).
-Values of the form 0/n are interpreted as zero. The value 0/0 is interpreted as zero. However, these values are not necessarily valid in all contexts.
Values of the form n/0, where n is nonzero, are invalid.
-The numerator of the ratio.
The denominator of the ratio.
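The interpretation rules above can be expressed as a small helper. This is a hypothetical illustration of the conventions, not part of the API:

```python
def ratio_value(numerator: int, denominator: int) -> float:
    """Interpret a ratio per the rules above: 0/n and 0/0 are zero;
    n/0 with nonzero n is invalid."""
    if numerator == 0:
        return 0.0
    if denominator == 0:
        raise ValueError("n/0 with nonzero n is invalid")
    return numerator / denominator
```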
Contains per-stream data for the
Specifies the planar alpha value for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-For each pixel, the destination color value is computed as follows:
Cd = Cs * (As * Ap * Ae) + Cd * (1.0 - As * Ap * Ae)
where
Cd
= Color value of the destination pixel.Cs
= Color value of source pixel.As
= Per-pixel source alpha.Ap
= Planar alpha value.Ae
= Palette-entry alpha value, or 1.0 (see Note). Note: Palette-entry alpha values apply only to palettized color formats, and only when the device supports the
The destination alpha value is computed according to the
To get the device capabilities, call
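The per-pixel blend formula above can be sketched as a per-channel function. All names are illustrative; this is not the native API:

```python
def blend(cs: float, cd: float, a_src: float, a_planar: float,
          a_pal: float = 1.0) -> float:
    """Cd = Cs*(As*Ap*Ae) + Cd*(1 - As*Ap*Ae), applied per channel.

    cs/cd are source/destination color, a_src the per-pixel source alpha,
    a_planar the planar alpha, a_pal the palette-entry alpha (or 1.0).
    """
    a = a_src * a_planar * a_pal
    return cs * a + cd * (1.0 - a)

# With planar alpha 0.0 the destination is unchanged; with all alphas
# at 1.0 the source replaces the destination.
```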
If TRUE, alpha blending is enabled. Otherwise, alpha blending is disabled. The default state value is
Specifies the planar alpha value as a floating-point number from 0.0 (transparent) to 1.0 (opaque).
If the Enable member is
Specifies the pixel aspect ratio (PAR) for the source and destination rectangles.
-Pixel aspect ratios of the form 0/n and n/0 are not valid.
If the Enable member is
If TRUE, the SourceAspectRatio and DestinationAspectRatio members contain valid values. Otherwise, the pixel aspect ratios are unspecified.
A
A
Specifies the format for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The surface format, specified as a
The default state value is
Specifies the destination rectangle for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-Specifies whether to use the destination rectangle, or use the entire output surface. The default state value is
Value | Meaning |
---|---|
| Use the destination rectangle given in the DestinationRect member. |
Use the entire output surface as the destination rectangle. |
The destination rectangle, which defines the portion of the output surface where the source rectangle is blitted. The destination rectangle is given in pixel coordinates, relative to the output surface. The default value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Specifies the level for a filtering operation on a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
-For a list of image filters that are defined for DXVA-HD, see
If TRUE, the filter is enabled. Otherwise, the filter is disabled.
The level for the filter. The meaning of this value depends on the implementation. To get the range and default value of a particular filter, call the
If the Enable member is
Specifies how a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream is interlaced.
-Some devices do not support interlaced RGB. Interlaced RGB support is indicated by the
Some devices do not support interlaced formats with palettized color. This support is indicated by the
To get the device's capabilities, call
The video interlacing, specified as a
The default state value is
Specifies the color space for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
-The RGB_Range member applies to RGB input, while the YCbCr_Matrix and YCbCr_xvYCC members apply to YCbCr (YUV) input.
In some situations, the device might perform an intermediate color conversion on the input stream. If so, it uses the flags that apply to both color spaces. For example, suppose the device converts from RGB to YCbCr. If the RGB_Range member is 0 and the YCbCr_Matrix member is 1, the device will convert from full-range RGB to BT.709 YCbCr.
If the device supports xvYCC, it returns the
Specifies whether the input stream contains video or graphics. The device can optimize the processing based on the type. The default state value is 0 (video).
Value | Meaning |
---|---|
| Video. |
| Graphics. |
Specifies the RGB color range. The default state value is 0 (full range).
Value | Meaning |
---|---|
| Full range (0-255). |
| Limited range (16-235). |
Specifies the YCbCr transfer matrix. The default state value is 0 (BT.601).
Value | Meaning |
---|---|
| ITU-R BT.601. |
| ITU-R BT.709. |
Specifies whether the input stream uses conventional YCbCr or extended YCbCr (xvYCC). The default state value is 0 (conventional YCbCr).
Value | Meaning |
---|---|
| Conventional YCbCr. |
| Extended YCbCr (xvYCC). |
Specifies the luma key for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-To use this state, the device must support luma keying, indicated by the
If the device does not support luma keying, the
If the input format is RGB, the device must also support the
The values of Lower and Upper give the lower and upper bounds of the luma key, using a nominal range of [0...1]. Given a format with n bits per channel, these values are converted to luma values as follows:
val = f * ((1 << n)-1)
Any pixel whose luma value falls within the upper and lower bounds (inclusive) is treated as transparent.
For example, if the pixel format uses 8-bit luma, the upper bound is calculated as follows:
BYTE Y = BYTE(max(min(1.0, Upper), 0.0) * 255.0)
Note that the value is clamped to the range [0...1] before multiplying by 255.
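The bound calculation and the transparency test above can be sketched together; helper names are made up for illustration:

```python
def luma_bound(f: float, bits: int = 8) -> int:
    """Clamp f to [0...1], then scale: val = f * ((1 << bits) - 1)."""
    f = max(min(1.0, f), 0.0)
    return int(f * ((1 << bits) - 1))

def is_transparent(y: int, lower: float, upper: float, bits: int = 8) -> bool:
    """A pixel is transparent if its luma falls within the bounds, inclusive."""
    return luma_bound(lower, bits) <= y <= luma_bound(upper, bits)
```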
- If TRUE, luma keying is enabled. Otherwise, luma keying is disabled. The default value is
The lower bound for the luma key. The range is [0...1]. The default state value is 0.0.
The upper bound for the luma key. The range is [0...1]. The default state value is 0.0.
Specifies the output frame rate for an input stream when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-The output rate might require the device to convert the frame rate of the input stream. If so, the value of RepeatFrame controls whether the device creates interpolated frames or simply repeats input frames.
-Specifies how the device performs frame-rate conversion, if required. The default state value is
Value | Meaning |
---|---|
| The device repeats frames. |
The device interpolates frames. |
Specifies the output rate, as a member of the
Specifies a custom output rate, as a
To get the list of custom rates supported by the video processor, call
Contains the color palette entries for an input stream, when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-This stream state is used for input streams that have a palettized color format. Palettized formats with 4 bits per pixel (bpp) use the first 16 entries in the list. Formats with 8 bpp use the first 256 entries.
If a pixel has a palette index greater than the number of entries, the device treats the pixel as being white with opaque alpha. For full-range RGB, this value will be (255, 255, 255, 255); for YCbCr the value will be (255, 235, 128, 128).
The caller allocates the pEntries array. Set the Count member to the number of elements in the array. When retrieving the state data, you can set the pEntries member to
If the DXVA-HD device does not have the
To get the device capabilities, call
The number of palette entries. The default state value is 0.
A reference to an array of
Contains data for a private stream state, for a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) input stream.
-Use this structure for proprietary or device-specific state parameters.
The caller allocates the pData array. Set the DataSize member to the size of the array in bytes. When retrieving the state data, you can set the pData member to
A
Value | Meaning |
---|---|
| Retrieves statistics about inverse telecine. The state data (pData) is a |
A device can define additional GUIDs for use with custom stream states. The interpretation of the data is then defined by the device.
The size, in bytes, of the buffer pointed to by the pData member.
A reference to a buffer that contains the private state data. The DXVA-HD runtime passes this buffer directly to the device, without validation.
Contains inverse telecine (IVTC) statistics from a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-If the DXVA-HD device supports IVTC statistics, it can detect when the input video contains telecined frames. You can use this information to enable IVTC in the device.
To enable IVTC statistics, do the following:
sizeof( )
.To get the most recent IVTC statistics from the device, call the
Typically, an application would use this feature as follows:
Specifies whether IVTC statistics are enabled. The default state value is
If the driver detects that the frames are telecined, and is able to perform inverse telecine, this field contains a member of the
The number of consecutive telecined frames that the device has detected.
The index of the most recent input field. The value of this member equals the most recent value of the InputFrameOrField member of the
Specifies the source rectangle for an input stream when using Microsoft DirectX Video Acceleration High Definition (DXVA-HD)
-Specifies whether to blit the entire input surface or just the source rectangle. The default state value is
Value | Meaning |
---|---|
| Use the source rectangle specified in the SourceRect member. |
Blit the entire input surface. Ignore the SourceRect member. |
The source rectangle, which defines the portion of the input sample that is blitted to the destination surface. The source rectangle is given in pixel coordinates, relative to the input surface. The default state value is an empty rectangle, (0, 0, 0, 0).
If the Enable member is
Contains references to functions implemented by a software plug-in for Microsoft DirectX Video Acceleration High Definition (DXVA-HD).
-If you provide a software plug-in for DXVA-HD, the plug-in must implement a set of functions that are defined by the function reference types in this structure.
At initialization, the DXVA-HD runtime calls the plug-in device's PDXVAHDSW_Plugin function. This function fills in a
Function reference of type PDXVAHDSW_CreateDevice.
Function reference of type PDXVAHDSW_ProposeVideoPrivateFormat.
Function reference of type PDXVAHDSW_GetVideoProcessorDeviceCaps.
Function reference of type PDXVAHDSW_GetVideoProcessorOutputFormats.
Function reference of type PDXVAHDSW_GetVideoProcessorInputFormats.
Function reference of type PDXVAHDSW_GetVideoProcessorCaps.
Function reference of type PDXVAHDSW_GetVideoProcessorCustomRates.
Function reference of type PDXVAHDSW_GetVideoProcessorFilterRange.
Function reference of type PDXVAHDSW_DestroyDevice.
Function reference of type PDXVAHDSW_CreateVideoProcessor.
Function reference of type PDXVAHDSW_SetVideoProcessBltState.
Function reference of type PDXVAHDSW_GetVideoProcessBltStatePrivate.
Function reference of type PDXVAHDSW_SetVideoProcessStreamState.
Function reference of type PDXVAHDSW_GetVideoProcessStreamStatePrivate.
Function reference of type PDXVAHDSW_VideoProcessBltHD.
Function reference of type PDXVAHDSW_DestroyVideoProcessor.
Defines the range of supported values for a DirectX Video Acceleration (DXVA) operation.
-All values in this structure are specified as
Minimum supported value.
Maximum supported value.
Default value.
Minimum increment between values.
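The four members above (minimum, maximum, default, increment) define which filter values a caller may set. A minimal sketch of validating and clamping a value against such a range, using an illustrative struct (the field names here are assumptions, not the exact SDK definition, and a positive increment is assumed):

```cpp
#include <cassert>

// Illustrative stand-in for the range structure described above.
struct FilterRange {
    int minimum;    // minimum supported value
    int maximum;    // maximum supported value
    int defaultVal; // default value
    int increment;  // minimum increment between values (assumed > 0)
};

// A value is valid if it lies in [minimum, maximum] and is reachable
// from the minimum in whole increments.
bool IsValidFilterValue(const FilterRange& r, int value) {
    if (value < r.minimum || value > r.maximum) return false;
    return (value - r.minimum) % r.increment == 0;
}

// Clamp an arbitrary value to the nearest valid setting at or below it.
int ClampFilterValue(const FilterRange& r, int value) {
    if (value < r.minimum) return r.minimum;
    if (value > r.maximum) return r.maximum;
    int steps = (value - r.minimum) / r.increment;
    return r.minimum + steps * r.increment;
}
```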
Describes a video stream for a DXVA decoder device or video processor device.
-The InputSampleFreq member gives the frame rate of the decoded video stream, as received by the video renderer. The OutputFrameFreq member gives the frame rate of the video that is displayed after deinterlacing. If the input video is interlaced and the samples contain interleaved fields, the output frame rate is twice the input frame rate. If the input video is progressive or contains single fields, the output frame rate is the same as the input frame rate.
Decoders should set the values of InputSampleFreq and OutputFrameFreq if the frame rate is known. Otherwise, set these members to 0/0 to indicate an unknown frame rate.
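The input/output frame-rate relationship above can be sketched as a small helper. The `Ratio` struct and function name are illustrative assumptions, not the SDK's types:

```cpp
#include <cassert>

// Frame rate expressed as a ratio (numerator/denominator), as described above.
struct Ratio { unsigned num; unsigned den; };

// Hypothetical helper: derive the output frame rate from the input sample
// rate. For interlaced content with interleaved fields, the output rate is
// double the input rate; for progressive or single-field input they match.
Ratio OutputFrameRate(Ratio input, bool interlacedInterleavedFields) {
    if (interlacedInterleavedFields)
        return Ratio{ input.num * 2, input.den };
    return input;
}
```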
-Width of the video frame, in pixels.
Height of the video frame, in pixels.
Additional details about the video format, specified as a
Surface format, specified as a
Frame rate of the input video stream, specified as a
Frame rate of the output video, specified as a
Level of data protection required when the user accessible bus (UAB) is present. If TRUE, the video must be protected when a UAB is present. If
Reserved. Must be zero.
Contains parameters for the
Describes the capabilities of a DirectX Video Acceleration (DXVA) video processor mode.
-Identifies the type of device. The following values are defined.
Value | Meaning |
---|---|
DXVA 2.0 video processing is emulated by using DXVA 1.0. An emulated device may be missing significant processing capabilities and have lower image quality and performance. | |
Hardware device. | |
Software device. |
The Direct3D memory pool used by the device.
Number of forward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Number of backward reference samples the device needs to perform deinterlacing. For the bob, progressive scan, and software devices, the value is zero.
Reserved. Must be zero.
Identifies the deinterlacing technique used by the device. This value is a bitwise OR of one or more of the following flags.
Value | Meaning |
---|---|
The algorithm is unknown or proprietary. | |
The algorithm creates missing lines by repeating the line either above or below the missing line. This algorithm produces a jagged image and is not recommended. | |
The algorithm creates missing lines by averaging two lines. Slight vertical adjustments are made so that the resulting image does not bob up and down. | |
The algorithm creates missing lines by applying a [-1, 9, 9, -1]/16 filter across four lines. Slight vertical adjustments are made so that the resulting image does not bob up and down. | |
The algorithm uses median filtering to recreate the pixels in the missing lines. | |
The algorithm uses an edge filter to create the missing lines. In this process, spatial directional filters are applied to determine the orientation of edges in the picture content. Missing pixels are created by filtering along (rather than across) the detected edges. | |
The algorithm uses spatial or temporal interpolation, switching between the two on a field-by-field basis, depending on the amount of motion. | |
The algorithm uses spatial or temporal interpolation, switching between the two on a pixel-by-pixel basis, depending on the amount of motion. | |
The algorithm identifies objects within a sequence of video fields. Before it recreates the missing pixels, it aligns the movement axes of the individual objects in the scene to make them parallel with the time axis. | |
The device can undo the 3:2 pulldown process used in telecine. |
Specifies the available video processor (ProcAmp) operations. The value is a bitwise OR of ProcAmp Settings constants.
Specifies operations that the device can perform concurrently with the
Value | Meaning |
---|---|
The device can convert the video from YUV color space to RGB color space, with at least 8 bits of precision for each RGB component. | |
The device can stretch or shrink the video horizontally. If this capability is present, aspect ratio correction can be performed at the same time as deinterlacing. | |
The device can stretch or shrink the video vertically. If this capability is present, image resizing and aspect ratio correction can be performed at the same time. | |
The device can alpha blend the video. | |
The device can operate on a subrectangle of the video frame. If this capability is present, source images can be cropped before further processing occurs. | |
The device can accept substreams in addition to the primary video stream, and can composite them. | |
The device can perform color adjustments on the primary video stream and substreams, at the same time that it deinterlaces the video and composites the substreams. The destination color space is defined in the DestFormat member of the | |
The device can convert the video from YUV to RGB color space when it writes the deinterlaced and composited pixels to the destination surface. An RGB destination surface could be an off-screen surface, texture, Direct3D render target, or combined texture/render target surface. An RGB destination surface must use at least 8 bits for each color channel. | |
The device can perform an alpha blend operation with the destination surface when it writes the deinterlaced and composited pixels to the destination surface. | |
The device can downsample the output frame, as specified by the ConstrictionSize member of the | |
The device can perform noise filtering. | |
The device can perform detail filtering. | |
The device can perform a constant alpha blend to the entire video stream when it composites the video stream and substreams. | |
The device can perform accurate linear RGB scaling, rather than performing scaling in nonlinear gamma space. | |
The device can correct the image to compensate for artifacts introduced when performing scaling in nonlinear gamma space. | |
The deinterlacing algorithm preserves the original field lines from the interlaced field picture, unless scaling is also applied. For example, in deinterlacing algorithms such as bob and median filtering, the device copies the original field into every other scan line and then applies a filter to reconstruct the missing scan lines. As a result, the original field can be recovered by discarding the scan lines that were interpolated. If the image is scaled vertically, however, the original field lines cannot be recovered. If the image is scaled horizontally (but not vertically), the resulting field lines will be equivalent to scaling the original field picture. (In other words, discarding the interpolated scan lines will yield the same result as stretching the original picture without deinterlacing.) |
Specifies the supported noise filters. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
Noise filtering is not supported. | |
Unknown or proprietary filter. | |
Median filter. | |
Temporal filter. | |
Block noise filter. | |
Mosquito noise filter. |
Specifies the supported detail filters. The value is a bitwise OR of the following flags.
Value | Meaning |
---|---|
Detail filtering is not supported. | |
Unknown or proprietary filter. | |
Edge filter. | |
Sharpen filter. |
Specifies an input sample for the
Specifies the capabilities of the Microsoft DirectX Video Acceleration High Definition (DXVA-HD) video processor.
-A
The number of past reference frames required to perform the optimal video processing.
The number of future reference frames required to perform the optimal video processing.
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The number of custom output frame rates. To get the list of custom frame rates, call the
Specifies the capabilities of a Microsoft DirectX Video Acceleration High Definition (DXVA-HD) device.
-In DXVA-HD, the device stores state information for each input stream. These states persist between blits. With each blit, the application selects which streams to enable or disable. Disabling a stream does not affect the state information for that stream.
The MaxStreamStates member gives the maximum number of stream states that can be set by the application. The MaxInputStreams member gives the maximum number of streams that can be enabled during a blit. These two values can differ.
To set the state data for a stream, call
Specifies the device type, as a member of the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
A bitwise OR of zero or more flags from the
The memory pool that is required for the input video surfaces.
The number of supported output formats. To get the list of output formats, call the
The number of supported input formats. To get the list of input formats, call the
The number of video processors. Each video processor represents a distinct set of processing capabilities. To get the capabilities of each video processor, call the
The maximum number of input streams that can be enabled at the same time.
The maximum number of input streams for which the device can store state data.
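The distinction above between the two limits (streams with persistent state versus streams enabled per blit) can be sketched as a validation check. The struct and function names below are illustrative assumptions, not the SDK's:

```cpp
#include <cassert>

// Illustrative stand-in for the two capability limits described above.
struct VpCaps {
    unsigned maxInputStreams; // streams that can be enabled per blit
    unsigned maxStreamStates; // streams that can carry persistent state
};

// A configuration is valid if the number of streams carrying state fits
// within MaxStreamStates, and the number enabled for a given blit fits
// within both the stored-state count and MaxInputStreams.
bool CanConfigureStreams(const VpCaps& caps,
                         unsigned streamsWithState,
                         unsigned streamsEnabledThisBlit) {
    return streamsWithState <= caps.maxStreamStates &&
           streamsEnabledThisBlit <= streamsWithState &&
           streamsEnabledThisBlit <= caps.maxInputStreams;
}
```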
Enables two threads to share the same Microsoft Direct3D 11 device.
-This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Creates an instance of the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Sets the Microsoft Direct3D device or notifies the device manager that the Direct3D device was reset.
-A reference to the
When you first create the DXGI Device Manager, call this method with a reference to the Direct3D device. (The device manager does not create the device; the caller must provide the device reference initially.) Also call this method if the Direct3D device becomes lost and you need to reset the device or create a new device.
The resetToken parameter ensures that only the component that originally created the device manager can invalidate the current device.
If this method succeeds, all open device handles become invalid.
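The reset-token handshake and handle invalidation described above can be modeled with a toy class. This is a sketch of the pattern only, not the real DXGI Device Manager implementation; all names are invented:

```cpp
#include <cassert>
#include <cstdint>

// Toy model: only the holder of the token issued at creation may replace
// the device, and a successful reset invalidates all open handles.
class ToyDeviceManager {
    uint32_t token_;
    int generation_ = 0;  // bumped on every successful reset
public:
    explicit ToyDeviceManager(uint32_t token) : token_(token) {}

    struct Handle { int generation; };

    // Mirrors ResetDevice: fails unless the caller presents the original token.
    bool ResetDevice(uint32_t token) {
        if (token != token_) return false;  // wrong token: reset refused
        ++generation_;                      // all open handles become stale
        return true;
    }
    Handle OpenDeviceHandle() const { return Handle{ generation_ }; }
    bool IsHandleValid(Handle h) const { return h.generation == generation_; }
};
```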
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Unlocks the Microsoft Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
Call this method to release the device after calling
Enables two threads to share the same Microsoft Direct3D 11 device.
-This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
Queries the Microsoft Direct3D device for an interface.
-A handle to the Direct3D device. To get the device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device supports the following interfaces:
Receives a reference to the requested interface. The caller must release the interface.
If the method returns
For more info see, Supporting Direct3D 11 Video Decoding in Media Foundation.
-Gives the caller exclusive access to the Microsoft Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
The interface identifier (IID) of the requested interface. The Direct3D device will support the following interfaces:
Specifies whether to wait for the device lock. If the device is already locked and this parameter is TRUE, the method blocks until the device is unlocked. Otherwise, if the device is locked and this parameter is
Receives a reference to the requested interface. The caller must release the interface.
When you are done using the Direct3D device, call
If the method returns
If fBlock is TRUE, this method can potentially deadlock. For example, it will deadlock if a thread calls LockDevice and then waits on another thread that calls LockDevice. It will also deadlock if a thread calls LockDevice twice without calling UnlockDevice in between.
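The blocking versus non-blocking semantics of the fBlock parameter can be sketched with a toy lock. This is an illustration, not the real device lock; names are invented. Note that, as with the real API, calling the blocking form twice from one thread without an intervening unlock would spin forever:

```cpp
#include <atomic>
#include <cassert>

// Toy stand-in for the Direct3D device lock described above.
class ToyDeviceLock {
    std::atomic<bool> locked_{ false };
public:
    // fBlock == true:  wait (here: spin) until the device is unlocked.
    // fBlock == false: fail immediately if the device is already locked.
    bool Lock(bool fBlock) {
        bool expected = false;
        while (!locked_.compare_exchange_strong(expected, true)) {
            if (!fBlock) return false;  // locked and caller won't wait
            expected = false;           // retry until unlocked (sketch only)
        }
        return true;
    }
    void Unlock() { locked_.store(false); }
};
```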
-Gets a handle to the Microsoft Direct3D device.
-Receives the device handle.
Enables two threads to share the same Microsoft Direct3D 11 device.
-This interface is exposed by the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager. To create the DXGI Device Manager, call the
When you create an
For Microsoft Direct3D 9 devices, use the IDirect3DDeviceManager9 interface.
Windows Store apps must use
Tests whether a Microsoft Direct3D device handle is valid.
-A handle to the Direct3D device. To get the device handle, call
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The specified handle is not a Direct3D device handle. |
| The device handle is invalid. |
If the method returns
Unlocks the Microsoft Direct3D device.
-A handle to the Direct3D device. To get the device handle, call
Reserved.
If this method succeeds, it returns
Call this method to release the device after calling
Defines the ASF indexer options.
-The indexer creates a new index object.
The indexer returns values for reverse playback.
The indexer creates an index object for a live ASF stream.
Defines the ASF multiplexer options.
-The multiplexer automatically adjusts the bit rate of the ASF content in response to the characteristics of the streams being multiplexed.
Defines the selection options for an ASF stream.
-No samples from the stream are delivered.
Only samples from the stream that are clean points are delivered.
All samples from the stream are delivered.
Defines the ASF splitter options.
-The splitter delivers samples for the ASF content in reverse order to accommodate reverse playback.
The splitter delivers samples for streams that are protected with Windows Media Digital Rights Management.
Defines status conditions for the
Defines the ASF stream selector options.
-The stream selector will not set thinning. Thinning is the process of removing samples from a stream to reduce the bit rate.
The stream selector will use the average bit rate of streams when selecting streams.
Specifies the type of work queue for the
Defines flags for serializing and deserializing attribute stores.
-If this flag is set,
Specifies how to compare the attributes on two objects.
-Check whether all the attributes in pThis exist in pTheirs and have the same data, where pThis is the object whose Compare method is being called and pTheirs is the object given in the pTheirs parameter.
Check whether all the attributes in pTheirs exist in pThis and have the same data, where pThis is the object whose Compare method is being called and pTheirs is the object given in the pTheirs parameter.
Check whether both objects have identical attributes with the same data.
Check whether the attributes that exist in both objects have the same data.
Find the object with the fewest number of attributes, and check if those attributes exist in the other object and have the same data.
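The five comparison modes above reduce to subset, equality, and intersection checks over key/value pairs. A minimal sketch using a plain map as a stand-in for an attribute store (helper names are invented, and C++17 is assumed):

```cpp
#include <cassert>
#include <map>
#include <string>

using Attrs = std::map<std::string, int>;

// True if every attribute in `subset` exists in `superset` with the same
// data (the "ours in theirs" / "theirs in ours" checks, depending on roles).
bool ContainsAll(const Attrs& superset, const Attrs& subset) {
    for (const auto& [key, value] : subset) {
        auto it = superset.find(key);
        if (it == superset.end() || it->second != value) return false;
    }
    return true;
}

// "All items": both objects have identical attributes with the same data.
bool MatchAllItems(const Attrs& a, const Attrs& b) {
    return ContainsAll(a, b) && ContainsAll(b, a);
}

// "Intersection": attributes present in both objects have the same data.
bool MatchIntersection(const Attrs& a, const Attrs& b) {
    for (const auto& [key, value] : a) {
        auto it = b.find(key);
        if (it != b.end() && it->second != value) return false;
    }
    return true;
}

// "Smaller": the object with fewer attributes must be a subset of the other.
bool MatchSmaller(const Attrs& a, const Attrs& b) {
    return a.size() <= b.size() ? ContainsAll(b, a) : ContainsAll(a, b);
}
```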
Defines the data type for a key/value pair.
-Unsigned 32-bit integer.
Unsigned 64-bit integer.
Floating-point number.
Byte array.
Specifies values for audio constriction.
-Values defined by the
Audio is not constricted.
Audio is down sampled to 48 kHz/16-bit.
Audio is down sampled to 44 kHz/16-bit.
Audio is down sampled to 14 kHz/16-bit.
Audio is muted.
Contains flags for the
Specifies the origin for a seek request.
-The seek position is specified relative to the start of the stream.
The seek position is specified relative to the current read/write position in the stream.
Specifies a type of capture device.
-An audio capture device, such as a microphone.
A video capture device, such as a webcam.
Specifies a type of capture sink.
-A recording sink, for capturing audio and video to a file.
A preview sink, for previewing live audio or video.
A photo sink, for capturing still images.
Defines the values for the source stream category.
-Specifies a video preview stream.
Specifies a video capture stream.
Specifies an independent photo stream.
Specifies a dependent photo stream.
Specifies an audio stream.
Specifies an unsupported stream.
Contains flags that describe the characteristics of a clock. These flags are returned by the
Defines properties of a clock.
-Jitter values are always negative. In other words, the time returned by
Defines the state of a clock.
-The clock is invalid. A clock might be invalid for several reasons. Some clocks return this state before the first start. This state can also occur if the underlying device is lost.
The clock is running. While the clock is running, the time advances at the clock's frequency and current rate.
The clock is stopped. While stopped, the clock reports a time of 0.
The clock is paused. While paused, the clock reports the time it was paused.
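The reported-time rules for the stopped, paused, and running states can be sketched as a toy clock. Times are arbitrary ticks and all names are invented; this illustrates the state semantics above, not the real clock API:

```cpp
#include <cassert>

enum class ClockState { Invalid, Running, Stopped, Paused };

struct ToyClock {
    ClockState state = ClockState::Stopped;
    long long startTime = 0;  // underlying time captured when started
    long long pauseTime = 0;  // reported time captured when paused

    // Reported time as a function of the underlying (hardware) time.
    long long GetTime(long long underlyingTime) const {
        switch (state) {
            case ClockState::Stopped: return 0;          // stopped: reports 0
            case ClockState::Paused:  return pauseTime;  // paused: frozen
            case ClockState::Running: return underlyingTime - startTime;
            default:                  return -1;         // invalid clock
        }
    }
};
```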
Specifies how the topology loader connects a topology node. This enumeration is used with the
The SetOutputStreamState method sets the Device MFT output stream state and media type.
-This interface method helps to transition the output stream to a specified state with a specified media type set on the output stream. This will be used by the DTM when the Device Source requests a specific output stream's state and media type to be changed. Device MFT should change the specified output stream's media type and state to the requested media type.
If the incoming media type and stream state are the same as the current media type and stream state, the method returns
If the incoming media type and current media type of the stream are the same, Device MFT must change the stream's state to the requested value and return the appropriate
When a change in the output stream's media type requires a corresponding change in the input, then Device MFT must post the
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, let us say Output 2's media type changes to 1080p. To satisfy this request, Device MFT must change the Input 1 media type to 1080p, by posting
Stream ID of the input stream where the state and media type need to be changed.
Preferred media type for the input stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState which the input stream should transition to.
Must be zero.
The DMO_INPUT_DATA_BUFFER_FLAGS
enumeration defines flags that describe an input buffer.
The beginning of the data is a synchronization point.
The buffer's time stamp is valid.
The buffer's indicated time length is valid.
Media Foundation transforms (MFTs) are an evolution of the transform model first introduced with DirectX Media Objects (DMOs). This topic summarizes the main ways in which MFTs differ from DMOs. Read this topic if you are already familiar with the DMO interfaces, or if you want to convert an existing DMO into an MFT.
This topic contains the following sections:
The DMO_INPUT_STREAM_INFO_FLAGS
enumeration defines flags that describe an input stream.
The stream requires whole samples. Samples must not span multiple buffers, and buffers must not contain partial samples.
Each buffer must contain exactly one sample.
All the samples in this stream must be the same size.
The DMO performs lookahead on the incoming data, and may hold multiple input buffers for this stream.
The DMO_PROCESS_OUTPUT_FLAGS
enumeration defines flags that specify output processing requests.
Discard the output when the reference to the output buffer is
The DMO_SET_TYPE_FLAGS
enumeration defines flags for setting the media type on a stream.
The
Test the media type but do not set it.
Clear the media type that was set for the stream.
Contains flags that are used to configure the Microsoft DirectShow enhanced video renderer (EVR) filter.
-Enables dynamic adjustments to video quality during playback.
Specifies the requested access mode for opening a file.
-Read mode.
Write mode.
Read and write mode.
Specifies the behavior when opening a file.
-Use the default behavior.
Open the file with no system caching.
Subsequent open operations can have write access to the file.
Note: Requires Windows 7 or later.
Specifies how to open or create a file.
-Open an existing file. Fail if the file does not exist.
Create a new file. Fail if the file already exists.
Open an existing file and truncate it, so that the size is zero bytes. Fail if the file does not already exist.
If the file does not exist, create a new file. If the file exists, open it.
Create a new file. If the file exists, overwrite the file.
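The five open/create modes above form a decision table over whether the file already exists. A sketch using descriptive names (these are assumptions for illustration, not the SDK constants):

```cpp
#include <cassert>
#include <string>

// Illustrative names for the five modes described above.
enum class OpenMode {
    FailIfNotExist,    // open existing; fail if missing
    FailIfExist,       // create new; fail if present
    TruncateExisting,  // open existing and truncate; fail if missing
    OpenOrCreate,      // open if present, else create
    CreateAlways       // create new; overwrite if present
};

// Resolve a mode to the action taken, given whether the file exists.
std::string Resolve(OpenMode mode, bool fileExists) {
    switch (mode) {
        case OpenMode::FailIfNotExist:   return fileExists ? "open" : "fail";
        case OpenMode::FailIfExist:      return fileExists ? "fail" : "create";
        case OpenMode::TruncateExisting: return fileExists ? "truncate" : "fail";
        case OpenMode::OpenOrCreate:     return fileExists ? "open" : "create";
        case OpenMode::CreateAlways:     return fileExists ? "overwrite" : "create";
    }
    return "fail";
}
```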
Describes the type of data provided by a frame source.
-The values of this enumeration are used with the MF_DEVICESTREAM_ATTRIBUTE_FRAMESOURCE_TYPES attribute.
-The frame source provides color data.
The frame source provides infrared data.
The frame source provides depth data.
The frame source provides custom data.
Specifies the likelihood that the Media Engine can play a specified type of media resource.
-The Media Engine cannot play the resource.
The Media Engine might be able to play the resource.
The Media Engine can probably play the resource.
Contains flags for the
Defines error status codes for the Media Engine.
-The values greater than zero correspond to error codes defined for the MediaError object in HTML5.
-No error.
The process of fetching the media resource was stopped at the user's request.
A network error occurred while fetching the media resource.
An error occurred while decoding the media resource.
The media resource is not supported.
An error occurred while encrypting the media resource.
Supported in Windows 8.1 and later.
Defines event codes for the Media Engine.
-The application receives Media Engine events through the
Values below 1000 correspond to events defined in HTML 5 for media elements.
-The Media Engine has started to load the source. See
The Media Engine is loading the source.
The Media Engine has suspended a load operation.
The Media Engine cancelled a load operation that was in progress.
An error occurred.
Event Parameter | Description |
---|---|
param1 | A member of the |
param2 | An |
The Media Engine has switched to the
The Load algorithm is stalled, waiting for data.
The Media Engine is switching to the playing state. See
The media engine has paused. See
The Media Engine has loaded enough source data to determine the duration and dimensions of the source.
The Media Engine has loaded enough data to render some content (for example, a video frame).
Playback has stopped because the next frame is not available.
Playback has started. See
Playback can start, but the Media Engine might need to stop to buffer more data.
The Media Engine can probably play through to the end of the resource, without stopping to buffer data.
The Media Engine has started seeking to a new playback position. See
The Media Engine has seeked to a new playback position. See
The playback position has changed. See
Playback has reached the end of the source. This event is not sent if GetLoop is TRUE.
The playback rate has changed. See
The duration of the media source has changed. See
The audio volume changed. See
The output format of the media source has changed.
Event Parameter | Description |
---|---|
param1 | Zero if the video format changed, 1 if the audio format changed. |
param2 | Zero. |
The Media Engine flushed any pending events from its queue.
The playback position reached a timeline marker. See
The audio balance changed. See
The Media Engine has finished downloading the source data.
The media source has started to buffer data.
The media source has stopped buffering data.
The
The Media Engine's Load algorithm is waiting to start.
Event Parameter | Description |
---|---|
param1 | A handle to a waitable event, of type HANDLE. |
param2 | Zero. |
If Media Engine is created with the
If the Media Engine is not created with the
The first frame of the media source is ready to render.
Raised when a new track is added or removed.
Supported in Windows 8.1 and later.
Raised when there is new information about the Output Protection Manager (OPM).
This event will be raised when an OPM failure occurs, but ITA allows fallback without the OPM. In this case, constriction can be applied.
This event will not be raised when there is an OPM failure and the fallback also fails. For example, if ITA blocks playback entirely when OPM cannot be established.
Supported in Windows 8.1 and later.
Raised when one of the component streams of a media stream fails. This event is only raised if the media stream contains other component streams that did not fail.
Raised when one of the component streams of a media stream fails. This event is only raised if the media stream contains other component streams that did not fail.
Specifies media engine extension types.
-Specifies the content protection requirements for a video frame.
-The video frame should be protected.
Direct3D surface protection must be applied to any surface that contains the frame.
Direct3D anti-screen-scrape protection must be applied to any surface that contains the frame.
Defines media key error codes for the media engine.
-Unknown error occurred.
An error with the client occurred.
An error with the service occurred.
An error with the output occurred.
An error occurred related to a hardware change.
An error with the domain occurred.
Defines network status codes for the Media Engine.
-The initial state.
The Media Engine has started the resource selection algorithm, and has selected a media resource, but is not using the network.
The Media Engine is loading a media resource.
The Media Engine has started the resource selection algorithm, but has not selected a media resource.
Defines the status of the Output Protection Manager (OPM).
-Defines preload hints for the Media Engine. These values correspond to the preload attribute of the HTMLMediaElement interface in HTML5.
-The preload attribute is missing.
The preload attribute is an empty string. This value is equivalent to
The preload attribute is "none". This value is a hint to the user agent not to preload the resource.
The preload attribute is "metadata". This value is a hint to the user agent to fetch the resource metadata.
The preload attribute is "auto". This value is a hint to the user agent to preload the entire resource.
Contains flags that specify whether the Media Engine will play protected content, and whether the Media Engine will use the Protected Media Path (PMP).
-These flags are used with the
Defines ready-state values for the Media Engine.
-These values correspond to constants defined for the HTMLMediaElement.readyState attribute in HTML5.
-No data is available.
Some metadata is available, including the duration and, for video files, the video dimensions. No media data is available.
There is media data for the current playback position, but not enough data for playback or seeking.
There is enough media data to enable some playback or seeking. The amount of data might be as little as the next video frame.
There is enough data to play the resource, based on the current rate at which the resource is being fetched.
Specifies the layout for a packed 3D video frame.
-None.
The views are packed side-by-side in a single frame.
The views are packed top-to-bottom in a single frame.
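The per-view dimensions implied by the two packed layouts above follow directly: side-by-side halves the width, top-to-bottom halves the height. A small sketch (types and names are illustrative, not the SDK's):

```cpp
#include <cassert>

struct Size { unsigned w, h; };
enum class Packing { None, SideBySide, TopBottom };

// Dimensions of a single view extracted from a packed 3D frame.
Size ViewSize(Size packedFrame, Packing packing) {
    switch (packing) {
        case Packing::SideBySide: return { packedFrame.w / 2, packedFrame.h };
        case Packing::TopBottom:  return { packedFrame.w, packedFrame.h / 2 };
        default:                  return packedFrame;  // no packing
    }
}
```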
Defines values for the media engine seek mode.
-This enumeration is used with the MediaEngineEx::SetCurrentTimeEx.
-Specifies normal seek.
Specifies an approximate seek.
Identifies statistics that the Media Engine tracks during playback. To get a playback statistic from the Media Engine, call
In the descriptions that follow, the data type and value-type tag for the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Identifies the kind of media stream that failed.
-The stream type is unknown.
The stream is an audio stream.
The stream is a video stream.
Defines the characteristics of a media source. These flags are retrieved by the
To skip forward or backward in a playlist, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies options for the
The following typedef is defined for combining flags from this enumeration.
typedef UINT32 MFP_CREATION_OPTIONS;
- Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains flags for the
Some of these flags, marked [out], convey information back to the MFPlay player object. The application should set or clear these flags as appropriate, before returning from the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains flags that describe a media item.
-The following typedef is defined for combining flags from this enumeration.
typedef UINT32 MFP_MEDIAITEM_CHARACTERISTICS;
-
Not supported.
Note: Earlier versions of this documentation described the _MFT_DRAIN_TYPE enumeration incorrectly. The enumeration is not supported. For more information, see
Defines flags for the
Indicates the status of an input stream on a Media Foundation transform (MFT).
-The input stream can receive more data at this time. To deliver more input data, call
Describes an input stream on a Media Foundation transform (MFT).
-Before the client sets the media types on the transform, the only flags guaranteed to be accurate are the
In the default processing model, an MFT holds a reference count on the sample that it receives in ProcessInput. It does not process the sample immediately inside ProcessInput. When ProcessOutput is called, the MFT produces output data and then discards the input sample. The following variations on this model are defined:
If an MFT never holds onto input samples between ProcessInput and ProcessOutput, it can set the
If an MFT holds some input samples beyond the next call to ProcessOutput, it can set the
Each media sample (
For uncompressed audio formats, this flag is always implied. (It is valid to set the flag, but not required.) An uncompressed audio frame should never span more than one media sample.
Each media sample that the client provides as input must contain exactly one unit of data, as defined for the
If this flag is present, the
An MFT that processes uncompressed audio should not set this flag. The MFT should accept buffers that contain more than a single audio frame, for efficiency.
All input samples must be the same size. The size is given in the cbSize member of the
The MFT might hold one or more input samples after
The MFT does not hold input samples after the
If this flag is absent, the MFT might hold a reference count on the samples that are passed to the ProcessInput method. The client must not re-use or delete the buffer memory until the MFT releases the sample's
The absence of this flag does not guarantee that the MFT holds a reference count on the input samples. It is valid for an MFT to release input samples in ProcessInput even if the MFT does not set this flag. However, setting this flag might enable the client to optimize how it re-uses buffers.
An MFT should not set this flag if it ever holds onto an input sample after returning from ProcessInput.
This input stream can be removed by calling
This input stream is optional. The transform can produce output without receiving input from this stream. The caller can deselect the stream by not setting a media type or by setting a
The MFT can perform in-place processing. In this mode, the MFT directly modifies the input buffer. When the client calls ProcessOutput, the same sample that was delivered to this stream is returned in the output stream that has a matching stream identifier. This flag implies that the MFT holds onto the input buffer, so this flag cannot be combined with the
If this flag is present, the MFT must set the
Defines flags for the
The values in this enumeration are not bit flags, so they should not be combined with a bitwise OR. Also, the caller should test for these flags with the equality operator, not a bitwise AND:
// Correct.
if (Buffer.dwStatus == )
{
    ...
}

// Incorrect.
if ((Buffer.dwStatus & ) != 0)
{
    ...
}
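To make the distinction concrete, here is a small compilable sketch using hypothetical status values (not the real Media Foundation constants, whose names are omitted above). It shows why a bitwise AND can produce a false match when the values are ordinary integers rather than bit flags:

```cpp
#include <cassert>

// Hypothetical status values for illustration only -- these are NOT the
// real Media Foundation constants. Note that 3 shares bits with 1 and 2.
enum Status { StatusA = 1, StatusB = 2, StatusC = 3 };

// Correct test: the values are plain integers, so compare for equality.
bool IsStatusA_Correct(int dwStatus)   { return dwStatus == StatusA; }

// Incorrect test: a bitwise AND also "matches" StatusC, because
// StatusC (3) happens to contain StatusA's bit (1).
bool IsStatusA_Incorrect(int dwStatus) { return (dwStatus & StatusA) != 0; }
```

With `dwStatus == StatusC`, the equality test correctly reports "not StatusA", while the bitwise test incorrectly reports a match.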
Indicates whether a Media Foundation transform (MFT) can produce output data.
-There is a sample available for at least one output stream. To retrieve the available output samples, call
Describes an output stream on a Media Foundation transform (MFT).
-Before the client sets the media types on the MFT, the only flag guaranteed to be accurate is the
The
MFT_OUTPUT_STREAM_DISCARDABLE: The MFT discards output data only if the client calls ProcessOutput with the
MFT_OUTPUT_STREAM_LAZY_READ: If the client continues to call ProcessInput without collecting the output from this stream, the MFT eventually discards the output. If all output streams have the
If neither of these flags is set, the MFT never discards output data.
-Each media sample (
For uncompressed audio formats, this flag is always implied. (It is valid to set the flag, but not required.) An uncompressed audio frame should never span more than one media sample.
Each output sample contains exactly one unit of data, as defined for the
If this flag is present, the
An MFT that outputs uncompressed audio should not set this flag. For efficiency, it should output more than one audio frame at a time.
All output samples are the same size.
The MFT can discard the output data from this output stream, if requested by the client. To discard the output, set the
This output stream is optional. The client can deselect the stream by not setting a media type or by setting a
The MFT provides the output samples for this stream, either by allocating them internally or by operating directly on the input samples. The MFT cannot use output samples provided by the client for this stream.
If this flag is not set, the MFT must set cbSize to a nonzero value in the
The MFT can either provide output samples for this stream or it can use samples that the client allocates. This flag cannot be combined with the
If the MFT does not set this flag or the
The MFT does not require the client to process the output for this stream. If the client continues to send input data without getting the output from this stream, the MFT simply discards the previous input.
The MFT might remove this output stream during streaming. This flag typically applies to demultiplexers, where the input data contains multiple streams that can start and stop during streaming. For more information, see
Defines flags for the setting or testing the media type on a Media Foundation transform (MFT).
-Test the proposed media type, but do not set it.
Defines the different error states of the Media Source Extension.
-Specifies no error.
Specifies an error with the network.
Specifies an error with decoding.
Specifies an unknown error.
Defines the different ready states of the Media Source Extension.
-The media source is closed.
The media source is open.
The media source is ended.
Specifies how the user's credentials will be used.
-The credentials will be used to authenticate with a proxy.
The credentials will be sent over the network unencrypted.
The credentials must be from a user who is currently logged on.
Describes options for the caching network credentials.
-Allow the credential cache object to save credentials in persistent storage.
Do not allow the credential cache object to cache the credentials in memory. This flag cannot be combined with the
The user allows credentials to be sent over the network in clear text.
By default,
Do not set this flag without notifying the user that credentials might be sent in clear text.
Specifies how the credential manager should obtain user credentials.
-The application implements the credential manager, which must expose the
The credential cache object sets the
The credential manager should prompt the user to provide the credentials.
Note: Requires Windows 7 or later.
The credentials are saved to persistent storage. This flag acts as a hint for the application's UI. If the application prompts the user for credentials, the UI can indicate that the credentials have already been saved.
Specifies how the default proxy locator will specify the connection settings to a proxy server. The application must set these values in the MFNETSOURCE_PROXYSETTINGS property.
Defines the status of the cache for a media file or entry.
-The cache for a file or entry does not exist.
The cache for a file or entry is growing.
The cache for a file or entry is completed.
Indicates the type of control protocol that is used in streaming or downloading.
-The protocol type has not yet been determined.
The protocol type is HTTP. This includes HTTPv9, WMSP, and HTTP download.
The protocol type is Real Time Streaming Protocol (RTSP).
The content is read from a file. The file might be local or on a remote share.
The protocol type is multicast.
Note: Requires Windows 7 or later. Defines statistics collected by the network source. The values in this enumeration define property identifiers (PIDs) for the MFNETSOURCE_STATISTICS property.
To retrieve statistics from the network source, call
In the descriptions that follow, the data type and value-type tag for the
Describes the type of transport used in streaming or downloading data (TCP or UDP).
-The data transport type is UDP.
The data transport type is TCP.
Specifies whether color data includes headroom and toeroom. Headroom allows for values beyond 1.0 white ("whiter than white"), and toeroom allows for values below reference 0.0 black ("blacker than black").
- This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_NominalRange enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-Unknown nominal range.
Equivalent to
Equivalent to
The normalized range [0...1] maps to [0...255] for 8-bit samples or [0...1023] for 10-bit samples.
The normalized range [0...1] maps to [16...235] for 8-bit samples or [64...940] for 10-bit samples.
The normalized range [0...1] maps to [48...208] for 8-bit samples or [64...940] for 10-bit samples.
The normalized range [0...1] maps to [64...127] for 8-bit samples or [256...508] for 10-bit samples. This range is used in the xRGB color space.
Note: Requires Windows 7 or later.
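The mappings above are all linear scales between a black level and a white level. The helper below (our own illustration, not a Media Foundation API) makes the 8-bit cases concrete:

```cpp
#include <cmath>

// Map a normalized value in [0, 1] to an 8-bit code for a given nominal
// range. ToCode8 is an illustrative helper, not a Media Foundation API.
int ToCode8(double v, int black, int white)
{
    return black + static_cast<int>(std::lround(v * (white - black)));
}

// Full range:  ToCode8(0.0,  0, 255) ->  0,  ToCode8(1.0,  0, 255) -> 255
// Video range: ToCode8(0.0, 16, 235) -> 16,  ToCode8(1.0, 16, 235) -> 235
```

With headroom and toeroom (the 16...235 range), codes above 235 represent "whiter than white" and codes below 16 represent "blacker than black."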
Defines the object types that are created by the source resolver.
-Media source. You can query the object for the
Byte stream. You can query the object for the
Invalid type.
Defines protection levels for MFPROTECTION_ACP.
-Specifies ACP is disabled.
Specifies ACP is level one.
Specifies ACP is level two.
Specifies ACP is level three.
Reserved.
Defines protection levels for MFPROTECTION_CGMSA.
-These flags are equivalent to the OPM_CGMSA_Protection_Level enumeration constants used in the Output Protection Protocol (OPM).
-CGMS-A is disabled.
The protection level is Copy Freely.
The protection level is Copy No More.
The protection level is Copy One Generation.
The protection level is Copy Never.
Redistribution control (also called the broadcast flag) is required. This flag can be combined with the other flags.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Defines event types for the
For each event type, the
In your implementation of OnMediaPlayerEvent, you must cast the pEventHeader parameter to the correct structure type. A set of macros is defined for this purpose. These macros check the value of the event type and return
Event type | Event structure | Pointer cast macro |
---|---|---|
MFP_GET_PLAY_EVENT | |
MFP_GET_PAUSE_EVENT | |
MFP_GET_STOP_EVENT | |
MFP_GET_POSITION_SET_EVENT | |
MFP_GET_RATE_SET_EVENT | |
MFP_GET_MEDIAITEM_CREATED_EVENT | |
MFP_GET_MEDIAITEM_SET_EVENT | |
MFP_GET_FRAME_STEP_EVENT | |
MFP_GET_MEDIAITEM_CLEARED_EVENT | |
MFP_GET_MF_EVENT | |
MFP_GET_ERROR_EVENT | |
MFP_GET_PLAYBACK_ENDED_EVENT | |
MFP_GET_ACQUIRE_USER_CREDENTIAL_EVENT | |
-Defines policy settings for the
Specifies the object type for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies the current playback state.
- Contains flags that define the behavior of the
Defines actions that can be performed on a stream.
-No action.
Play the stream.
Copy the stream.
Export the stream to another format.
Extract the data from the stream and pass it to the application. For example, acoustic echo cancellation requires this action.
Reserved.
Reserved.
Reserved.
Last member of the enumeration.
Contains flags for the
If the decoder sets the
Specifies how aggressively a pipeline component should drop samples.
-In drop mode, a component drops samples more or less aggressively, depending on the level of the drop mode. The specific algorithm used depends on the component. Mode 1 is the least aggressive mode, and mode 5 is the most aggressive. A component is not required to implement all five levels.
For example, suppose an encoded video stream has three B-frames between each pair of P-frames. A decoder might implement the following drop modes:
Mode 1: Drop one out of every three B frames.
Mode 2: Drop one out of every two B frames.
Mode 3: Drop all delta frames.
Modes 4 and 5: Unsupported.
The enhanced video renderer (EVR) can drop video frames before sending them to the EVR mixer.
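The B-frame example above can be sketched as a per-frame decision function. The frame typing and the modulo scheme are our illustrative assumptions, not a Media Foundation interface:

```cpp
enum class FrameType { I, P, B };

// Sketch of the example decoder above: three B-frames between each pair
// of P-frames, with increasingly aggressive drop modes. bIndex counts
// B-frames (0, 1, 2, ...); the modulo scheme is an assumption chosen
// only to illustrate "1 of every 3" and "1 of every 2".
bool ShouldDrop(FrameType type, int bIndex, int dropMode)
{
    switch (dropMode) {
    case 0:  return false;                                   // drop mode disabled
    case 1:  return type == FrameType::B && bIndex % 3 == 0; // 1 of every 3 B-frames
    case 2:  return type == FrameType::B && bIndex % 2 == 0; // 1 of every 2 B-frames
    case 3:  return type != FrameType::I;                    // all delta (B and P) frames
    default: return false;                                   // modes 4 and 5 unsupported here
    }
}
```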
-Normal processing of samples. Drop mode is disabled.
First drop mode (least aggressive).
Second drop mode.
Third drop mode.
Fourth drop mode.
Fifth drop mode (most aggressive, if it is supported; see Remarks).
Maximum number of drop modes. This value is not a valid flag.
Specifies the quality level for a pipeline component. The quality level determines how the component consumes or produces samples.
-Each successive quality level decreases the amount of processing that is needed, while also reducing the resulting quality of the audio or video. The specific algorithm used to reduce quality depends on the component. Mode 1 is the least aggressive mode, and mode 5 is the most aggressive. A component is not required to implement all five levels. Also, the same quality level might not be comparable between two different components.
Video decoders can often reduce quality by leaving out certain post-processing steps. The enhanced video renderer (EVR) can sometimes reduce quality by switching to a different deinterlacing mode.
-Normal quality.
One level below normal quality.
Two levels below normal quality.
Three levels below normal quality.
Four levels below normal quality.
Five levels below normal quality.
Maximum number of quality levels. This value is not a valid flag.
Specifies the direction of playback (forward or reverse).
-Forward playback.
Reverse playback.
Defines the version number for sample protection.
-No sample protection.
Version 1.
Version 2.
Version 3.
Specifies how a video stream is interlaced.
In the descriptions that follow, upper field refers to the field that contains the leading half scan line. Lower field refers to the field that contains the first full scan line.
-Scan lines in the lower field are 0.5 scan line lower than those in the upper field. In NTSC television, a frame consists of a lower field followed by an upper field. In PAL television, a frame consists of an upper field followed by a lower field.
The upper field is also called the even field, the top field, or field 2. The lower field is also called the odd field, the bottom field, or field 1.
If the interlace mode is
The type of interlacing is not known.
Progressive frames.
Specifies how to open or create a file.
-Open an existing file. Fail if the file does not exist.
Create a new file. Fail if the file already exists.
Open an existing file and truncate it, so that the size is zero bytes. Fail if the file does not already exist.
If the file does not exist, create a new file. If the file exists, open it.
Create a new file. If the file exists, overwrite the file.
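The five modes above reduce to a pure decision on whether the target file already exists. The enum and function names below are ours for illustration, not the Media Foundation constants:

```cpp
// Illustrative names only -- not the Media Foundation open-mode constants.
enum class OpenMode { FailIfNotExist, FailIfExist, ResetIfExist, OpenOrCreate, DeleteIfExist };
enum class Action   { Open, OpenTruncate, Create, Fail };

// What each mode does, given whether the target file already exists.
Action Resolve(OpenMode mode, bool exists)
{
    switch (mode) {
    case OpenMode::FailIfNotExist: return exists ? Action::Open         : Action::Fail;
    case OpenMode::FailIfExist:    return exists ? Action::Fail         : Action::Create;
    case OpenMode::ResetIfExist:   return exists ? Action::OpenTruncate : Action::Fail;
    case OpenMode::OpenOrCreate:   return exists ? Action::Open         : Action::Create;
    case OpenMode::DeleteIfExist:  return Action::Create; // overwrite, or create fresh
    }
    return Action::Fail;
}
```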
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Specifies whether a stream associated with an
Contains flags for adding a topology to the sequencer source, or updating a topology already in the queue.
-This topology is the last topology in the sequence.
Retrieves an interface from the enhanced video renderer (EVR), or from the video mixer or video presenter.
-This method can be called only from inside the
The presenter can use this method to query the EVR and the mixer. The mixer can use it to query the EVR and the presenter. Which objects are queried depends on the caller and the service
Caller | Service | Objects queried |
---|---|---|
Presenter | MR_VIDEO_RENDER_SERVICE | EVR |
Presenter | MR_VIDEO_MIXER_SERVICE | Mixer |
Mixer | MR_VIDEO_RENDER_SERVICE | Presenter and EVR |
The following interfaces are available from the EVR:
IMediaEventSink. This interface is documented in the DirectShow SDK documentation.
The following interfaces are available from the mixer:
Specifies the scope of the search. Currently this parameter is ignored. Use the value
Reserved, must be zero.
Service
Interface identifier of the requested interface.
Array of interface references. If the method succeeds, each member of the array contains either a valid interface reference or
Pointer to a value that specifies the size of the ppvObjects array. The value must be at least 1. In the current implementation, there is no reason to specify an array size larger than one element. The value is not changed on output.
Defines flags for the
Defines the behavior of the
These flags are optional, and are not mutually exclusive. If no flags are set, the Media Session resolves the topology and then adds it to the queue of pending presentations.
- Describes the current status of a call to the
Specifies how the ASF file sink should apply Windows Media DRM.
-Undefined action.
Encode the content using Windows Media DRM. Use this flag if the source content does not have DRM protection.
Transcode the content using Windows Media DRM. Use this flag if the source content has Windows Media DRM protection and you want to change the encoding parameters but not the DRM protection.
Transcrypt the content. Use this flag if the source content has DRM protection and you want to change the DRM protection; for example, if you want to convert from Windows Media DRM version 1 to Windows Media DRM version 7 or later.
Reserved. Do not use.
Contains flags for the
Contains flags that indicate the status of the
Contains values that specify common video formats.
-Reserved; do not use.
NTSC television (720 x 480i).
PAL television (720 x 576i).
DVD, NTSC standard (720 x 480).
DVD, PAL standard (720 x 576).
DV video, PAL standard.
DV video, NTSC standard.
ATSC digital television, SD (480i).
ATSC digital television, HD interlaced (1080i)
ATSC digital television, HD progressive (720p)
Defines stream marker information for the
If the Streaming Audio Renderer receives an
Specifies how text is aligned in its parent block element.
-Text is aligned at the start of its parent block element.
Text is aligned at the end of its parent block element.
Text is aligned in the center of its parent block element.
Specifies the type of a timed text cue event.
-The cue has become active.
The cue has become inactive.
All cues have been deactivated.
Specifies how text is decorated (underlined and so on).
-Text isn't decorated.
Text is underlined.
Text has a line through it.
Text has a line over it.
Specifies how text is aligned with the display.
-Text is aligned before an element.
Text is aligned after an element.
Text is aligned in the center between elements.
Specifies the kind error that occurred with a timed text track.
-This enumeration is used to return error information from the
No error occurred.
A fatal error occurred.
An error occurred with the data format of the timed text track.
A network error occurred when trying to load the timed text track.
An internal error occurred.
Specifies the font style of the timed text.
-The font style is normal, sometimes referred to as roman.
The font style is oblique.
The font style is italic.
Specifies how text appears when the parent element is scrolled.
-Text pops on when the parent element is scrolled.
Text rolls up when the parent element is scrolled.
Specifies the kind of timed text track.
-The kind of timed text track is unknown.
The kind of timed text track is subtitles.
The kind of timed text track is closed captions.
The kind of timed text track is metadata.
Specifies the units in which the timed text is measured.
-The timed text is measured in pixels.
The timed text is measured as a percentage.
Specifies the sequence in which text is written on its parent element.
-Text is written from left to right and top to bottom.
Text is written from right to left and top to bottom.
Text is written from top to bottom and right to left.
Text is written from top to bottom and left to right.
Text is written from left to right.
Text is written from right to left.
Text is written from top to bottom.
Contains flags for the
Defines messages for a Media Foundation transform (MFT). To send a message to an MFT, call
Some messages require specific actions from the MFT. These messages have "MESSAGE" in the message name. Other messages are informational; they notify the MFT of some action by the client, and do not require any particular response from the MFT. These messages have "NOTIFY" in the message name. Except where noted, an MFT should not rely on the client sending notification messages.
-Specifies whether the topology loader enables Microsoft DirectX Video Acceleration (DXVA) in the topology.
-This enumeration is used with the
If an MFT supports DXVA, the MFT must return TRUE for the
Previous versions of Microsoft Media Foundation supported DXVA only for decoders.
-The topology loader enables DXVA - on the decoder if possible, and drops optional Media Foundation transforms (MFTs) that do not support DXVA.
The topology loader disables all video acceleration. This setting forces software processing, even when the decoder supports DXVA.
The topology loader enables DXVA on every MFT that supports it.
Specifies whether the topology loader will insert hardware-based Media Foundation transforms (MFTs) into the topology.
- This enumeration is used with the
Use only software MFTs. Do not use hardware-based MFTs. This mode is the default, for backward compatibility with existing applications.
Use hardware-based MFTs when possible, and software MFTs otherwise. This mode is the recommended one.
If hardware-based MFTs are available, the topology loader will insert them. If not, the connection will fail.
Supported in Windows 8.1 and later.
Defines status flags for the
Specifies the status of a topology during playback.
- This enumeration is used with the
For a single topology, the Media Session sends these status flags in numerical order, starting with
This value is not used.
The topology is ready to start. After this status flag is received, you can use the Media Session's
The Media Session has started to read data from the media sources in the topology.
The Media Session modified the topology, because the format of a stream changed.
The media sinks have switched from the previous topology to this topology. This status value is not sent for the first topology that is played. For the first topology, the
Playback of this topology is complete. The Media Session might still use the topology internally. The Media Session does not completely release the topology until it sends the next
Defines the type of a topology node.
-Output node. Represents a media sink in the topology.
Source node. Represents a media stream in the topology.
Transform node. Represents a Media Foundation Transform (MFT) in the topology.
Tee node. A tee node does not hold a reference to an object. Instead, it represents a fork in the stream. A tee node has one input and multiple outputs, and samples from the upstream node are delivered to all of the downstream nodes.
Reserved.
Defines at what times a transform in a topology is drained.
-The transform is drained when the end of a stream is reached. It is not drained when markout is reached at the end of a segment.
The transform is drained whenever a topology ends.
The transform is never drained.
Defines when a transform in a topology is flushed.
-The transform is flushed whenever the stream changes, including seeks and new segments.
The transform is flushed when seeking is performed on the stream.
The transform is never flushed during streaming. It is flushed only when the object is released.
Defines the profile flags that are set in the
These flags are checked by
For more information about the stream settings that an application can specify, see Using the Transcode API.
-If the
The
For the video stream, the required attributes are as follows:
If these attributes are not set,
Use the
For example, assume that your input source is an MP3 file. You set the container to be
Defines flags for the
Contains flags for registering and enumerating Media Foundation transforms (MFTs).
These flags are used in the following functions:
For registration, these flags describe the MFT that is being registered. Some flags do not apply in that context. For enumeration, these flags control which MFTs are selected in the enumeration. For more details about the precise meaning of these flags, see the reference topics for
For registration, the
Defines flags for processing output samples in a Media Foundation transform (MFT).
-Do not produce output for streams in which the pSample member of the
Regenerates the last output sample.
Note: Requires Windows 8.
Indicates the status of a call to
If the MFT sets this flag, the ProcessOutput method returns
Call
Call
Call
Until these steps are completed, all further calls to ProcessOutput return
Indicates whether the URL is from a trusted source.
-The validity of the URL cannot be guaranteed because it is not signed. The application should warn the user.
The URL is the original one provided with the content.
The URL was originally signed and has been tampered with. The file should be considered corrupted, and the application should not navigate to the URL without issuing a strong warning to the user.
Specifies how 3D video frames are stored in memory.
-This enumeration is used with the
The base view is stored in a single buffer. The other view is discarded.
Each media sample contains multiple buffers, one for each view.
Each media sample contains one buffer, with both views packed side-by-side into a single frame.
Each media sample contains one buffer, with both views packed top-and-bottom into a single frame.
Specifies how to output a 3D stereoscopic video stream.
-This enumeration is used with the
Output the base view only. Discard the other view.
Output a stereo view (two buffers).
Specifies how a 3D video frame is stored in a media sample.
-This enumeration is used with the
The exact layout of the views in memory is specified by the following media type attributes:
Each view is stored in a separate buffer. The sample contains one buffer per view.
All of the views are stored in the same buffer. The sample contains a single buffer.
Specifies the aspect-ratio mode.
-Do not maintain the aspect ratio of the video. Stretch the video to fit the output rectangle.
Preserve the aspect ratio of the video by letterboxing or pillarboxing within the output rectangle.
Correct the aspect ratio if the physical size of the display device does not match the display resolution. For example, if the native resolution of the monitor is 1600 by 1200 (4:3) but the display resolution is 1280 by 1024 (5:4), the monitor will display non-square pixels.
If this flag is set, you must also set the
Apply a non-linear horizontal stretch if the aspect ratio of the destination rectangle does not match the aspect ratio of the source rectangle.
The non-linear stretch algorithm preserves the aspect ratio in the middle of the picture and stretches (or shrinks) the image progressively more toward the left and right. This mode is useful when viewing 4:3 content full-screen on a 16:9 display, instead of pillar-boxing. Non-linear vertical stretch is not supported, because the visual results are generally poor.
This mode may cause performance degradation.
If this flag is set, you must also set the
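For the aspect-preserving (letterbox/pillarbox) mode, the destination size can be computed as below. This is our own helper for illustration, not an EVR API:

```cpp
#include <algorithm>

struct Size { int w, h; };

// Scale src uniformly to the largest size that fits inside dst while
// preserving the source aspect ratio; the unused area of dst is then
// letterboxed (top/bottom bars) or pillarboxed (left/right bars).
Size FitPreservingAspect(Size src, Size dst)
{
    double scale = std::min(static_cast<double>(dst.w) / src.w,
                            static_cast<double>(dst.h) / src.h);
    return { static_cast<int>(src.w * scale + 0.5),
             static_cast<int>(src.h * scale + 0.5) };
}

// A 1920x1080 (16:9) source in a 720x576 (4:3) target -> 720x405, letterboxed.
```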
Contains flags that define the chroma encoding scheme for Y'Cb'Cr' data.
-These flags are used with the
For more information about these values, see the remarks for the DXVA2_VideoChromaSubSampling enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-Unknown encoding scheme.
Chroma should be reconstructed as if the underlying video was progressive content, rather than skipping fields or applying chroma filtering to minimize artifacts from reconstructing 4:2:0 interlaced chroma.
Chroma samples are aligned horizontally with the luma samples, or with multiples of the luma samples. If this flag is not set, chroma samples are located 1/2 pixel to the right of the corresponding luma sample.
Chroma samples are aligned vertically with the luma samples, or with multiples of the luma samples. If this flag is not set, chroma samples are located 1/2 pixel down from the corresponding luma sample.
The U and V planes are aligned vertically. If this flag is not set, the chroma planes are assumed to be out of phase by 1/2 chroma sample, alternating between a line of U followed by a line of V.
Specifies the chroma encoding scheme for MPEG-2 video. Chroma samples are aligned horizontally with the luma samples, but are not aligned vertically. The U and V planes are aligned vertically.
Specifies the chroma encoding scheme for MPEG-1 video.
Specifies the chroma encoding scheme for PAL DV video.
Chroma samples are aligned vertically and horizontally with the luma samples. YUV formats such as 4:4:4, 4:2:2, and 4:1:1 are always cosited in both directions and should use this flag.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Specifies the type of copy protection required for a video stream.
-Use these flags with the
No copy protection is required.
Analog copy protection should be applied.
Digital copy protection should be applied.
Contains flags that describe a video stream.
These flags are used in the
Developers are encouraged to use media type attributes instead of using the
Flags | Media Type Attribute |
---|---|
| |
| |
| |
| |
Use the |
The following flags were defined to describe per-sample interlacing information, but are obsolete:
Instead, components should use sample attributes to describe per-sample interlacing information, as described in the topic Video Interlacing.
-Specifies how a video stream is interlaced.
In the descriptions that follow, upper field refers to the field that contains the leading half scan line. Lower field refers to the field that contains the first full scan line.
-Scan lines in the lower field are 0.5 scan line lower than those in the upper field. In NTSC television, a frame consists of a lower field followed by an upper field. In PAL television, a frame consists of an upper field followed by a lower field.
The upper field is also called the even field, the top field, or field 2. The lower field is also called the odd field, the bottom field, or field 1.
If the interlace mode is
The type of interlacing is not known.
Progressive frames.
Interlaced frames. Each frame contains two fields. The field lines are interleaved, with the upper field appearing on the first line.
Interlaced frames. Each frame contains two fields. The field lines are interleaved, with the lower field appearing on the first line.
Interlaced frames. Each frame contains one field, with the upper field appearing first.
Interlaced frames. Each frame contains one field, with the lower field appearing first.
The stream contains a mix of interlaced and progressive modes.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Describes the optimal lighting for viewing a particular set of video content.
-This enumeration is used with the
The optimal lighting is unknown.
Bright lighting; for example, outdoors.
Medium brightness; for example, normal office lighting.
Dim; for example, a living room with a television and additional low lighting.
Dark; for example, a movie theater.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Contains flags that are used to configure how the enhanced video renderer (EVR) performs deinterlacing.
-To set these flags, call the
These flags control some trade-offs between video quality and rendering speed. The constants named "MFVideoMixPrefs_Allow..." enable lower-quality settings, but only when the quality manager requests a drop in quality. The constants named "MFVideoMixPrefs_Force..." force the EVR to use lower-quality settings regardless of what the quality manager requests. (For more information about the quality manager, see
Currently two lower-quality modes are supported, as described in the following table. Either is preferable to dropping an entire frame.
Mode | Description |
---|---|
Half interlace | The EVR's video mixer skips the second field (relative to temporal order) of each interlaced frame. The video mixer still deinterlaces the first field, and this operation typically interpolates data from the second field. The overall frame rate is unaffected. |
Bob deinterlacing | The video mixer uses bob deinterlacing, even if the driver supports a higher-quality deinterlacing algorithm. |
-Force the EVR to skip the second field (in temporal order) of every interlaced frame.
If the EVR is falling behind, allow it to skip the second field (in temporal order) of every interlaced frame.
If the EVR is falling behind, allow it to use bob deinterlacing, even if the driver supports a higher-quality deinterlacing mode.
Force the EVR to use bob deinterlacing, even if the driver supports a higher-quality mode.
The bitmask of valid flag values. This constant is not itself a valid flag.
Specifies whether to pad a video image so that it fits within a specified aspect ratio.
-Use these flags with the
Do not pad the image.
Pad the image so that it can be displayed in a 4:3 area.
Pad the image so that it can be displayed in a 16:9 area.
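As a hedged sketch of what padding to a display aspect ratio means geometrically (plain C++; PadToAspect is an illustrative name, not a Media Foundation function), the renderer conceptually places the unscaled image on the smallest canvas that has the target ratio:

```cpp
// Smallest canvas with the requested aspect ratio (aspectNum:aspectDen) that
// contains the unscaled source image; the renderer fills the remaining area
// with the border color. Dimensions are rounded up to whole pixels.
struct Extent { int width; int height; };

Extent PadToAspect(int srcW, int srcH, int aspectNum, int aspectDen) {
    // Width needed if the height is kept, rounded up.
    int wNeeded = (srcH * aspectNum + aspectDen - 1) / aspectDen;
    if (wNeeded >= srcW)
        return { wNeeded, srcH };                              // pillarbox
    int hNeeded = (srcW * aspectDen + aspectNum - 1) / aspectNum;
    return { srcW, hNeeded };                                  // letterbox
}
```

A 720x480 source padded to 4:3 becomes a 720x540 canvas (letterbox bars above and below), while a 640x480 source padded to 16:9 becomes roughly 854x480 (pillarbox bars at the sides).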
Specifies the color primaries of a video source. The color primaries define how to convert colors from RGB color space to CIE XYZ color space.
-This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_VideoPrimaries enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-The color primaries are unknown.
Reserved.
ITU-R BT.709. Also used for sRGB and scRGB.
ITU-R BT.470-4 System M (NTSC).
ITU-R BT.470-4 System B,G (NTSC).
SMPTE 170M.
SMPTE 240M.
EBU 3213.
SMPTE C (SMPTE RP 145).
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
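The conversion from RGB to CIE XYZ that the primaries select is a 3x3 matrix multiply. A minimal sketch for the BT.709/sRGB case (plain C++; the coefficients are the standard sRGB matrix, and the function name is illustrative):

```cpp
#include <array>
#include <cmath>

// Linear (gamma-removed) BT.709/sRGB RGB to CIE XYZ, D65 white point.
// Each value of the enumeration above corresponds to a different matrix
// of this shape.
std::array<double, 3> Bt709RgbToXyz(double r, double g, double b) {
    return {
        0.4124 * r + 0.3576 * g + 0.1805 * b,   // X
        0.2126 * r + 0.7152 * g + 0.0722 * b,   // Y (relative luminance)
        0.0193 * r + 0.1192 * g + 0.9505 * b,   // Z
    };
}
```

Note that the middle row sums to 1.0: reference white (1, 1, 1) maps to a relative luminance Y of exactly 1.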
Defines algorithms for the video processor, which are used by the MF_VIDEO_PROCESSOR_ALGORITHM attribute.
-Specifies how to flip a video image.
-Do not flip the image.
Flip the image horizontally.
Flip the image vertically.
Specifies how to rotate a video image.
-Do not rotate the image.
Rotate the image to the correct viewing orientation.
Contains flags that define how the enhanced video renderer (EVR) displays the video.
-To set these flags, call
The flags named "MFVideoRenderPrefs_Allow..." cause the EVR to use lower-quality settings only when requested by the quality manager. (For more information, see
If this flag is set, the EVR does not draw the border color. By default, the EVR draws a border on areas of the destination rectangle that have no video. See
If this flag is set, the EVR does not clip the video when the video window straddles two monitors. By default, if the video window straddles two monitors, the EVR clips the video to the monitor that contains the largest area of video.
Note: Requires Windows 7 or later.
Allow the EVR to limit its output to match GPU bandwidth.
Note: Requires Windows 7 or later.
Force the EVR to limit its output to match GPU bandwidth.
Note: Requires Windows 7 or later.
Force the EVR to batch Direct3D Present calls. This optimization enables the system to enter idle states more frequently, which can reduce power consumption.
Note: Requires Windows 7 or later.
Allow the EVR to batch Direct3D Present calls.
Note: Requires Windows 7 or later.
Force the EVR to mix the video inside a rectangle that is smaller than the output rectangle. The EVR will then scale the result to the correct output size. The effective resolution will be lower if this setting is applied.
Note: Requires Windows 7 or later.
Allow the EVR to mix the video inside a rectangle that is smaller than the output rectangle.
Note: Requires Windows 7 or later.
Prevent the EVR from repainting the video window after a stop command. By default, the EVR repaints the video window black after a stop command.
Describes the rotation of the video image in the counter-clockwise direction.
-This enumeration is used with the
The image is not rotated.
The image is rotated 90 degrees counter-clockwise.
The image is rotated 180 degrees.
The image is rotated 270 degrees counter-clockwise.
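The rotation values above describe a counter-clockwise turn of the whole frame. A standalone sketch of the resulting pixel mapping (plain C++; the helper is illustrative, not an MF API):

```cpp
// Destination coordinates of a source pixel after counter-clockwise rotation
// of a srcW x srcH frame. For 90 and 270 degrees the destination frame is
// srcH x srcW; degrees must be 0, 90, 180, or 270.
struct Point { int x; int y; };

Point RotateCcw(int x, int y, int srcW, int srcH, int degrees) {
    switch (degrees) {
        case 90:  return { y, srcW - 1 - x };
        case 180: return { srcW - 1 - x, srcH - 1 - y };
        case 270: return { srcH - 1 - y, x };
        default:  return { x, y };   // 0 degrees: no rotation
    }
}
```

For a 90-degree counter-clockwise rotation, the top-right corner of the source becomes the top-left corner of the destination.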
Describes the intended aspect ratio for a video stream.
-Use these flags with the
The aspect ratio is unknown.
The source is 16:9 content encoded within a 4:3 area.
The source is 2.35:1 content encoded within a 16:9 or 4:3 area.
Specifies the conversion function from linear RGB to non-linear RGB (R'G'B').
- These flags are used with the
For more information about these values, see the remarks for the DXVA2_VideoTransferFunction enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
- Unknown. Treat as
Linear RGB (gamma = 1.0).
True 1.8 gamma, L' = L^(1/1.8).
True 2.0 gamma, L' = L^(1/2.0).
True 2.2 gamma, L' = L^(1/2.2). This transfer function is used in ITU-R BT.470-2 System M (NTSC).
ITU-R BT.709 transfer function. Gamma 2.2 curve with a linear segment in the lower range. This transfer function is used in BT.709, BT.601, SMPTE 296M, SMPTE 170M, BT.470, and SMPTE 274M. In addition, BT.1361 uses this function within the range [0...1].
SMPTE 240M transfer function. Gamma 2.2 curve with a linear segment in the lower range.
sRGB transfer function. Gamma 2.4 curve with a linear segment in the lower range.
True 2.8 gamma, L' = L^(1/2.8). This transfer function is used in ITU-R BT.470-2 System B, G (PAL).
Logarithmic transfer (100:1 range); for example, as used in H.264 video.
Note: Requires Windows 7 or later.
Logarithmic transfer (316.22777:1 range); for example, as used in H.264 video.
Note: Requires Windows 7 or later.
Symmetric ITU-R BT.709.
Note: Requires Windows 7 or later.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
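The BT.709-style curves above combine a power law with a linear segment near black. A minimal sketch of the BT.709 opto-electronic transfer function, using the published constants (plain C++; the function name is illustrative):

```cpp
#include <cmath>

// ITU-R BT.709 opto-electronic transfer function: linear light L in [0, 1]
// to the non-linear signal V. A pure power curve would have infinite slope
// at black, so the standard splices a linear segment below L = 0.018 onto
// the gamma-0.45 curve.
double Bt709Oetf(double L) {
    if (L < 0.018)
        return 4.500 * L;
    return 1.099 * std::pow(L, 0.45) - 0.099;
}
```

The two segments meet near V = 0.081 at L = 0.018, which is what makes the spliced curve continuous.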
Describes the conversion matrices between Y'PbPr (component video) and studio R'G'B'.
-This enumeration is used with the
For more information about these values, see the remarks for the DXVA2_VideoTransferMatrix enumeration, which is the DirectX Video Acceleration (DXVA) equivalent of this enumeration.
-Unknown transfer matrix. Treat as
ITU-R BT.709 transfer matrix.
ITU-R BT.601 transfer matrix. Also used for SMPTE 170 and ITU-R BT.470-2 System B,G.
SMPTE 240M transfer matrix.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
Reserved.
Reserved. This member forces the enumeration type to compile as a DWORD value.
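Each transfer matrix is determined by its two luma coefficients. A standalone sketch of the R'G'B' to Y'PbPr conversion parameterized that way (plain C++; the helper name is illustrative, the coefficient pairs are the published BT.709 and BT.601 values):

```cpp
#include <array>
#include <cmath>

// Non-linear R'G'B' in [0, 1] to Y'PbPr for a transfer matrix given by its
// luma coefficients. BT.709: kr = 0.2126, kb = 0.0722; BT.601: kr = 0.299,
// kb = 0.114. SMPTE 240M uses yet another pair.
std::array<double, 3> RgbToYPbPr(double r, double g, double b,
                                 double kr, double kb) {
    double kg = 1.0 - kr - kb;
    double y  = kr * r + kg * g + kb * b;      // luma
    double pb = (b - y) / (2.0 * (1.0 - kb));  // blue-difference, [-0.5, 0.5]
    double pr = (r - y) / (2.0 * (1.0 - kr));  // red-difference, [-0.5, 0.5]
    return { y, pb, pr };
}
```

Neutral grays produce Pb = Pr = 0 under any matrix, which is why choosing the wrong matrix shifts colors but leaves grays intact.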
Defines messages for an enhanced video renderer (EVR) presenter. This enumeration is used with the
Contains flags that specify how to convert an audio media type.
-Convert the media type to a
Convert the media type to a
Provides configuration information to the dispatching thread for a callback.
-The GetParameters method returns information about the callback so that the dispatching thread can optimize the process that it uses to invoke the callback.
If the method returns a value other than zero in the pdwFlags parameter, your Invoke method must meet the requirements described here. Otherwise, the callback might delay the pipeline.
If you want default values for both parameters, return E_NOTIMPL. The default values are given in the parameter descriptions on this page.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Receives a flag indicating the behavior of the callback object's
Value | Meaning |
---|---|
| The callback does not take a long time to complete, but has no specific restrictions on what system calls it makes. The callback generally takes less than 30 milliseconds to complete. |
The callback does very minimal processing. It takes less than 1 millisecond to complete. The callback must be invoked from one of the following work queues: | |
Implies The callback must be invoked from one of the following work queues: | |
Blocking callback. | |
Reply callback. |
Receives the identifier of the work queue on which the callback is dispatched.
This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
If the work queue is not compatible with the value returned in pdwFlags, the Media Foundation platform returns
Creates the default video presenter for the enhanced video renderer (EVR).
-Pointer to the owner of the object. If the object is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the video device interface that will be used for processing the video. Currently the only supported value is IID_IDirect3DDevice9.
IID of the requested interface on the video presenter. The video presenter exposes the
Receives a reference to the requested interface on the video presenter. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates the default video mixer for the enhanced video renderer (EVR).
-Pointer to the owner of this object. If the object is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the video device interface that will be used for processing the video. Currently the only supported value is IID_IDirect3DDevice9.
IID of the requested interface on the video mixer. The video mixer exposes the
Receives a reference to the requested interface. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the default video mixer and video presenter for the enhanced video renderer (EVR).
-Pointer to the owner of the video mixer. If the mixer is aggregated, pass a reference to the aggregating object's
Pointer to the owner of the video presenter. If the presenter is aggregated, pass a reference to the aggregating object's
Interface identifier (IID) of the requested interface on the video mixer. The video mixer exposes the
Receives a reference to the requested interface on the video mixer. The caller must release the interface.
IID of the requested interface on the video presenter. The video presenter exposes the
Receives a reference to the requested interface on the video presenter. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates an instance of the enhanced video renderer (EVR) media sink.
-Interface identifier (IID) of the requested interface on the EVR.
Receives a reference to the requested interface. The caller must release the interface.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
This function creates the Media Foundation version of the EVR. To create the DirectShow EVR filter, call CoCreateInstance with the class identifier CLSID_EnhancedVideoRenderer.
-Creates a media sample that manages a Direct3D surface.
- A reference to the
Receives a reference to the sample's
If this function succeeds, it returns
The media sample created by this function exposes the following interfaces in addition to
If pUnkSurface is non-
Alternatively, you can set pUnkSurface to
Creates an object that allocates video samples.
-The identifier of the interface to retrieve. Specify one of the following values:
Value | Meaning |
---|---|
| Retrieve an |
| Retrieve an |
| Retrieve an |
Receives a reference to the requested interface. The caller must release the interface.
If the function succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a new instance of the MFPlay player object.
-If this function succeeds, it returns
Before calling this function, call CoInitialize(Ex) from the same thread to initialize the COM library.
Internally,
Creates the ASF Header Object object.
-The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF profile object.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an ASF profile object from a presentation descriptor.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a presentation descriptor from an ASF profile object.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Splitter.
-The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Multiplexer.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF Indexer object.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates a byte stream to access the index in an ASF stream.
-Pointer to the
Byte offset of the index within the ASF stream. To get this value, call
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The call succeeded. |
| The offset specified in cbIndexStartOffset is invalid. |
Creates the ASF stream selector.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates the ASF media sink.
-Pointer to a byte stream that will be used to write the ASF stream.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object that can be used to create the ASF media sink.
-Null-terminated wide-character string that contains the output file name.
A reference to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Creates an activation object that can be used to create a Windows Media Video (WMV) encoder.
-A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates an activation object that can be used to create a Windows Media Audio (WMA) encoder.
- A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates an activation object for the ASF streaming sink.
The ASF streaming sink enables an application to write streaming Advanced Systems Format (ASF) packets to an HTTP byte stream.
-A reference to a byte stream object in which the ASF media sink writes the streamed content.
Receives a reference to the
If this function succeeds, it returns
To create the ASF streaming sink in another process, call
An application can get a reference to the ASF ContentInfo Object by calling IUnknown::QueryInterface on the media sink object received in the ppIMediaSink parameter. The ContentInfo object is used to set the encoder configuration settings, provide stream properties supplied by an ASF profile, and add metadata information. These configuration settings populate the various ASF header objects of the encoded ASF file. For more information, see Setting Properties in the ContentInfo Object.
-Creates an activation object for the ASF streaming sink.
The ASF streaming sink enables an application to write streaming Advanced Systems Format (ASF) packets to an HTTP byte stream. The activation object can be used to create the ASF streaming sink in another process.
-A reference to the
A reference to an ASF ContentInfo Object that contains the properties that describe the ASF content. These settings can contain stream settings, encoding properties, and metadata. For more information about these properties, see Setting Properties in the ContentInfo Object.
Receives a reference to the
If this function succeeds, it returns
Starting in Windows 7, Media Foundation provides an ASF streaming sink that writes the content in a live streaming scenario. This function should be used in secure transcode scenarios where this media sink needs to be created and configured in the remote process. Like the ASF file sink, the new media sink performs ASF-related tasks such as writing the ASF header and generating data packets (muxing). The content is written to a caller-implemented byte stream, such as an HTTP byte stream. The caller must also provide an activation object that the media sink can use to create the byte stream remotely.
In addition, it performs transcryption for streaming protected content. It hosts the Windows Media Digital Rights Management (DRM) for Network Devices Output Trust Authority (OTA) that handles the license request and response. For more information, see
The new media sink does not perform any time adjustments. If the clock seeks, the timestamps are not changed.
-Initializes Microsoft Media Foundation.
-Version number. Use the value
This parameter is optional when using C++ but required in C. The value must be one of the following flags:
Value | Meaning |
---|---|
| Do not initialize the sockets library. |
| Equivalent to MFSTARTUP_NOSOCKET. |
| Initialize the entire Media Foundation platform. This is the default value when dwFlags is not specified. |
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Version parameter requires a newer version of Media Foundation than the version that is running. |
| The Media Foundation platform is disabled because the system was started in "Safe Mode" (fail-safe boot). |
| Media Foundation is not implemented on the system. This error can occur if the media components are not present (See KB2703761 for more info). |
An application must call this function before using Media Foundation. Before your application quits, call
Do not call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Shuts down the Microsoft Media Foundation platform. Call this function once for every call to
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Blocks the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function prevents work queue threads from being shut down when
This function holds a lock on the Media Foundation platform. To unlock the platform, call
The
The default implementation of the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks the Media Foundation platform after it was locked by a call to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
The application must call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
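The startup and lock calls above are balanced, count-based operations. A loose model of why every MFLockPlatform needs a matching MFUnlockPlatform (plain C++; the class, its members, and the single shared counter are assumptions for illustration, not the real implementation):

```cpp
// Count-based model of platform lifetime: startup and lock calls each raise
// an outstanding-use count, and the platform's worker threads stay alive
// while the count is nonzero. A simplification for illustration only.
class PlatformModel {
public:
    void Startup()  { ++count_; }
    void Shutdown() { if (count_ > 0) --count_; }
    void Lock()     { ++count_; }
    void Unlock()   { if (count_ > 0) --count_; }
    bool WorkQueuesAlive() const { return count_ > 0; }
private:
    int count_ = 0;
};
```

In this model an outstanding lock keeps the work queues alive even after a shutdown call, which mirrors the stated purpose of the platform lock; an unreleased lock would keep them alive indefinitely.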
Puts an asynchronous operation on a work queue.
- The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
A reference to the
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue. For more information, see |
| The |
This function creates an asynchronous result object and puts the result object on the work queue. The work queue calls the
Puts an asynchronous operation on a work queue, with a specified priority.
- The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
The priority of the work item. Work items are performed in order of priority.
A reference to the
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. |
| The |
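The priority parameter above orders dispatch. A standalone sketch of priority-ordered dispatch with FIFO tie-breaking (plain C++; the class name, the higher-value-first convention, and the tie-breaking rule are assumptions for illustration, not a statement about Media Foundation internals):

```cpp
#include <queue>
#include <tuple>
#include <vector>

// Work items keyed by (priority, submission order): higher priority values
// dispatch first, and equal priorities fall back to FIFO order.
class PrioritizedQueue {
public:
    void Put(int priority, int itemId) {
        // Negate the sequence number so earlier submissions win ties.
        q_.push(std::make_tuple(priority, -seq_++, itemId));
    }
    int TakeNext() {
        int id = std::get<2>(q_.top());
        q_.pop();
        return id;
    }
    bool Empty() const { return q_.empty(); }
private:
    long seq_ = 0;
    std::priority_queue<std::tuple<int, long, int>> q_;
};
```

Storing the negated sequence number lets the default lexicographic tuple comparison handle both the priority ordering and the FIFO tie-break in one step.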
Puts an asynchronous operation on a work queue.
-The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. For more information, see |
| The |
To invoke the work-item, this function passes pResult to the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Puts an asynchronous operation on a work queue, with a specified priority.
- The identifier for the work queue. This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For a list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
The priority of the work item. Work items are performed in order of priority.
A reference to the
Returns an
Return code | Description |
---|---|
| Success. |
| Invalid work queue identifier. |
| The |
To invoke the work item, this function passes pResult to the
Queues a work item that waits for an event to be signaled.
-A handle to an event object. To create an event object, call CreateEvent or CreateEventEx.
The priority of the work item. Work items are performed in order of priority.
A reference to the
Receives a key that can be used to cancel the wait. To cancel the wait, call
If this function succeeds, it returns
This function enables a component to wait for an event without blocking the current thread.
The function puts a work item on the specified work queue. This work item waits for the event given in hEvent to be signaled. When the event is signaled, the work item invokes a callback. (The callback is contained in the result object given in pResult. For more information, see
The work item is dispatched on a work queue by the
Do not use any of the following work queues:
Creates a work queue that is guaranteed to serialize work items. The serial work queue wraps an existing multithreaded work queue. The serial work queue enforces a first-in, first-out (FIFO) execution order.
-The identifier of an existing work queue. This must be either a multithreaded queue or another serial work queue. Any of the following can be used:
Receives an identifier for the new serial work queue. Use this identifier when queuing work items.
This function can return one of these values.
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| The application did not call |
When you are done using the work queue, call
Multithreaded queues use a thread pool, which can reduce the total number of threads in the pipeline. However, they do not serialize work items. A serial work queue enables the application to get the benefits of the thread pool, without needing to perform manual serialization of its own work items.
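The serialization contract can be sketched without any Media Foundation types (plain C++; SerialQueue and its members are illustrative names): the wrapper releases at most one item at a time to the underlying pool, in submission order.

```cpp
#include <functional>
#include <queue>

// A serial wrapper over a (notional) multithreaded pool: at most one work
// item is outstanding at a time, and pending items run in FIFO order, so
// the caller never needs its own locking between items.
class SerialQueue {
public:
    void Put(std::function<void()> item) {
        pending_.push(std::move(item));
        MaybeDispatch();
    }
    // Invoked by the completion path of the item that just finished.
    void OnItemDone() {
        inFlight_ = false;
        MaybeDispatch();
    }
private:
    void MaybeDispatch() {
        if (inFlight_ || pending_.empty()) return;
        inFlight_ = true;
        std::function<void()> item = std::move(pending_.front());
        pending_.pop();
        item();   // a real pool would hand this to a worker thread
    }
    std::queue<std::function<void()>> pending_;
    bool inFlight_ = false;
};
```

The key property is the in-flight flag: a second item is never started until the first item's completion path runs, regardless of how many pool threads are free.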
Schedules an asynchronous operation to be completed after a specified interval.
-Pointer to the
Time-out interval, in milliseconds. Set this parameter to a negative value. The callback is invoked after -Timeout milliseconds. For example, if Timeout is -5000, the callback is invoked after 5000 milliseconds.
Receives a key that can be used to cancel the timer. To cancel the timer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the timer interval elapses, the timer calls
Schedules an asynchronous operation to be completed after a specified interval.
-Pointer to the
Pointer to the
Time-out interval, in milliseconds. Set this parameter to a negative value. The callback is invoked after -Timeout milliseconds. For example, if Timeout is -5000, the callback is invoked after 5000 milliseconds.
Receives a key that can be used to cancel the timer. To cancel the timer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function creates an asynchronous result object. When the timer interval elapses, the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Attempts to cancel an asynchronous operation that was scheduled with
If this function succeeds, it returns
Because work items are asynchronous, the work-item callback might still be invoked after
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the timer interval for the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Sets a callback function to be called at a fixed interval.
-Pointer to the callback function, of type MFPERIODICCALLBACK.
Pointer to a caller-provided object that implements
Receives a key that can be used to cancel the callback. To cancel the callback, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To get the timer interval for the periodic callback, call
Cancels a callback function that was set by the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The callback is dispatched on another thread, and this function does not attempt to synchronize with the callback thread. Therefore, it is possible for the callback to be invoked after this function returns.
-Creates a new work queue. This function extends the capabilities of the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| Invalid argument. |
| The application did not call |
When you are done using the work queue, call
The
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates a new work queue.
-Receives an identifier for the work queue.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The application exceeded the maximum number of work queues. |
| The application did not call |
When you are done using the work queue, call
Locks a work queue.
-The identifier for the work queue. The identifier is returned by the
If this function succeeds, it returns
This function prevents the
Call
Note??The
Unlocks a work queue.
-Identifier for the work queue to be unlocked. The identifier is returned by the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
The application must call
Associates a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-The identifier of the work queue. For private work queues, the identifier is returned by the
The name of the MMCSS task. For more information, see Multimedia Class Scheduler Service.
The unique task identifier. To obtain a new task identifier, set this value to zero.
A reference to the
A reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS task, call
Associates a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-The identifier of the work queue. For private work queues, the identifier is returned by the
The name of the MMCSS task. For more information, see Multimedia Class Scheduler Service.
The unique task identifier. To obtain a new task identifier, set this value to zero.
The base relative priority for the work-queue threads. For more information, see AvSetMmThreadPriority.
A reference to the
A reference to the
If this function succeeds, it returns
This function extends the
This function is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS task, call
Completes an asynchronous request to associate a work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The unique task identifier.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
To unregister the work queue from the MMCSS class, call
Unregisters a work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-The identifier of the work queue. For private work queues, the identifier is returned by the
Pointer to the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function unregisters a work queue that was associated with an MMCSS class through the
This function is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister a work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class currently associated with this work queue.
-Identifier for the work queue. The identifier is retrieved by the
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The pwszClass buffer is too small to receive the task name. |
If the work queue is not associated with an MMCSS task, the function retrieves an empty string.
To associate a work queue with an MMCSS task, call
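The size negotiation described above is the common Win32 two-call pattern. A portable sketch (plain C++; QueryMmcssClass and its boolean return are illustrative, standing in for MFGetWorkQueueMMCSSClass and its buffer-too-small error code):

```cpp
#include <cstring>
#include <string>

// Two-call buffer protocol: the required character count includes the
// terminating null, and a null or too-small buffer fails while reporting
// the size the caller must allocate for the second call.
bool QueryMmcssClass(const std::string& stored, char* buffer, int* count) {
    int needed = static_cast<int>(stored.size()) + 1;   // + null terminator
    if (buffer == nullptr || *count < needed) {
        *count = needed;    // on output: required size, in characters
        return false;       // stands in for the buffer-too-small error
    }
    std::memcpy(buffer, stored.c_str(), needed);
    *count = needed;
    return true;
}
```

Callers typically invoke the function once with a null buffer to learn the required size, allocate, and then call again with the sized buffer.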
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier currently associated with this work queue.
-Identifier for the work queue. The identifier is retrieved by the
Receives the task identifier.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To associate a work queue with an MMCSS task, call
Registers the standard Microsoft Media Foundation platform work queues with the Multimedia Class Scheduler Service (MMCSS).
-The name of the MMCSS task.
The MMCSS task identifier. On input, specify an existing MMCSS task group ID, or use the value zero to create a new task group. On output, receives the actual task group ID.
The base priority of the work-queue threads.
If this function succeeds, it returns
To unregister the platform work queues from the MMCSS class, call
Unregisters the Microsoft Media Foundation platform work queues from a Multimedia Class Scheduler Service (MMCSS) task.
-If this function succeeds, it returns
Obtains and locks a shared work queue.
-The name of the MMCSS task.
The base priority of the work-queue threads. If the regular-priority queue is being used (wszClass=""), then the value 0 must be passed in.
The MMCSS task identifier. On input, specify an existing MMCSS task group ID, or use the value zero to create a new task group. If the regular priority queue is being used (wszClass=""), then
Receives an identifier for the new work queue. Use this identifier when queuing work items.
If this function succeeds, it returns
A multithreaded work queue uses a thread pool to dispatch work items. Whenever a thread becomes available, it dequeues the next work item from the queue. Work items are dequeued in first-in-first-out order, but work items are not serialized. In other words, the work queue does not wait for a work item to complete before it starts the next work item.
Within a single process, the Microsoft Media Foundation platform creates up to one multithreaded queue for each Multimedia Class Scheduler Service (MMCSS) task. The
The
If the regular priority queue is being used (wszClass=""), then
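The dispatch model described above (FIFO dequeue, but no serialization between items) can be sketched with a minimal thread pool. This is an illustration, not the Media Foundation implementation: items leave the queue in submission order, yet one item may still be running when the next one starts.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// Minimal sketch of a multithreaded FIFO work queue: several worker
// threads share one queue. Dequeue order is first-in-first-out (recorded
// under the queue lock), but execution is concurrent, not serialized.
class FifoWorkQueue {
public:
    FifoWorkQueue(unsigned threads, std::vector<int>& dequeueOrder)
        : order_(dequeueOrder) {
        for (unsigned i = 0; i < threads; ++i)
            workers_.emplace_back([this] { Run(); });
    }
    ~FifoWorkQueue() {  // drains remaining items, then joins the pool
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void Put(int id, std::function<void()> item) {
        { std::lock_guard<std::mutex> lk(m_); q_.emplace(id, std::move(item)); }
        cv_.notify_one();
    }
private:
    void Run() {
        for (;;) {
            std::pair<int, std::function<void()>> item;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                if (q_.empty()) return;        // done and fully drained
                item = std::move(q_.front());
                q_.pop();
                order_.push_back(item.first);  // records FIFO dequeue order
            }
            item.second();  // runs outside the lock, possibly concurrently
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::pair<int, std::function<void()>>> q_;
    std::vector<int>& order_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};
```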
Gets the relative thread priority of a work queue.
-The identifier of the work queue. For private work queues, the identifier is returned by the
Receives the relative thread priority.
If this function succeeds, it returns
This function returns the relative thread priority set by the
Creates an asynchronous result object. Use this function if you are implementing an asynchronous method.
-Pointer to the object stored in the asynchronous result. This reference is returned by the
Pointer to the
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
To invoke the callback specified in pCallback, call the
Invokes a callback method to complete an asynchronous operation.
-Pointer to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Invalid work queue. For more information, see |
| The |
If you are implementing an asynchronous method, use this function to invoke the caller's
The callback is invoked from a Media Foundation work queue. For more information, see Writing an Asynchronous Method.
The
Creates a byte stream from a file.
- The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Pointer to a null-terminated string that contains the file name.
Receives a reference to the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a byte stream that is backed by a temporary local file.
- The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Receives a reference to the
If this function succeeds, it returns
This function creates a file in the system temporary folder, and then returns a byte stream object for that file. The full path name of the file is stored in the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Begins an asynchronous request to create a byte stream from a file.
-The requested access mode, specified as a member of the
The behavior of the function if the file already exists or does not exist, specified as a member of the
Bitwise OR of values from the
Pointer to a null-terminated string containing the file name.
Pointer to the
Pointer to the
Receives an
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the request is completed, the callback object's
Completes an asynchronous request to create a byte stream from a file.
-Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Call this function when the
Cancels an asynchronous request to create a byte stream from a file.
-A reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
You can use this function to cancel a previous call to
Allocates system memory and creates a media buffer to manage it.
-Size of the buffer, in bytes.
Receives a reference to the
The function allocates a buffer with a 1-byte memory alignment. To allocate a buffer that is aligned to a larger memory boundary, call
When the media buffer object is destroyed, it releases the allocated memory.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a media buffer that wraps an existing media buffer. The new media buffer points to the same memory as the original media buffer, or to an offset from the start of the memory.
-A reference to the
The start of the new buffer, as an offset in bytes from the start of the original buffer.
The size of the new buffer. The value of cbOffset + dwLength must be less than or equal to the size of the valid data in the original buffer. (The size of the valid data is returned by the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The requested offset or the requested length is not valid. |
The maximum size of the wrapper buffer is limited to the size of the valid data in the original buffer. This might be less than the allocated size of the original buffer. To set the size of the valid data, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
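The validity rule for the wrapper buffer reduces to a bounds check. A minimal sketch (the function name is invented; the real check happens inside the API) also shows why the sum should be computed in 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the rule stated above: the wrapped region
// [cbOffset, cbOffset + dwLength) must lie inside the valid data of the
// original buffer. Promoting to 64-bit avoids wrap-around when
// cbOffset + dwLength overflows 32 bits.
bool WrapperRangeValid(uint32_t cbOffset, uint32_t dwLength,
                       uint32_t cbCurrentLength) {
    return static_cast<uint64_t>(cbOffset) + dwLength <= cbCurrentLength;
}
```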
Converts a Media Foundation media buffer into a buffer that is compatible with DirectX Media Objects (DMOs).
-Pointer to the
Pointer to the
Offset in bytes from the start of the Media Foundation buffer. This offset defines where the DMO buffer starts. If this parameter is zero, the DMO buffer starts at the beginning of the Media Foundation buffer.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Invalid argument. The pIMFMediaBuffer parameter must not be |
The DMO buffer created by this function also exposes the
If the Media Foundation buffer specified by pIMFMediaBuffer exposes the
Converts a Microsoft Direct3D 9 format identifier to a Microsoft DirectX Graphics Infrastructure (DXGI) format identifier.
-The D3DFORMAT value or FOURCC code to convert.
Returns a
Converts a Microsoft DirectX Graphics Infrastructure (DXGI) format identifier to a Microsoft Direct3D 9 format identifier.
-The
Returns a D3DFORMAT value or FOURCC code.
Locks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
This function obtains a reference to a DXGI Device Manager instance that can be shared between components. The Microsoft Media Foundation platform creates this instance of the DXGI Device Manager as a singleton object. Alternatively, you can create a new DXGI Device Manager by calling
The first time this function is called, the Media Foundation platform creates the shared DXGI Device Manager.
When you are done using the
Unlocks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-If this function succeeds, it returns
Call this function after a successful call to the
Creates a media buffer object that manages a Direct3D 9 surface.
-Identifies the type of Direct3D 9 surface. Currently this value must be IID_IDirect3DSurface9.
A reference to the
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
This function creates a media buffer object that holds a reference to the Direct3D surface specified in punkSurface. Locking the buffer gives the caller access to the surface memory. When the buffer object is destroyed, it releases the surface. For more information about media buffers, see Media Buffers.
Note: This function does not allocate the Direct3D surface itself. The buffer object created by this function also exposes the
This function does not support DXGI surfaces.
-Creates a media buffer object that manages a Windows Imaging Component (WIC) bitmap.
-Set this parameter to __uuidof(
.
A reference to the
Receives a reference to the
If this function succeeds, it returns
Creates a media buffer to manage a Microsoft DirectX Graphics Infrastructure (DXGI) surface.
-Identifies the type of DXGI surface. This value must be IID_ID3D11Texture2D.
A reference to the
The zero-based index of a subresource of the surface. The media buffer object is associated with this subresource.
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
If this function succeeds, it returns
The returned buffer object supports the following interfaces:
Creates an object that allocates video samples that are compatible with Microsoft DirectX Graphics Infrastructure (DXGI).
-The identifier of the interface to retrieve. Specify one of the following values.
Value | Meaning |
---|---|
| Retrieve an |
| Retrieve an |
| Retrieve an |
| Retrieve an |
Receives a reference to the requested interface. The caller must release the interface.
If this function succeeds, it returns
This function creates an allocator for DXGI video surfaces. The buffers created by this allocator expose the
Creates an instance of the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
- Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
When you create an
Allocates system memory with a specified byte alignment and creates a media buffer to manage the memory.
-Size of the buffer, in bytes.
Specifies the memory alignment for the buffer. Use one of the following constants.
Value | Meaning |
---|---|
| Align to 1 byte. |
| Align to 2 bytes. |
| Align to 4 bytes. |
| Align to 8 bytes. |
| Align to 16 bytes. |
| Align to 32 bytes. |
| Align to 64 bytes. |
| Align to 128 bytes. |
| Align to 256 bytes. |
| Align to 512 bytes. |
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
When the media buffer object is destroyed, it releases the allocated memory.
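In mfapi.h the MF_*_ALIGNMENT constants listed above are defined as the alignment minus one (MF_1_BYTE_ALIGNMENT is 0, MF_16_BYTE_ALIGNMENT is 15, and so on), which makes rounding a size up to the requested boundary a simple mask operation. A small sketch, with the 16-byte constant reproduced so the example is self-contained:

```cpp
#include <cassert>
#include <cstdint>

// From mfapi.h: the alignment constants are the alignment minus one.
constexpr uint32_t MF_16_BYTE_ALIGNMENT = 0x0000000F;

// Round size up to the boundary encoded by an (alignment - 1) constant.
constexpr uint32_t AlignUp(uint32_t size, uint32_t alignmentMinusOne) {
    return (size + alignmentMinusOne) & ~alignmentMinusOne;
}
```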
Creates a media event object.
-The event type. See
The extended type. See
The event status. See
The value associated with the event, if any. See
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an event queue.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
This function creates a helper object that you can use to implement the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty media sample.
-Receives a reference to the
Initially the sample does not contain any media buffers.
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty attribute store.
-Receives a reference to the
The initial number of elements allocated for the attribute store. The attribute store grows as needed.
If this function succeeds, it returns
Attributes are used throughout Microsoft Media Foundation to configure objects, describe media formats, query object properties, and other purposes. For more information, see Attributes in Media Foundation.
For a complete list of all the defined attribute GUIDs in Media Foundation, see Media Foundation Attributes.
Initializes the contents of an attribute store from a byte array.
-Pointer to the
Pointer to the array that contains the initialization data.
Size of the pBuf array, in bytes.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The buffer is not valid. |
Use this function to deserialize an attribute store that was serialized with the
This function deletes any attributes that were previously stored in pAttributes.
Retrieves the size of the buffer needed for the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Use this function to find the size of the array that is needed for the
Converts the contents of an attribute store to a byte array.
-Pointer to the
Pointer to an array that receives the attribute data.
Size of the pBuf array, in bytes. To get the required size of the buffer, call
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The buffer given in pBuf is too small. |
The function skips any attributes with
To convert the byte array back into an attribute store, call
To write an attribute store to a stream, call the
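The three functions above form a size-query/serialize/deserialize round trip. The sketch below illustrates that pattern with an invented byte layout (length-prefixed key, 32-bit value); the real functions use Media Foundation's own blob format, which is not documented here.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Illustrative round trip for the serialize/deserialize pattern described
// above. The wire format here is invented for the sketch.
using Attrs = std::map<std::string, uint32_t>;

std::vector<uint8_t> to_blob(const Attrs& a) {
    std::vector<uint8_t> buf;
    for (const auto& [key, value] : a) {
        const uint32_t klen = static_cast<uint32_t>(key.size());
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&klen);
        buf.insert(buf.end(), p, p + 4);             // key length prefix
        buf.insert(buf.end(), key.begin(), key.end());
        p = reinterpret_cast<const uint8_t*>(&value);
        buf.insert(buf.end(), p, p + 4);             // 32-bit value
    }
    return buf;
}

Attrs from_blob(const std::vector<uint8_t>& buf) {
    Attrs a;
    size_t pos = 0;
    while (pos + 4 <= buf.size()) {
        uint32_t klen;
        std::memcpy(&klen, &buf[pos], 4); pos += 4;
        std::string key(buf.begin() + pos, buf.begin() + pos + klen);
        pos += klen;
        uint32_t value;
        std::memcpy(&value, &buf[pos], 4); pos += 4;
        a[key] = value;
    }
    return a;
}
```

As with the real API, deserializing replaces whatever was in the destination store.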
Adds information about a Media Foundation transform (MFT) to the registry.
Applications can enumerate the MFT by calling the
If this function succeeds, it returns
The registry entries created by this function are read by the following functions:
Function | Description |
---|---|
| Enumerates MFTs by media type and category. |
| Extended version of |
| Looks up an MFT by CLSID and retrieves the registry information. |
This function does not register the CLSID of the MFT for the CoCreateInstance or CoGetClassObject functions.
To remove the entries from the registry, call
The formats given in the pInputTypes and pOutputTypes parameters are intended to help applications search for MFTs by format. Applications can use the
It is recommended to specify at least one input type in pInputTypes and one output type in the pOutputTypes parameter. Otherwise, the MFT might be skipped in the enumeration.
On 64-bit Windows, the 32-bit version of this function registers the MFT in the 32-bit node of the registry. For more information, see 32-bit and 64-bit Application Data in the Registry.
-Unregisters a Media Foundation transform (MFT).
-The CLSID of the MFT.
If this function succeeds, it returns
This function removes the registry entries created by the
It is safe to call
Registers a Media Foundation transform (MFT) in the caller's process.
-A reference to the
A
A wide-character null-terminated string that contains the friendly name of the MFT.
A bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
The number of elements in the pInputTypes array.
A reference to an array of
The number of elements in the pOutputTypes array.
A reference to an array of
If this function succeeds, it returns
The primary purpose of this function is to make an MFT available for automatic topology resolution without making the MFT available to other processes or applications.
After you call this function, the MFT can be enumerated by calling the
The pClassFactory parameter specifies a class factory object that creates the MFT. The class factory's IClassFactory::CreateInstance method must return an object that supports the
To unregister the MFT from the current process, call
If you need to register an MFT in the Protected Media Path (PMP) process, use the
Unregisters one or more Media Foundation transforms (MFTs) from the caller's process.
-A reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT specified by the pClassFactory parameter was not registered in this process. |
Use this function to unregister a local MFT that was previously registered through the
If the pClassFactory parameter is
Registers a Media Foundation transform (MFT) in the caller's process.
-The class identifier (CLSID) of the MFT.
A
A wide-character null-terminated string that contains the friendly name of the MFT.
A bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
The number of elements in the pInputTypes array.
A reference to an array of
The number of elements in the pOutputTypes array.
A reference to an array of
If this function succeeds, it returns
The primary purpose of this function is to make an MFT available for automatic topology resolution without making the MFT available to other processes or applications.
After you call this function, the MFT can be enumerated by calling the
To unregister the MFT from the current process, call
If you need to register an MFT in the Protected Media Path (PMP) process, use the
Unregisters a Media Foundation transform (MFT) from the caller's process.
-The class identifier (CLSID) of the MFT.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT specified by the clsidMFT parameter was not registered in this process. |
Use this function to unregister a local MFT that was previously registered through the
Enumerates Media Foundation transforms (MFTs) in the registry.
Starting in Windows 7, applications should use the
If this function succeeds, it returns
This function returns a list of all the MFTs in the specified category that match the search criteria given by the pInputType, pOutputType, and pAttributes parameters. Any of those parameters can be
If no MFTs match the criteria, the method succeeds but returns the value zero in pcMFTs.
-Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. This function extends the
If this function succeeds, it returns
The Flags parameter controls which MFTs are enumerated, and the order in which they are returned. The flags for this parameter fall into several groups.
The first set of flags specifies how an MFT processes data.
Flag | Description |
---|---|
| The MFT performs synchronous data processing in software. This is the original MFT processing model, and is compatible with Windows Vista. |
| The MFT performs asynchronous data processing in software. This processing model requires Windows 7. For more information, see Asynchronous MFTs. |
| The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously. For more information, see Hardware MFTs. |
Every MFT falls into exactly one of these categories. To enumerate a category, set the corresponding flag in the Flags parameter. You can combine these flags to enumerate more than one category. If none of these flags is specified, the default category is synchronous MFTs (
Next, the following flags include MFTs that are otherwise excluded from the results. By default, MFTs that match these criteria are excluded from the results. Use any of these flags to include them.
Flag | Description |
---|---|
| Include MFTs that must be unlocked by the application. |
| Include MFTs that are registered in the caller's process through either the |
| Include MFTs that are optimized for transcoding rather than playback. |
The last flag is used to sort and filter the results:
Flag | Description |
---|---|
| Sort and filter the results. |
If the
If you do not set the
Setting the Flags parameter to zero is equivalent to using the value
Setting Flags to
If no MFTs match the search criteria, the function returns
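The flag groups above combine with bitwise OR. The values below are reproduced from the _MFT_ENUM_FLAG enumeration in mfapi.h so the sketch is self-contained; note that the "all" value covers the six inclusion/category flags but not the sort-and-filter flag.

```cpp
#include <cassert>
#include <cstdint>

// Flag values from the _MFT_ENUM_FLAG enumeration in mfapi.h.
enum : uint32_t {
    MFT_ENUM_FLAG_SYNCMFT        = 0x00000001,  // synchronous software MFTs
    MFT_ENUM_FLAG_ASYNCMFT       = 0x00000002,  // asynchronous software MFTs
    MFT_ENUM_FLAG_HARDWARE       = 0x00000004,  // hardware MFTs
    MFT_ENUM_FLAG_FIELDOFUSE     = 0x00000008,  // must be unlocked by the app
    MFT_ENUM_FLAG_LOCALMFT       = 0x00000010,  // registered in this process
    MFT_ENUM_FLAG_TRANSCODE_ONLY = 0x00000020,  // optimized for transcoding
    MFT_ENUM_FLAG_SORTANDFILTER  = 0x00000040,  // sort and filter the results
    MFT_ENUM_FLAG_ALL            = 0x0000003F,  // every category/inclusion flag
};

// Example: enumerate both software categories plus hardware MFTs.
constexpr uint32_t kSoftwareAndHardware =
    MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_ASYNCMFT | MFT_ENUM_FLAG_HARDWARE;
```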
Gets a list of Microsoft Media Foundation transforms (MFTs) that match specified search criteria. This function extends the
If this function succeeds, it returns
The Flags parameter controls which MFTs are enumerated, and the order in which they are returned. The flags for this parameter fall into several groups.
The first set of flags specifies how an MFT processes data.
Flag | Description |
---|---|
| The MFT performs synchronous data processing in software. This is the original MFT processing model, and is compatible with Windows Vista. |
| The MFT performs asynchronous data processing in software. This processing model requires Windows 7. For more information, see Asynchronous MFTs. |
| The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously. For more information, see Hardware MFTs. |
Every MFT falls into exactly one of these categories. To enumerate a category, set the corresponding flag in the Flags parameter. You can combine these flags to enumerate more than one category. If none of these flags is specified, the default category is synchronous MFTs (
Next, the following flags include MFTs that are otherwise excluded from the results. By default, MFTs that match these criteria are excluded from the results. Use any of these flags to include them.
Flag | Description |
---|---|
| Include MFTs that must be unlocked by the application. |
| Include MFTs that are registered in the caller's process through either the |
| Include MFTs that are optimized for transcoding rather than playback. |
The last flag is used to sort and filter the results:
Flag | Description |
---|---|
| Sort and filter the results. |
If the
If you do not set the
Setting the Flags parameter to zero is equivalent to using the value
Setting Flags to
If no MFTs match the search criteria, the function returns
Gets information from the registry about a Media Foundation transform (MFT).
-The CLSID of the MFT.
Receives a reference to a wide-character string containing the friendly name of the MFT. The caller must free the string by calling CoTaskMemFree. This parameter can be
Receives a reference to an array of
Receives the number of elements in the ppInputTypes array. If ppInputTypes is
Receives a reference to an array of
Receives the number of elements in the ppOutputType array. If ppOutputTypes is
Receives a reference to the
This parameter can be
If this function succeeds, it returns
Gets a reference to the Microsoft Media Foundation plug-in manager.
-Receives a reference to the
If this function succeeds, it returns
Gets the merit value of a hardware codec.
-A reference to the
The size, in bytes, of the verifier array.
The address of a buffer that contains one of the following:
Receives the merit value.
If this function succeeds, it returns
The function fails if the MFT does not represent a hardware device with a valid Output Protection Manager (OPM) certificate.
-Registers a scheme handler in the caller's process.
-A string that contains the scheme. The scheme includes the trailing ':' character; for example, "http:".
A reference to the
If this function succeeds, it returns
Scheme handlers are used in Microsoft Media Foundation during the source resolution process, which creates a media source from a URL. For more information, see Scheme Handlers and Byte-Stream Handlers.
Within a process, local scheme handlers take precedence over scheme handlers that are registered in the registry. Local scheme handlers are not visible to other processes.
Use this function if you want to register a custom scheme handler for your application, but do not want the handler available to other applications.
-Registers a byte-stream handler in the caller's process.
-A string that contains the file name extension for this handler.
A string that contains the MIME type for this handler.
A reference to the
If this function succeeds, it returns
Byte-stream handlers are used in Microsoft Media Foundation during the source resolution process, which creates a media source from a URL. For more information, see Scheme Handlers and Byte-Stream Handlers.
Within a process, local byte-stream handlers take precedence over byte-stream handlers that are registered in the registry. Local byte-stream handlers are not visible to other processes.
Use this function if you want to register a custom byte-stream handler for your application, but do not want the handler available to other applications.
Either szFileExtension or szMimeType can be
Creates a wrapper for a byte stream.
-A reference to the
Receives a reference to the
If this function succeeds, it returns
The
Creates an activation object for a Windows Runtime class.
-The class identifier that is associated with the activatable runtime class.
A reference to an optional IPropertySet object, which is used to configure the Windows Runtime class. This parameter can be
The interface identifier (IID) of the interface being requested. The activation object created by this function supports the following interfaces:
Receives a reference to the requested interface. The caller must release the interface.
If this function succeeds, it returns
To create the Windows Runtime object, call
Validates the size of a buffer for a video format block.
-Pointer to a buffer that contains the format block.
Size of the pBlock buffer, in bytes.
The function returns an
Return code | Description |
---|---|
| The buffer that contains the format block is large enough. |
| The buffer that contains the format block is too small, or the format block is not valid. |
| This function does not support the specified format type. |
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates an empty media type.
- Receives a reference to the
If this function succeeds, it returns
The media type is created without any attributes.
-[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Creates an
If this function succeeds, it returns
Converts a Media Foundation audio media type to a
Pointer to the
Receives a reference to the
Receives the size of the
Contains a flag from the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
If the wFormatTag member of the returned structure is
Retrieves the image size for a video format. Given a
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The |
Before calling this function, you must set at least the following members of the
Also, if biCompression is BI_BITFIELDS, the
This function fails if the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
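For uncompressed BI_RGB bitmaps, the image-size computation sketched below is the standard one: each row is padded to a 4-byte (DWORD) boundary, and the image size is the padded stride times the height. This is an illustration of the arithmetic, not the API itself; YUV formats use format-specific rules instead.

```cpp
#include <cassert>
#include <cstdint>

// Image size for an uncompressed BI_RGB bitmap: rows are padded to a
// 4-byte (DWORD) boundary, so the size is padded-stride * height.
uint32_t BiRgbImageSize(uint32_t widthPx, uint32_t heightPx,
                        uint32_t bitsPerPixel) {
    const uint32_t strideBytes = ((widthPx * bitsPerPixel + 31) / 32) * 4;
    return strideBytes * heightPx;
}
```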
Retrieves the image size, in bytes, for an uncompressed video format.
-Media subtype for the video format. For a list of subtypes, see Media Type GUIDs.
Width of the image, in pixels.
Height of the image, in pixels.
Receives the size of each frame, in bytes. If the format is compressed or is not recognized, the value is zero.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Converts a video frame rate into a frame duration.
-The numerator of the frame rate.
The denominator of the frame rate.
Receives the average duration of a video frame, in 100-nanosecond units.
If this function succeeds, it returns
This function is useful for calculating time stamps on a sample, given the frame rate.
Also, average time per frame is used in the older
For certain common frame rates, the function gets the frame duration from a look-up table:
Frames per second (floating point) | Frames per second (fractional) | Average time per frame |
---|---|---|
59.94 | 60000/1001 | 166833 |
29.97 | 30000/1001 | 333667 |
23.976 | 24000/1001 | 417188 |
60 | 60/1 | 166667 |
30 | 30/1 | 333333 |
50 | 50/1 | 200000 |
25 | 25/1 | 400000 |
24 | 24/1 | 416667 |
Most video content uses one of the frame rates listed here. For other frame rates, the function calculates the duration.
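For rates not in the table, the duration is 10,000,000 * denominator / numerator 100-nanosecond units, rounded. The sketch below reproduces that arithmetic; note that the table entry for 23.976 fps (417188) is a documented exception that does not match this computation.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Frame duration in 100-nanosecond units from a fractional frame rate:
// one second is 10,000,000 units, so duration = 1e7 * den / num, rounded.
uint64_t FrameDuration100ns(uint32_t numerator, uint32_t denominator) {
    return static_cast<uint64_t>(
        std::llround(10000000.0 * denominator / numerator));
}
```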
Calculates the frame rate, in frames per second, from the average duration of a video frame.
-The average duration of a video frame, in 100-nanosecond units.
Receives the numerator of the frame rate.
Receives the denominator of the frame rate.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Average time per frame is used in the older
This function uses a look-up table for certain common durations. The table is listed in the Remarks section for the
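The reverse mapping can be sketched as the look-up table from the forward function plus a fallback. The fallback here (expressing 10,000,000 / duration as a reduced fraction) is an assumption about the non-table behavior, not a documented guarantee.

```cpp
#include <cassert>
#include <cstdint>
#include <numeric>
#include <utility>

// Frame rate (numerator, denominator) from an average frame duration in
// 100-nanosecond units. Common durations come from the look-up table;
// the fallback (reduced fraction 1e7/duration) is an assumption.
std::pair<uint32_t, uint32_t> FrameRateFromDuration(uint64_t duration100ns) {
    switch (duration100ns) {
        case 166833: return {60000, 1001};  // 59.94 fps
        case 333667: return {30000, 1001};  // 29.97 fps
        case 417188: return {24000, 1001};  // 23.976 fps (table exception)
        case 166667: return {60, 1};
        case 333333: return {30, 1};
        case 200000: return {50, 1};
        case 400000: return {25, 1};
        case 416667: return {24, 1};
        default: {
            const uint64_t g = std::gcd(uint64_t{10000000}, duration100ns);
            return {static_cast<uint32_t>(10000000 / g),
                    static_cast<uint32_t>(duration100ns / g)};
        }
    }
}
```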
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes a media type from an
If this function succeeds, it returns
Initializes a media type from a
Pointer to the
Pointer to a
Size of the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
Compares a full media type to a partial media type.
-Pointer to the
Pointer to the
If the full media type is compatible with the partial media type, the function returns TRUE. Otherwise, the function returns
A pipeline component can return a partial media type to describe a range of possible formats the component might accept. A partial media type has at least a major type
This function returns TRUE if the following conditions are both true:
Otherwise, the function returns
Creates a media type that wraps another media type.
- A reference to the
A
A
Applications can define custom subtype GUIDs.
Receives a reference to the
If this function succeeds, it returns
The original media type (pOrig) is stored in the new media type under the
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type that was wrapped in another media type by the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Creates a video media type from an
If this function succeeds, it returns
Instead of using the
Creates a partial video media type with a specified subtype.
- Pointer to a
Receives a reference to the
If this function succeeds, it returns
This function creates a media type and sets the major type equal to
You can get the same result with the following steps:
Queries whether a FOURCC code or D3DFORMAT value is a YUV format.
-FOURCC code or D3DFORMAT value.
The function returns one of the following values.
Return code | Description |
---|---|
| The value specifies a YUV format. |
| The value does not specify a recognized YUV format. |
This function checks whether Format specifies a YUV format. Not every YUV format is recognized by this function. However, if a YUV format is not recognized by this function, it is probably not supported for video rendering or DirectX video acceleration (DXVA).
-This function is not implemented.
-Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Reserved.
Returns E_FAIL.
Calculates the minimum surface stride for a video format.
-FOURCC code or D3DFORMAT value that specifies the video format. If you have a video subtype
Width of the image, in pixels.
Receives the minimum surface stride, in bytes.
If this function succeeds, it returns
This function calculates the minimum stride needed to hold the image in memory. Use this function if you are allocating buffers in system memory. Surfaces allocated in video memory might require a larger stride, depending on the graphics card.
If you are working with a DirectX surface buffer, use the
For planar YUV formats, this function returns the stride for the Y plane. Depending on the format, the chroma planes might have a different stride.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
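As an illustration of the arithmetic only (not the real function, which recognizes many more formats), the minimum row size for a few common formats can be sketched as bytes per row of the Y/RGB plane:

```cpp
#include <cstdint>
#include <stdexcept>

// Build a FOURCC value from four characters, least-significant byte first.
constexpr uint32_t Fourcc(char a, char b, char c, char d) {
    return uint32_t(uint8_t(a)) | uint32_t(uint8_t(b)) << 8 |
           uint32_t(uint8_t(c)) << 16 | uint32_t(uint8_t(d)) << 24;
}

// Illustrative subset: minimum bytes per row of the Y/RGB plane.
long MinStrideBytes(uint32_t format, uint32_t widthPx) {
    if (format == Fourcc('N','V','1','2') ||
        format == Fourcc('Y','V','1','2')) return long(widthPx);      // 1 byte/px in Y plane
    if (format == Fourcc('Y','U','Y','2') ||
        format == Fourcc('U','Y','V','Y')) return long(widthPx) * 2;  // packed 4:2:2
    if (format == 22 /* D3DFMT_X8R8G8B8 */) return long(widthPx) * 4; // 32-bit RGB
    throw std::runtime_error("format not handled in this sketch");
}
```

As the remarks note, for planar formats this is the Y-plane stride; the chroma planes of NV12 or YV12 have their own (smaller or equal) row sizes.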
Retrieves the image size, in bytes, for an uncompressed video format.
-FOURCC code or D3DFORMAT value that specifies the video format.
Width of the image, in pixels.
Height of the image, in pixels.
Receives the size of one frame, in bytes. If the format is compressed or is not recognized, this value is zero.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
This function is equivalent to the
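The frame-size arithmetic for a few common uncompressed formats can be sketched as follows; this is an illustrative subset only, and the zero return for unrecognized formats mirrors the description above:

```cpp
#include <cstdint>

// Build a FOURCC value from four characters, least-significant byte first.
constexpr uint32_t Fourcc(char a, char b, char c, char d) {
    return uint32_t(uint8_t(a)) | uint32_t(uint8_t(b)) << 8 |
           uint32_t(uint8_t(c)) << 16 | uint32_t(uint8_t(d)) << 24;
}

// NV12: full-resolution Y plane plus interleaved half-resolution chroma
// (w * h * 3/2). YUY2: 2 bytes per pixel. 32-bit RGB: 4 bytes per pixel.
uint32_t FrameSizeBytes(uint32_t format, uint32_t w, uint32_t h) {
    if (format == Fourcc('N','V','1','2')) return w * h * 3 / 2;
    if (format == Fourcc('Y','U','Y','2')) return w * h * 2;
    if (format == 22 /* D3DFMT_X8R8G8B8 */) return w * h * 4;
    return 0;  // compressed or unrecognized, per the description above
}
```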
Creates a video media type from a
If the function succeeds, it returns
Creates a Media Foundation media type from another format representation.
-Description | |
---|---|
AM_MEDIA_TYPE_REPRESENTATION | Convert a DirectShow |
?
Pointer to a buffer that contains the format representation to convert. The layout of the buffer depends on the value of guidRepresentation.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| The |
?
If the original format is a DirectShow audio media type, and the format type is not recognized, the function sets the following attributes on the converted media type.
Attribute | Description |
---|---|
| Contains the format type |
| Contains the format block. |
?
-[This API is not supported and may be altered or unavailable in the future.]
Creates an audio media type from a
Pointer to a
Receives a reference to the
If this function succeeds, it returns
The
Alternatively, you can call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Returns the FOURCC or D3DFORMAT value for an uncompressed video format.
-Returns a FOURCC or D3DFORMAT value that identifies the video format. If the video format is compressed or not recognized, the return value is D3DFMT_UNKNOWN.
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes an
If this function succeeds, it returns
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Initializes an
If this function succeeds, it returns
This function fills in some reasonable default values for the specified RGB format.
Developers are encouraged to use media type attributes instead of using the
In general, you should avoid calling this function. If you know all of the format details, you can fill in the
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Converts the extended color information from an
If this function succeeds, it returns
[This API is not supported and may be altered or unavailable in the future. Applications should avoid using the
Sets the extended color information in a
If this function succeeds, it returns
This function sets the following fields in the
Copies an image or image plane from one buffer to another.
-Pointer to the start of the first row of pixels in the destination buffer.
Stride of the destination buffer, in bytes.
Pointer to the start of the first row of pixels in the source image.
Stride of the source image, in bytes.
Width of the image, in bytes.
Number of rows of pixels to copy.
If this function succeeds, it returns
This function copies a single plane of the image. For planar YUV formats, you must call the function once for each plane. In this case, pDest and pSrc must point to the start of each plane.
This function is optimized if the MMX, SSE, or SSE2 instruction sets are available on the processor. The function performs a non-temporal store (the data is written to memory directly without polluting the cache).
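The plain semantics of this copy (MFCopyImage in the Windows headers) can be sketched portably as a row-by-row strided copy; the real function additionally uses SIMD and non-temporal stores:

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Copy one image plane, row by row, honoring independent source and
// destination strides. Strides may exceed widthBytes (row padding) and
// may be negative for bottom-up images.
void CopyPlane(uint8_t* dest, long destStride,
               const uint8_t* src, long srcStride,
               size_t widthBytes, size_t lines) {
    for (size_t y = 0; y < lines; ++y) {
        std::memcpy(dest, src, widthBytes);
        dest += destStride;
        src  += srcStride;
    }
}
```

For a planar YUV image, call this once per plane with each plane's own start pointer and stride, as the remarks above describe.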
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Converts an array of 16-bit floating-point numbers into an array of 32-bit floating-point numbers.
-Pointer to an array of float values. The array must contain at least dwCount elements.
Pointer to an array of 16-bit floating-point values, typed as WORD values. The array must contain at least dwCount elements.
Number of elements in the pSrc array to convert.
If this function succeeds, it returns
The function converts dwCount values in the pSrc array and writes them into the pDest array.
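For reference, the half-precision values being expanded here can be decoded as follows; this is a portable sketch of the IEEE 754 binary16 layout, not the function's implementation:

```cpp
#include <cstdint>
#include <cmath>

// Decode a 16-bit half-precision value: 1 sign bit, 5 exponent bits
// (bias 15), 10 mantissa bits. Handles normals, zeros, subnormals,
// infinities, and NaN.
float HalfToFloat(uint16_t h) {
    const int sign = (h >> 15) & 1;
    const int exp  = (h >> 10) & 0x1F;
    const int mant = h & 0x3FF;
    float v;
    if (exp == 0)       v = std::ldexp(float(mant), -24);            // zero / subnormal
    else if (exp == 31) v = mant ? NAN : INFINITY;                   // NaN / infinity
    else                v = std::ldexp(1.0f + mant / 1024.0f, exp - 15);
    return sign ? -v : v;
}
```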
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Converts an array of 32-bit floating-point numbers into an array of 16-bit floating-point numbers.
-Pointer to an array of 16-bit floating-point values, typed as WORD values. The array must contain at least dwCount elements.
Pointer to an array of float values. The array must contain at least dwCount elements.
Number of elements in the pSrc array to convert.
If this function succeeds, it returns
The function converts the values in the pSrc array and writes them into the pDest array.
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
Creates a system-memory buffer object to hold 2D image data.
-Width of the image, in pixels.
Height of the image, in pixels.
A FOURCC code or D3DFORMAT value that specifies the video format. If you have a video subtype
If TRUE, the buffer's
For more information about top-down versus bottom-up images, see Image Stride.
Receives a reference to the
This function can return one of these values.
Return code | Description |
---|---|
| Success. |
| Unrecognized video format. |
?
The returned buffer object also exposes the
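The top-down versus bottom-up convention controlled by the Boolean parameter can be sketched with an illustrative helper: a bottom-up image hands out a pointer to its last row in memory together with a negative stride, so image row 0 is still reached first.

```cpp
#include <cstdint>

// Hypothetical view of a 2D buffer's scanlines: a start pointer plus a
// signed stride. Illustrative only.
struct ScanlineView {
    uint8_t* scan0;  // pointer to image row 0
    long     stride; // signed bytes from one row to the next
};

ScanlineView GetScanlines(uint8_t* base, long strideBytes,
                          uint32_t height, bool bottomUp) {
    if (!bottomUp)
        return { base, strideBytes };                       // top-down image
    // Bottom-up: row 0 of the image is the last row in memory.
    return { base + strideBytes * long(height - 1), -strideBytes };
}
```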
Allocates a system-memory buffer that is optimal for a specified media type.
-A reference to the
The sample duration. This value is required for audio formats.
The minimum size of the buffer, in bytes. The actual buffer size might be larger. Specify zero to allocate the default buffer size for the media type.
The minimum memory alignment for the buffer. Specify zero to use the default memory alignment.
Receives a reference to the
If this function succeeds, it returns
For video formats, if the format is recognized, the function creates a 2-D buffer that implements the
For audio formats, the function allocates a buffer that is large enough to contain llDuration audio samples, or dwMinLength, whichever is larger.
This function always allocates system memory. For Direct3D surfaces, use the
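For uncompressed PCM audio, the sizing rule described above (a duration-derived size, but never less than the caller's minimum) amounts to the following sketch; the conversion assumes a 100-nanosecond duration unit and a PCM block alignment:

```cpp
#include <cstdint>
#include <algorithm>

// Sketch of the audio sizing rule: bytes needed for duration100ns of PCM
// at the given sample rate and block alignment, clamped up to minLength.
uint64_t AudioBufferBytes(uint64_t duration100ns, uint32_t sampleRate,
                          uint32_t blockAlign, uint64_t minLength) {
    const uint64_t samples = duration100ns * sampleRate / 10'000'000ULL;
    return std::max(samples * blockAlign, minLength);
}
```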
Creates an empty collection object.
-Receives a reference to the collection object's
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Allocates a block of memory.
-Number of bytes to allocate.
Zero or more flags. For a list of valid flags, see HeapAlloc in the Windows SDK documentation.
Reserved. Set to
Reserved. Set to zero.
Reserved. Set to eAllocationTypeIgnore.
If the function succeeds, it returns a reference to the allocated memory block. If the function fails, it returns
In the current version of Media Foundation, this function is equivalent to calling the HeapAlloc function and specifying the heap of the calling process.
To free the allocated memory, call
Frees a block of memory that was allocated by calling the
Calculates ((a * b) + d) / c, where each term is a 64-bit signed value.
-A multiplier.
Another multiplier.
The divisor.
The rounding factor.
Returns the result of the calculation. If numeric overflow occurs, the function returns _I64_MAX (positive overflow) or LLONG_MIN (negative overflow). If Mfplat.dll cannot be loaded, the function returns _I64_MAX.
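The calculation can be sketched with a 128-bit intermediate so that the product a * b cannot overflow; this relies on a GCC/Clang extension and is not the function's actual implementation:

```cpp
#include <cstdint>
#include <limits>

// Sketch of ((a * b) + d) / c on 64-bit signed terms, using __int128
// (GCC/Clang extension) for the intermediate product, clamping the
// result on overflow as the documentation describes.
int64_t MulDivRound(int64_t a, int64_t b, int64_t c, int64_t d) {
    __int128 r = (__int128)a * b + d;  // cannot overflow for 64-bit inputs
    r /= c;
    if (r > std::numeric_limits<int64_t>::max())
        return std::numeric_limits<int64_t>::max();
    if (r < std::numeric_limits<int64_t>::min())
        return std::numeric_limits<int64_t>::min();
    return (int64_t)r;
}
```

For example, converting 48,000 audio samples at 48 kHz into 100-nanosecond units: MulDivRound(48000, 10000000, 48000, 0) yields 10000000, one second.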
Gets the class identifier for a content protection system.
-The
Receives the class identifier to the content protection system.
If this function succeeds, it returns
The class identifier can be used to create the input trust authority (ITA) for the content protection system. Call CoCreateInstance or
Creates the Media Session in the application's process.
-The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
If your application does not play protected content, you can use this function to create the Media Session in the application's process. To use the Media Session for protected content, you must call
You can use the pConfiguration parameter to specify any of the following attributes:
Creates an instance of the Media Session inside a Protected Media Path (PMP) process.
- The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
You can use the pConfiguration parameter to set any of the following attributes:
If this function cannot create the PMP Media Session because a trusted binary was revoked, the ppEnablerActivate parameter receives an
If the function successfully creates the PMP Media Session, the ppEnablerActivate parameter receives the value
Do not make calls to the PMP Media Session from a thread that is processing a window message sent from another thread. To test whether the current thread falls into this category, call InSendMessage.
-Creates the source resolver, which is used to create a media source from a URL or byte stream.
-Receives a reference to the source resolver's
If this function succeeds, it returns
[This API is not supported and may be altered or unavailable in the future. Instead, applications should use the PSCreateMemoryPropertyStore function to create property stores.]
Creates an empty property store object.
- Receives a reference to the
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the URL schemes that are registered for the source resolver.
-Pointer to a
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Retrieves the MIME types that are registered for the source resolver.
-Pointer to a
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates a topology object.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates a topology node.
- The type of node to create, specified as a member of the
Receives a reference to the node's
If this function succeeds, it returns
Gets the media type for a stream associated with a topology node.
-A reference to the
The identifier of the stream to query. This parameter is interpreted as follows:
If TRUE, the function gets an output type. If
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream index is invalid. |
?
This function gets the actual media type from the object that is associated with the topology node. The pNode parameter should specify a node that belongs to a fully resolved topology. If the node belongs to a partial topology, the function will probably fail.
Tee nodes do not have an associated object to query. For tee nodes, the function gets the node's input type, if available. Otherwise, if no input type is available, the function gets the media type of the node's primary output stream. The primary output stream is identified by the
Queries an object for a specified service interface.
This function is a helper function that wraps the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The service requested cannot be found in the object represented by punkObject. |
?
Returns the system time.
-Returns the system time, in 100-nanosecond units.
Creates the presentation clock. The presentation clock is used to schedule the time at which samples are rendered and to synchronize multiple streams.
-Receives a reference to the clock's
If this function succeeds, it returns
The caller must shut down the presentation clock by calling
Typically applications do not create the presentation clock. The Media Session automatically creates the presentation clock. To get a reference to the presentation clock from the Media Session, call
Creates a presentation time source that is based on the system time.
-Receives a reference to the object's
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates a presentation descriptor.
-Number of elements in the apStreamDescriptors array.
Array of
Receives a reference to an
If this function succeeds, it returns
If you are writing a custom media source, you can use this function to create the source presentation descriptor. The presentation descriptor is created with no streams selected. Generally, a media source should select at least one stream by default. To select a stream, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether a media presentation requires the Protected Media Path (PMP).
-Pointer to the
The function returns an
Return code | Description |
---|---|
| This presentation requires a protected environment. |
| This presentation does not require a protected environment. |
?
If this function returns
If the function returns S_FALSE, you can use the unprotected pipeline. Call
Internally, this function checks whether any of the stream descriptors in the presentation have the
Serializes a presentation descriptor to a byte array.
-Pointer to the
Receives the size of the ppbData array, in bytes.
Receives a reference to an array of bytes containing the serialized presentation descriptor. The caller must free the memory for the array by calling CoTaskMemFree.
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
To deserialize the presentation descriptor, pass the byte array to the
Deserializes a presentation descriptor from a byte array.
-Size of the pbData array, in bytes.
Pointer to an array of bytes that contains the serialized presentation descriptor.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates a stream descriptor.
-Stream identifier.
Number of elements in the apMediaTypes array.
Pointer to an array of
Receives a reference to the
If this function succeeds, it returns
If you are writing a custom media source, you can use this function to create stream descriptors for the source. This function automatically creates the stream descriptor media type handler and initializes it with the list of types given in apMediaTypes. The function does not set the current media type on the handler, however. To set the type, call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Creates a media-type handler that supports a single media type at a time.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The media-type handler created by this function supports one media type at a time. Set the media type by calling
Shuts down a Media Foundation object and releases all resources associated with the object.
This function is a helper function that wraps the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
This function is not related to the
Creates the Streaming Audio Renderer.
-If this function succeeds, it returns
To configure the audio renderer, set any of the following attributes on the
Attribute | Description |
---|---|
| The audio endpoint device identifier. |
| The audio endpoint role. |
| Miscellaneous configuration flags. |
| The audio policy class. |
| The audio stream category. |
| Enables low-latency audio streaming. |
?
Creates an activation object for the Streaming Audio Renderer.
-If this function succeeds, it returns
To create the audio renderer, call
To configure the audio renderer, set any of the following attributes on the
Attribute | Description |
---|---|
| The audio endpoint device identifier. |
| The audio endpoint role. |
| Miscellaneous configuration flags. |
| The audio policy class. |
| The audio stream category. |
| Enables low-latency audio streaming. |
?
Creates an activation object for the enhanced video renderer (EVR) media sink.
-Handle to the window where the video will be displayed.
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To create the EVR, call
To configure the EVR, set any of the following attributes on the
Attribute | Description |
---|---|
| Activation object for a custom mixer. |
| CLSID for a custom mixer. |
| Flags for creating a custom mixer. |
| Activation object for a custom presenter. |
| CLSID for a custom presenter. |
| Flags for creating a custom presenter. |
?
When
Creates a media sink for authoring MP4 files.
-A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the MP4 media sink's
If this function succeeds, it returns
The MP4 media sink supports a maximum of one video stream and one audio stream. The initial stream formats are given in the pVideoMediaType and pAudioMediaType parameters. To create an MP4 file with one stream, set the other stream type to
The number of streams is fixed when you create the media sink. The sink does not support the
To author 3GP files, use the
Creates a media sink for authoring 3GP files.
-A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the 3GP media sink's
If this function succeeds, it returns
The 3GP media sink supports a maximum of one video stream and one audio stream. The initial stream formats are given in the pVideoMediaType and pAudioMediaType parameters. To create a 3GP file with one stream, set the other stream type to
The number of streams is fixed when you create the media sink. The sink does not support the
To author MP4 files, use the
Creates the MP3 media sink.
-A reference to the
Receives a reference to the
If this function succeeds, it returns
The MP3 media sink takes compressed MP3 audio samples as input, and writes an MP3 file with ID3 headers as output. The MP3 media sink does not perform MP3 audio encoding.
-Creates an instance of the AC-3 media sink.
-A reference to the
A reference to the
Attribute | Value |
---|---|
| |
|
?
Receives a reference to the
If this function succeeds, it returns
The AC-3 media sink takes compressed AC-3 audio as input and writes the audio to the byte stream without modification. The primary use for this media sink is to stream AC-3 audio over a network. The media sink does not perform AC-3 audio encoding.
-Creates an instance of the audio data transport stream (ADTS) media sink.
-A reference to the
A reference to the
Attribute | Value |
---|---|
| |
| |
| 0 (raw AAC) or 1 (ADTS) |
?
Receives a reference to the
If this function succeeds, it returns
The ADTS media sink converts Advanced Audio Coding (AAC) audio packets into an ADTS stream. The primary use for this media sink is to stream ADTS over a network. The output is not an audio file, but a stream of audio frames with ADTS headers.
The media sink can accept raw AAC frames (
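The framing this sink performs prepends a 7-byte ADTS header (no CRC) to each raw AAC frame. A sketch of that header layout, following the MPEG-4 ADTS syntax (the helper and the example field values are illustrative, not taken from the sink's implementation):

```cpp
#include <cstdint>
#include <array>

// Build a 7-byte ADTS header (protection_absent = 1, i.e. no CRC).
// profile = AAC audio object type - 1 (1 = AAC-LC),
// sfIndex = sampling-frequency index (4 = 44100 Hz),
// chans   = channel configuration. Frame length includes the header.
std::array<uint8_t, 7> MakeAdtsHeader(uint8_t profile, uint8_t sfIndex,
                                      uint8_t chans, uint16_t aacFrameBytes) {
    const uint16_t len = aacFrameBytes + 7;
    return {
        uint8_t(0xFF),                                      // syncword 0xFFF...
        uint8_t(0xF1),                                      // ...MPEG-4, layer 0, no CRC
        uint8_t((profile << 6) | (sfIndex << 2) | (chans >> 2)),
        uint8_t(((chans & 0x3) << 6) | (len >> 11)),
        uint8_t((len >> 3) & 0xFF),
        uint8_t(((len & 0x7) << 5) | 0x1F),                 // buffer fullness = 0x7FF
        uint8_t(0xFC),                                      // ...one AAC frame per packet
    };
}
```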
Creates a generic media sink that wraps a multiplexer Microsoft Media Foundation transform (MFT).
-The subtype
A list of format attributes for the MFT output type. This parameter is optional and can be
A reference to the
Receives a reference to the
If this function succeeds, it returns
This function attempts to find a multiplexer MFT that supports an output type with the following definition:
To provide a list of additional format attributes:
The multiplexer MFT must be registered in the
Creates a media sink for authoring fragmented MP4 files.
-A reference to the
A reference to the
This parameter can be
A reference to the
This parameter can be
Receives a reference to the MP4 media sink's
If this function succeeds, it returns
Creates an Audio-Video Interleaved (AVI) Sink.
-Pointer to the byte stream that will be used to write the AVI file.
Pointer to the media type of the video input stream
Pointer to the media type of the audio input stream
Receives a reference to the
If this function succeeds, it returns
Creates a WAVE archive sink. The WAVE archive sink takes audio and writes it to a .wav file.
-Pointer to the byte stream that will be used to write the .wav file.
Pointer to the audio media type.
Receives a reference to the
Creates a new instance of the topology loader.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates an activation object for the sample grabber media sink.
- Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
To create the sample grabber sink, call
Before calling ActivateObject, you can configure the sample grabber by setting any of the following attributes on the ppIActivate reference:
Creates the default implementation of the quality manager.
-Receives a reference to the quality manager's
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Creates the sequencer source.
-Reserved. Must be
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates a
Sequencer element identifier. This value specifies the segment in which to begin playback. The element identifier is returned in the
Starting position within the segment, in 100-nanosecond units.
Pointer to a
If this function succeeds, it returns
The
Creates a media source that aggregates a collection of media sources.
-A reference to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pSourceCollection collection does not contain any elements. |
?
The aggregated media source is useful for combining streams from separate media sources. For example, you can use it to combine a video capture source and an audio capture source.
Creates a credential cache object. An application can use this object to implement a custom credential manager.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates a default proxy locator.
-The name of the protocol.
Note: In this release of Media Foundation, the default proxy locator does not support RTSP.
Pointer to the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates the scheme handler for the network source.
-Interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface. The scheme handler exposes the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates the protected media path (PMP) server object.
-A member of the
Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates the remote desktop plug-in object. Use this object if the application is running in a Terminal Services client session.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
| Remote desktop connections are not allowed by the current session policy. |
?
[This API is not supported and may be altered or unavailable in the future. Instead, applications should use the PSCreateMemoryPropertyStore function to create named property stores.]
Creates an empty property store to hold name/value pairs.
-Receives a reference to the
The function returns an
Return code | Description |
---|---|
| The function succeeded. |
?
Creates an instance of the sample copier transform.
-Receives a reference to the
If this function succeeds, it returns
The sample copier is a Media Foundation transform (MFT) that copies data from input samples to output samples without modifying the data. The following data is copied from the sample:
This MFT is useful in the following situation:
The following diagram shows this situation with a media source and a media sink.
In order for the media sink to receive data from the media source, the data must be copied into the media samples owned by the media sink. The sample copier can be used for this purpose.
A specific example of such a media sink is the Enhanced Video Renderer (EVR). The EVR allocates samples that contain Direct3D surface buffers, so it cannot receive video samples directly from a media source. Starting in Windows?7, the topology loader automatically handles this case by inserting the sample copier between the media source and the EVR.
-Creates an empty transcode profile object.
The transcode profile stores configuration settings for the output file. These configuration settings are specified by the caller, and include audio and video stream properties, encoder settings, and container settings. To set these properties, the caller must call the appropriate
The configured transcode profile is passed to the
If this function succeeds, it returns
The
For example code that uses this function, see the following topics:
Creates a partial transcode topology.
The underlying topology builder creates a partial topology by connecting the required pipeline objects: source, encoder, and sink. The encoder and the sink are configured according to the settings specified by the caller in the transcode profile.
To create the transcode profile object, call the
The configured transcode profile is passed to the
The function returns an
Return code | Description |
---|---|
| The function call succeeded, and ppTranscodeTopo receives a reference to the transcode topology. |
| pwszOutputFilePath contains invalid characters. |
| No streams are selected in the media source. |
| The profile does not contain the |
| For one or more streams, cannot find an encoder that accepts the media type given in the profile. |
| The profile does not specify a media type for any of the selected streams on the media source. |
?
For example code that uses this function, see the following topics:
Creates a topology for transcoding to a byte stream.
-A reference to the
A reference to the
A reference to the
Receives a reference to the
If this function succeeds, it returns
This function creates a partial topology that contains the media source, the encoder, and the media sink.
-Gets a list of output formats from an audio encoder.
-Specifies the subtype of the output media. The encoder uses this value as a filter when it is enumerating the available output types. For information about the audio subtypes, see Audio Subtype GUIDs.
Bitwise OR of zero or more flags from the _MFT_ENUM_FLAG enumeration.
A reference to the
Value | Meaning |
---|---|
Set this attribute to unlock an encoder that has field-of-use descriptions. | |
Specifies a device conformance profile for a Windows Media encoder. | |
Sets the tradeoff between encoding quality and encoding speed. |
?
Receives a reference to the
This function assumes the encoder will be used in its default encoding mode, which is typically constant bit-rate (CBR) encoding. Therefore, the types returned by the function might not work with other modes, such as variable bit-rate (VBR) encoding.
Internally, this function works by calling
Creates the transcode sink activation object.
The transcode sink activation object can be used to create any of the following file sinks:
The transcode sink activation object exposes the
If this function succeeds, it returns
Creates an
Creates a Microsoft Media Foundation byte stream that wraps an
A reference to the
Receives a reference to the
Returns an
This function enables applications to pass an
Returns an
If this function succeeds, it returns
This function enables an application to pass a Media Foundation byte stream to an API that takes an
Creates a Microsoft Media Foundation byte stream that wraps an IRandomAccessStream object.
-If this function succeeds, it returns
Creates an IRandomAccessStream object that wraps a Microsoft Media Foundation byte stream.
-If this function succeeds, it returns
The returned byte stream object exposes the
Create an
If this function succeeds, it returns
Creates properties from a
If this function succeeds, it returns
Enumerates a list of audio or video capture devices.
-Pointer to an attribute store that contains search criteria. To create the attribute store, call
Value | Meaning |
---|---|
Specifies whether to enumerate audio or video devices. (Required.) | |
For audio capture devices, specifies the device role. (Optional.) | |
For video capture devices, specifies the device category. (Optional.) |
?
Receives an array of
Receives the number of elements in the pppSourceActivate array. If no capture devices match the search criteria, this parameter receives the value 0.
If this function succeeds, it returns
Each returned
Attribute | Description |
---|---|
| The display name of the device. |
| The major type and subtype GUIDs that describe the device's output format. |
| The type of capture device (audio or video). |
| The audio endpoint ID string. (Audio devices only.) |
| The device category. (Video devices only.) |
| Whether a device is a hardware or software device. (Video devices only.) |
| The symbolic link for the device driver. (Video devices only.) |
?
To create a media source from an
Creates a media source for a hardware capture device.
-Pointer to the
Receives a reference to the media source's
If this function succeeds, it returns
The pAttributes parameter specifies an attribute store. To create the attribute store, call the
For audio capture devices, optionally set one of the following attributes:
Attribute | Description |
---|---|
| Specifies the audio endpoint ID of the audio capture device. |
| Specifies the device role. If this attribute is set, the function uses the default audio capture device for that device role. Do not combine this attribute with the |
?
If neither attribute is specified, the function selects the default audio capture device for the eCommunications role.
For video capture devices, you must set the following attribute:
Attribute | Description |
---|---|
| Specifies the symbolic link to the device. |
?
-Creates an activation object that represents a hardware capture device.
-Pointer to the
Receives a reference to the
This function creates an activation object that can be used to create a media source for a hardware device. To create the media source itself, call
The pAttributes parameter specifies an attribute store. To create the attribute store, call the
For audio capture devices, optionally set one of the following attributes:
Attribute | Description |
---|---|
| Specifies the audio endpoint ID of the audio capture device. |
| Specifies the device role. If this attribute is set, the function uses the default audio capture device for that device role. Do not combine this attribute with the |
?
If neither attribute is specified, the function selects the default audio capture device for the eCommunications role.
For video capture devices, you must set the following attribute:
Attribute | Description |
---|---|
| Specifies the symbolic link to the device. |
-Creates an
Loads a dynamic link library that is signed for the protected environment.
-The name of the dynamic link library to load. This dynamic link library must be signed for the protected environment.
Receives a reference to the
A single module load count is maintained on the dynamic link library (as it is with LoadLibrary). This load count is freed when the final release is called on the
Returns an
Gets the local system ID.
-Application-specific verifier value.
Length in bytes of verifier.
Returned ID string. This value must be freed by the caller by calling CoTaskMemFree.
The function returns an
Creates an
Checks whether a hardware security processor is supported for the specified media protection system.
-The identifier of the protection system that you want to check.
TRUE if the hardware security processor is supported for the specified protection system; otherwise
Creates an
Locks the shared Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager.
-Receives a token that identifies this instance of the DXGI Device Manager. Use this token when calling
Receives a reference to the
If this function succeeds, it returns
This function obtains a reference to a DXGI Device Manager instance that can be shared between components. The Microsoft Media Foundation platform creates this instance of the DXGI Device Manager as a singleton object. Alternatively, you can create a new DXGI Device Manager by calling
The first time this function is called, the Media Foundation platform creates the shared DXGI Device Manager.
When you are done, use the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Creates an instance of the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The supplied |
| The supplied LPCWSTR is null. |
Creates the source reader from a URL.
-The URL of a media file to open.
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates the source reader from a byte stream.
-A reference to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates the source reader from a media source.
-A reference to the
Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The source contains protected content. |
Call CoInitialize(Ex) and
By default, when the application releases the source reader, the source reader shuts down the media source by calling
To change this default behavior, set the
When using the Source Reader, do not call any of the following methods on the media source:
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates the sink writer from a URL or byte stream.
-A null-terminated string that contains the URL of the output file. This parameter can be
Pointer to the
If this parameter is a valid reference, the sink writer writes to the provided byte stream. (The byte stream must be writable.) Otherwise, if pByteStream is
Pointer to the
Receives a reference to the
Call CoInitialize(Ex) and
The first three parameters to this function can be
Description | pwszOutputURL | pByteStream | pAttributes |
---|---|---|---|
Specify a byte stream, with no URL. | non- | Required (must not be | |
Specify a URL, with no byte stream. | not | Optional (may be | |
Specify both a URL and a byte stream. | non- | non- | Optional (may be |
The pAttributes parameter is required in the first case and optional in the others.
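The parameter rules in the table above can be sketched as a small validation helper. This is a hypothetical illustration in Python, not the real sink writer API; the function name, return shape, and error handling are all assumptions made for the sketch.

```python
def resolve_sink_writer_target(output_url, byte_stream, attributes):
    """Illustrative sketch of the URL/byte-stream parameter rules above.

    Returns a (target, url_hint) tuple describing where the sink writer
    would write. Hypothetical helper, not part of the actual API.
    """
    if byte_stream is not None:
        if output_url is not None:
            # Byte stream plus URL: the URL serves only as a hint
            # (for example, to infer the container type).
            return ("byte_stream", output_url)
        # Byte stream with no URL: the attribute store is required,
        # since the container type must come from the attributes.
        if attributes is None:
            raise ValueError("attributes required when only a byte stream is given")
        return ("byte_stream", None)
    if output_url is None:
        raise ValueError("either a URL or a byte stream must be supplied")
    # URL only: the sink writer creates the output file itself.
    return ("file", output_url)
```

The sketch encodes the three rows of the table: byte stream without URL (attributes required), URL without byte stream, and both together.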
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates the sink writer from a media sink.
-Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
Call CoInitialize(Ex) and
When you are done using the media sink, call the media sink's
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-
Writes the contents of an attribute store to a stream.
-Pointer to the
Bitwise OR of zero or more flags from the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
If dwOptions contains the
If the
Otherwise, the function calls CoMarshalInterface to serialize a proxy for the object.
If dwOptions does not include the
To load the attributes from the stream, call
The main purpose of this function is to marshal attributes across process boundaries.
-
Loads attributes from a stream into an attribute store.
-Pointer to the
Bitwise OR of zero or more flags from the
Pointer to the
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
Use this function to deserialize an attribute store that was serialized with the
If dwOptions contains the
If the
Otherwise, the function calls CoUnmarshalInterface to deserialize a proxy for the object.
This function deletes any attributes that were previously stored in pAttr.
-Creates a generic activation object for Media Foundation transforms (MFTs).
-Receives a reference to the
If this function succeeds, it returns
Most applications will not use this function; it is used internally by the
An activation object is a helper object that creates another object, somewhat similar to a class factory. The
Attribute | Description |
---|---|
| Required. Contains the CLSID of the MFT. The activation object creates the MFT by passing this CLSID to the CoCreateInstance function. |
| Optional. Specifies the category of the MFT. |
| Contains various flags that describe the MFT. For hardware-based MFTs, set the |
| Optional. Contains the merit value of a hardware codec. If this attribute is set and its value is greater than zero, the activation object calls |
| Required for hardware-based MFTs. Specifies the symbolic link for the hardware device. The device proxy uses this value to configure the MFT. |
| Optional. Contains an If this attribute is set and the |
| Optional. Contains the encoding profile for an encoder. The value of this attribute is an If this attribute is set and the value of the |
| Optional. Specifies the preferred output format for an encoder. If this attribute is set and the value of the |
For more information about activation objects, see Activation Objects.
-Enumerates a list of audio or video capture devices.
-Pointer to an attribute store that contains search criteria. To create the attribute store, call
Value | Meaning |
---|---|
Specifies whether to enumerate audio or video devices. (Required.) | |
For audio capture devices, specifies the device role. (Optional.) | |
For video capture devices, specifies the device category. (Optional.) |
Receives an array of
Receives the number of elements in the pppSourceActivate array. If no capture devices match the search criteria, this parameter receives the value 0.
If this function succeeds, it returns
Each returned
Attribute | Description |
---|---|
| The display name of the device. |
| The major type and subtype GUIDs that describe the device's output format. |
| The type of capture device (audio or video). |
| The audio endpoint ID string. (Audio devices only.) |
| The device category. (Video devices only.) |
| Whether a device is a hardware or software device. (Video devices only.) |
| The symbolic link for the device driver. (Video devices only.) |
To create a media source from an
Applies to: desktop apps only
Creates an activation object for the sample grabber media sink.
- Pointer to the
Pointer to the
Receives a reference to the
If this function succeeds, it returns
To create the sample grabber sink, call
Before calling ActivateObject, you can configure the sample grabber by setting any of the following attributes on the ppIActivate reference:
Applies to: desktop apps | Metro style apps
Copies an image or image plane from one buffer to another.
-Pointer to the start of the first row of pixels in the destination buffer.
Stride of the destination buffer, in bytes.
Pointer to the start of the first row of pixels in the source image.
Stride of the source image, in bytes.
Width of the image, in bytes.
Number of rows of pixels to copy.
If this function succeeds, it returns
This function copies a single plane of the image. For planar YUV formats, you must call the function once for each plane. In this case, pDest and pSrc must point to the start of each plane.
This function is optimized if the MMX, SSE, or SSE2 instruction sets are available on the processor. The function performs a non-temporal store (the data is written to memory directly without polluting the cache).
Note: Prior to Windows 7, this function was exported from evr.dll. Starting in Windows 7, this function is exported from mfplat.dll, and evr.dll exports a stub function that calls into mfplat.dll. For more information, see Library Changes in Windows 7.
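The row-by-row, stride-aware copy described above can be sketched as follows. This is an illustrative Python model of what a stride-based plane copy does, not the actual MFCopyImage implementation (which is written in optimized native code).

```python
def copy_image_plane(dest, dest_stride, src, src_stride, width_bytes, lines):
    """Copy one image plane row by row, stride-aware (illustrative sketch).

    dest and src are byte buffers whose rows begin every dest_stride and
    src_stride bytes respectively. Only width_bytes of each row are
    copied, so any padding between rows is left untouched.
    """
    for row in range(lines):
        s = row * src_stride
        d = row * dest_stride
        dest[d:d + width_bytes] = src[s:s + width_bytes]
```

For a planar YUV image, this would be called once per plane, with each call pointing at the start of that plane, mirroring the remark above.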
-
Uses profile data from a profile object to configure settings in the ContentInfo object.
-If there is already information in the ContentInfo object when this method is called, it is replaced by the information from the profile object.
-
Retrieves an Advanced Systems Format (ASF) profile that describes the ASF content.
-The profile is set by calling either
The ASF profile object returned by this method does not include any of the MF_PD_ASF_xxx attributes (see Presentation Descriptor Attributes). To get these attributes, do the following:
Call
(Optional.) Call
An ASF profile is a template for file encoding, and is intended mainly for creating ASF content. If you are reading an existing ASF file, it is recommended that you use the presentation descriptor to get information about the file. One exception is that the profile contains the mutual exclusion and stream prioritization objects, which are not exposed directly from the presentation descriptor.
-Retrieves the size of the header section of an Advanced Systems Format (ASF) file.
-The
Receives the size, in bytes, of the header section of the content. The value includes the size of the ASF Header Object plus the size of the header section of the Data Object. Therefore, the resulting value is the offset to the start of the data packets in the ASF Data Object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer does not contain valid ASF data. |
| The buffer does not contain enough valid data. |
The header of an ASF file or stream can be passed to the
Parses the information in an ASF header and uses that information to set values in the ContentInfo object. You can pass the entire header in a single buffer or send it in several pieces.
-Pointer to the
Offset, in bytes, of the first byte in the buffer relative to the beginning of the header.
The method returns an
Return code | Description |
---|---|
| The header is completely parsed and validated. |
| The input buffer does not contain valid ASF data. |
| The input buffer is too small. |
| The method succeeded, but the header passed was incomplete. This is the successful return code for all calls but the last one when passing the header in pieces. |
If you pass the header in pieces, the ContentInfo object will keep references to the buffer objects until the entire header is parsed. Therefore, do not write over the buffers passed into this method.
The start of the Header object has the following layout in memory:
Field Name | Size in bytes |
---|---|
Object ID | 16 |
Object Size | 8 |
Number of Header Objects | 4 |
Reserved1 | 1 |
Reserved2 | 1 |
The first call to ParseHeader reads everything up to and including Reserved2, so it requires a minimum of 30 bytes. (Note that the
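The 30-byte prologue laid out in the table above can be parsed with a fixed-size struct. This is an illustrative sketch assuming the standard ASF convention that integer fields are little-endian; it is not part of the Media Foundation API.

```python
import struct

# Object ID (16) + Object Size (8) + Number of Header Objects (4)
# + Reserved1 (1) + Reserved2 (1) = 30 bytes, little-endian integers.
ASF_HEADER_PROLOGUE = struct.Struct("<16sQIBB")

def parse_asf_header_prologue(buf):
    """Parse the 30-byte start of the ASF Header Object (sketch only).

    Raises ValueError if fewer than 30 bytes are available, mirroring
    the minimum that the first ParseHeader call requires.
    """
    if len(buf) < ASF_HEADER_PROLOGUE.size:
        raise ValueError("need at least 30 bytes")
    object_id, object_size, num_header_objects, r1, r2 = (
        ASF_HEADER_PROLOGUE.unpack_from(buf))
    return {
        "object_id": object_id,
        "object_size": object_size,
        "num_header_objects": num_header_objects,
        "reserved1": r1,
        "reserved2": r2,
    }
```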
Encodes the data in the MFASFContentInfo object into a binary Advanced Systems Format (ASF) header.
- A reference to the
Size of the encoded ASF header in bytes. If pIHeader is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ASF Header Objects do not exist for the media that the ContentInfo object holds a reference to. |
| The ASF Header Object size exceeds 10 MB. |
| The buffer passed in pIHeader is not large enough to hold the ASF Header Object information. |
The size received in the pcbHeader parameter includes the padding size. The content information shrinks or expands the padding data depending on the size of the ASF Header Objects.
During this call, the stream properties are set based on the encoding properties of the profile. These properties are available through the
Retrieves an Advanced Systems Format (ASF) profile that describes the ASF content.
-Receives an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The profile is set by calling either
The ASF profile object returned by this method does not include any of the MF_PD_ASF_xxx attributes (see Presentation Descriptor Attributes). To get these attributes, do the following:
Call
(Optional.) Call
An ASF profile is a template for file encoding, and is intended mainly for creating ASF content. If you are reading an existing ASF file, it is recommended that you use the presentation descriptor to get information about the file. One exception is that the profile contains the mutual exclusion and stream prioritization objects, which are not exposed directly from the presentation descriptor.
-
Uses profile data from a profile object to configure settings in the ContentInfo object.
-The
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If there is already information in the ContentInfo object when this method is called, it is replaced by the information from the profile object.
-
Creates a presentation descriptor for ASF content.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a property store that can be used to set encoding properties.
-Stream number to configure. Set to zero to configure file-level encoding properties.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the flags that indicate the selected indexer options.
-You must call this method before initializing the indexer object with
Sets indexer options.
-Bitwise OR of zero or more flags from the MFASF_INDEXER_FLAGS enumeration specifying the indexer options to use.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The indexer object was initialized before setting flags for it. For more information, see Remarks. |
Retrieves the flags that indicate the selected indexer options.
-Receives a bitwise OR of zero or more flags from the MFASF_INDEXER_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pdwFlags is |
You must call this method before initializing the indexer object with
Initializes the indexer object. This method reads information in a ContentInfo object about the configuration of the content and the properties of the existing index, if present. Use this method before using the indexer for either writing or reading an index. You must make this call before using any of the other methods of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid ASF data. |
| Unexpected error. |
The indexer needs to examine the data in the ContentInfo object to properly write or read the index for the content. The indexer will not make changes to the content information and will not hold any references to the
In the ASF header, the maximum data-packet size must equal the minimum data-packet size. Otherwise, the method returns
Retrieves the offset of the index object from the start of the content.
-Pointer to the
Receives the offset of the index relative to the beginning of the content described by the ContentInfo object. This is the position relative to the beginning of the ASF file.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pIContentInfo is |
The index continues from the offset retrieved by this method to the end of the file.
You must call
If the index is retrieved by using more than one call to
Adds byte streams to be indexed.
-An array of
The number of references in the ppIByteStreams array.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The indexer object has already been initialized and has already indexed packets. |
For a reading scenario, only one byte stream should be used by the indexer object. For an index-generating scenario, it depends on how many index objects need to be generated.
-
Retrieves the number of byte streams that are in use by the indexer object.
-Receives the number of byte streams that are in use by the indexer object.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| pcByteStreams is |
Retrieves the index settings for a specified stream and index type.
-Pointer to an
A variable that receives a Boolean value specifying whether the index described by pIndexIdentifier has been created.
A buffer that receives the index descriptor. The index descriptor consists of an
On input, specifies the size, in bytes, of the buffer that pbIndexDescriptor points to. The value can be zero if pbIndexDescriptor is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer size specified in pcbIndexDescriptor is too small. |
To read an existing ASF index, call
If an index exists for the stream and the value passed into pcbIndexDescriptor is smaller than the required size of the pbIndexDescriptor buffer, the method returns
If there is no index for the specified stream, the method returns
Configures the index for a stream.
-The index descriptor to set. The index descriptor is an
The size, in bytes, of the index descriptor.
A Boolean value. Set to TRUE to have the indexer create an index of the type specified for the stream specified in the index descriptor.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| An attempt was made to change the index status in a seek-only scenario. For more information, see Remarks. |
You must make all calls to SetIndexStatus before making any calls to
The indexer object is configured to create temporal indexes for each stream by default. Call this method only if you want to override the default settings.
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
-Given a desired seek time, gets the offset from which the client should start reading data.
-The value of the index entry for which to get the position. The format of this value varies depending on the type of index, which is specified in the index identifier. For time-based indexing, the variant type is VT_I8 and the value is the desired seek time, in 100-nanosecond units.
Pointer to an
Receives the offset within the data segment of the ASF Data Object. The offset is in bytes, and is relative to the start of packet 0. The offset gives the starting location from which the client should begin reading from the stream. This location might not correspond exactly to the requested seek time.
For reverse playback, if no key frame exists after the desired seek position, this parameter receives the value MFASFINDEXER_READ_FOR_REVERSEPLAYBACK_OUTOFDATASEGMENT. In that case, the seek position should be 1 byte past the end of the data segment.
Receives the approximate time stamp of the data that is located at the offset returned in the pcbOffsetWithinData parameter. The accuracy of this value is equal to the indexing interval of the ASF index, typically about 1 second.
If the approximate time stamp cannot be determined, this parameter receives the value MFASFINDEXER_APPROX_SEEK_TIME_UNKNOWN.
Receives the payload number of the payload that contains the information for the specified stream. Packets can contain multiple payloads, each containing data for a different stream. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The requested seek time is out of range. |
| No index exists of the specified type for the specified stream. |
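The time-based lookup described above (seek time in, packet offset and approximate time stamp out) can be modeled with a toy index. This is an illustrative sketch, not the real GetSeekPositionForValue implementation; the entry layout and names are assumptions.

```python
def seek_position_for_time(index_entries, interval_100ns, seek_time_100ns):
    """Toy model of a time-based ASF index lookup (illustrative only).

    index_entries[i] holds the byte offset, relative to packet 0, of the
    packet to start reading from for time i * interval_100ns. Returns
    (offset, approx_time), echoing the method's out-parameters.
    """
    if seek_time_100ns < 0:
        raise ValueError("requested seek time is out of range")
    slot = seek_time_100ns // interval_100ns
    if slot >= len(index_entries):
        raise ValueError("requested seek time is out of range")
    # The returned position is accurate only to the indexing interval
    # (typically about 1 second), so the approximate time stamp is the
    # start time of the slot, not the requested seek time.
    return index_entries[slot], slot * interval_100ns
```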
Accepts an ASF packet for the file and creates index entries for it.
- Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The argument passed in is |
| The indexer is not initialized. |
The ASF indexer creates indexes for a file internally. You can get the completed index for all data packets sent to the indexer by committing the index with
When this method creates index entries, they are immediately available for use by
The media sample specified in pIASFPacketSample must hold a buffer that contains a single ASF packet. Get the sample from the ASF multiplexer by calling the
You cannot use this method while reading an index, only when writing an index.
-
Adds information about a new index to the ContentInfo object associated with ASF content. You must call this method before copying the index to the content so that the index will be readable by the indexer later.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The caller made an invalid request. For more information, see Remarks. |
For the index to function properly, you must call this method after all ASF packets in the file have been passed to the indexer by using the
An application must use the CommitIndex method only when writing a new index; otherwise, CommitIndex may return
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
-
Retrieves the size, in bytes, of the buffer required to store the completed index.
-Receives the size of the index, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index has not been committed. For more information, see Remarks. |
Use this method to get the size of the index and then allocate a buffer big enough to hold it.
The index must be committed with a call to
Call
You cannot use this method in a reading scenario. You can only use this method when writing indexes.
-
Retrieves the completed index from the ASF indexer object.
-Pointer to the
The offset of the data to be retrieved, in bytes from the start of the index data. Set to 0 for the first call. If subsequent calls are needed (the buffer is not large enough to hold the entire index), set to the byte following the last one retrieved.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The index was not committed before attempting to get the completed index. For more information, see Remarks. |
This method uses as much of the buffer as possible, and updates the length of the buffer appropriately.
If pIIndexBuffer is large enough to contain the entire buffer, cbOffsetWithinIndex should be 0, and the call needs to be made only once. Otherwise, there should be no gaps between successive buffers.
The user must write this data to the content at cbOffsetFromIndexStart bytes after the end of the ASF data object. You can call
This call will not succeed unless
You cannot use this method in an index reading scenario. You can only use this method when writing indexes.
-Provides methods to create Advanced Systems Format (ASF) data packets. The methods of this interface process input samples into the packets that make up an ASF data section. The ASF multiplexer exposes this interface. To create the ASF multiplexer, call
Sets the maximum time by which samples from various streams can be out of synchronization. The multiplexer will not accept a sample with a time stamp that is out of synchronization with the latest samples from any other stream by an amount that exceeds the synchronization tolerance.
-The synchronization tolerance is the maximum difference in presentation times at any given point between samples of different streams that the ASF multiplexer can accommodate. That is, if the synchronization tolerance is 3 seconds, no stream can be more than 3 seconds behind any other stream in the time stamps passed to the multiplexer. The multiplexer determines a default synchronization tolerance to use, but this method overrides it (usually to increase it). More tolerance means the potential for greater latency in the multiplexer. If the time stamps are synchronized among the streams, actual latency will be much lower than msSyncTolerance.
-
Initializes the multiplexer with the data from an ASF ContentInfo object.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This call must be made once at the beginning of encoding, with pIContentInfo pointing to the ASF ContentInfo object that describes the content to be encoded. This enables the ASF multiplexer to see, among other things, which streams will be present in the encoding session. This call typically does not affect the data in the ASF ContentInfo object.
-
Sets multiplexer options.
-Bitwise OR of zero or more members of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves flags indicating the configured multiplexer options.
-Receives a bitwise OR of zero or more values from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Delivers input samples to the multiplexer.
-The stream number of the stream to which the sample belongs.
Pointer to the
The adjustment to apply to the time stamp of the sample. This parameter is used if the caller wants to shift the sample time on pISample. This value should be positive if the time stamp should be pushed ahead and negative if the time stamp should be pushed back. This time stamp is added to sample time on pISample, and the resulting time is used by the multiplexer instead of the original sample time. If no adjustment is needed, set this value to 0.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are too many packets waiting to be retrieved from the multiplexer. Call |
| The sample that was processed violates the bandwidth limitations specified for the stream in the ASF ContentInfo object. When this error is generated, the sample is dropped. |
| The value passed in wStreamNumber is invalid. |
| The presentation time of the input media sample is earlier than the send time. |
The application passes samples to ProcessSample, and the ASF multiplexer queues them internally until they are ready to be placed into ASF packets. Call
After each call to ProcessSample, call GetNextPacket in a loop to get all of the available data packets. For a code example, see Generating New ASF Data Packets.
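The ProcessSample/GetNextPacket pattern above can be sketched with a toy stand-in for the multiplexer. This is illustrative only: the class and its one-sample-per-packet behavior are assumptions made for the sketch, whereas the real multiplexer queues samples until a packet can be completed.

```python
from collections import deque

class ToyMux:
    """Toy stand-in for the ASF multiplexer's sample-in / packet-out flow."""
    def __init__(self):
        self._packets = deque()

    def process_sample(self, stream_number, sample):
        # The real multiplexer queues samples internally until they are
        # ready to be placed into ASF packets; here one sample = one packet.
        self._packets.append((stream_number, sample))

    def get_next_packet(self):
        # Returns (more_waiting, packet); more_waiting stays truthy while
        # further packets remain, playing the role of the status flag.
        packet = self._packets.popleft()
        return (len(self._packets) > 0, packet)

def drain(mux):
    """After each ProcessSample, loop on GetNextPacket until none remain."""
    packets = []
    more = True
    while more:
        more, packet = mux.get_next_packet()
        packets.append(packet)
    return packets
```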
-
Retrieves the next output ASF packet from the multiplexer.
- Receives zero or more status flags. If more than one packet is waiting, the method sets the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The client needs to call this method, ideally after every call to
If no packets are ready, the method returns
Signals the multiplexer to process all queued output media samples. Call this method after passing the last sample to the multiplexer.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You must call Flush after the last sample has been passed into the ASF multiplexer and before you call
Collects data from the multiplexer and updates the ASF ContentInfo object to include that information in the ASF Header Object.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are pending output media samples waiting in the multiplexer. Call |
For non-live encoding scenarios (such as encoding to a file), the user should call End to update the specified ContentInfo object, adding data that the multiplexer has collected during the packet generation process. The user should then call
During live encoding, it is usually not possible to rewrite the header, so this call is not required for live encoding. (The header in those cases will simply lack some of the information that was not available until the end of the encoding session.)
-
Retrieves multiplexer statistics.
-The stream number for which to obtain statistics.
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the maximum time by which samples from various streams can be out of synchronization. The multiplexer will not accept a sample with a time stamp that is out of synchronization with the latest samples from any other stream by an amount that exceeds the synchronization tolerance.
-Synchronization tolerance in milliseconds.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The synchronization tolerance is the maximum difference in presentation times at any given point between samples of different streams that the ASF multiplexer can accommodate. That is, if the synchronization tolerance is 3 seconds, no stream can be more than 3 seconds behind any other stream in the time stamps passed to the multiplexer. The multiplexer determines a default synchronization tolerance to use, but this method overrides it (usually to increase it). More tolerance means the potential for greater latency in the multiplexer. If the time stamps are synchronized among the streams, actual latency will be much lower than msSyncTolerance.
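The acceptance rule described above (a sample is rejected if it is out of step with the latest sample of any other stream by more than the tolerance) can be sketched as a small check. Illustrative only; the function and its arguments are assumptions, not the real API.

```python
def within_sync_tolerance(latest_times_ms, stream, sample_time_ms, tolerance_ms):
    """Sketch of the synchronization-tolerance rule (illustrative only).

    latest_times_ms maps stream number -> latest time stamp seen on that
    stream. A new sample for `stream` is acceptable only if its time
    stamp is no more than tolerance_ms out of step with the latest
    sample of every other stream.
    """
    for other, t in latest_times_ms.items():
        if other != stream and abs(sample_time_ms - t) > tolerance_ms:
            return False
    return True
```

With a 3-second tolerance, a sample 3.4 seconds behind another stream's latest time stamp would be rejected, matching the example in the remarks.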
-Configures an Advanced Systems Format (ASF) mutual exclusion object, which manages information about a group of streams in an ASF profile that are mutually exclusive. When streams or groups of streams are mutually exclusive, only one of them is read at a time; they are not read concurrently.
A common example of mutual exclusion is a set of streams that each include the same content encoded at a different bit rate. The stream that is used is determined by the available bandwidth to the reader.
An
An ASF profile object can support multiple mutual exclusions. Each must be configured using a separate ASF mutual exclusion object.
-
Retrieves the type of mutual exclusion represented by the Advanced Systems Format (ASF) mutual exclusion object.
-A variable that receives the type identifier. For a list of predefined mutual exclusion type constants, see ASF Mutual Exclusion Type GUIDs.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sometimes, content must be made mutually exclusive in more than one way. For example, a video file might contain audio streams of several bit rates for each of several languages. To handle this type of complex mutual exclusion, you must configure more than one ASF mutual exclusion object. For more information, see
Sets the type of mutual exclusion that is represented by the Advanced Systems Format (ASF) mutual exclusion object.
-The type of mutual exclusion that is represented by the ASF mutual exclusion object. For a list of predefined mutual exclusion type constants, see ASF Mutual Exclusion Type GUIDs.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sometimes, content must be made mutually exclusive in more than one way. For example, a video file might contain audio streams in several bit rates for each of several languages. To handle this type of complex mutual exclusion, you must configure more than one ASF mutual exclusion object. For more information, see
Retrieves the number of records in the Advanced Systems Format mutual exclusion object.
-Receives the count of records.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Each record includes one or more streams. Every stream in a record is mutually exclusive of streams in every other record.
Use this method in conjunction with
Retrieves the stream numbers contained in a record in the Advanced Systems Format mutual exclusion object.
-The number of the record for which to retrieve the stream numbers.
An array that receives the stream numbers. Set to NULL to retrieve the number of streams in the record.
On input, the number of elements in the array referenced by pwStreamNumArray. On output, the method sets this value to the count of stream numbers in the record. You can call GetStreamsForRecord with pwStreamNumArray set to NULL to retrieve this count before allocating the array.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Adds a stream number to a record in the Advanced Systems Format mutual exclusion object.
-The record number to which the stream is added. A record number is set by the
The stream number to add to the record.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | The specified stream number is already associated with the record. |
Each record includes one or more streams. Every stream in a record is mutually exclusive of all streams in every other record.
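The record/stream relationship described above can be sketched as a small data model. This is an illustrative stand-in, not the Media Foundation API: the real object is configured through the AddRecord and AddStreamForRecord methods of the ASF mutual exclusion interface, and the class below is hypothetical.

```cpp
// Hypothetical model of mutual exclusion records: each record is a set of
// stream numbers, and streams in different records are mutually exclusive.
#include <vector>
#include <algorithm>
#include <cstdint>

struct MutualExclusionModel {
    std::vector<std::vector<uint16_t>> records;  // one inner vector per record

    int AddRecord() {                            // returns the new record's index
        records.push_back({});
        return static_cast<int>(records.size()) - 1;
    }
    void AddStreamForRecord(int rec, uint16_t stream) {
        records[rec].push_back(stream);
    }
    // Two streams are mutually exclusive when they belong to different records.
    bool AreMutuallyExclusive(uint16_t a, uint16_t b) const {
        int ra = -1, rb = -1;
        for (int i = 0; i < static_cast<int>(records.size()); ++i) {
            if (std::count(records[i].begin(), records[i].end(), a)) ra = i;
            if (std::count(records[i].begin(), records[i].end(), b)) rb = i;
        }
        return ra != -1 && rb != -1 && ra != rb;
    }
};
```

For example, placing two bit-rate variants of a stream in one record and a second stream in another record makes the variants interchangeable with each other but exclusive of the second stream.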
-
Removes a stream number from a record in the Advanced Systems Format mutual exclusion object.
-The record number from which to remove the stream number.
The stream number to remove from the record.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | The stream number is not listed for the specified record. |
Removes a record from the Advanced Systems Format (ASF) mutual exclusion object.
-The index of the record to remove.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
When a record is removed, the ASF mutual exclusion object re-indexes the remaining records so that they are sequential starting from zero. You should enumerate the records again to ensure that you have the correct index for each record. If the removed record is the one with the highest index, removing it has no effect on the other indexes.
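The re-indexing behavior can be illustrated with a plain vector standing in for the record list; the helper below is hypothetical and only mirrors the index shift, not the real RemoveRecord call.

```cpp
// Illustrative sketch: removing a record shifts every later record down by
// one index, so indexes stored before the removal can become stale.
#include <vector>
#include <string>

std::vector<std::string> RemoveRecordAt(std::vector<std::string> records,
                                        size_t index) {
    if (index < records.size())
        records.erase(records.begin() + index);  // later entries move down
    return records;
}
```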
-
Adds a record to the mutual exclusion object. A record specifies streams that are mutually exclusive with the streams in all other records.
-Receives the index assigned to the new record. Record indexes are zero-based and sequential.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
A record can include one or more stream numbers. All of the streams in a record are mutually exclusive with all the streams in all other records in the ASF mutual exclusion object.
You can use records to create complex mutual exclusion scenarios by using multiple ASF mutual exclusion objects.
-
Creates a copy of the Advanced Systems Format mutual exclusion object.
-Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
The cloned object is a new object, completely independent of the object from which it was cloned.
-
Retrieves the number of streams in the profile.
-
Adds a stream to the profile or reconfigures an existing stream.
-If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.
-
Retrieves the number of streams in the profile.
-Receives the number of streams in the profile.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Retrieves a stream from the profile by stream index, and/or retrieves the stream number for a stream index.
-The index of the stream to retrieve. Stream indexes are sequential and zero-based. You can get the number of streams that are in the profile by calling the
Receives the stream number of the requested stream. Stream numbers are one-based and are not necessarily sequential. This parameter can be set to NULL if the stream number is not required.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
This method does not create a copy of the stream configuration object. The reference that is retrieved points to the object within the profile object. You must not make any changes to the stream configuration object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the stream configuration object in the profile, you must first clone the stream configuration object by calling
Retrieves an Advanced Systems Format (ASF) stream configuration object for a stream in the profile. This method references the stream by stream number instead of stream index.
-The stream number for which to obtain the interface reference.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
This method does not create a copy of the stream configuration object. The reference that is retrieved points to the object within the profile object. You must not make any changes to the stream configuration object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the stream configuration object in the profile, you must first clone the stream configuration object by calling
Adds a stream to the profile or reconfigures an existing stream.
-Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.
-
Removes a stream from the Advanced Systems Format (ASF) profile object.
-Stream number of the stream to remove.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
After a stream is removed, the ASF profile object reassigns stream indexes so that the index values are sequential starting from zero. Any previously stored stream index numbers are no longer valid after deleting a stream.
-
Creates an Advanced Systems Format (ASF) stream configuration object.
-Pointer to the
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | ppIStream is NULL. |
| E_OUTOFMEMORY | The stream configuration object could not be created due to insufficient memory. |
The ASF stream configuration object created by this method is not included in the profile. To include the stream, you must first configure the stream configuration and then call
Retrieves the number of Advanced Systems Format (ASF) mutual exclusion objects that are associated with the profile.
-Receives the number of mutual exclusion objects.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Multiple mutual exclusion objects may be required for streams that are mutually exclusive in more than one way. For more information, see
Retrieves an Advanced Systems Format (ASF) mutual exclusion object from the profile.
-Index of the mutual exclusion object in the profile.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
This method does not create a copy of the mutual exclusion object. The returned reference refers to the mutual exclusion contained in the profile object. You must not make any changes to the mutual exclusion object using this reference, because doing so can affect the profile object in unexpected ways.
To change the configuration of the mutual exclusion object in the profile, you must first clone the mutual exclusion object by calling
Adds a configured Advanced Systems Format (ASF) mutual exclusion object to the profile.
-Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
You can create a mutual exclusion object by calling the
Removes an Advanced Systems Format (ASF) mutual exclusion object from the profile.
-The index of the mutual exclusion object to remove from the profile.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
When a mutual exclusion object is removed from the profile, the ASF profile object reassigns the mutual exclusion indexes so that they are sequential starting with zero. Any previously stored indexes are no longer valid after calling this method.
-
Creates a new Advanced Systems Format (ASF) mutual exclusion object. Mutual exclusion objects can be added to a profile by calling the AddMutualExclusion method.
-The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
The ASF mutual exclusion object created by this method is not associated with the profile. Call
Reserved.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Reserved.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Reserved.
Returns E_NOTIMPL.
Creates a copy of the Advanced Systems Format profile object.
-Receives a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The cloned object is completely independent of the original.
-
Retrieves the option flags that are set on the ASF splitter.
-
Resets the Advanced Systems Format (ASF) splitter and configures it to parse data from an ASF data section.
-Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | The pIContentInfo parameter is NULL. |
Sets option flags on the Advanced Systems Format (ASF) splitter.
-A bitwise combination of zero or more members of the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | The splitter is not initialized. |
| | The dwFlags parameter does not contain a valid flag. |
This method can only be called after the splitter is initialized.
-
Retrieves the option flags that are set on the ASF splitter.
-Receives the option flags. This value is a bitwise OR of zero or more members of the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | pdwFlags is NULL. |
Sets the streams to be parsed by the Advanced Systems Format (ASF) splitter.
-An array of variables containing the list of stream numbers to select.
The number of valid elements in the stream number array.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | pwStreamNumbers is NULL. |
| | An invalid stream number was passed in the array. |
Calling this method supersedes any previous stream selections; only the streams specified in the pwStreamNumbers array will be selected.
By default, no streams are selected by the splitter.
You can obtain a list of the currently selected streams by calling the
Gets a list of currently selected streams.
- The address of an array of WORDs. This array receives the stream numbers of the selected streams. This parameter can be
On input, points to a variable that contains the number of elements in the pwStreamNumbers array. Set the variable to zero if pwStreamNumbers is
On output, receives the number of elements that were copied into pwStreamNumbers. Each element is the identifier of a selected stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_INVALIDARG | Invalid argument. |
| MF_E_BUFFERTOOSMALL | The pwStreamNumbers array is smaller than the number of selected streams. See Remarks. |
To get the number of selected streams, set pwStreamNumbers to NULL. The method returns MF_E_BUFFERTOOSMALL, but it also sets the value of *pwNumStreams equal to the number of selected streams. Then allocate an array of that size and call the method again, passing the array in the pwStreamNumbers parameter.
The following code shows these steps:
HRESULT DisplaySelectedStreams(IMFASFSplitter *pSplitter)
{
    WORD count = 0;
    HRESULT hr = pSplitter->GetSelectedStreams(NULL, &count);
    if (hr == MF_E_BUFFERTOOSMALL)
    {
        WORD *pStreamIds = new (std::nothrow) WORD[count];
        if (pStreamIds)
        {
            hr = pSplitter->GetSelectedStreams(pStreamIds, &count);
            if (SUCCEEDED(hr))
            {
                for (WORD i = 0; i < count; i++)
                {
                    printf("Selected stream ID: %d\n", pStreamIds[i]);
                }
            }
            delete [] pStreamIds;
        }
        else
        {
            hr = E_OUTOFMEMORY;
        }
    }
    return hr;
}
Alternatively, you can allocate an array that is equal to the total number of streams and pass that to pwStreamNumbers.
Before calling this method, initialize *pwNumStreams to the number of elements in pwStreamNumbers. If pwStreamNumbers is NULL, set *pwNumStreams to zero.
By default, no streams are selected by the splitter. Select streams by calling the
Sends packetized Advanced Systems Format (ASF) data to the ASF splitter for processing.
-Pointer to the
The offset into the data buffer where the splitter should begin parsing. This value is typically set to 0.
The length, in bytes, of the data to parse. This value is measured from the offset specified by cbBufferOffset. Set to 0 to process to the end of the buffer.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_INVALIDARG | The pIBuffer parameter is NULL, the offset specified in cbBufferOffset is greater than the length of the buffer, or cbBufferOffset plus cbLength is greater than the length of the buffer. |
| MF_E_NOTACCEPTING | The splitter cannot process more input at this time. |
After using this method to parse data, you must call
If your ASF data contains variable-sized packets, you must set the
If the method returns MF_E_NOTACCEPTING, call GetNextSample to get the output samples, or call Flush.
The splitter might hold a reference count on the input buffer. Therefore, do not write over the valid data in the buffer after calling this method.
-
Retrieves a sample from the Advanced Systems Format (ASF) splitter after the data has been parsed.
-Receives one of the following values.
| Value | Meaning |
|---|---|
| ASF_STATUSFLAGS_INCOMPLETE | More samples are ready to be retrieved. Call GetNextSample in a loop until the pdwStatusFlags parameter receives the value zero. |
| Zero | No additional samples are ready. Call ParseData to give the splitter more input data. |
If the method returns a sample in the ppISample parameter, this parameter receives the number of the stream to which the sample belongs.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | The ASF data in the buffer is invalid. |
| | There is a gap in the ASF data. |
Before calling this method, call
The ASF splitter skips samples for unselected streams. To select streams, call
Resets the Advanced Systems Format (ASF) splitter and releases all pending samples.
-The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Any samples waiting to be retrieved when Flush is called are lost.
-
Retrieves the send time of the last sample received.
-Receives the send time of the last sample received.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | pdwLastSendTime is NULL. |
Retrieves information about an existing payload extension.
-
Retrieves the stream number of the stream.
-
Retrieves the media type of the stream.
-To reduce unnecessary copying, the method returns a reference to the media type that is stored internally by the object. Do not modify the returned media type, as the results are not defined.
-Gets the major media type of the stream.
-Receives the major media type for the stream. For a list of possible values, see Major Media Types.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Retrieves the stream number of the stream.
-The method returns the stream number.
Assigns a stream number to the stream.
-The number to assign to the stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Stream numbers start from 1 and do not need to be sequential.
-
Retrieves the media type of the stream.
-Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
To reduce unnecessary copying, the method returns a reference to the media type that is stored internally by the object. Do not modify the returned media type, as the results are not defined.
-
Sets the media type for the Advanced Systems Format (ASF) stream configuration object.
-Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Some validation of the media type is performed by this method. However, a media type can be successfully set, but cause an error when the stream is added to the profile.
-
Retrieves the number of payload extensions that are configured for the stream.
-Receives the number of payload extensions.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Retrieves information about an existing payload extension.
-The payload extension index. Valid indexes range from 0 to one less than the number of extensions obtained by calling
Receives a
Receives the number of bytes added to each sample for the extension.
Pointer to a buffer that receives information about this extension system. This information is the same for all samples and is stored in the content header (not in each sample). This parameter can be NULL.
On input, specifies the size of the buffer pointed to by pbExtensionSystemInfo. On output, receives the required size of the pbExtensionSystemInfo buffer in bytes.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| E_INVALIDARG | Invalid argument. |
| | The buffer specified in pbExtensionSystemInfo is too small. |
| | The wPayloadExtensionNumber parameter is out of range. |
Configures a payload extension for the stream.
-Pointer to a
Number of bytes added to each sample for the extension.
A reference to a buffer that contains information about this extension system. This information is the same for all samples and is stored in the content header (not with each sample). This parameter can be NULL.
Amount of data, in bytes, that describes this extension system. If this value is 0, then pbExtensionSystemInfo can be NULL.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Removes all payload extensions that are configured for the stream.
-The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
None.
-
Creates a copy of the Advanced Systems Format (ASF) stream configuration object.
-Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
The cloned object is completely independent of the original.
-Note: This interface is not implemented in this version of Media Foundation.
Adds a stream to the stream priority list.
-The stream priority list is built by appending entries to the list with each call to AddStream. The list is evaluated in descending order of importance. The most important stream should be added first, and the least important should be added last.
-Note: This interface is not implemented in this version of Media Foundation.
Retrieves the number of entries in the stream priority list.
-Receives the number of streams in the stream priority list.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Note: This interface is not implemented in this version of Media Foundation.
Retrieves the stream number of a stream in the stream priority list.
-Zero-based index of the entry to retrieve from the stream priority list. To get the number of entries in the priority list, call
Receives the stream number of the stream priority entry.
Receives a Boolean value. If TRUE, the stream is mandatory.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Note: This interface is not implemented in this version of Media Foundation.
Adds a stream to the stream priority list.
-Stream number of the stream to add.
If TRUE, the stream is mandatory.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | Invalid stream number. |
The stream priority list is built by appending entries to the list with each call to AddStream. The list is evaluated in descending order of importance. The most important stream should be added first, and the least important should be added last.
-Note: This interface is not implemented in this version of Media Foundation.
Removes a stream from the stream priority list.
-Index of the entry in the stream priority list to remove. Values range from zero to one less than the stream count retrieved by calling
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
When a stream is removed from the stream priority list, the index values of all streams that follow it in the list are decremented.
-Note: This interface is not implemented in this version of Media Foundation.
Creates a copy of the ASF stream prioritization object.
-Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
The new object is completely independent of the original.
-
Retrieves the number of bandwidth steps that exist for the content. This method is used for multiple bit rate (MBR) content.
-Bandwidth steps are bandwidth levels used for multiple bit rate (MBR) content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
-
Sets options for the stream selector.
-
Retrieves the number of streams that are in the Advanced Systems Format (ASF) content.
-Receives the number of streams in the content.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Retrieves the number of outputs for the Advanced Systems Format (ASF) content.
-Receives the number of outputs.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Outputs are streams in the ASF data section that will be parsed.
-
Retrieves the number of streams associated with an output.
-The output number for which to retrieve the stream count.
Receives the number of streams associated with the output.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | Invalid output number. |
An output is a stream in an ASF data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
-
Retrieves the stream numbers for all of the streams that are associated with an output.
-The output number for which to retrieve stream numbers.
Address of an array that receives the stream numbers associated with the output. The caller allocates the array. The array size must be at least as large as the value returned by the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | Invalid output number. |
An output is a stream in an ASF data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
-
Retrieves the output number associated with a stream.
-The stream number for which to retrieve an output number.
Receives the output number.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
| | Invalid stream number. |
Outputs are streams in the ASF data section that will be parsed.
-
Retrieves the manual output override selection that is set for a stream.
-Stream number for which to retrieve the output override selection.
Receives the output override selection. The value is a member of the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Sets the selection status of an output, overriding other selection criteria.
-Output number for which to set selection.
Member of the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Retrieves the number of mutual exclusion objects associated with an output.
-Output number for which to retrieve the count of mutually exclusive relationships.
Receives the number of mutual exclusions.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Retrieves a mutual exclusion object for an output.
-Output number for which to retrieve a mutual exclusion object.
Mutual exclusion number. This is an index of mutually exclusive relationships associated with the output. Set to a number between 0 and one less than the number of mutual exclusion objects retrieved by calling
Receives a reference to the mutual exclusion object's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Outputs are streams in the ASF data section that will be parsed.
-
Selects a mutual exclusion record to use for a mutual exclusion object associated with an output.
-The output number for which to set a stream.
Index of the mutual exclusion for which to select.
Record of the specified mutual exclusion to select.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
An output is a stream in an Advanced Systems Format (ASF) data section that will be parsed. If mutual exclusion is used, mutually exclusive streams share the same output.
An ASF file can contain multiple mutually exclusive relationships, such as a file with both language based and bit-rate based mutual exclusion. If an output is involved in multiple mutually exclusive relationships, a record from each must be selected.
-
Retrieves the number of bandwidth steps that exist for the content. This method is used for multiple bit rate (MBR) content.
-Receives the number of bandwidth steps.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Bandwidth steps are bandwidth levels used for multiple bit rate (MBR) content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
-
Retrieves the stream numbers that apply to a bandwidth step. This method is used for multiple bit rate (MBR) content.
-Bandwidth step number for which to retrieve information. Set this value to a number between 0 and one less than the number of bandwidth steps returned by
Receives the bit rate associated with the bandwidth step.
Address of an array that receives the stream numbers. The caller allocates the array. The array size must be at least as large as the value returned by the
Address of an array that receives the selection status of each stream, as an
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Bandwidth steps are bandwidth levels used for MBR content. If you stream MBR content, you can choose the bandwidth step that matches the network conditions to avoid interruptions during playback.
-
Retrieves the index of a bandwidth step that is appropriate for a specified bit rate. This method is used for multiple bit rate (MBR) content.
-The bit rate to find a bandwidth step for.
Receives the step number. Use this number to retrieve information about the step by calling
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
In a streaming multiple bit rate (MBR) scenario, call this method with the current data rate of the network connection to determine the correct step to use. You can also call this method periodically throughout streaming to ensure that the best step is used.
-
Sets options for the stream selector.
-Bitwise OR of zero or more members of the MFASF_STREAMSELECTOR_FLAGS enumeration specifying the options to use.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
Represents a description of an audio format.
-Windows Server 2008 and Windows Vista: If the major type of a media type is
To convert an audio media type into a WAVEFORMATEX structure, call MFCreateWaveFormatExFromMFMediaType.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[GetAudioFormat is no longer available for use as of Windows 7. Instead, use the media type attributes to get the properties of the audio format.]
Returns a reference to a WAVEFORMATEX structure that describes the audio format.
If you need to convert the media type into a WAVEFORMATEX structure, call MFCreateWaveFormatExFromMFMediaType.
There are no guarantees about how long the returned reference is valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[GetAudioFormat is no longer available for use as of Windows 7. Instead, use the media type attributes to get the properties of the audio format.]
Returns a reference to a WAVEFORMATEX structure that describes the audio format.
This method returns a reference to a WAVEFORMATEX structure.
If you need to convert the media type into a WAVEFORMATEX structure, call MFCreateWaveFormatExFromMFMediaType.
There are no guarantees about how long the returned reference is valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Configures the audio session that is associated with the streaming audio renderer (SAR). Use this interface to change how the audio session appears in the Windows volume control.
The SAR exposes this interface as a service. To get a reference to the interface, call
Retrieves the group of sessions to which this audio session belongs.
-If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
-
Assigns the audio session to a group of sessions.
-A
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
-
Retrieves the group of sessions to which this audio session belongs.
-Receives a
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If two or more audio sessions share the same group, the Windows volume control displays one slider control for the entire group. Otherwise, it displays a slider for each session. For more information, see IAudioSessionControl::SetGroupingParam in the core audio API documentation.
-
Sets the display name of the audio session. The Windows volume control displays this name.
-A null-terminated wide-character string that contains the display name.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If the application does not set a display name, Windows creates one.
-
Retrieves the display name of the audio session. The Windows volume control displays this name.
-Receives a reference to the display name string. The caller must free the memory allocated for the string by calling CoTaskMemFree.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If the application does not set a display name, Windows creates one.
-Sets the icon resource for the audio session. The Windows volume control displays this icon.
-A wide-character string that specifies the icon. See Remarks.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The icon path has the format "path,index" or "path,-id", where path is the fully qualified path to a DLL, executable file, or icon file; index is the zero-based index of the icon within the file; and id is a resource identifier. Note that resource identifiers are preceded by a minus sign (-) to distinguish them from indexes. The path can contain environment variables, such as "%windir%". For more information, see IAudioSessionControl::SetIconPath in the Windows SDK.
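The "path,index" / "path,-id" convention described above can be parsed as in the sketch below. This helper is illustrative only; it is not part of the Media Foundation or Core Audio APIs, and the struct and function names are hypothetical.

```cpp
// Parses "path,index" (zero-based icon index) or "path,-id" (resource id).
#include <string>
#include <cstdlib>

struct IconRef {
    std::string path;    // DLL, executable, or icon file
    int value;           // icon index, or resource id when isResourceId is true
    bool isResourceId;
};

IconRef ParseIconPath(const std::string& spec) {
    IconRef ref = { spec, 0, false };
    size_t comma = spec.rfind(',');
    if (comma == std::string::npos) return ref;   // no index or id present
    ref.path = spec.substr(0, comma);
    int n = std::atoi(spec.c_str() + comma + 1);
    if (n < 0) {                                  // minus sign marks a resource id
        ref.isResourceId = true;
        ref.value = -n;
    } else {                                      // otherwise a zero-based index
        ref.value = n;
    }
    return ref;
}
```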
-
Retrieves the icon resource for the audio session. The Windows volume control displays this icon.
-Receives a reference to a wide-character string that specifies a shell resource. The format of the string is described in the topic
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
| Return code | Description |
|---|---|
| S_OK | The method succeeded. |
If the application did not set an icon path, the method returns an empty string ("").
For more information, see IAudioSessionControl::GetIconPath in the core audio API documentation.
-Controls the volume levels of individual audio channels.
The streaming audio renderer (SAR) exposes this interface as a service. To get a reference to the interface, call
If your application does not require channel-level volume control, you can use the
Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation). For each channel, the attenuation level is the product of:
For example, if the master volume is 0.8 and the channel volume is 0.5, the attenuation for that channel is 0.8 × 0.5 = 0.4. Volume levels can exceed 1.0 (positive gain), but the audio engine clips any audio samples that exceed zero decibels.
Use the following formula to convert the volume level to the decibel (dB) scale:
Attenuation (dB) = 20 * log10(Level)
For example, a volume level of 0.50 represents 6.02 dB of attenuation.
-
Retrieves the number of channels in the audio stream.
-
Retrieves the number of channels in the audio stream.
-Receives the number of channels in the audio stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the volume level for a specified channel in the audio stream.
-Zero-based index of the audio channel. To get the number of channels, call
Volume level for the channel.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the volume level for a specified channel in the audio stream.
-Zero-based index of the audio channel. To get the number of channels, call
Receives the volume level for the channel.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the individual volume levels for all of the channels in the audio stream.
-Number of elements in the pfVolumes array. The value must equal the number of channels. To get the number of channels, call
Address of an array of size dwCount, allocated by the caller. The array specifies the volume levels for all of the channels. Before calling the method, set each element of the array to the desired volume level for the channel.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the volume levels for all of the channels in the audio stream.
-Number of elements in the pfVolumes array. The value must equal the number of channels. To get the number of channels, call
Address of an array of size dwCount, allocated by the caller. The method fills the array with the volume level for each channel in the stream.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Represents a buffer that contains a two-dimensional surface, such as a video frame.
-To get a reference to this interface, call QueryInterface on the media buffer.
To use a 2-D buffer, it is important to know the stride, which is the number of bytes needed to go from one row of pixels to the next. The stride may be larger than the image width, because the surface may contain padding bytes after each row of pixels. Stride can also be negative, if the pixels are oriented bottom-up in memory. For more information, see Image Stride.
Every video format defines a contiguous or packed representation. This representation is compatible with the standard layout of a DirectX surface in system memory, with no additional padding. For RGB video, the contiguous representation has a pitch equal to the image width in bytes, rounded up to the nearest DWORD boundary. For YUV video, the layout of the contiguous representation depends on the YUV format. For planar YUV formats, the Y plane might have a different pitch than the U and V planes.
If a media buffer supports the
Call the Lock2D method to access the 2-D buffer in its native format. The native format might not be contiguous. The buffer's
For uncompressed images, the amount of valid data in the buffer is determined by the width, height, and pixel layout of the image. For this reason, if you call Lock2D to access the buffer, do not rely on the values returned by
Queries whether the buffer is contiguous in its native format.
-For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Retrieves the number of bytes needed to store the contents of the buffer in contiguous format.
-For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Gives the caller access to the memory in the buffer.
-Receives a reference to the first byte of the top row of pixels in the image. The top row is defined as the top row when the image is presented to the viewer, and might not be the first row in memory.
Receives the surface stride, in bytes. The stride might be negative, indicating that the image is oriented from the bottom up in memory.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Cannot lock the Direct3D surface. |
| The buffer cannot be locked at this time. |
If p is a reference to the first byte in a row of pixels, p + (*plPitch) points to the first byte in the next row of pixels. A buffer might contain padding after each row of pixels, so the stride might be wider than the width of the image in bytes. Do not access the memory that is reserved for padding bytes, because it might not be read-accessible or write-accessible. For more information, see Image Stride.
The reference returned in pbScanline0 remains valid as long as the caller holds the lock. When you are done accessing the memory, call
The values returned by the
The
When the underlying buffer is a Direct3D surface, the method fails if the surface is not lockable.
-
Unlocks a buffer that was previously locked. Call this method once for each call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a reference to the buffer memory and the surface stride.
-Receives a reference to the first byte of the top row of pixels in the image.
Receives the stride, in bytes. For more information, see Image Stride.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| You must lock the buffer before calling this method. |
Before calling this method, you must lock the buffer by calling
Queries whether the buffer is contiguous in its native format.
-Receives a Boolean value. The value is TRUE if the buffer is contiguous, and
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Retrieves the number of bytes needed to store the contents of the buffer in contiguous format.
-Receives the number of bytes needed to store the contents of the buffer in contiguous format.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Copies this buffer into the caller's buffer, converting the data to contiguous format.
-Pointer to the destination buffer where the data will be copied. The caller allocates the buffer.
Size of the destination buffer, in bytes. To get the required size, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid size specified in pbDestBuffer. |
If the original buffer is not contiguous, this method converts the contents into contiguous format during the copy. For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in
Copies data to this buffer from a buffer that has a contiguous format.
-Pointer to the source buffer. The caller allocates the buffer.
Size of the source buffer, in bytes. To get the maximum size of the buffer, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method copies the contents of the source buffer into the buffer that is managed by this
For a definition of contiguous as it applies to 2-D buffers, see the Remarks section in the
Represents a buffer that contains a two-dimensional surface, such as a video frame.
-This interface extends the
Gives the caller access to the memory in the buffer.
-A member of the
Receives a reference to the first byte of the top row of pixels in the image. The top row is defined as the top row when the image is presented to the viewer, and might not be the first row in memory.
Receives the surface stride, in bytes. The stride might be negative, indicating that the image is oriented from the bottom up in memory.
Receives a reference to the start of the accessible buffer in memory.
Receives the length of the buffer, in bytes.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. The buffer might already be locked with an incompatible locking flag. See Remarks. |
| There is insufficient memory to complete the operation. |
When you are done accessing the memory, call
This method is equivalent to the
The ppbBufferStart and pcbBufferLength parameters receive the bounds of the buffer memory. Use these values to guard against buffer overruns. Use the values of ppbScanline0 and plPitch to access the image data. If the image is bottom-up in memory, ppbScanline0 will point to the last scan line in memory and plPitch will be negative. For more information, see Image Stride.
The lockFlags parameter specifies whether the buffer is locked for read-only access, write-only access, or read/write access.
When possible, use a read-only or write-only lock, and avoid locking the buffer for read/write access. If the buffer represents a DirectX Graphics Infrastructure (DXGI) surface, a read/write lock can cause an extra copy between CPU memory and GPU memory.
-Copies the buffer to another 2D buffer object.
-A reference to the
If this method succeeds, it returns
The destination buffer must be at least as large as the source buffer.
-Enables
Indicates that a
Indicates that a
Controls how a byte stream buffers data from a network.
To get a reference to this interface, call QueryInterface on the byte stream object.
-If a byte stream implements this interface, a media source can use it to control how the byte stream buffers data. This interface is designed for byte streams that read data from a network.
A byte stream that implements this interface should also implement the
The byte stream must send a matching
After the byte stream sends an
The byte stream should not send any more buffering events after it reaches the end of the file.
If buffering is disabled, the byte stream does not send any buffering events. Internally, however, it might still buffer data while it waits for I/O requests to complete. Therefore,
If the byte stream is buffering data internally and the media source calls EnableBuffering with the value TRUE, the byte stream can send
After the presentation has started, the media source should forward and
Sets the buffering parameters.
-
Sets the buffering parameters.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Enables or disables buffering.
-Specifies whether the byte stream buffers data. If TRUE, buffering is enabled. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, call
Stops any buffering that is in progress.
-The method returns an
Return code | Description |
---|---|
| The byte stream successfully stopped buffering. |
| No buffering was in progress. |
If the byte stream is currently buffering data, it stops and sends an
Controls how a network byte stream transfers data to a local cache. Optionally, this interface is exposed by byte streams that read data from a network, for example, through HTTP.
To get a reference to this interface, call QueryInterface on the byte stream object.
-Stops the background transfer of data to the local cache.
-If this method succeeds, it returns
The byte stream resumes transferring data to the cache if the application does one of the following:
Controls how a network byte stream transfers data to a local cache. This interface extends the
Byte streams object in Microsoft Media Foundation can optionally implement this interface. To get a reference to this interface, call QueryInterface on the byte stream object.
-Limits the cache size.
-Queries whether background transfer is active.
-Background transfer might stop because the cache limit was reached (see
Gets the ranges of bytes that are currently stored in the cache.
-Receives the number of ranges returned in the ppRanges array.
Receives an array of
If this method succeeds, it returns
Limits the cache size.
-The maximum number of bytes to store in the cache, or ULONGLONG_MAX for no limit. The default value is no limit.
If this method succeeds, it returns
Queries whether background transfer is active.
-Receives the value TRUE if background transfer is currently active, or
If this method succeeds, it returns
Background transfer might stop because the cache limit was reached (see
Creates a media source from a byte stream.
-Applications do not use this interface directly. This interface is exposed by byte-stream handlers, which are used by the source resolver. When the byte-stream handler is given a byte stream, it parses the stream and creates a media source. Byte-stream handlers are registered by file name extension or MIME type.
-
Retrieves the maximum number of bytes needed to create the media source or determine that the byte stream handler cannot parse this stream.
-
Begins an asynchronous request to create a media source from a byte stream.
-Pointer to the byte stream's
String that contains the original URL of the byte stream. This parameter can be
Bitwise OR of zero or more flags. See Source Resolver Flags.
Pointer to the
Receives an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Unable to parse the byte stream. |
The dwFlags parameter must contain the
The byte-stream handler is responsible for parsing the stream and validating the contents. If the stream is not valid or the byte stream handler cannot parse the stream, the handler should return a failure code. The byte stream is not guaranteed to match the type of stream that the byte handler is designed to parse.
If the pwszURL parameter is not
When the operation completes, the byte-stream handler calls the
Completes an asynchronous request to create a media source.
-Pointer to the
Receives a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. See |
| Unable to parse the byte stream. |
Call this method from inside the
Cancels the current request to create a media source.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You can use this method to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
-
Retrieves the maximum number of bytes needed to create the media source or determine that the byte stream handler cannot parse this stream.
-Receives the maximum number of bytes that are required.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a proxy to a byte stream. The proxy enables a media source to read from a byte stream in another process.
-Creates a proxy to a byte stream. The proxy enables a media source to read from a byte stream in another process.
-A reference to the
Reserved. Set to
The interface identifier (IID) of the interface being requested.
Receives a reference to the interface. The caller must release the interface.
If this method succeeds, it returns
Seeks a byte stream by time position.
-A byte stream can implement this interface if it supports time-based seeking. For example, a byte stream that reads data from a server might implement the interface. Typically, a local file-based byte stream would not implement it.
To get a reference to this interface, call QueryInterface on the byte stream object.
-Queries whether the byte stream supports time-based seeking.
-Queries whether the byte stream supports time-based seeking.
-Receives the value TRUE if the byte stream supports time-based seeking, or
If this method succeeds, it returns
Seeks to a new position in the byte stream.
-The new position, in 100-nanosecond units.
If this method succeeds, it returns
If the byte stream reads from a server, it might cache the seek request until the next read request. Therefore, the byte stream might not send a request to the server immediately.
-Gets the result of a time-based seek.
-Receives the new position after the seek, in 100-nanosecond units.
Receives the stop time, in 100-nanosecond units. If the stop time is unknown, the value is zero.
Receives the total duration of the file, in 100-nanosecond units. If the duration is unknown, the value is -1.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The byte stream does not support time-based seeking, or no data is available. |
This method returns the server response from a previous time-based seek.
Note: This method normally cannot be invoked until some data is read from the byte stream, because the
-Extends the
Dynamically sets the output media type of the record sink or preview sink.
-The stream index to change the output media type on.
The new output media type.
The new encoder attributes. This can be null.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sink does not support the media type. |
This is an asynchronous call. Listen for the MF_CAPTURE_ENGINE_OUTPUT_MEDIA_TYPE_SET event to be notified when the output media type has been set.
-Controls the capture source object. The capture source manages the audio and video capture devices.
-To get a reference to the capture source, call
Gets the number of device streams.
-Gets the current capture device's
If this method succeeds, it returns
Gets the current capture device's
If this method succeeds, it returns
Gets a reference to the underlying Source Reader object.
-This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid argument. |
| The capture source was not initialized. Possibly there is no capture device on the system. |
Adds an effect to a capture stream.
-The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
A reference to one of the following:
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| No compatible media type could be found. |
| The dwSourceStreamIndex parameter is invalid. |
The effect must be implemented as a Media Foundation Transform (MFT). The pUnknown parameter can point to an instance of the MFT, or to an activation object for the MFT. For more information, see Activation Objects.
The effect is applied to the stream before the data reaches the capture sinks.
-Removes an effect from a capture stream.
-The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. Possibly the specified effect could not be found. |
| The dwSourceStreamIndex parameter is invalid. |
This method removes an effect that was previously added using the
Removes all effects from a capture stream.
-The capture stream. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
Gets a format that is supported by one of the capture streams.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
| The dwMediaTypeIndex parameter is out of range. |
To enumerate all of the available formats on a stream, call this method in a loop while incrementing dwMediaTypeIndex, until the method returns
Some cameras might support a range of frame rates. The minimum and maximum frame rates are stored in the
Sets the output format for a capture stream.
-The capture stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
This method sets the native output type on the capture device. The device must support the specified format. To get the list of available formats, call
Gets the current media type for a capture stream.
-Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. To get the number of streams, call |
| The first image stream. |
| The first video stream. |
| The first audio stream. |
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwSourceStreamIndex parameter is invalid. |
Gets the number of device streams.
-Receives the number of device streams.
If this method succeeds, it returns
Gets the stream category for the specified source stream index.
-The index of the source stream.
Receives the
If this method succeeds, it returns
Gets the current mirroring state of the video preview stream.
-The zero-based index of the stream.
Receives the value TRUE if mirroring is enabled, or
If this method succeeds, it returns
Enables or disables mirroring of the video preview stream.
-The zero-based index of the stream.
If TRUE, mirroring is enabled; if
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The device stream does not have mirroring capability. |
| The source is not initialized. |
Gets the actual device stream index translated from a friendly stream name.
-The friendly name. Can be one of the following:
Receives the value of the stream index that corresponds to the friendly name.
If this method succeeds, it returns
Enables the client to notify the Content Decryption Module (CDM) when global resources should be brought into a consistent state prior to suspending. -
-Indicates that the suspend process is starting and resources should be brought into a consistent state.
-If this method succeeds, it returns
The actual suspend is about to occur and no more calls will be made into the Content Decryption Module (CDM).
-If this method succeeds, it returns
Provides timing information from a clock in Microsoft Media Foundation.
Clocks and some media sinks expose this interface through QueryInterface.
-The
Retrieves the characteristics of the clock.
-
Retrieves the clock's continuity key. (Not supported.)
-Continuity keys are currently not supported in Media Foundation. Clocks must return the value zero in the pdwContinuityKey parameter.
-
Retrieves the properties of the clock.
-
Retrieves the characteristics of the clock.
-Receives a bitwise OR of values from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the last clock time that was correlated with system time.
-Reserved, must be zero.
Receives the last known clock time, in units of the clock's frequency.
Receives the system time that corresponds to the clock time returned in pllClockTime, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The clock does not have a time source. |
At some fixed interval, a clock correlates its internal clock ticks with the system time. (The system time is the time returned by the high-resolution performance counter.) This method returns:
The clock time is returned in the pllClockTime parameter and is expressed in units of the clock's frequency. If the clock's
The system time is returned in the phnsSystemTime parameter, and is always expressed in 100-nanosecond units.
To find out how often the clock correlates its clock time with the system time, call GetProperties. The correlation interval is given in the qwCorrelationRate member of the
Some clocks support rate changes through the
For the presentation clock, the clock time is the presentation time, and is always relative to the starting time specified in
Retrieves the clock's continuity key. (Not supported.)
-Receives the continuity key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Continuity keys are currently not supported in Media Foundation. Clocks must return the value zero in the pdwContinuityKey parameter.
-
Retrieves the current state of the clock.
-Reserved, must be zero.
Receives the clock state, as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the properties of the clock.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates a media source or a byte stream from a URL.
-Applications do not use this interface. This interface is exposed by scheme handlers, which are used by the source resolver. A scheme handler is designed to parse one type of URL scheme. When the scheme handler is given a URL, it parses the resource that is located at that URL and creates either a media source or a byte stream.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called by the media pipeline to provide the app with an instance of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The ICollection interface is the base interface for classes in the System.Collections namespace.
The ICollection interface extends IEnumerable; IDictionary and IList are more specialized interfaces that extend ICollection. An IDictionary implementation is a collection of key/value pairs, like the Hashtable class. An IList implementation is a collection of values and its members can be accessed by index, like the ArrayList class.
Some collections that limit access to their elements, such as the Queue class and the Stack class, directly implement the ICollection interface.
If neither the IDictionary interface nor the IList interface meets the requirements of the required collection, derive the new collection class from the ICollection interface instead for more flexibility.
For the generic version of this interface, see System.Collections.Generic.ICollection.
-
Retrieves the number of objects in the collection.
-
Retrieves the number of objects in the collection.
-Receives the number of objects in the collection.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves an object in the collection.
-Zero-based index of the object to retrieve. Objects are indexed in the order in which they were added to the collection.
Receives a reference to the object's
This method does not remove the object from the collection. To remove an object, call
Adds an object to the collection.
-Pointer to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If pUnkElement is
Removes an object from the collection.
-Zero-based index of the object to remove. Objects are indexed in the order in which they were added to the collection.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Adds an object at the specified index in the collection.
-The zero-based index where the object will be added to the collection.
The object to insert.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Removes all items from the collection.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Allows a decryptor to manage hardware keys and decrypt hardware samples.
-Allows the display driver to return IHV-specific information used when initializing a new hardware key.
-The number of bytes in the buffer that InputPrivateData specifies.
The contents of this parameter are defined by the implementation of the protection system that runs in the security processor. The contents may contain data about license or stream properties.
The return data is also defined by the implementation of the protection system that runs in the security processor. The contents may contain data associated with the underlying hardware key.
If this method succeeds, it returns
Implements one step that must be performed for the user to access media content. For example, the steps might be individualization followed by license acquisition. Each of these steps would be encapsulated by a content enabler object that exposes the
Retrieves the type of operation that this content enabler performs.
-The following GUIDs are defined for the pType parameter.
Value | Description |
---|---|
MFENABLETYPE_MF_RebootRequired | The user must reboot his or her computer. |
MFENABLETYPE_MF_UpdateRevocationInformation | Update revocation information. |
MFENABLETYPE_MF_UpdateUntrustedComponent | Update untrusted components. |
MFENABLETYPE_WMDRMV1_LicenseAcquisition | License acquisition for Windows Media Digital Rights Management (DRM) version 1. |
MFENABLETYPE_WMDRMV7_Individualization | Individualization. |
MFENABLETYPE_WMDRMV7_LicenseAcquisition | License acquisition for Windows Media DRM version 7 or later. |
-
Queries whether the content enabler can perform all of its actions automatically.
-If this method returns TRUE in the pfAutomatic parameter, call the
If this method returns
Retrieves the type of operation that this content enabler performs.
-Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The following GUIDs are defined for the pType parameter.
Value | Description |
---|---|
MFENABLETYPE_MF_RebootRequired | The user must reboot his or her computer. |
MFENABLETYPE_MF_UpdateRevocationInformation | Update revocation information. |
MFENABLETYPE_MF_UpdateUntrustedComponent | Update untrusted components. |
MFENABLETYPE_WMDRMV1_LicenseAcquisition | License acquisition for Windows Media Digital Rights Management (DRM) version 1. |
MFENABLETYPE_WMDRMV7_Individualization | Individualization. |
MFENABLETYPE_WMDRMV7_LicenseAcquisition | License acquisition for Windows Media DRM version 7 or later. |
-
Retrieves a URL for performing a manual content enabling action.
-Receives a reference to a buffer that contains the URL. The caller must release the memory for the buffer by calling CoTaskMemFree.
Receives the number of characters returned in ppwszURL, including the terminating
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No URL is available. |
If the enabling action can be performed by navigating to a URL, this method returns the URL. If no such URL exists, the method returns a failure code.
The purpose of the URL depends on the content enabler type, which is obtained by calling
Enable type | Purpose of URL |
---|---|
Individualization | Not applicable. |
License acquisition | URL to obtain the license. Call |
Revocation | URL to a webpage where the user can download and install an updated component. |
-
Retrieves the data for a manual content enabling action.
-Receives a reference to a buffer that contains the data. The caller must free the buffer by calling CoTaskMemFree.
Receives the size of the ppbData buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No data is available. |
The purpose of the data depends on the content enabler type, which is obtained by calling
Enable type | Purpose of data |
---|---|
Individualization | Not applicable. |
License acquisition | HTTP POST data. |
Revocation | |
-
Queries whether the content enabler can perform all of its actions automatically.
-Receives a Boolean value. If TRUE, the content enabler can perform the enabling action automatically.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If this method returns TRUE in the pfAutomatic parameter, call the
If this method returns
Performs a content enabling action without any user interaction.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is asynchronous. When the operation is complete, the content enabler sends an
To find out whether the content enabler supports this method, call
Requests notification when the enabling action is completed.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The method succeeded and no action was required. |
If you use a manual enabling action, call this method to be notified when the operation completes. If this method returns
You do not have to call MonitorEnable when you use automatic enabling by calling
Cancels a pending content enabling action.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The content enabler sends an
Gets the required number of bytes that need to be prepended to the input and output buffers when you call the security processor through the InvokeFunction method. When you specify this number of bytes, the Media Foundation transform (MFT) decryptor can allocate the total amount of bytes and can avoid making copies of the data when the decryptor moves the data to the security processor.
-Calls into the implementation of the protection system in the security processor.
-The identifier of the function that you want to run. This identifier is defined by the implementation of the protection system.
The number of bytes in the buffer that InputBuffer specifies, including private data.
A reference to the data that you want to provide as input.
Pointer to a value that specifies the length in bytes of the data that the function wrote to the buffer that OutputBuffer specifies, including the private data.
Pointer to the buffer where you want the function to write its output.
If this method succeeds, it returns
Gets the required number of bytes that need to be prepended to the input and output buffers when you call the security processor through the InvokeFunction method. When you specify this number of bytes, the Media Foundation transform (MFT) decryptor can allocate the total amount of bytes and can avoid making copies of the data when the decryptor moves the data to the security processor.
-If this method succeeds, it returns
Enables playback of protected content by providing the application with a reference to a content enabler object.
Applications that play protected content should implement this interface.
-A content enabler is an object that performs some action that is required to play a piece of protected content. For example, the action might be obtaining a DRM license. Content enablers expose the
To use this interface, do the following:
Implement the interface in your application.
Create an attribute store by calling
Set the
Call
If the content requires a content enabler, the application's BeginEnableContent method is called. Usually this method is called during the
Many content enablers send machine-specific data to the network, which can have privacy implications. One of the purposes of the
Begins an asynchronous request to perform a content enabling action.
This method requests the application to perform a specific step needed to acquire rights to the content, using a content enabler object.
- Pointer to the
Pointer to the
Pointer to the
Reserved. Currently this parameter is always
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Do not block within this callback method. Instead, perform the content enabling action asynchronously on another thread. When the operation is finished, notify the protected media path (PMP) through the pCallback parameter.
If you return a success code from this method, you must call Invoke on the callback. Conversely, if you return an error code from this method, you must not call Invoke. If the operation fails after the method returns a success code, use the status code on the
After the callback is invoked, the PMP will call the application's
This method is not necessarily called every time the application plays protected content. Generally, the method will not be called if the user has a valid, up-to-date license for the content. Internally, the input trust authority (ITA) determines whether BeginEnableContent is called, based on the content provider's DRM policy. For more information, see Protected Media Path.
-
Ends an asynchronous request to perform a content enabling action. This method is called by the protected media path (PMP) to complete an asynchronous call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When the BeginEnableContent method completes asynchronously, the application notifies the PMP by invoking the asynchronous callback. The PMP calls EndEnableContent on the application to get the result code. This method is called on the application's thread from inside the callback method. Therefore, it must not block the thread that invoked the callback.
The application must return the success or failure code of the asynchronous processing that followed the call to BeginEnableContent.
-Enables the presenter for the enhanced video renderer (EVR) to request a specific frame from the video mixer.
The sample objects created by the
Called by the mixer to get the time and duration of the sample requested by the presenter.
-Receives the desired sample time that should be mixed.
Receives the sample duration that should be mixed.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No time stamp was set for this sample. See |
Called by the presenter to set the time and duration of the sample that it requests from the mixer.
-The time of the requested sample.
The duration of the requested sample.
This value should be set prior to passing the buffer to the mixer for a Mix operation. The mixer sets the actual start and duration times on the sample before sending it back.
-
Clears the time stamps previously set by a call to
After this method is called, the
This method also clears the time stamp and duration and removes all attributes from the sample.
--
The SetInputStreamState method sets the Device MFT input stream state and media type.
-Stream ID of the input stream where the state and media type need to be changed.
Preferred media type for the input stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState that the input stream should transition to.
When
The method returns an
Return code | Description |
---|---|
| Initialization succeeded |
| Device MFT could not support the request at this time. |
| An invalid stream ID was passed. |
| The requested stream transition is not possible. |
This interface function helps to transition the input stream to a specified state with a specified media type set on the input stream. This is used by the device transform manager (DTM) when the Device MFT requests that a specific input stream's state and media type be changed. Device MFT would need to request such a change when one of the Device MFT's outputs changes.
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, if Output 2's media type changes to 1080p, the Device MFT has to change Input 1's media type to 1080p. To achieve this, the Device MFT should request that the DTM call this method, using the
The SetOutputStreamState method sets the Device MFT output stream state and media type.
-Stream ID of the output stream where the state and media type need to be changed.
Preferred media type for the output stream is passed in through this parameter. Device MFT should change the media type only if the incoming media type is different from the current media type.
Specifies the DeviceStreamState that the output stream should transition to.
Must be zero.
The method returns an
Return code | Description |
---|---|
| Transitioning the stream state succeeded. |
| Device MFT could not support the request at this time. |
| An invalid stream ID was passed. |
| The requested stream transition is not possible. |
This interface method helps to transition the output stream to a specified state with a specified media type set on the output stream. This is used by the DTM when the Device Source requests that a specific output stream's state and media type be changed. Device MFT should change the specified output stream's media type and state to the requested media type.
If the incoming media type and stream state are the same as the current media type and stream state, the method returns
If the incoming media type and current media type of the stream are the same, Device MFT must change the stream's state to the requested value and return the appropriate
When a change in the output stream's media type requires a corresponding change in the input, Device MFT must post the
As an example, consider a Device MFT that has two input streams and three output streams. Let Output 1 and Output 2 source from Input 1 and stream at 720p. Now, suppose Output 2's media type changes to 1080p. To satisfy this request, Device MFT must change the Input 1 media type to 1080p, by posting
Initializes the Digital Living Network Alliance (DLNA) media sink.
The DLNA media sink exposes this interface. To get a reference to this interface, call CoCreateInstance. The CLSID is CLSID_MPEG2DLNASink.
-Initializes the Digital Living Network Alliance (DLNA) media sink.
-Pointer to a byte stream. The DLNA media sink writes data to this byte stream. The byte stream must be writable.
If TRUE, the DLNA media sink accepts PAL video formats. Otherwise, it accepts NTSC video formats.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The method was already called. |
| The media sink's |
Configures Windows Media Digital Rights Management (DRM) for Network Devices on a network sink.
The Advanced Systems Format (ASF) streaming media sink exposes this interface. To get a reference to the
For more information, see Remarks.
-To stream protected content over a network, the ASF streaming media sink provides an output trust authority (OTA) that supports Windows Media DRM for Network Devices and implements the
The application gets a reference to
To stream the content, the application does the following:
To stream DRM-protected content over a network from a server to a client, an application must use the Microsoft Media Foundation Protected Media Path (PMP). The media sink and the application-provided HTTP byte stream exist in mfpmp.exe. Therefore, the byte stream must expose the
When the clock starts for the first time or restarts, the encrypter that is used for encrypting samples is retrieved, and the license response is cached.
Gets the license response for the specified request.
-Pointer to a byte array that contains the license request.
Size, in bytes, of the license request.
Receives a reference to a byte array that contains the license response. The caller must free the array by calling CoTaskMemFree.
Receives the size, in bytes, of the license response.
Receives the key identifier. The caller must release the string by calling SysFreeString.
The function returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink was shut down. |
Not implemented in this release.
-Receives a reference to a byte array that contains the license response. The caller must free the array by calling CoTaskMemFree.
Receives the size, in bytes, of the license response.
The method returns E_NOTIMPL.
Represents a buffer that contains a Microsoft DirectX Graphics Infrastructure (DXGI) surface.
-To create a DXGI media buffer, first create the DXGI surface. Then call
Gets the index of the subresource that is associated with this media buffer.
-The subresource index is specified when you create the media buffer object. See
For more information about texture subresources, see
Queries the Microsoft DirectX Graphics Infrastructure (DXGI) surface for an interface.
-The interface identifier (IID) of the interface being requested.
Receives a reference to the interface. The caller must release the interface.
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The object does not support the specified interface. |
| Invalid request. |
You can use this method to get a reference to the
Gets the index of the subresource that is associated with this media buffer.
-Receives the zero-based index of the subresource.
If this method succeeds, it returns
The subresource index is specified when you create the media buffer object. See
For more information about texture subresources, see
Gets an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The object does not support the specified interface. |
| The specified key was not found. |
Stores an arbitrary
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| An item already exists with this key. |
To retrieve the reference from the object, call
Provides functionality for getting the
Gets the
Gets the
If this method succeeds, it returns
Enables an application to use a Media Foundation transform (MFT) that has restrictions on its use.
-If you register an MFT that requires unlocking, include the
Unlocks a Media Foundation transform (MFT) so that the application can use it.
-A reference to the
If this method succeeds, it returns
This method authenticates the caller, using a private communication channel between the MFT and the object that implements the
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
-
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
-
Sets the number of input pins on the EVR filter.
-Specifies the total number of input pins on the EVR filter. This value includes the input pin for the reference stream, which is created by default. For example, to mix one substream plus the reference stream, set this parameter to 2.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid number of streams. The minimum is one, and the maximum is 16. |
| This method has already been called, or at least one pin is already connected. |
After this method has been called, it cannot be called a second time on the same instance of the EVR filter. Also, the method fails if any input pins are connected.
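The argument checks described above can be sketched as a small validator. The enum values and the helper function here are hypothetical, for illustration only; the rules themselves (1 to 16 streams, callable only once, and no pins connected) come from the documentation above.

```cpp
#include <cassert>

// Illustrative stand-ins for the return codes named in the table above.
enum ConfigResult { Ok, InvalidArg, AlreadyConfigured };

// Hypothetical validator mirroring the documented SetNumberOfStreams rules:
// the count includes the reference stream, must be between 1 and 16, and the
// call is rejected after a prior call or once any input pin is connected.
ConfigResult ValidateSetNumberOfStreams(int count,
                                        bool alreadyCalled,
                                        bool anyPinConnected)
{
    if (alreadyCalled || anyPinConnected)
        return AlreadyConfigured;   // the "already called / pin connected" row
    if (count < 1 || count > 16)
        return InvalidArg;          // the "invalid number of streams" row
    return Ok;
}
```

For example, mixing one substream plus the reference stream passes a count of 2, which the validator accepts when called on a fresh, unconnected filter.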
-
Retrieves the number of input pins on the EVR filter. The EVR filter always has at least one input pin, which corresponds to the reference stream.
-Receives the number of streams.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Configures the DirectShow Enhanced Video Renderer (EVR) filter. To get a reference to this interface, call QueryInterface on the EVR filter.
-Gets or sets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
-Sets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer Filter (EVR).
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
Gets the configuration parameters for the Microsoft DirectShow Enhanced Video Renderer (EVR) filter.
-If this method succeeds, it returns
Optionally supported by media sinks to perform required tasks before shutdown. This interface is typically exposed by archive sinks; that is, media sinks that write to a file. It is used to perform tasks such as flushing data to disk or updating a file header.
To get a reference to this interface, call QueryInterface on the media sink.
-If a media sink exposes this interface, the Media Session will call BeginFinalize on the sink before the session closes.
-
Notifies the media sink to asynchronously take any steps it needs to finish its tasks.
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Many archive media sinks have steps they need to do at the end of archiving to complete their file operations, such as updating the header (for some formats) or flushing all pending writes to disk. In some cases, this may include expensive operations such as indexing the content. BeginFinalize is an asynchronous way to initiate final tasks.
When the finalize operation is complete, the callback object's
Completes an asynchronous finalize operation.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method after the
Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if the Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Enables a media source in the application process to create objects in the protected media path (PMP) process.
-This interface is used when a media source resides in the application process but the Media Session resides in a PMP process. The media source can use this interface to create objects in the PMP process. For example, to play DRM-protected content, the media source typically must create an input trust authority (ITA) in the PMP process.
To use this interface, the media source implements the
You can also get a reference to this interface by calling
Applications implement this interface in order to provide a custom HTTP or HTTPS download implementation. Use the
Applications implement this interface in order to provide a custom HTTP or HTTPS download implementation. Use the
Callback interface to notify the application when an asynchronous method completes.
-For more information about asynchronous methods in Microsoft Media Foundation, see Asynchronous Callback Methods.
This interface is also used to perform a work item in a Media Foundation work-queue. For more information, see Work Queues.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
This value can specify one of the standard Media Foundation work queues, or a work queue created by the application. For list of standard Media Foundation work queues, see Work Queue Identifiers. To create a new work queue, call
If the work queue is not compatible with the value returned in pdwFlags, the Media Foundation platform returns
Applies to: desktop apps | Metro style apps
Called when an asynchronous operation is completed.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Within your implementation of Invoke, call the corresponding End... method.
Provides logging information about the parent object the async callback is associated with.
-Media sources are objects that generate media data in the Media Foundation pipeline. This section describes the media source APIs in detail. Read this section if you are implementing a custom media source, or using a media source outside of the Media Foundation pipeline.
If your application uses the control layer, it needs to use only a limited subset of the media source APIs. For information, see the topic Using Media Sources with the Media Session.
-
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
Retrieves the characteristics of the byte stream.
-
Retrieves the length of the stream.
-
Retrieves the current read or write position in the stream.
-The methods that update the current position are Read, BeginRead, Write, BeginWrite, SetCurrentPosition, and Seek.
Queries whether the current position has reached the end of the stream.
-
Reads data from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
This method reads at most cb bytes from the current position in the stream and copies them into the buffer provided by the caller. The number of bytes that were read is returned in the pcbRead parameter. The method does not return an error code on reaching the end of the file, so the application should check the value in pcbRead after the method returns.
This method is synchronous. It blocks until the read operation completes.
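Because a read can return fewer bytes than requested, and end of stream is reported through a zero byte count rather than an error code, callers typically loop and check the returned count each time. The sketch below mimics that contract against a minimal in-memory stand-in; MemoryStream and its Read method are illustrative only, not part of the byte-stream API.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Minimal in-memory stand-in that mimics the documented Read contract:
// copy at most 'cb' bytes from the current position, report the number of
// bytes actually read, and return success (not an error) at end of stream.
struct MemoryStream
{
    std::vector<unsigned char> data;
    size_t pos = 0;

    bool Read(unsigned char* pb, size_t cb, size_t* pcbRead)
    {
        size_t avail = data.size() - pos;
        size_t n = (cb < avail) ? cb : avail;
        std::memcpy(pb, data.data() + pos, n);
        pos += n;
        *pcbRead = n;   // zero here means end of stream, not failure
        return true;
    }
};

// Read the whole stream in small chunks, checking the count after each call,
// as the documentation advises.
std::vector<unsigned char> ReadAll(MemoryStream& stream)
{
    std::vector<unsigned char> out;
    unsigned char buf[4];
    size_t cbRead = 0;
    do {
        stream.Read(buf, sizeof(buf), &cbRead);
        out.insert(out.end(), buf, buf + cbRead);
    } while (cbRead > 0);
    return out;
}
```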
Begins an asynchronous read operation from the stream.
-Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been read into the buffer, the callback object's
Do not read from, write to, free, or reallocate the buffer while an asynchronous read is pending.
Completes an asynchronous read operation.
- Pointer to the
Call this method after the
Writes data to the stream.
-Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
If this method succeeds, it returns
This method writes the contents of the pb buffer to the stream, starting at the current stream position. The number of bytes that were written is returned in the pcbWritten parameter.
This method is synchronous. It blocks until the write operation completes.
Begins an asynchronous write operation to the stream.
-Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the
Pointer to the
If this method succeeds, it returns
When all of the data has been written to the stream, the callback object's
Do not reallocate, free, or write to the buffer while an asynchronous write is still pending.
Completes an asynchronous write operation.
-Pointer to the
Call this method when the
Moves the current position in the stream by a specified offset.
- Specifies the origin of the seek as a member of the
Specifies the new position, as a byte offset from the seek origin.
Specifies zero or more flags. The following flags are defined.
Value | Meaning |
---|---|
| All pending I/O requests are canceled after the seek request completes successfully. |
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
-If this method succeeds, it returns
If the byte stream is read-only, this method has no effect.
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
-If this method succeeds, it returns
The GetCurrentOperationMode method retrieves the optimization features in effect.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives the current features. The returned value is a bitwise combination of zero or more flags from the DMO_VIDEO_OUTPUT_STREAM_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
The GetCurrentSampleRequirements method retrieves the optimization features required to process the next sample, given the features already agreed to by the application.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives the required features. The returned value is a bitwise combination of zero or more flags from the DMO_VIDEO_OUTPUT_STREAM_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
After an application calls the
Before processing a sample, the application can call this method. If the DMO does not require a given feature in order to process the next sample, it omits the corresponding flag from the pdwRequestedFeatures parameter. For the next sample only, the application can ignore the feature. The results of this method are valid only for the next call to the
The DMO will return only the flags that were agreed to in the SetOperationMode method. In other words, you cannot dynamically enable new features with this method.
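The guarantee above implies that the required features are always a subset of the agreed features. A hedged sketch of that invariant, using plain bitmasks in place of the actual DMO_VIDEO_OUTPUT_STREAM_FLAGS values (the flag constants and helper below are illustrative, not part of the API):

```cpp
#include <cassert>

// Illustrative flag bits; real values come from DMO_VIDEO_OUTPUT_STREAM_FLAGS.
const unsigned long kFeatureA = 0x1;
const unsigned long kFeatureB = 0x2;

// Hypothetical helper: true if every feature bit in 'required' is also set
// in 'agreed', mirroring the rule that GetCurrentSampleRequirements never
// reports a feature that SetOperationMode did not negotiate.
bool RequiredIsSubsetOfAgreed(unsigned long required, unsigned long agreed)
{
    return (required & ~agreed) == 0;
}
```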
-
The Next method retrieves a specified number of items in the enumeration sequence.
Number of items to retrieve.
Array of size cItemsToFetch that is filled with the CLSIDs of the enumerated DMOs.
Array of size cItemsToFetch that is filled with the friendly names of the enumerated DMOs.
Pointer to a variable that receives the actual number of items retrieved. Can be
Returns an
Return code | Description |
---|---|
| Invalid argument. |
| Insufficient memory. |
| |
| Retrieved fewer items than requested. |
| Retrieved the requested number of items. |
If the method succeeds, the arrays given by the pCLSID and Names parameters are filled with CLSIDs and wide-character strings. The value of *pcItemsFetched specifies the number of items returned in these arrays.
The method returns
The caller must free the memory allocated for each string returned in the Names parameter, using the CoTaskMemFree function.
-
The Reset method resets the enumeration sequence to the beginning.
Returns
The interface provides methods for manipulating a data buffer. Buffers passed to the
The interface provides methods for manipulating a Microsoft DirectX Media Object (DMO).
The GetOutputStreamInfo method retrieves information about an output stream; for example, whether the stream is discardable, and whether it uses a fixed sample size. This information never changes.
Zero-based index of an output stream on the DMO.
Pointer to a variable that receives a bitwise combination of zero or more DMO_OUTPUT_STREAM_INFO_FLAGS flags.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| |
| Success |
The GetInputType method retrieves a preferred media type for a specified input stream.
Zero-based index of an input stream on the DMO.
Zero-based index on the set of acceptable media types.
Pointer to a
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Type index is out of range. |
| Insufficient memory. |
| |
| Success. |
Call this method to enumerate an input stream's preferred media types. The DMO assigns each media type an index value in order of preference. The most preferred type has an index of zero. To enumerate all the types, make successive calls while incrementing the type index until the method returns DMO_E_NO_MORE_ITEMS. The DMO is not guaranteed to enumerate every media type that it supports.
The format block in the returned type might be
If the method succeeds, call MoFreeMediaType to free the format block. (This function is also safe to call when the format block is
To set the media type, call the
To test whether a particular media type is acceptable, call SetInputType with the
To test whether the dwTypeIndex parameter is in range, set pmt to
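The enumeration pattern described above (incrementing the type index until the method reports no more items) can be sketched as follows. The DMO call is mocked with a plain function so the loop shape stands alone; the return-code constants, the type names, and the simplified signature are illustrative stand-ins, not the real GetInputType API.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-ins for the real HRESULT values.
const long kSketchOk = 0;
const long kSketchNoMoreItems = -1;   // plays the role of DMO_E_NO_MORE_ITEMS

// Mock of GetInputType: three preferred types, most preferred first.
long MockGetInputType(unsigned long typeIndex, std::string* type)
{
    static const char* kTypes[] = { "YUY2", "NV12", "RGB32" };
    if (typeIndex >= 3)
        return kSketchNoMoreItems;
    *type = kTypes[typeIndex];
    return kSketchOk;
}

// Enumerate all preferred input types by incrementing the type index
// until the "no more items" code is returned, as the remarks describe.
std::vector<std::string> EnumeratePreferredTypes()
{
    std::vector<std::string> types;
    std::string type;
    for (unsigned long i = 0; MockGetInputType(i, &type) == kSketchOk; ++i)
        types.push_back(type);   // index 0 is the most preferred type
    return types;
}
```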
The SetInputType method sets the media type on an input stream, or tests whether a media type is acceptable.
Zero-based index of an input stream on the DMO.
Pointer to a
Bitwise combination of zero or more flags from the DMO_SET_TYPE_FLAGS enumeration.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| Media type was not accepted |
| Media type is not acceptable |
| Media type was set successfully, or is acceptable |
?
Call this method to test, set, or clear the media type on an input stream:
The media types that are currently set on other streams can affect whether the media type is acceptable.
-
The GetInputCurrentType
method retrieves the media type that was set for an input stream, if any.
Zero-based index of an input stream on the DMO.
Pointer to a
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Media type was not set. |
| Insufficient memory. |
| Success. |
?
The caller must set the media type for the stream before calling this method. To set the media type, call the
If the method succeeds, call MoFreeMediaType to free the format block.
-
The GetInputSizeInfo
method retrieves the buffer requirements for a specified input stream.
Zero-based index of an input stream on the DMO.
Pointer to a variable that receives the minimum size of an input buffer for this stream, in bytes.
Pointer to a variable that receives the maximum amount of data that the DMO will hold for lookahead, in bytes. If the DMO does not perform lookahead on the stream, the value is zero.
Pointer to a variable that receives the required buffer alignment, in bytes. If the input stream has no alignment requirement, the value is 1.
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Media type was not set. |
| Success. |
?
The buffer requirements may depend on the media types of the various streams. Before calling this method, set the media type of each stream by calling the
If the DMO performs lookahead on the input stream, it returns the
A buffer is aligned if the buffer's start address is a multiple of *pcbAlignment. The alignment must be a power of two. Depending on the microprocessor, reads and writes to an aligned buffer might be faster than to an unaligned buffer. Also, some microprocessors do not support unaligned reads and writes.
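The alignment rule above (the start address must be a multiple of *pcbAlignment, and the alignment must be a power of two) can be verified in a couple of lines. The function names below are ours, not part of the DMO API.

```cpp
#include <cassert>
#include <cstdint>

// True if alignment is a power of two, as alignments reported by
// GetInputSizeInfo are required to be.
bool IsPowerOfTwo(uint32_t alignment) {
    return alignment != 0 && (alignment & (alignment - 1)) == 0;
}

// True if a buffer starting at addr satisfies the alignment requirement:
// the start address must be a multiple of the alignment.
bool IsAligned(uintptr_t addr, uint32_t alignment) {
    return IsPowerOfTwo(alignment) && (addr % alignment) == 0;
}
```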
-
The Flush
method flushes all internally buffered data.
Returns
The DMO performs the following actions when this method is called:
Media types, maximum latency, and locked state do not change.
When the method returns, every input stream accepts data. Output streams cannot produce any data until the application calls the
The Discontinuity
method signals a discontinuity on the specified input stream.
Zero-based index of an input stream on the DMO.
Returns an
Return code | Description |
---|---|
| Invalid stream index |
| The DMO is not accepting input. |
| The input and output types have not been set. |
| Success |
?
A discontinuity represents a break in the input. A discontinuity might occur because no more data is expected, the format is changing, or there is a gap in the data. After a discontinuity, the DMO does not accept further input on that stream until all pending data has been processed. The application should call the
This method might fail if it is called before the client sets the input and output types on the DMO.
-
The ProcessInput
method delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
Pointer to the buffer's
Bitwise combination of zero or more flags from the DMO_INPUT_DATA_BUFFER_FLAGS enumeration.
Time stamp that specifies the start time of the data in the buffer. If the buffer has a valid time stamp, set the
Reference time specifying the duration of the data in the buffer. If this value is valid, set the
Returns an
Return code | Description |
---|---|
| Invalid stream index. |
| Data cannot be accepted. |
| No output to process. |
| Success. |
?
The input buffer specified in the pBuffer parameter is read-only. The DMO will not modify the data in this buffer. All write operations occur on the output buffers, which are given in a separate call to the
If the DMO does not process all the data in the buffer, it keeps a reference count on the buffer. It releases the buffer once it has generated all the output, unless it needs to perform lookahead on the data. (To determine whether a DMO performs lookahead, call the
If this method returns DMO_E_NOTACCEPTING, call ProcessOutput until the input stream can accept more data. To determine whether the stream can accept more data, call the
If the method returns S_FALSE, no output was generated from this input and the application does not need to call ProcessOutput. However, a DMO is not required to return S_FALSE in this situation; it might return
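The retry pattern described above (when input is refused, drain output and try again) can be sketched with a toy object in place of the real DMO. The status codes, the two-sample capacity, and all names here are invented for illustration; a real client would call ProcessInput/ProcessOutput on IMediaObject and check DMO_E_NOTACCEPTING.

```cpp
#include <cassert>
#include <deque>

const int kOk = 0;
const int kNotAccepting = 1; // stands in for DMO_E_NOTACCEPTING

// A toy "DMO" that buffers at most two input samples and emits one
// output sample per input sample.
struct FakeDmo {
    std::deque<int> pending;

    int ProcessInput(int sample) {
        if (pending.size() >= 2)
            return kNotAccepting; // caller must drain output first
        pending.push_back(sample);
        return kOk;
    }

    // Models ProcessOutput: moves one pending sample to *out, if any.
    bool ProcessOutput(int* out) {
        if (pending.empty())
            return false;
        *out = pending.front();
        pending.pop_front();
        return true;
    }
};

// Feed every sample, draining output whenever input is refused.
int PumpAll(FakeDmo& dmo, const int* samples, int count, int* outputs) {
    int produced = 0;
    for (int i = 0; i < count; ++i) {
        while (dmo.ProcessInput(samples[i]) == kNotAccepting) {
            int out;
            if (dmo.ProcessOutput(&out))
                outputs[produced++] = out;
        }
    }
    int out;
    while (dmo.ProcessOutput(&out)) // final drain
        outputs[produced++] = out;
    return produced;
}
```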
The ProcessOutput
method generates output from the current input data.
Bitwise combination of zero or more flags from the DMO_PROCESS_OUTPUT_FLAGS enumeration.
Number of output buffers.
Pointer to an array of
Pointer to a variable that receives a reserved value (zero). The application should ignore this value.
Returns an
Return code | Description |
---|---|
| Failure |
| Invalid argument |
| |
| No output was generated |
| Success |
?
The pOutputBuffers parameter points to an array of
Each
When the application calls ProcessOutput
, the DMO processes as much input data as possible. It writes the output data to the output buffers, starting from the end of the data in each buffer. (To find the end of the data, call the
If the DMO fills an entire output buffer and still has input data to process, the DMO returns the
If the method returns S_FALSE, no output was generated. However, a DMO is not required to return S_FALSE in this situation; it might return
Discarding data:
You can discard data from a stream by setting the
For each stream in which pBuffer is
To check whether a stream is discardable or optional, call the
The Lock
method acquires or releases a lock on the DMO. Call this method to keep the DMO serialized when performing multiple operations.
Value that specifies whether to acquire or release the lock. If the value is non-zero, a lock is acquired. If the value is zero, the lock is released.
Returns an
Return code | Description |
---|---|
| Failure |
| Success |
?
This method prevents other threads from calling methods on the DMO. If another thread calls a method on the DMO, the thread blocks until the lock is released.
If you are using the Active Template Library (ATL) to implement a DMO, the name of the Lock method conflicts with the CComObjectRootEx::Lock method. To work around this problem, define the preprocessor symbol FIX_LOCK_NAME before including the header file Dmo.h:
#define FIX_LOCK_NAME
#include <dmo.h>
This directive causes the preprocessor to rename the
The GetLatency
method retrieves the latency introduced by this DMO.
This method returns the average time required to process each buffer. This value usually depends on factors in the run-time environment, such as the processor speed and the CPU load. One possible way to implement this method is for the DMO to keep a running average based on historical data.
-
The Clone
method creates a copy of the DMO in its current state.
Address of a reference to receive the new DMO's
Returns
If the method succeeds, the
The GetLatency
method retrieves the latency introduced by this DMO.
Pointer to a variable that receives the latency, in 100-nanosecond units.
Returns
This method returns the average time required to process each buffer. This value usually depends on factors in the run-time environment, such as the processor speed and the CPU load. One possible way to implement this method is for the DMO to keep a running average based on historical data.
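One way to implement the running average suggested above is an exponential moving average of per-buffer processing times. The class below is a sketch under that assumption, not part of the DMO API; times are kept in 100-nanosecond units to match the method's contract.

```cpp
#include <cassert>
#include <cstdint>

// Keeps a running average of per-buffer processing times using an
// exponential moving average: avg' = avg + (sample - avg) / weight.
class LatencyTracker {
public:
    explicit LatencyTracker(int64_t weight = 8) : weight_(weight) {}

    void AddSample(int64_t hundredNs) {
        if (count_++ == 0)
            average_ = hundredNs; // the first sample seeds the average
        else
            average_ += (hundredNs - average_) / weight_;
    }

    // What a GetLatency implementation would report, in 100-ns units.
    int64_t Latency() const { return average_; }

private:
    int64_t weight_;
    int64_t average_ = 0;
    int64_t count_ = 0;
};
```

The weight controls how quickly the reported latency adapts to changes in CPU load; a larger weight gives a smoother but slower-moving estimate.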
-Enables other components in the protected media path (PMP) to use the input protection system provided by an input trust authority (ITA). An ITA is a component that implements an input protection system for media content. ITAs expose the
An ITA translates policy from the content's native format into a common format that is used by other PMP components. It also provides a decrypter, if one is needed to decrypt the stream.
The topology contains one ITA instance for every protected stream in the media source. The ITA is obtained from the media source by calling
Retrieves a decrypter transform.
-Interface identifier (IID) of the interface being requested. Currently this value must be IID_IMFTransform, which requests the
Receives a reference to the interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The decrypter does not support the requested interface. |
| This input trust authority (ITA) does not provide a decrypter. |
?
The decrypter should be created in a disabled state, where any calls to
An ITA is not required to provide a decrypter. If the source content is not encrypted, the method should return
The ITA must create a new instance of its decrypter for each call to GetDecrypter. Do not return multiple references to the same decrypter. They must be separate instances because the Media Session might place them in two different branches of the topology.
-
Requests permission to perform a specified action on the stream.
-The requested action, specified as a member of the
Receives the value
The method returns an
Return code | Description |
---|---|
| The user has permission to perform this action. |
| The user must individualize the application. |
| The user must obtain a license. |
?
This method verifies whether the user has permission to perform a specified action on the stream. The ITA does any work needed to verify the user's right to perform the action, such as checking licenses.
To verify the user's rights, the ITA might need to perform additional steps that require interaction with the user or consent from the user. For example, it might need to acquire a new license or individualize a DRM component. In that case, the ITA creates an activation object for a content enabler and returns the activation object's
The Media Session returns the
The application calls
The application calls
The Media Session calls RequestAccess again.
The return value signals whether the user has permission to perform the action:
If the user already has permission to perform the action, the method returns
If the user does not have permission, the method returns a failure code and sets *ppContentEnablerActivate to
If the ITA must perform additional steps that require interaction with the user, the method returns a failure code and returns the content enabler's
The Media Session will not allow the action unless this method returns
A stream can go to multiple outputs, so this method might be called multiple times with different actions, once for every output.
-
Retrieves the policy that defines which output protection systems are allowed for this stream, and the configuration data for each protection system.
-The action that will be performed on this stream, specified as a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Notifies the input trust authority (ITA) that a requested action is about to be performed.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
Before calling this method, the Media Session calls
Notifies the input trust authority (ITA) when the number of output trust authorities (OTAs) that will perform a specified action has changed.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The ITA can update its internal state if needed. If the method returns a failure code, the Media Session cancels the action.
-
Resets the input trust authority (ITA) to its initial state.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
When this method is called, the ITA should disable any decrypter that was returned in the
Registers Media Foundation transforms (MFTs) in the caller's process.
The Media Session exposes this interface as a service. To obtain a reference to this interface, call the
This interface requires the Media Session. If you are not using the Media Session for playback, call one of the following functions instead:
Registers one or more Media Foundation transforms (MFTs) in the caller's process.
-A reference to an array of
The number of elements in the pMFTs array.
If this method succeeds, it returns
This method is similar to the
Unlike
Provides a generic way to store key/value pairs on an object. The keys are
For a list of predefined attribute
To create an empty attribute store, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of attributes that are set on this object.
-To enumerate all of the attributes, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the value associated with a key.
- A
A reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified key was not found. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the data type of the value associated with a key.
-Receives a member of the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether a stored attribute value equals a specified
Receives a Boolean value indicating whether the attribute matches the value given in Value. See Remarks. This parameter must not be
The method sets pbResult to FALSE if any of the following are true:
No attribute is found whose key matches the one given in guidKey.
The attribute's data type does not match the data type of the value given in Value.
The attribute value does not match the value given in Value.
The method fails.
Otherwise, the method sets pbResult to TRUE.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Compares the attributes on this object with the attributes on another object.
-Pointer to the
Member of the
Receives a Boolean value. The value is TRUE if the two sets of attributes match in the way specified by the MatchType parameter. Otherwise, the value is FALSE.
If pThis is the object whose Compare method is called, and pTheirs is the object passed in as the pTheirs parameter, the following comparisons are defined by MatchType.
Match type | Returns TRUE if and only if |
---|---|
For every attribute in pThis, an attribute with the same key and value exists in pTheirs. | |
For every attribute in pTheirs, an attribute with the same key and value exists in pThis. | |
The key/value pairs are identical in both objects. | |
Take the intersection of the keys in pThis and the keys in pTheirs. The values associated with those keys are identical in both pThis and pTheirs. | |
Take the object with the smallest number of attributes. For every attribute in that object, an attribute with the same key and value exists in the other object. |
?
The pTheirs and pbResult parameters must not be
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
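The five comparison rules in the table above can be modeled over plain maps, with a `std::map` standing in for the attribute store; these free functions and their names are illustrative inventions, not the real match-type enumeration values.

```cpp
#include <cassert>
#include <map>
#include <string>

using Attrs = std::map<std::string, int>;

// True if every attribute in a also exists (same key and value) in b.
bool ContainedIn(const Attrs& a, const Attrs& b) {
    for (const auto& kv : a) {
        auto it = b.find(kv.first);
        if (it == b.end() || it->second != kv.second)
            return false;
    }
    return true;
}

// Every attribute in pThis must exist in pTheirs.
bool MatchOurItems(const Attrs& ours, const Attrs& theirs)   { return ContainedIn(ours, theirs); }
// Every attribute in pTheirs must exist in pThis.
bool MatchTheirItems(const Attrs& ours, const Attrs& theirs) { return ContainedIn(theirs, ours); }
// The key/value pairs must be identical in both objects.
bool MatchAllItems(const Attrs& ours, const Attrs& theirs)   { return ours == theirs; }

// Values must agree on every key present in both stores.
bool MatchIntersection(const Attrs& ours, const Attrs& theirs) {
    for (const auto& kv : ours) {
        auto it = theirs.find(kv.first);
        if (it != theirs.end() && it->second != kv.second)
            return false;
    }
    return true;
}

// Every attribute of the smaller store must exist in the larger one.
bool MatchSmaller(const Attrs& ours, const Attrs& theirs) {
    return ours.size() <= theirs.size() ? ContainedIn(ours, theirs)
                                        : ContainedIn(theirs, ours);
}
```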
Retrieves a UINT32 value associated with a key.
-Receives a UINT32 value. If the key is found and the data type is UINT32, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a UINT64 value associated with a key.
-Receives a UINT64 value. If the key is found and the data type is UINT64, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a double value associated with a key.
-Receives a double value. If the key is found and the data type is double, the method copies the value into this parameter. Otherwise, the original value of this parameter is not changed.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a
Receives a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of a string value associated with a key.
-If the key is found and the value is a string type, this parameter receives the number of characters in the string, not including the terminating
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a wide-character string associated with a key.
-Pointer to a wide-character array allocated by the caller. The array must be large enough to hold the string, including the terminating
The size of the pwszValue array, in characters. This value includes the terminating
Receives the number of characters in the string, excluding the terminating
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The length of the string is too large to fit in a UINT32 value. |
| The buffer is not large enough to hold the string. |
| The specified key was not found. |
| The attribute value is not a string. |
?
You can also use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets a wide-character string associated with a key. This method allocates the memory for the string.
-A
If the key is found and the value is a string type, this parameter receives a copy of the string. The caller must free the memory for the string by calling CoTaskMemFree.
Receives the number of characters in the string, excluding the terminating
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified key was not found. |
| The attribute value is not a string. |
?
To copy a string value into a caller-allocated buffer, use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
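The caller-allocated-buffer pattern described above (query the length, size a buffer, then fetch the string) can be sketched as follows. A `std::map` of wide strings stands in for the attribute store, and integer status codes stand in for the HRESULT values; all names here are hypothetical.

```cpp
#include <cassert>
#include <cwchar>
#include <map>
#include <string>
#include <vector>

const int kOk = 0;
const int kNotFound = 1;       // stands in for "key not found"
const int kBufferTooSmall = 2; // stands in for "buffer too small"

// Toy attribute store holding only string values.
struct FakeAttributes {
    std::map<std::string, std::wstring> items;

    // Reports the string length in characters, excluding the terminating NUL.
    int GetStringLength(const std::string& key, size_t* length) const {
        auto it = items.find(key);
        if (it == items.end()) return kNotFound;
        *length = it->second.size();
        return kOk;
    }

    // Copies the string into a caller-allocated buffer (NUL must fit too).
    int GetString(const std::string& key, wchar_t* buf, size_t bufSize) const {
        auto it = items.find(key);
        if (it == items.end()) return kNotFound;
        if (bufSize < it->second.size() + 1)
            return kBufferTooSmall;
        std::wcscpy(buf, it->second.c_str());
        return kOk;
    }
};

// The two-call pattern: ask for the length, size the buffer, then copy.
std::wstring ReadString(const FakeAttributes& attrs, const std::string& key) {
    size_t len = 0;
    if (attrs.GetStringLength(key, &len) != kOk)
        return L"";
    std::vector<wchar_t> buf(len + 1);
    if (attrs.GetString(key, buf.data(), buf.size()) != kOk)
        return L"";
    return std::wstring(buf.data());
}
```

The same two-call pattern applies to byte arrays via the blob-size and blob-retrieval methods.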
Retrieves the length of a byte array associated with a key.
-If the key is found and the value is a byte array, this parameter receives the size of the array, in bytes.
To get the byte array, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a byte array associated with a key. This method copies the array into a caller-allocated buffer.
-Pointer to a buffer allocated by the caller. If the key is found and the value is a byte array, the method copies the array into this buffer. To find the required size of the buffer, call
The size of the pBuf buffer, in bytes.
Receives the size of the byte array. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer is not large enough to hold the array. |
| The specified key was not found. |
| The attribute value is not a byte array. |
?
You can also use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides a generic way to store key/value pairs on an object. The keys are
For a list of predefined attribute
To create an empty attribute store, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an interface reference associated with a key.
-Interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The attribute value is an |
| The specified key was not found. |
| The attribute value is not an |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Adds an attribute value with a specified key.
- A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
?
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes a key/value pair from the object's attribute list.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
If the specified key does not exist, the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes all key/value pairs from the object's attribute list.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a UINT32 value with a key.
-New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To retrieve the UINT32 value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a UINT64 value with a key.
-New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To retrieve the UINT64 value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a double value with a key.
-New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To retrieve the double value, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a
New value for this key.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
?
To retrieve the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a wide-character string with a key.
-Null-terminated wide-character string to associate with this key. The method stores a copy of the string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To retrieve the string, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates a byte array with a key.
-Pointer to a byte array to associate with this key. The method stores a copy of the array.
Size of the array, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To retrieve the byte array, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Associates an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To retrieve the
It is not an error to call SetUnknown with pUnknown equal to
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Locks the attribute store so that no other thread can access it. If the attribute store is already locked by another thread, this method blocks until the other thread unlocks the object. After calling this method, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This method can cause a deadlock if a thread that calls LockStore waits on a thread that calls any other
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Unlocks the attribute store after a call to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of attributes that are set on this object.
-Receives the number of attributes. This parameter must not be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
To enumerate all of the attributes, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an attribute at the specified index.
-Index of the attribute to retrieve. To get the number of attributes, call
Receives the
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid index. |
?
To enumerate all of an object's attributes in a thread-safe way, do the following:
Call
Call
Call GetItemByIndex to get each attribute by index.
Call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
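The four-step lock/count/read/unlock sequence above can be sketched with a `std::mutex` standing in for the store lock; the class and method names mirror the real ones but the implementation is a toy model, not the actual attribute store.

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Toy attribute store with an explicit store lock, modeling the
// LockStore / GetCount / GetItemByIndex / UnlockStore sequence.
class FakeAttributes {
public:
    void Set(const std::string& key, int value) {
        std::lock_guard<std::mutex> g(storeLock_);
        items_[key] = value;
    }

    void LockStore()   { storeLock_.lock(); }
    void UnlockStore() { storeLock_.unlock(); }

    // These two must only be called while the store is locked.
    size_t GetCount() const { return items_.size(); }
    std::pair<std::string, int> GetItemByIndex(size_t i) const {
        auto it = items_.begin();
        std::advance(it, i);
        return *it;
    }

private:
    mutable std::mutex storeLock_;
    std::map<std::string, int> items_;
};

// Enumerate every attribute without another thread mutating the store
// between GetCount and the per-index reads.
std::vector<std::pair<std::string, int>> Snapshot(FakeAttributes& attrs) {
    std::vector<std::pair<std::string, int>> out;
    attrs.LockStore();
    for (size_t i = 0, n = attrs.GetCount(); i < n; ++i)
        out.push_back(attrs.GetItemByIndex(i));
    attrs.UnlockStore();
    return out;
}
```

Holding the store lock for the whole loop is what makes the enumeration consistent; as the LockStore remarks warn, blocking on other work while holding it risks deadlock.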
Copies all of the attributes from this object into another attribute store.
- A reference to the
If this method succeeds, it returns
This method deletes all of the attributes originally stored in pDest.
Note: When you call CopyAllItems on an
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Attributes are used throughout Microsoft Media Foundation to configure objects, describe media formats, query object properties, and other purposes. For more information, see Attributes in Media Foundation.
For a complete list of all the defined attribute GUIDs in Media Foundation, see Media Foundation Attributes.
-Applies to: desktop apps | Metro style apps
Retrieves an attribute at the specified index.
-Index of the attribute to retrieve. To get the number of attributes, call
Receives the
To enumerate all of an object's attributes in a thread-safe way, do the following:
Call
Call
Call GetItemByIndex to get each attribute by index.
Call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Adds an attribute value with a specified key.
- A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
?
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Adds an attribute value with a specified key.
- A
A
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Insufficient memory. |
| Invalid attribute type. |
?
This method checks whether the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents a block of memory that contains media data. Use this interface to access the data in the buffer.
-If the buffer contains 2-D image data (such as an uncompressed video frame), you should query the buffer for the
To get a buffer from a media sample, call one of the following
To create a new buffer object, use one of the following functions.
Function | Description |
---|---|
| Creates a buffer and allocates system memory. |
| Creates a media buffer that wraps an existing media buffer. |
| Creates a buffer that manages a DirectX surface. |
| Creates a buffer and allocates system memory with a specified alignment. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the valid data in the buffer.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the allocated size of the buffer.
-The buffer might or might not contain any valid data, and if there is valid data in the buffer, it might be smaller than the buffer's allocated size. To get the length of the valid data, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gives the caller access to the memory in the buffer, for reading or writing.
-Receives the maximum amount of data that can be written to the buffer. This parameter can be
Receives the length of the valid data in the buffer, in bytes. This parameter can be
Receives a reference to the start of the buffer.
This method gives the caller access to the entire buffer, up to the maximum size returned in the pcbMaxLength parameter. The value returned in pcbCurrentLength is the size of any valid data already in the buffer, which might be less than the total buffer size.
The reference returned in ppbBuffer is guaranteed to be valid, and can safely be accessed across the entire buffer for as long as the lock is held. When you are done accessing the buffer, call
Locking the buffer does not prevent other threads from calling Lock, so you should not rely on this method to synchronize threads.
This method does not allocate any memory, or transfer ownership of the memory to the caller. Do not release or free the memory; the media buffer will free the memory when the media buffer is destroyed.
If you modify the contents of the buffer, update the current length by calling
If the buffer supports the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
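Since every call to this method must be balanced by an unlock, a scope guard keeps the pairing correct even on early returns. Below is a sketch with a toy buffer; the `FakeMediaBuffer` and `BufferLock` names are ours, not part of the Media Foundation API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy media buffer: Lock exposes the bytes, Unlock must balance it.
struct FakeMediaBuffer {
    std::vector<unsigned char> data = std::vector<unsigned char>(16);
    int lockCount = 0;

    unsigned char* Lock(size_t* maxLength) {
        ++lockCount;
        if (maxLength) *maxLength = data.size();
        return data.data();
    }
    void Unlock() { --lockCount; }
};

// Scope guard that pairs every Lock with an Unlock automatically.
class BufferLock {
public:
    explicit BufferLock(FakeMediaBuffer& buf) : buf_(buf) {
        ptr_ = buf_.Lock(&maxLength_);
    }
    ~BufferLock() { buf_.Unlock(); }

    unsigned char* Data() const { return ptr_; }
    size_t MaxLength() const { return maxLength_; }

private:
    FakeMediaBuffer& buf_;
    unsigned char* ptr_ = nullptr;
    size_t maxLength_ = 0;
};
```

After the guard goes out of scope the pointer must not be used, matching the rule that the locked pointer is only guaranteed valid while the lock is held.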
Unlocks a buffer that was previously locked. Call this method once for every call to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| For Direct3D surface buffers, an error occurred when unlocking the surface. |
?
It is an error to call Unlock if you did not call Lock previously.
After calling this method, do not use the reference returned by the Lock method. It is no longer guaranteed to be valid.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the length of the valid data in the buffer.
-Receives the length of the valid data, in bytes. If the buffer does not contain any valid data, the value is zero.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the length of the valid data in the buffer.
-Length of the valid data, in bytes. This value cannot be greater than the allocated size of the buffer, which is returned by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified length is greater than the maximum size of the buffer. |
?
Call this method if you write data into the buffer.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the allocated size of the buffer.
-Receives the allocated size of the buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
?
The buffer might or might not contain any valid data, and if there is valid data in the buffer, it might be smaller than the buffer's allocated size. To get the length of the valid data, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Enables an application to play audio or video files.
-The Media Engine implements this interface. To create an instance of the Media Engine, call
This interface is extended with
Gets the most recent error status.
-This method returns the last error status, if any, that resulted from loading the media source. If there has not been an error, ppError receives the value
This method corresponds to the error attribute of the HTMLMediaElement interface in HTML5.
-Sets the current error code.
-Sets a list of media sources.
-This method corresponds to adding a list of source elements to a media element in HTML5.
The Media Engine tries to load each item in the pSrcElements list, until it finds one that loads successfully. After this method is called, the application can use the
This method completes asynchronously. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load a URL, it sends an
For more information about event handling in the Media Engine, see
If the application also calls
Gets the current network state of the media engine.
-This method corresponds to the networkState attribute of the HTMLMediaElement interface in HTML5.
-Gets or sets the preload flag.
-This method corresponds to the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
-Queries how much resource data the media engine has buffered.
-This method corresponds to the buffered attribute of the HTMLMediaElement interface in HTML5.
The returned
Gets the ready state, which indicates whether the current media resource can be rendered.
-This method corresponds to the readyState attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine is currently seeking to a new playback position.
-This method corresponds to the seeking attribute of the HTMLMediaElement interface in HTML5.
-Gets or sets the current playback position.
-This method corresponds to the currentTime attribute of the HTMLMediaElement interface in HTML5.
-Gets the initial playback position.
-This method corresponds to the initialTime attribute of the HTMLMediaElement interface in HTML5.
-Gets the duration of the media resource.
-This method corresponds to the duration attribute of the HTMLMediaElement interface in HTML5.
If the duration changes, the Media Engine sends an
Queries whether playback is currently paused.
-This method corresponds to the paused attribute of the HTMLMediaElement interface in HTML5.
-Gets or sets the default playback rate.
-This method corresponds to getting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
The default playback rate is used for the next call to the
Gets or sets the current playback rate.
-This method corresponds to getting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges that have been rendered.
-This method corresponds to the played attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges to which the Media Engine can currently seek.
-This method corresponds to the seekable attribute of the HTMLMediaElement interface in HTML5.
To find out whether the media source supports seeking, call
Queries whether playback has ended.
-This method corresponds to the ended attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine automatically begins playback.
-This method corresponds to the autoplay attribute of the HTMLMediaElement interface in HTML5.
If this method returns TRUE, playback begins automatically after the
Queries whether the Media Engine will loop playback.
-This method corresponds to getting the loop attribute of the HTMLMediaElement interface in HTML5.
If looping is enabled, the Media Engine seeks to the start of the content when playback reaches the end.
-Queries whether the audio is muted.
-Gets or sets the audio volume level.
-Gets the most recent error status.
-Receives either a reference to the
If this method succeeds, it returns
This method returns the last error status, if any, that resulted from loading the media source. If there has not been an error, ppError receives the value
This method corresponds to the error attribute of the HTMLMediaElement interface in HTML5.
-Sets the current error code.
-The error code, as an
If this method succeeds, it returns
Sets a list of media sources.
-A reference to the
If this method succeeds, it returns
This method corresponds to adding a list of source elements to a media element in HTML5.
The Media Engine tries to load each item in the pSrcElements list, until it finds one that loads successfully. After this method is called, the application can use the
This method completes asynchronously. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load a URL, it sends an
For more information about event handling in the Media Engine, see
If the application also calls
Sets the URL of a media resource.
-The URL of the media resource.
If this method succeeds, it returns
This method corresponds to setting the src attribute of the HTMLMediaElement interface in HTML5.
The URL specified by this method takes precedence over media resources specified in the
This method asynchronously loads the URL. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load the URL, the Media Engine sends an
For more information about event handling in the Media Engine, see
Gets the URL of the current media resource, or an empty string if no media resource is present.
-Receives a BSTR that contains the URL of the current media resource. If there is no media resource, ppUrl receives an empty string. The caller must free the BSTR by calling SysFreeString.
If this method succeeds, it returns
This method corresponds to the currentSrc attribute of the HTMLMediaElement interface in HTML5.
Initially, the current media resource is empty. It is updated when the Media Engine performs the resource selection algorithm.
-Gets the current network state of the media engine.
-Returns an
This method corresponds to the networkState attribute of the HTMLMediaElement interface in HTML5.
-Gets the preload flag.
-Returns an
This method corresponds to the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
-Sets the preload flag.
-An
If this method succeeds, it returns
This method corresponds to setting the preload attribute of the HTMLMediaElement interface in HTML5. The value is a hint to the user-agent whether to preload the media resource.
-Queries how much resource data the media engine has buffered.
-Receives a reference to the
If this method succeeds, it returns
This method corresponds to the buffered attribute of the HTMLMediaElement interface in HTML5.
The returned
Loads the current media source.
-If this method succeeds, it returns
The main purpose of this method is to reload a list of source elements after updating the list. For more information, see SetSourceElements. Otherwise, calling this method is generally not required. To load a new media source, call
The Load method explicitly invokes the Media Engine's media resource loading algorithm. Before calling this method, you must set the media resource by calling
This method completes asynchronously. When the Load operation starts, the Media Engine sends an
If the Media Engine is unable to load the file, the Media Engine sends an
For more information about event handling in the Media Engine, see
This method corresponds to the load method of the HTMLMediaElement interface in HTML5.
-Queries how likely it is that the Media Engine can play a specified type of media resource.
-A string that contains a MIME type with an optional codecs parameter, as defined in RFC 4281.
Receives an
If this method succeeds, it returns
This method corresponds to the canPlayType attribute of the HTMLMediaElement interface in HTML5.
The canPlayType attribute defines the following values.
Value | Description |
---|---|
"" (empty string) | The user-agent cannot play the resource, or the resource type is "application/octet-stream". |
"probably" | The user-agent probably can play the resource. |
"maybe" | Neither of the previous values applies. |
The value "probably" is used because a MIME type for a media resource is generally not a complete description of the resource. For example, "video/mp4" specifies an MP4 file with video, but does not describe the codec. Even with the optional codecs parameter, the MIME type omits some information, such as the actual coded bit rate. Therefore, it is usually impossible to be certain that playback is possible until the actual media resource is opened.
-Gets the ready state, which indicates whether the current media resource can be rendered.
-Returns an
This method corresponds to the readyState attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine is currently seeking to a new playback position.
-Returns TRUE if the Media Engine is seeking, or
This method corresponds to the seeking attribute of the HTMLMediaElement interface in HTML5.
-Gets the current playback position.
-Returns the playback position, in seconds.
This method corresponds to the currentTime attribute of the HTMLMediaElement interface in HTML5.
-Seeks to a new playback position.
-The new playback position, in seconds.
If this method succeeds, it returns
This method corresponds to setting the currentTime attribute of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the seek operation starts, the Media Engine sends an
Gets the initial playback position.
-Returns the initial playback position, in seconds.
This method corresponds to the initialTime attribute of the HTMLMediaElement interface in HTML5.
-Gets the duration of the media resource.
-Returns the duration, in seconds. If no media data is available, the method returns not-a-number (NaN). If the duration is unbounded, the method returns an infinite value.
This method corresponds to the duration attribute of the HTMLMediaElement interface in HTML5.
If the duration changes, the Media Engine sends an
Queries whether playback is currently paused.
-Returns TRUE if playback is paused, or
This method corresponds to the paused attribute of the HTMLMediaElement interface in HTML5.
-Gets the default playback rate.
-Returns the default playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
This method corresponds to getting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
The default playback rate is used for the next call to the
Sets the default playback rate.
-The default playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
If this method succeeds, it returns
This method corresponds to setting the defaultPlaybackRate attribute of the HTMLMediaElement interface in HTML5.
-Gets the current playback rate.
-Returns the playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
This method corresponds to getting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
-Sets the current playback rate.
-The playback rate, as a multiple of normal (1×) playback. A negative value indicates reverse playback.
If this method succeeds, it returns
This method corresponds to setting the playbackRate attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges that have been rendered.
-Receives a reference to the
If this method succeeds, it returns
This method corresponds to the played attribute of the HTMLMediaElement interface in HTML5.
-Gets the time ranges to which the Media Engine can currently seek.
-Receives a reference to the
If this method succeeds, it returns
This method corresponds to the seekable attribute of the HTMLMediaElement interface in HTML5.
To find out whether the media source supports seeking, call
Queries whether playback has ended.
-Returns TRUE if the direction of playback is forward and playback has reached the end of the media resource. Returns
This method corresponds to the ended attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine automatically begins playback.
-Returns TRUE if the Media Engine automatically begins playback, or
This method corresponds to the autoplay attribute of the HTMLMediaElement interface in HTML5.
If this method returns TRUE, playback begins automatically after the
Specifies whether the Media Engine automatically begins playback.
-If TRUE, the Media Engine automatically begins playback after it loads a media source. Otherwise, playback does not begin until the application calls
If this method succeeds, it returns
This method corresponds to setting the autoplay attribute of the HTMLMediaElement interface in HTML5.
-Queries whether the Media Engine will loop playback.
-Returns TRUE if looping is enabled, or
This method corresponds to getting the loop attribute of the HTMLMediaElement interface in HTML5.
If looping is enabled, the Media Engine seeks to the start of the content when playback reaches the end.
-Specifies whether the Media Engine loops playback.
-Specify TRUE to enable looping, or
If this method succeeds, it returns
If Loop is TRUE, playback loops back to the beginning when it reaches the end of the source.
This method corresponds to setting the loop attribute of the HTMLMediaElement interface in HTML5.
-Starts playback.
-If this method succeeds, it returns
This method corresponds to the play method of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the operation starts, the Media Engine sends an
Pauses playback.
-If this method succeeds, it returns
This method corresponds to the pause method of the HTMLMediaElement interface in HTML5.
The method completes asynchronously. When the transition to paused is complete, the Media Engine sends an
Queries whether the audio is muted.
-Returns TRUE if the audio is muted, or
Mutes or unmutes the audio.
-Specify TRUE to mute the audio, or
If this method succeeds, it returns
Gets the audio volume level.
-Returns the volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
Sets the audio volume level.
-The volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
If this method succeeds, it returns
When the audio balance changes, the Media Engine sends an
Queries whether the current media resource contains a video stream.
-Returns TRUE if the current media resource contains a video stream. Returns
Queries whether the current media resource contains an audio stream.
-Returns TRUE if the current media resource contains an audio stream. Returns
Gets the size of the video frame, adjusted for aspect ratio.
-Receives the width in pixels.
Receives the height in pixels.
If this method succeeds, it returns
This method adjusts for the correct picture aspect ratio. For example, if the encoded frame is 720 × 480 and the picture aspect ratio is 4:3, the method will return a size equal to 640 × 480 pixels.
-Gets the picture aspect ratio of the video stream.
-Receives the x component of the aspect ratio.
Receives the y component of the aspect ratio.
If this method succeeds, it returns
The Media Engine automatically converts the pixel aspect ratio to 1:1 (square pixels).
-Shuts down the Media Engine and releases the resources it is using.
-If this method succeeds, it returns
Copies the current video frame to a DXGI surface or WIC bitmap.
-A reference to the
A reference to an
A reference to a
A reference to an
If this method succeeds, it returns
In frame-server mode, call this method to blit the video frame to a DXGI or WIC surface. The application can call this method at any time after the Media Engine loads a video resource. Typically, however, the application calls
The Media Engine scales and letterboxes the video to fit the destination rectangle. It fills the letterbox area with the border color.
For protected content, call the
Queries the Media Engine to find out whether a new video frame is ready.
-If a new frame is ready, receives the presentation time of the frame.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded, but the Media Engine does not have a new frame. |
| A new video frame is ready for display. |
In frame-server mode, the application should call this method whenever a vertical blank occurs in the display device. If the method returns
Do not call this method in rendering mode or audio-only mode.
-[This documentation is preliminary and is subject to change.]
Applies to: desktop apps | Metro style apps
Queries the Media Engine to find out whether a new video frame is ready.
-If a new frame is ready, receives the presentation time of the frame.
In frame-server mode, the application should call this method whenever a vertical blank occurs in the display device. If the method returns
Do not call this method in rendering mode or audio-only mode.
-
Sets the URL of a media resource.
-The URL of the media resource.
If this method succeeds, it returns
This method corresponds to setting the src attribute of the HTMLMediaElement interface in HTML5.
The URL specified by this method takes precedence over media resources specified in the
This method asynchronously loads the URL. When the operation starts, the Media Engine sends an
If the Media Engine is unable to load the URL, the Media Engine sends an
For more information about event handling in the Media Engine, see
Creates a new instance of the Media Engine.
-Before calling this method, call
The Media Engine supports three distinct modes:
Mode | Description |
---|---|
Frame-server mode | In this mode, the Media Engine delivers uncompressed video frames to the application. The application is responsible for displaying each frame, using Microsoft Direct3D or any other rendering technique. The Media Engine renders the audio; the application is not responsible for audio rendering. Frame-server mode is the default mode. |
Rendering mode | In this mode, the Media Engine renders both audio and video. The video is rendered to a window or Microsoft DirectComposition visual provided by the application. To enable rendering mode, set either the |
Audio mode | In this mode, the Media Engine renders audio only, with no video. To enable audio mode, set the |
-Creates a new instance of the Media Engine.
-A bitwise OR of zero or more flags from the
A reference to the
This parameter specifies configuration attributes for the Media Engine. Call
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| A required attribute was missing from pAttr, or an invalid combination of attributes was used. |
Before calling this method, call
The Media Engine supports three distinct modes:
Mode | Description |
---|---|
Frame-server mode | In this mode, the Media Engine delivers uncompressed video frames to the application. The application is responsible for displaying each frame, using Microsoft Direct3D or any other rendering technique. The Media Engine renders the audio; the application is not responsible for audio rendering. Frame-server mode is the default mode. |
Rendering mode | In this mode, the Media Engine renders both audio and video. The video is rendered to a window or Microsoft DirectComposition visual provided by the application. To enable rendering mode, set either the |
Audio mode | In this mode, the Media Engine renders audio only, with no video. To enable audio mode, set the |
-Creates a time range object.
-Receives a reference to the
If this method succeeds, it returns
Creates a media error object.
-Receives a reference to the
If this method succeeds, it returns
Creates an instance of the
Creates a media keys object based on the specified key system.
-The media key system.
Points to the default file location for storing Content Decryption Module (CDM) data.
Points to the in-private location for storing Content Decryption Module (CDM) data. Specifying this path allows the CDM to comply with the application's privacy policy by putting personal information in the file location indicated by this path.
Receives the media keys.
If this method succeeds, it returns
Gets a value that indicates if the specified key system supports the specified media type.
-Creates an instance of
If this method succeeds, it returns
Creates a media keys object based on the specified key system.
-The media keys system.
Points to a location to store Content Decryption Module (CDM) data, which might be locked by multiple processes and so might be incompatible with store app suspension.
The media keys.
If this method succeeds, it returns
Checks whether keySystem is a supported key system and creates the related Content Decryption Module (CDM).
-Gets a value that indicates if the specified key system supports the specified media type.
-The MIME type to check support for.
The key system to check support for.
true if type is supported by keySystem; otherwise, false.
If this method succeeds, it returns
Implemented by the media engine to add encrypted media extensions methods.
-Gets the media keys object associated with the media engine, or null if there is no media keys object.
-Sets the media keys object to use with the media engine.
-Gets the media keys object associated with the media engine, or null if there is no media keys object.
-The media keys object associated with the media engine, or null if there is no media keys object.
If this method succeeds, it returns
Sets the media keys object to use with the media engine.
-The media keys.
If this method succeeds, it returns
Extends the
The
Gets or sets the audio balance.
-Gets various flags that describe the media resource.
-Gets the number of streams in the media resource.
-Queries whether the media resource contains protected content.
-Gets or sets the time of the next timeline marker, if any.
-Queries whether the media resource contains stereoscopic 3D video.
-For stereoscopic 3D video, gets the layout of the two views within a video frame.
-For stereoscopic 3D video, queries how the Media Engine renders the 3D video content.
-Gets a handle to the windowless swap chain.
-To enable windowless swap-chain mode, call
Gets or sets the audio stream category used for the next call to SetSource or Load.
-For information on audio stream categories, see
Gets or sets the audio device endpoint role used for the next call to SetSource or Load.
-For information on audio endpoint roles, see ERole enumeration.
-Gets or sets the real time mode used for the next call to SetSource or Load.
-Opens a media resource from a byte stream.
-A reference to the
The URL of the byte stream.
If this method succeeds, it returns
Gets a playback statistic from the Media Engine.
-A member of the
A reference to a
If this method succeeds, it returns
Updates the source rectangle, destination rectangle, and border color for the video.
-A reference to an
A reference to a
A reference to an
If this method succeeds, it returns
In rendering mode, call this method to reposition the video, update the border color, or repaint the video frame. If all of the parameters are
In frame-server mode, this method has no effect.
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Gets the audio balance.
-Returns the balance. The value can be any number in the following range (inclusive).
Return value | Description |
---|---|
-1 | The left channel is at full volume; the right channel is silent. |
1 | The right channel is at full volume; the left channel is silent. |
If the value is zero, the left and right channels are at equal volumes. The default value is zero.
Sets the audio balance.
-The audio balance. The value can be any number in the following range (inclusive).
Value | Meaning |
---|---|
-1 | The left channel is at full volume; the right channel is silent. |
1 | The right channel is at full volume; the left channel is silent. |
If the value is zero, the left and right channels are at equal volumes. The default value is zero.
If this method succeeds, it returns
When the audio balance changes, the Media Engine sends an
Queries whether the Media Engine can play at a specified playback rate.
-The requested playback rate.
Returns TRUE if the playback rate is supported, or
Playback rates are expressed as a ratio of the current rate to the normal rate. For example, 1.0 is normal playback speed, 0.5 is half speed, and 2.0 is 2× speed. Positive values mean forward playback, and negative values mean reverse playback.
The results of this method can vary depending on the media resource that is currently loaded. Some media formats might support faster playback rates than others. Also, some formats might not support reverse play.
-Steps forward or backward one frame.
-Specify TRUE to step forward or
If this method succeeds, it returns
The frame-step direction is independent of the current playback direction.
This method completes asynchronously. When the operation completes, the Media Engine sends an
Gets various flags that describe the media resource.
-Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets a presentation attribute from the media resource.
-The attribute to query. For a list of presentation attributes, see Presentation Descriptor Attributes.
A reference to a
If this method succeeds, it returns
Gets the number of streams in the media resource.
-Receives the number of streams.
If this method succeeds, it returns
Gets a stream-level attribute from the media resource.
-The zero-based index of the stream. To get the number of streams, call
The attribute to query. Possible values are listed in the following topics:
A reference to a
If this method succeeds, it returns
Queries whether a stream is selected to play.
-The zero-based index of the stream. To get the number of streams, call
Receives a Boolean value.
Value | Meaning |
---|---|
TRUE | The stream is selected. During playback, this stream will play. |
FALSE | The stream is not selected. During playback, this stream will not play. |
If this method succeeds, it returns
Selects or deselects a stream for playback.
-The zero-based index of the stream. To get the number of streams, call
Specifies whether to select or deselect the stream.
Value | Meaning |
---|---|
TRUE | The stream is selected. During playback, this stream will play. |
FALSE | The stream is not selected. During playback, this stream will not play. |
If this method succeeds, it returns
Applies the stream selections from previous calls to SetStreamSelection.
-If this method succeeds, it returns
Queries whether the media resource contains protected content.
-Receives the value TRUE if the media resource contains protected content, or
If this method succeeds, it returns
Inserts a video effect.
-One of the following:
Specifies whether the effect is optional.
Value | Meaning |
---|---|
TRUE | The effect is optional. If the Media Engine cannot add the effect, it ignores the effect and continues playback. |
FALSE | The effect is required. If the Media Engine object cannot add the effect, a playback error occurs. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The maximum number of video effects was reached. |
The effect is applied when the next media resource is loaded.
-Inserts an audio effect.
-One of the following:
Specifies whether the effect is optional.
Value | Meaning |
---|---|
TRUE | The effect is optional. If the Media Engine cannot add the effect, it ignores the effect and continues playback. |
FALSE | The effect is required. If the Media Engine object cannot add the effect, a playback error occurs. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The maximum number of audio effects was reached. |
The effect is applied when the next media resource is loaded.
-Removes all audio and video effects.
-If this method succeeds, it returns
Call this method to remove all of the effects that were added with the InsertAudioEffect and InsertVideoEffect methods.
-Specifies a presentation time when the Media Engine will send a marker event.
-The presentation time for the marker event, in seconds.
If this method succeeds, it returns
When playback reaches the time specified by timeToFire, the Media Engine sends an
If the application seeks past the marker point, the Media Engine cancels the marker and does not send the event.
During forward playback, set timeToFire to a value greater than the current playback position. During reverse playback, set timeToFire to a value less than the playback position.
To cancel a marker, call
Gets the time of the next timeline marker, if any.
-Receives the marker time, in seconds. If no marker is set, this parameter receives the value NaN.
If this method succeeds, it returns
Cancels the next pending timeline marker.
-If this method succeeds, it returns
Call this method to cancel the
Queries whether the media resource contains stereoscopic 3D video.
-Returns TRUE if the media resource contains 3D video, or
For stereoscopic 3D video, gets the layout of the two views within a video frame.
-Receives a member of the
If this method succeeds, it returns
For stereoscopic 3D video, sets the layout of the two views within a video frame.
-A member of the
If this method succeeds, it returns
For stereoscopic 3D video, queries how the Media Engine renders the 3D video content.
-Receives a member of the
If this method succeeds, it returns
For stereoscopic 3D video, specifies how the Media Engine renders the 3D video content.
-A member of the
If this method succeeds, it returns
Enables or disables windowless swap-chain mode.
-If TRUE, windowless swap-chain mode is enabled.
If this method succeeds, it returns
In windowless swap-chain mode, the Media Engine creates a windowless swap chain and presents video frames to the swap chain. To render the video, call
Gets a handle to the windowless swap chain.
-Receives a handle to the swap chain.
If this method succeeds, it returns
To enable windowless swap-chain mode, call
Enables or disables mirroring of the video.
-If TRUE, the video is mirrored horizontally. Otherwise, the video is displayed normally.
If this method succeeds, it returns
Gets the audio stream category used for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio stream categories, see
Sets the audio stream category for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio stream categories, see
Gets the audio device endpoint role used for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio endpoint roles, see ERole enumeration.
-Sets the audio device endpoint used for the next call to SetSource or Load.
-If this method succeeds, it returns
For information on audio endpoint roles, see ERole enumeration.
-Gets the real time mode used for the next call to SetSource or Load.
-If this method succeeds, it returns
Sets the real time mode used for the next call to SetSource or Load.
-If this method succeeds, it returns
Seeks to a new playback position using the specified
If this method succeeds, it returns
Enables or disables the time update timer.
-If TRUE, the update timer is enabled. Otherwise, the timer is disabled.
If this method succeeds, it returns
Opens a media resource from a byte stream.
-A reference to the
The URL of the byte stream.
If this method succeeds, it returns
Enables an application to load media resources in the Media Engine.
-To use this interface, set the
Queries whether the object can load a specified type of media resource.
-If TRUE, the Media Engine is set to audio-only mode. Otherwise, the Media Engine is set to audio-video mode.
A string that contains a MIME type with an optional codecs parameter, as defined in RFC 4281.
Receives a member of the
If this method succeeds, it returns
Implement this method if your Media Engine extension supports one or more MIME types.
-Begins an asynchronous request to create either a byte stream or a media source.
-The URL of the media resource.
A reference to the
If the type parameter equals
If type equals
A member of the
Value | Meaning |
---|---|
Create a byte stream. The byte stream must support the | |
Create a media source. The media source must support the |
Receives a reference to the
The caller must release the interface. This parameter can be
A reference to the
A reference to the
If this method succeeds, it returns
This method requests the object to create either a byte stream or a media source, depending on the value of the type parameter:
The method is performed asynchronously. The Media Engine calls the
Cancels the current request to create an object.
-The reference that was returned in the ppIUnknownCancelCookie parameter of the
If this method succeeds, it returns
This method attempts to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might complete before the operation can be canceled.
-Completes an asynchronous request to create a byte stream or media source.
-A reference to the
Receives a reference to the
If this method succeeds, it returns
The Media Engine calls this method to complete the
Represents a callback to the media engine to notify key request data.
-Notifies the application that a key or keys are needed along with any initialization data.
-The initialization data.
The count in bytes of initData.
Callback interface for the
To set the callback reference on the Media Engine, set the
Notifies the application when a playback event occurs.
-A member of the
The first event parameter. The meaning of this parameter depends on the event code.
The second event parameter. The meaning of this parameter depends on the event code.
If this method succeeds, it returns
Provides methods for getting information about the Output Protection Manager (OPM).
-To get a reference to this interface, call QueryInterface on the Media Engine.
The
Gets status information about the Output Protection Manager (OPM).
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| If any of the parameters are |
Copies a protected video frame to a DXGI surface.
-For protected content, call this method instead of the
Gets the content protections that must be applied in frame-server mode.
-Specifies the window that should receive output link protections.
-In frame-server mode, call this method to specify the destination window for protected video content. The Media Engine uses this window to set link protections, using the Output Protection Manager (OPM).
-Sets the content protection manager (CPM).
-The Media Engine uses the CPM to handle events related to protected content, such as license acquisition.
-Enables the Media Engine to access protected content while in frame-server mode.
-A reference to the Direct3D 11 device context. The Media Engine queries this reference for the
If this method succeeds, it returns
In frame-server mode, this method enables the Media Engine to share protected content with the Direct3D 11 device.
-Gets the content protections that must be applied in frame-server mode.
-Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Specifies the window that should receive output link protections.
-A handle to the window.
If this method succeeds, it returns
In frame-server mode, call this method to specify the destination window for protected video content. The Media Engine uses this window to set link protections, using the Output Protection Manager (OPM).
-Copies a protected video frame to a DXGI surface.
-A reference to the
A reference to an
A reference to a
A reference to an
Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
For protected content, call this method instead of the
Sets the content protection manager (CPM).
-A reference to the
If this method succeeds, it returns
The Media Engine uses the CPM to handle events related to protected content, such as license acquisition.
-Sets the application's certificate.
-A reference to a buffer that contains the certificate in X.509 format, followed by the application identifier signed with a SHA-256 signature using the private key from the certificate.
The size of the pbBlob buffer, in bytes.
If this method succeeds, it returns
Call this method to access protected video content in frame-server mode.
-Provides the Media Engine with a list of media resources.
-The
This interface enables the application to provide the same audio/video content in several different encoding formats, such as H.264 and Windows Media Video. If a particular codec is not present on the user's computer, the Media Engine will try the next URL in the list. To use this interface, do the following:
Gets the number of source elements in the list.
-Gets the number of source elements in the list.
-Returns the number of source elements.
Gets the URL of an element in the list.
-The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains the URL of the source element. The caller must free the BSTR by calling SysFreeString. If no URL is set, this parameter receives the value
If this method succeeds, it returns
Gets the MIME type of an element in the list.
-The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains the MIME type. The caller must free the BSTR by calling SysFreeString. If no MIME type is set, this parameter receives the value
If this method succeeds, it returns
Gets the intended media type of an element in the list.
-The zero-based index of the source element. To get the number of source elements, call
Receives a BSTR that contains a media-query string. The caller must free the BSTR by calling SysFreeString. If no media type is set, this parameter receives the value
If this method succeeds, it returns
The string returned in pMedia should be a media-query string that conforms to the W3C Media Queries specification.
-Adds a source element to the end of the list.
-The URL of the source element, or
The MIME type of the source element, or
A media-query string that specifies the intended media type, or
If this method succeeds, it returns
Any of the parameters to this method can be
This method allocates copies of the BSTRs that are passed in.
-Removes all of the source elements from the list.
-If this method succeeds, it returns
Extends the
Provides an enhanced version of
If this method succeeds, it returns
Gets the key system for the given source element index.
-The source element index.
The MIME type of the source element.
If this method succeeds, it returns
Enables the media source to be transferred between the media engine and the sharing engine for Play To.
-Specifies whether or not the source should be transferred.
-true if the source should be transferred; otherwise, false.
If this method succeeds, it returns
Detaches the media source.
-Receives the byte stream.
Receives the media source.
Receives the media source extension.
If this method succeeds, it returns
Attaches the media source.
-Specifies the byte stream.
Specifies the media source.
Specifies the media source extension.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables playback of web audio.
Gets a value indicating whether connecting to Web audio should delay the page's load event.
-True if connection to Web audio should delay the page's load event; otherwise, false.
Connects web audio to Media Engine using the specified sample rate.
-The sample rate of the web audio.
The sample rate of the web audio.
Returns
Disconnects web audio from the Media Engine.
-Returns
Provides the current error status for the Media Engine.
-The
To get a reference to this interface, call
Gets or sets the extended error code.
-Gets the error code.
-Returns a value from the
Gets the extended error code.
-Returns an
Sets the error code.
-The error code, specified as an
If this method succeeds, it returns
Sets the extended error code.
-An
If this method succeeds, it returns
Represents an event generated by a Media Foundation object. Use this interface to get information about the event.
To get a reference to this interface, call
If you are implementing an object that generates events, call the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the event type. The event type indicates what happened to trigger the event. It also defines the meaning of the event value.
Retrieves the extended type of the event.
-To define a custom event, create a new extended-type
Some standard Media Foundation events also use the extended type to differentiate between types of event data.
Retrieves an
Retrieves the value associated with the event, if any. The value is retrieved as a
Before calling this method, call PropVariantInit to initialize the
Retrieves the event type. The event type indicates what happened to trigger the event. It also defines the meaning of the event value.
-Receives the event type. For a list of event types, see Media Foundation Events.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the extended type of the event.
-Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
To define a custom event, create a new extended-type
Some standard Media Foundation events also use the extended type to differentiate between types of event data.
Retrieves an
Receives the event status. If the operation that generated the event was successful, the value is a success code. A failure code means that an error condition triggered the event.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the value associated with the event, if any. The value is retrieved as a
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Before calling this method, call PropVariantInit to initialize the
Retrieves events from any Media Foundation object that generates events.
-An object that supports this interface maintains a queue of events. The client of the object can retrieve the events either synchronously or asynchronously. The synchronous method is GetEvent. The asynchronous methods are BeginGetEvent and EndGetEvent.
Retrieves the next event in the queue. This method is synchronous.
-Specifies one of the following values.
Value | Meaning |
---|---|
| The method blocks until the event generator queues an event. |
| The method returns immediately. |
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| There is a pending request. |
| There are no events in the queue. |
| The object was shut down. |
This method executes synchronously.
If the queue already contains an event, the method returns
If dwFlags is 0, the method blocks indefinitely until a new event is queued, or until the event generator is shut down.
If dwFlags is MF_EVENT_FLAG_NO_WAIT, the method fails immediately with the return code
This method returns
Begins an asynchronous request for the next event in the queue.
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| There is a pending request with the same callback reference and a different state object. |
| There is a pending request with a different callback reference. |
| The object was shut down. |
| There is a pending request with the same callback reference and state object. |
When a new event is available, the event generator calls the
Do not call BeginGetEvent a second time before calling EndGetEvent. While the first call is still pending, additional calls to the same object will fail. Also, the
Completes an asynchronous request for the next event in the queue.
-Pointer to the
Receives a reference to the
Call this method from inside your application's
Puts a new event in the object's queue.
-Specifies the event type. The event type is returned by the event's
The extended type. If the event does not have an extended type, use the value GUID_NULL. The extended type is returned by the event's
A success or failure code indicating the status of the event. This value is returned by the event's
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object was shut down. |
Provides an event queue for applications that need to implement the
This interface is exposed by a helper object that implements an event queue. If you are writing a component that implements the
Retrieves the next event in the queue. This method is synchronous.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Begins an asynchronous request for the next event in the queue.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Completes an asynchronous request for the next event in the queue.
Call this method inside your implementation of
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Puts an event in the queue.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Call this method when your component needs to raise an event that contains attributes. To create the event object, call
Creates an event, sets a
Call this method inside your implementation of
You can also call this method when your component needs to raise an event that does not contain attributes. If the event data is an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Creates an event, sets an
Specifies the event type of the event to be added to the queue. The event type is returned by the event's
The extended type of the event. If the event does not have an extended type, use the value GUID_NULL. The extended type is returned by the event's
A success or failure code indicating the status of the event. This value is returned by the event's
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Shutdown method was called. |
Call this method when your component needs to raise an event that contains an
Shuts down the event queue.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when your component shuts down. After this method is called, all
This method removes all of the events from the queue.
Represents the media keys used for decrypting media data using a Digital Rights Management (DRM) key system.
-Gets the suspend notify interface of the Content Decryption Module (CDM).
-Creates a media key session object using the specified initialization data and custom data.
-The MIME type of the media container used for the content.
The initialization data for the key system.
The count in bytes of initData.
Custom data sent to the key system.
The count in bytes of customData.
The media key session notify callback.
The media key session.
If this method succeeds, it returns
Gets the key system string the
If this method succeeds, it returns
If this method succeeds, it returns
Shutdown should be called by the application before final release. The Content Decryption Module (CDM) reference and any other resources are released at this point. However, related sessions are not freed or closed.
-Gets the suspend notify interface of the Content Decryption Module (CDM).
-The suspend notify interface of the Content Decryption Module (CDM).
If this method succeeds, it returns
Represents a session with the Digital Rights Management (DRM) key system.
-Gets the error state associated with the media key session.
-The error code.
Platform specific error information.
If this method succeeds, it returns
-Gets the name of the key system the media keys object was created with.
-The name of the key system.
If this method succeeds, it returns
Gets a unique session id created for this session.
-The media key session id.
If this method succeeds, it returns
Passes in a key value with any associated data required by the Content Decryption Module for the given key system.
-The count in bytes of key.
If this method succeeds, it returns
Closes the media key session and must be called before the key session is released.
-If this method succeeds, it returns
Provides a mechanism for notifying the app about information regarding the media key session.
-Passes information to the application so it can initiate a key acquisition.
-The URL to send the message to.
The message to send to the application.
The length in bytes of message.
Notifies the application that the key has been added.
-KeyAdded can also be called if the keys requested for the session have already been acquired.
-Notifies the application that an error occurred while processing the key.
-Provides playback controls for protected and unprotected content. The Media Session and the protected media path (PMP) session objects expose this interface. This interface is the primary interface that applications use to control the Media Foundation pipeline.
To obtain a reference to this interface, call
Retrieves the Media Session's presentation clock.
-The application can query the returned
Retrieves the capabilities of the Media Session, based on the current presentation.
-Sets a topology on the Media Session.
- Bitwise OR of zero or more flags from the
Pointer to the topology object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
| The topology has invalid values for one or more of the following attributes: |
| Protected content cannot be played while debugging. |
If pTopology is a full topology, set the
If the Media Session is currently paused or stopped, the SetTopology method does not take effect until the next call to
If the Media Session is currently running, or on the next call to Start, the SetTopology method does the following:
This method is asynchronous. If the method returns
Clears all of the presentations that are queued for playback in the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
This method is asynchronous. When the operation completes, the Media Session sends an
This method does not clear the current topology; it only removes topologies that are placed in the queue, waiting for playback. To remove the current topology, call
Starts the Media Session.
-Pointer to a
The following time format GUIDs are defined:
Value | Meaning |
---|---|
| Presentation time. The pvarStartPosition parameter must have one of the following
All media sources support this time format. |
| Segment offset. This time format is supported by the Sequencer Source. The starting time is an offset within a segment. Call the |
| Note: Requires Windows 7 or later. Skip to a playlist entry. The pvarStartPosition parameter specifies the index of the playlist entry, relative to the current entry. For example, the value 2 skips forward two entries. To skip backward, pass a negative value. If a media source supports this time format, the |
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
When this method is called, the Media Session starts the presentation clock and begins to process media samples.
This method is asynchronous. When the method completes, the Media Session sends an
Pauses the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
| The Media Session cannot pause while stopped. |
This method pauses the presentation clock.
This method is asynchronous. When the operation completes, the Media Session sends an
This method fails if the Media Session is stopped.
Stops the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation cannot be performed in the Media Session's current state. |
| The Media Session has been shut down. |
This method is asynchronous. When the operation completes, the Media Session sends an
Closes the Media Session and releases all of the resources it is using.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session has been shut down. |
This method is asynchronous. When the operation completes, the Media Session sends an
After the Close method is called, the only valid methods on the Media Session are the following:
All other methods return
Shuts down the Media Session and releases all the resources used by the Media Session.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when you are done using the Media Session, before the final call to IUnknown::Release. Otherwise, your application will leak memory.
After this method is called, other
Retrieves the Media Session's presentation clock.
-Receives a reference to the presentation clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session does not have a presentation clock. |
| The Media Session has been shut down. |
The application can query the returned
Retrieves the capabilities of the Media Session, based on the current presentation.
-Receives a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| The Media Session can be paused. |
| The Media Session supports forward playback at rates faster than 1.0. |
| The Media Session supports reverse playback. |
| The Media Session can be seeked. |
| The Media Session can be started. |
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| The Media Session has been shut down. |
Gets a topology from the Media Session.
This method can get the current topology or a queued topology.
- Bitwise OR of zero or more flags from the
The identifier of the topology. This parameter is ignored if the dwGetFullTopologyFlags parameter contains the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The Media Session has been shut down. |
If the
This method can be used to retrieve the topology for the current presentation or any pending presentations. It cannot be used to retrieve a topology that has already ended.
The topology returned in ppFullTopo is a full topology, not a partial topology.
-Implemented by media sink objects. This interface is the base interface for all Media Foundation media sinks. Stream sinks handle the actual processing of data on each stream.
Gets the characteristics of the media sink.
-The characteristics of a media sink are fixed throughout the life time of the sink.
Gets the number of stream sinks on this media sink.
Gets the presentation clock that was set on the media sink.
Gets the characteristics of the media sink.
-Receives a bitwise OR of zero or more flags. The following flags are defined:
Value | Meaning |
---|---|
| The media sink has a fixed number of streams. It does not support the |
| The media sink cannot match rates with an external clock. For best results, this media sink should be used as the time source for the presentation clock. If any other time source is used, the media sink cannot match rates with the clock, with poor results (for example, glitching). This flag should be used sparingly, because it limits how the pipeline can be configured. For more information about the presentation clock, see Presentation Clock. |
| The media sink is rateless. It consumes samples as quickly as possible, and does not synchronize itself to a presentation clock. Most archiving sinks are rateless. |
| The media sink requires a presentation clock. The presentation clock is set by calling the media sink's This flag is obsolete, because all media sinks must support the SetPresentationClock method, even if the media sink ignores the clock (as in a rateless media sink). |
| The media sink can accept preroll samples before the presentation clock starts. The media sink exposes the |
| The first stream sink (index 0) is a reference stream. The reference stream must have a media type before the media types can be set on the other stream sinks. |
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
The characteristics of a media sink are fixed throughout the life time of the sink.
Adds a new stream sink to the media sink.
-Identifier for the new stream. The value is arbitrary but must be unique.
Pointer to the
Receives a reference to the new stream sink's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The specified stream identifier is not valid. |
| The media sink's Shutdown method has been called. |
| There is already a stream sink with the same stream identifier. |
| This media sink has a fixed set of stream sinks. New stream sinks cannot be added. |
Not all media sinks support this method. If the media sink does not support this method, the
If pMediaType is
Removes a stream sink from the media sink.
-Identifier of the stream to remove. The stream identifier is defined when you call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This particular stream sink cannot be removed. |
| The stream number is not valid. |
| The media sink has not been initialized. |
| The media sink's Shutdown method has been called. |
| This media sink has a fixed set of stream sinks. Stream sinks cannot be removed. |
After this method is called, the corresponding stream sink object is no longer valid. The
Not all media sinks support this method. If the media sink does not support this method, the
In some cases, the media sink supports this method but does not allow every stream sink to be removed. (For example, it might not allow stream 0 to be removed.)
Gets the number of stream sinks on this media sink.
-Receives the number of stream sinks.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
Gets a stream sink, specified by index.
-Zero-based index of the stream. To get the number of streams, call
Receives a reference to the stream's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid index. |
| The media sink's Shutdown method has been called. |
Enumerating stream sinks is not a thread-safe operation, because stream sinks can be added or removed between calls to this method.
Gets a stream sink, specified by stream identifier.
-Stream identifier of the stream sink.
Receives a reference to the stream's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream identifier is not valid. |
| The media sink's Shutdown method has been called. |
If you add a stream sink by calling the
To enumerate the streams by index number instead of stream identifier, call
Sets the presentation clock on the media sink.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The presentation clock does not have a time source. Call SetTimeSource on the presentation clock. |
| The media sink's Shutdown method has been called. |
During streaming, the media sink attempts to match rates with the presentation clock. Ideally, the media sink presents samples at the correct time according to the presentation clock and does not fall behind. Rateless media sinks are an exception to this rule, as they consume samples as quickly as possible and ignore the clock. If the sink is rateless, the
The presentation clock must have a time source. Before calling this method, call
If pPresentationClock is non-
All media sinks must support this method.
-
Gets the presentation clock that was set on the media sink.
-Receives a reference to the presentation clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No clock has been set. To set the presentation clock, call |
| The media sink's Shutdown method has been called. |
Shuts down the media sink and releases the resources it is using.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink was shut down. |
If the application creates the media sink, it is responsible for calling Shutdown to avoid memory or resource leaks. In most applications, however, the application creates an activation object for the media sink, and the Media Session uses that object to create the media sink. In that case, the Media Session, not the application, shuts down the media sink. (For more information, see Activation Objects.)
After this method returns, all methods on the media sink return
Enables a media sink to receive samples before the presentation clock is started.
To get a reference to this interface, call QueryInterface on the media sink.
-Media sinks can implement this interface to support seamless playback and transitions. If a media sink exposes this interface, it can receive samples before the presentation clock starts. It can then pre-process the samples, so that rendering can begin immediately when the clock starts. Prerolling helps to avoid glitches during playback.
If a media sink supports preroll, the media sink's
Notifies the media sink that the presentation clock is about to start.
- The upcoming start time for the presentation clock, in 100-nanosecond units. This time is the same value that will be given to the
If this method succeeds, it returns
After this method is called, the media sink sends any number of
During preroll, the media sink can prepare the samples that it receives, so that they are ready to be rendered. It does not actually render any samples until the clock starts.
-Implemented by media source objects.
Media sources are objects that generate media data. For example, the data might come from a video file, a network stream, or a hardware device, such as a camera. Each media source contains one or more streams, and each stream delivers data of one type, such as audio or video.
-In Windows 8, this interface is extended with
Retrieves the characteristics of the media source.
-The characteristics of a media source can change at any time. If this happens, the source sends an
Retrieves the characteristics of the media source.
-Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
The characteristics of a media source can change at any time. If this happens, the source sends an
Retrieves a copy of the media source's presentation descriptor. Applications use the presentation descriptor to select streams and to get information about the source content.
-Receives a reference to the presentation descriptor's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
The presentation descriptor contains the media source's default settings for the presentation. The application can change these settings by selecting or deselecting streams, or by changing the media type on a stream. Do not modify the presentation descriptor unless the source is stopped. The changes take effect when the source's
Starts, seeks, or restarts the media source by specifying where to start playback.
- Pointer to the
Pointer to a
Specifies where to start playback. The units of this parameter are indicated by the time format given in pguidTimeFormat. If the time format is GUID_NULL, the variant type must be VT_I8 or VT_EMPTY. Use VT_I8 to specify a new starting position, in 100-nanosecond units. Use VT_EMPTY to start from the current position. Other time formats might use other
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The start position is past the end of the presentation (ASF media source). |
| A hardware device was unable to start streaming. This error code can be returned by a media source that represents a hardware device, such as a camera. For example, if the camera is already being used by another application, the method might return this error code. |
| The start request is not valid. For example, the start position is past the end of the presentation. |
| The media source's Shutdown method has been called. |
| The media source does not support the time format specified in pguidTimeFormat. |
This method is asynchronous. If the operation succeeds, the media source sends the following events:
If the start operation fails asynchronously (after the method returns
A call to Start results in a seek if the previous state was started or paused, and the new starting position is not VT_EMPTY. Not every media source can seek. If a media source can seek, the
Events from the media source are not synchronized with events from the media streams. If you seek a media source, therefore, you can still receive samples from the earlier position after getting the
Stops all active streams in the media source.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
This method is asynchronous. When the operation completes, the media source sends an
When a media source is stopped, its current position reverts to zero. After that, if the Start method is called with VT_EMPTY for the starting position, playback starts from the beginning of the presentation.
While the source is stopped, no streams produce data.
-
Pauses all active streams in the media source.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid state transition. The media source must be in the started state. |
| The media source's Shutdown method has been called. |
This method is asynchronous. When the operation completes, the media source sends an
The media source must be in the started state. The method fails if the media source is paused or stopped.
While the source is paused, calls to
Not every media source can pause. If a media source can pause, the
Shuts down the media source and releases the resources it is using.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application creates the media source, either directly or through the source resolver, the application is responsible for calling Shutdown to avoid memory or resource leaks.
After this method is called, methods on the media source and all of its media streams return
Extends the
To get a reference to this interface, call QueryInterface on the media source.
-Implementations of this interface can return E_NOTIMPL for any methods that are not required by the media source.
-Gets an attribute store for the media source.
-Use the
Sets a reference to the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager on the media source.
-Gets an attribute store for the media source.
-Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support source-level attributes. |
Use the
Gets an attribute store for a stream on the media source.
-The identifier of the stream. To get the identifier, call
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support stream-level attributes. |
| Invalid stream identifier. |
Use the
Sets a reference to the Microsoft DirectX Graphics Infrastructure (DXGI) Device Manager on the media source.
-A reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The media source does not support source-level attributes. |
Provides functionality for the Media Source Extension (MSE).
- Media Source Extensions (MSE) is a World Wide Web Consortium (W3C) standard that extends the HTML5 media elements to enable dynamically changing the media stream without the use of plug-ins. The
The MSE media source keeps track of the ready state of the source as well as a list of
Gets the collection of source buffers associated with this media source.
-Gets the source buffers that are actively supplying media data to the media source.
-Gets the ready state of the media source.
-Gets or sets the duration of the media source in 100-nanosecond units.
-Indicate that the end of the media stream has been reached.
-Gets the collection of source buffers associated with this media source.
-The collection of source buffers.
Gets the source buffers that are actively supplying media data to the media source.
-The list of active source buffers.
Gets the ready state of the media source.
-The ready state of the media source.
Gets the duration of the media source in 100-nanosecond units.
-The duration of the media source in 100-nanosecond units.
Sets the duration of the media source in 100-nanosecond units.
-The duration of the media source in 100-nanosecond units.
If this method succeeds, it returns
Adds a
If this method succeeds, it returns
Removes the specified source buffer from the collection of source buffers managed by the
If this method succeeds, it returns
Indicate that the end of the media stream has been reached.
-Used to pass error information.
If this method succeeds, it returns
Gets a value that indicates if the specified MIME type is supported by the media source.
-The media type to check support for.
true if the media type is supported; otherwise, false.
Gets the
The source buffer.
Provides functionality for raising events associated with
Used to indicate that the media source has opened.
-Used to indicate that the media source has ended.
-Used to indicate that the media source has closed.
-
Notifies the source when playback has reached the end of a segment. For timelines, this corresponds to reaching a mark-out point.
-
Notifies the source when playback has reached the end of a segment. For timelines, this corresponds to reaching a mark-out point.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Enables an application to get a topology from the sequencer source. This interface is exposed by the sequencer source object.
-
Returns a topology for a media source that builds an internal topology.
-A reference to the
Receives a reference to the topology's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. For example, a |
Represents one stream in a media source.
-Streams are created when a media source is started. For each stream, the media source sends an
Retrieves a reference to the media source that created this media stream.
-
Retrieves a stream descriptor for this media stream.
-Do not modify the stream descriptor. To change the presentation, call
Retrieves a reference to the media source that created this media stream.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
Retrieves a stream descriptor for this media stream.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media source's Shutdown method has been called. |
Do not modify the stream descriptor. To change the presentation, call
Requests a sample from the media source.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The end of the stream was reached. |
| The media source is stopped. |
| The source's Shutdown method has been called. |
If pToken is not
When the next sample is available, the media stream does the following:
If the media stream cannot fulfill the caller's request for a sample, it simply releases the token object and skips steps 2 and 3.
The caller should monitor the reference count on the request token. If the media stream sends an
Because the Media Foundation pipeline is multithreaded, the source's RequestSample method might get called after the source has stopped. If the media source is stopped, the method should return
If the media source is paused, the method succeeds, but the stream does not deliver the sample until the source is started again.
If a media source encounters an error asynchronously while processing data, it should signal the error in one of the following ways (but not both):
Represents a request for a sample from a MediaStreamSource.
-MFMediaStreamSourceSampleRequest is implemented by the Windows.Media.Core.MediaStreamSourceSampleRequest runtime class.
-Sets the sample for the media stream source.
-Sets the sample for the media stream source.
-The sample for the media stream source.
If this method succeeds, it returns
Represents a list of time ranges, where each range is defined by a start and end time.
-The
Several
Gets the number of time ranges contained in the object.
-This method corresponds to the TimeRanges.length attribute in HTML5.
-Gets the number of time ranges contained in the object.
-Returns the number of time ranges.
This method corresponds to the TimeRanges.length attribute in HTML5.
-Gets the start time for a specified time range.
-The zero-based index of the time range to query. To get the number of time ranges, call
Receives the start time, in seconds.
If this method succeeds, it returns
This method corresponds to the TimeRanges.start method in HTML5.
-Gets the end time for a specified time range.
-The zero-based index of the time range to query. To get the number of time ranges, call
Receives the end time, in seconds.
If this method succeeds, it returns
This method corresponds to the TimeRanges.end method in HTML5.
-Queries whether a specified time falls within any of the time ranges.
-The time, in seconds.
Returns TRUE if any time range contained in this object spans the value of the time parameter. Otherwise, returns
This method returns TRUE if the following condition holds for any time range in the list:
Adds a new range to the list of time ranges.
-The start time, in seconds.
The end time, in seconds.
If this method succeeds, it returns
If the new range intersects a range already in the list, the two ranges are combined. Otherwise, the new range is added to the list.
-Clears the list of time ranges.
-If this method succeeds, it returns
Represents a description of a media format.
- To create a new media type, call
All of the information in a media type is stored as attributes. To clone a media type, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major type of the format.
- This method is equivalent to getting the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the media type is a temporally compressed format. Temporal compression uses information from previously decoded samples when decompressing the current sample.
- This method returns
If the method returns TRUE in pfCompressed, it is a hint that the format has temporal compression applied to it. If the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major type of the format.
-Receives the major type
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The major type is not set. |
This method is equivalent to getting the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the media type is a temporally compressed format. Temporal compression uses information from previously decoded samples when decompressing the current sample.
-Receives a Boolean value. The value is TRUE if the format uses temporal compression, or
If this method succeeds, it returns
This method returns
If the method returns TRUE in pfCompressed, it is a hint that the format has temporal compression applied to it. If the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Compares two media types and determines whether they are identical. If they are not identical, the method indicates how the two formats differ.
-Pointer to the
Receives a bitwise OR of zero or more flags, indicating the degree of similarity between the two media types. The following flags are defined.
Value | Meaning |
---|---|
| The major types are the same. The major type is specified by the |
| The subtypes are the same, or neither media type has a subtype. The subtype is specified by the |
| The attributes in one of the media types are a subset of the attributes in the other, and the values of these attributes match, excluding the value of the Specifically, the method takes the media type with the smaller number of attributes and checks whether each attribute from that type is present in the other media type and has the same value (not including To perform other comparisons, use the |
| The user data is identical, or neither media type contains user data. User data is specified by the |
The method returns an
Return code | Description |
---|---|
| The types are not equal. Examine the pdwFlags parameter to determine how the types differ. |
| The types are equal. |
| One or both media types are invalid. |
Both of the media types must have a major type, or the method returns E_INVALIDARG.
If the method succeeds and all of the comparison flags are set in pdwFlags, the return value is
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an alternative representation of the media type. Currently only the DirectShow
Value | Meaning |
---|---|
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
Receives a reference to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The details of the media type do not match the requested representation. |
| The media type is not valid. |
| The media type does not support the requested representation. |
If you request a specific format structure in the guidRepresentation parameter, such as
You can also use the MFInitAMMediaTypeFromMFMediaType function to convert a Media Foundation media type into a DirectShow media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an alternative representation of the media type. Currently only the DirectShow
Value | Meaning |
---|---|
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
| Convert the media type to a DirectShow |
Receives a reference to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The details of the media type do not match the requested representation. |
| The media type is not valid. |
| The media type does not support the requested representation. |
If you request a specific format structure in the guidRepresentation parameter, such as
You can also use the MFInitAMMediaTypeFromMFMediaType function to convert a Media Foundation media type into a DirectShow media type.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
The media type is created without any attributes.
-Applies to: desktop apps | Metro style apps
Converts a Media Foundation audio media type to a
Receives the size of the
Contains a flag from the
If the wFormatTag member of the returned structure is
Gets and sets media types on an object, such as a media source or media sink.
-This interface is exposed by media-type handlers.
If you are implementing a custom media source or media sink, you can create a simple media-type handler by calling
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of media types in the object's list of supported media types.
- To get the supported media types, call
For a media source, the media type handler for each stream must contain at least one supported media type. For media sinks, the media type handler for each stream might contain zero media types. In that case, the application must provide the media type. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current media type of the object.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major media type of the object.
-The major type identifies what kind of data is in the stream, such as audio or video. To get the specific details of the format, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Queries whether the object supports a specified media type.
- Pointer to the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support this media type. |
If the object supports the media type given in pMediaType, the method returns
The ppMediaType parameter is optional. If the method fails, the object might use ppMediaType to return a media type that the object does support, and which closely matches the one given in pMediaType. The method is not guaranteed to return a media type in ppMediaType. If no type is returned, this parameter receives a
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of media types in the object's list of supported media types.
-Receives the number of media types in the list.
If this method succeeds, it returns
To get the supported media types, call
For a media source, the media type handler for each stream must contain at least one supported media type. For media sinks, the media type handler for each stream might contain zero media types. In that case, the application must provide the media type. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type from the object's list of supported media types.
- Zero-based index of the media type to retrieve. To get the number of media types in the list, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwIndex parameter is out of range. |
Media types are returned in the approximate order of preference. The list of supported types is not guaranteed to be complete. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the object's media type.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid request. |
For media sources, setting the media type means the source will generate data that conforms to that media type. For media sinks, setting the media type means the sink can receive data that conforms to that media type.
Any implementation of this method should check whether pMediaType differs from the object's current media type. If the types are identical, the method should return
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the current media type of the object.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No media type is set. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets the major media type of the object.
-Receives a
If this method succeeds, it returns
The major type identifies what kind of data is in the stream, such as audio or video. To get the specific details of the format, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type from the object's list of supported media types.
- Zero-based index of the media type to retrieve. To get the number of media types in the list, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwIndex parameter is out of range. |
Media types are returned in the approximate order of preference. The list of supported types is not guaranteed to be complete. To test whether a particular media type is supported, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Manages metadata for an object. Metadata is information that describes a media file, stream, or other content. Metadata consists of individual properties, where each property contains a descriptive name and a value. A property may be associated with a particular language.
To get this interface from a media source, use the
Gets a list of the languages in which metadata is available.
-For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
To set the current language, call
Gets a list of all the metadata property names on this object.
-Sets the language for setting and retrieving metadata.
-Pointer to a null-terminated string containing an RFC 1766-compliant language tag.
If this method succeeds, it returns
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
-Gets the current language setting.
-Receives a reference to a null-terminated string containing an RFC 1766-compliant language tag. The caller must release the string by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The metadata provider does not support multiple languages. |
| No language was set. |
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages."
The
Gets a list of the languages in which metadata is available.
- A reference to a
The returned
If this method succeeds, it returns
For more information about language tags, see RFC 1766, "Tags for the Identification of Languages".
To set the current language, call
Sets the value of a metadata property.
-Pointer to a null-terminated string containing the name of the property.
Pointer to a
If this method succeeds, it returns
Gets the value of a metadata property.
- A reference to a null-terminated string that contains the name of the property. To get the list of property names, call
Pointer to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The requested property was not found. |
Deletes a metadata property.
-Pointer to a null-terminated string containing the name of the property.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The property was not found. |
For a media source, deleting a property from the metadata collection does not change the original content.
-Gets a list of all the metadata property names on this object.
-Pointer to a
If this method succeeds, it returns
Gets metadata from a media source or other object.
If a media source supports this interface, it must expose the interface as a service. To get a reference to this interface from a media source, call
Use this interface to get a reference to the
Gets a collection of metadata, either for an entire presentation, or for one stream in the presentation.
- Pointer to the
If this parameter is zero, the method retrieves metadata that applies to the entire presentation. Otherwise, this parameter specifies a stream identifier, and the method retrieves metadata for that stream. To get the stream identifier for a stream, call
Reserved. Must be zero.
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No metadata is available for the requested stream or presentation. |
Contains data that is needed to implement the
Any custom implementation of the
Receives state-change notifications from the presentation clock.
-To receive state-change notifications from the presentation clock, implement this interface and call
This interface must be implemented by:
Presentation time sources. The presentation clock uses this interface to request change states from the time source.
Media sinks. Media sinks use this interface to get notifications when the presentation clock changes.
Other objects that need to be notified can implement this interface.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The specified substream index is invalid. Call GetStreamCount to get the number of substreams managed by the multiplexed media source. |
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The
The following functions return
A byte stream for a media source can be opened with read access. A byte stream for an archive media sink should be opened with both read and write access. (Read access may be required, because the archive sink might need to read portions of the file as it writes.)
Some implementations of this interface also expose one or more of the following interfaces:
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Provides the ability to retrieve
Retrieves the user name.
-If the user name is not available, the method might succeed and set *pcbData to zero.
-
Sets the user name.
-Pointer to a buffer that contains the user name. If fDataIsEncrypted is
Size of pbData, in bytes. If fDataIsEncrypted is
If TRUE, the user name is encrypted. Otherwise, the user name is not encrypted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets the password.
-Pointer to a buffer that contains the password. If fDataIsEncrypted is
Size of pbData, in bytes. If fDataIsEncrypted is
If TRUE, the password is encrypted. Otherwise, the password is not encrypted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the user name.
-Pointer to a buffer that receives the user name. To find the required buffer size, set this parameter to
On input, specifies the size of the pbData buffer, in bytes. On output, receives the required buffer size. If fEncryptData is
If TRUE, the method returns an encrypted string. Otherwise, the method returns an unencrypted string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the user name is not available, the method might succeed and set *pcbData to zero.
-
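The size-query pattern described above (succeed with *pcbData set to the required size) can be sketched as a two-call sequence. pCredential is assumed to be a valid IMFNetCredential pointer obtained from the credential cache.

```cpp
// First call with a NULL buffer to learn the required size, then
// retrieve the unencrypted user name into an allocated buffer.
DWORD cbData = 0;
HRESULT hr = pCredential->GetUserName(NULL, &cbData, FALSE);
if (SUCCEEDED(hr) && cbData > 0)
{
    BYTE *pbData = new BYTE[cbData];
    hr = pCredential->GetUserName(pbData, &cbData, FALSE); // FALSE = unencrypted
    // ... use the user name ...
    delete [] pbData;
}
```

The same pattern applies to GetPassword, with the caveat that a zero size can also mean the credential is simply not available.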
Retrieves the password.
-Pointer to a buffer that receives the password. To find the required buffer size, set this parameter to
On input, specifies the size of the pbData buffer, in bytes. On output, receives the required buffer size. If fEncryptData is
If TRUE, the method returns an encrypted string. Otherwise, the method returns an unencrypted string.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the password is not available, the method might succeed and set *pcbData to zero.
-
Queries whether logged-on credentials should be used.
-Receives a Boolean value. If logged-on credentials should be used, the value is TRUE. Otherwise, the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Gets credentials from the credential cache.
This interface is implemented by the credential cache object. Applications that implement the
Retrieves the credential object for the specified URL.
-A null-terminated wide-character string containing the URL for which the credential is needed.
A null-terminated wide-character string containing the realm for the authentication.
Bitwise OR of zero or more flags from the
Receives a reference to the
Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Reports whether the credential object provided successfully passed the authentication challenge.
-Pointer to the
TRUE if the credential object succeeded in the authentication challenge; otherwise,
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The network source calls this method on the credential manager.
-
Specifies how user credentials are stored.
-Pointer to the
Bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If no flags are specified, the credentials are cached in memory. This method can be implemented by the credential manager and called by the network source.
-Implemented by applications to provide user credentials for a network source.
To use this interface, implement it in your application. Then create a property store object and set the MFNETSOURCE_CREDENTIAL_MANAGER property. The value of the property is a reference to your application's
Media Foundation does not provide a default implementation of this interface. Applications that support authentication must implement this interface.
-
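Wiring an application-implemented credential manager into the network source follows the property-store route described above. This sketch assumes the caller already has a pointer to its IMFNetCredentialManager implementation; the resulting property store would later be passed to the source resolver when opening the URL.

```cpp
#include <mfidl.h>
#include <propsys.h>
#include <propvarutil.h>

// Store the application's credential manager in a property store under
// the MFNETSOURCE_CREDENTIAL_MANAGER property.
HRESULT SetCredentialManager(IMFNetCredentialManager *pManager,
                             IPropertyStore **ppProps)
{
    IPropertyStore *pProps = NULL;
    HRESULT hr = PSCreateMemoryPropertyStore(IID_PPV_ARGS(&pProps));
    if (SUCCEEDED(hr))
    {
        PROPERTYKEY key;
        key.fmtid = MFNETSOURCE_CREDENTIAL_MANAGER;
        key.pid = 0;

        PROPVARIANT var;
        var.vt = VT_UNKNOWN;
        var.punkVal = pManager;
        pManager->AddRef();

        hr = pProps->SetValue(key, var);
        PropVariantClear(&var);   // releases the AddRef above
        *ppProps = pProps;        // caller passes this to the source resolver
    }
    return hr;
}
```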
Begins an asynchronous request to retrieve the user's credentials.
-Pointer to an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Completes an asynchronous request to retrieve the user's credentials.
-Pointer to an
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Specifies whether the user's credentials succeeded in the authentication challenge. The network source calls this method to inform the application whether the user's credentials were authenticated.
-Pointer to the
Boolean value. The value is TRUE if the credentials succeeded in the authentication challenge. Otherwise, the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Determines the proxy to use when connecting to a server. The network source uses this interface.
Applications can supply a custom proxy locator by implementing the
To create the default proxy locator, call
Initializes the proxy locator object.
-Null-terminated wide-character string containing the hostname of the destination server.
Null-terminated wide-character string containing the destination URL.
Reserved. Set to
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Determines the next proxy to use.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There are no more proxy objects. |
Keeps a record of the success or failure of using the current proxy.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the current proxy information, including the hostname and port.
-Pointer to a buffer that receives a null-terminated string containing the proxy hostname and port. This parameter can be
On input, specifies the number of elements in the pszStr array. On output, receives the required size of the buffer.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The buffer specified in pszStr is too small. |
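Putting the locator methods above together, a client might initialize the locator for a destination, walk the proxy list, and report the outcome. pLocator is assumed to be a valid IMFNetProxyLocator pointer; the host and URL are placeholders.

```cpp
// Iterate the proxy list: initialize, fetch each proxy, record results.
HRESULT hr = pLocator->FindFirstProxy(L"example.com",
                                      L"http://example.com/stream", FALSE);
while (SUCCEEDED(hr))
{
    WCHAR szProxy[256];
    DWORD cchProxy = ARRAYSIZE(szProxy);
    if (SUCCEEDED(pLocator->GetCurrentProxy(szProxy, &cchProxy)))
    {
        // Try to connect through szProxy ("host:port"), then record the
        // outcome so the locator can reorder its list on later calls.
        pLocator->RegisterProxyResult(S_OK);   // or a failure HRESULT
        break;
    }
    hr = pLocator->FindNextProxy();  // returns a failure code when exhausted
}
```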
Creates a new instance of the default proxy locator.
-Receives a reference to the new proxy locator object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Creates an
Creates an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Notifies the application when a byte stream requests a URL, and enables the application to block URL redirection.
-To set the callback interface:
Called when the byte stream redirects to a URL.
-The URL to which the connection has been redirected.
To cancel the redirection, set this parameter to VARIANT_TRUE. To allow the redirection, set this parameter to VARIANT_FALSE.
If this method succeeds, it returns
Called when the byte stream requests a URL.
-The URL that the byte stream is requesting.
If this method succeeds, it returns
Retrieves the number of protocols supported by the network scheme plug-in.
-
Retrieves the number of protocols supported by the network scheme plug-in.
-
Retrieves the number of protocols supported by the network scheme plug-in.
-Receives the number of protocols.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a supported protocol by index.
-Zero-based index of the protocol to retrieve. To get the number of supported protocols, call
Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The value passed in the nProtocolIndex parameter was greater than the total number of supported protocols, returned by GetNumberOfSupportedProtocols. |
Not implemented in this release.
-This method returns
Marshals an interface reference to and from a stream.
Stream objects that support
Stores the data needed to marshal an interface across a process boundary.
-Interface identifier of the interface to marshal.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Marshals an interface from data stored in the stream.
-Interface identifier of the interface to marshal.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Encapsulates a usage policy from an input trust authority (ITA). Output trust authorities (OTAs) use this interface to query which protection systems they are required to enforce by the ITA.
-
Retrieves a
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-
Retrieves the minimum version of the global revocation list (GRL) that must be enforced by the protected environment for this policy.
-Retrieves a list of the output protection systems that the output trust authority (OTA) must enforce, along with configuration data for each protection system.
-Describes the output that is represented by the OTA calling this method. This value is a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| Hardware bus. |
| The output sends compressed data. If this flag is absent, the output sends uncompressed data. |
| Reserved. Do not use. |
| The output sends a digital signal. If this flag is absent, the output sends an analog signal. |
| Reserved. Do not use. |
| Reserved. Do not use. |
| The output sends video data. If this flag is absent, the output sends audio data. |
Indicates a specific family of output connectors that is represented by the OTA calling this method. Possible values include the following.
Value | Meaning |
---|---|
| AGP bus. |
| Component video. |
| Composite video. |
| Japanese D connector. (Connector conforming to the EIAJ RC-5237 standard.) |
| Embedded DisplayPort connector. |
| External DisplayPort connector. |
| Digital video interface (DVI) connector. |
| High-definition multimedia interface (HDMI) connector. |
| Low voltage differential signaling (LVDS) connector. A connector using the LVDS interface to connect internally to a display device. The connection between the graphics adapter and the display device is permanent and not accessible to the user. Applications should not enable High-Bandwidth Digital Content Protection (HDCP) for this connector. |
| PCI bus. |
| PCI Express bus. |
| PCI-X bus. |
| Audio data sent over a connector via S/PDIF. |
| Serial digital interface connector. |
| S-Video connector. |
| Embedded Unified Display Interface (UDI). |
| External UDI. |
| Unknown connector type. See Remarks. |
| VGA connector. |
| Miracast wireless connector. Supported in Windows 8.1 and later. |
Pointer to an array of
Number of elements in the rgGuidProtectionSchemasSupported array.
Receives a reference to the
If this method succeeds, it returns
The video OTA returns the MFCONNECTOR_UNKNOWN connector type unless the Direct3D device is in full-screen mode. (Direct3D windowed mode is not generally a secure video mode.) You can override this behavior by implementing a custom EVR presenter that implements the
Retrieves a
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-
Retrieves the minimum version of the global revocation list (GRL) that must be enforced by the protected environment for this policy.
-Receives the minimum GRL version.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Encapsulates information about an output protection system and its corresponding configuration data.
-If the configuration information for the output protection system does not require more than a DWORD of space, the configuration information is retrieved in the GetConfigurationData method. If more than a DWORD of configuration information is needed, it is stored using the
Retrieves the output protection system that is represented by this object. Output protection systems are identified by
Returns configuration data for the output protection system. The configuration data is used to enable or disable the protection system, and to set the protection levels.
-
Retrieves a
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-
Retrieves the output protection system that is represented by this object. Output protection systems are identified by
Receives the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Returns configuration data for the output protection system. The configuration data is used to enable or disable the protection system, and to set the protection levels.
-Receives the configuration data. The meaning of this data depends on the output protection system.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a
Receives a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
All of the policy objects and output schemas from the same ITA should return the same originator identifier (including dynamic policy changes). This value enables the OTA to distinguish policies that originate from different ITAs, so that the OTA can update dynamic policies correctly.
-Encapsulates the functionality of one or more output protection systems that a trusted output supports. This interface is exposed by output trust authority (OTA) objects. Each OTA represents a single action that the trusted output can perform, such as play, copy, or transcode. An OTA can represent more than one physical output if each output performs the same action.
-
Retrieves the action that is performed by this output trust authority (OTA).
-
Retrieves the action that is performed by this output trust authority (OTA).
-Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets one or more policy objects on the output trust authority (OTA).
-The address of an array of
The number of elements in the ppPolicy array.
Receives either a reference to a buffer allocated by the OTA, or the value
Receives the size of the ppbTicket buffer, in bytes. If ppbTicket receives the value
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The policy was negotiated successfully, but the OTA will enforce it asynchronously. |
| The OTA does not support the requirements of this policy. |
If the method returns MF_S_WAIT_FOR_POLICY_SET, the OTA sends an
Controls how media sources and transforms are enumerated in Microsoft Media Foundation.
To get a reference to this interface, call
Media Foundation provides a set of built-in media sources and decoders. Applications can enumerate them as follows:
Applications might also enumerate these objects indirectly. For example, if an application uses the topology loader to resolve a partial topology, the topology loader calls
Third parties can implement their own custom media sources and decoders, and register them for enumeration so that other applications can use them.
To control the enumeration order, Media Foundation maintains two process-wide lists of CLSIDs: a preferred list and a blocked list. An object whose CLSID appears in the preferred list appears first in the enumeration order. An object whose CLSID appears on the blocked list is not enumerated.
The lists are initially populated from the registry. Applications can use the
The preferred list contains a set of key/value pairs, where the keys are strings and the values are CLSIDs. These key/value pairs are defined as follows:
The following examples show the various types of key:
To search the preferred list by key name, call the
The blocked list contains a list of CLSIDs. To enumerate the entire list, call the
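The per-process lists described above can be edited through the plug-in control object. This sketch prefers one decoder CLSID for a given media subtype and blocks another; the two CLSIDs are caller-supplied, and the selector string shown is a placeholder media-subtype GUID (decoder keys use the subtype GUID in string form).

```cpp
#include <mfapi.h>
#pragma comment(lib, "mfplat.lib")

// Prefer one MFT CLSID for a subtype and put another on the blocked list.
HRESULT ConfigurePlugins(REFCLSID preferred, REFCLSID blocked)
{
    IMFPluginControl *pControl = NULL;
    HRESULT hr = MFGetPluginControl(&pControl);
    if (SUCCEEDED(hr))
    {
        hr = pControl->SetPreferredClsid(MF_Plugin_Type_MFT,
            L"{47504A4D-0000-0010-8000-00AA00389B71}", // placeholder subtype
            &preferred);
        if (SUCCEEDED(hr))
            hr = pControl->SetDisabled(MF_Plugin_Type_MFT, blocked, TRUE);
        pControl->Release();
    }
    return hr;
}
```

Because both lists are per-process, these calls affect only the calling application, as the SetPreferredClsid and SetDisabled remarks below note.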
Searches the preferred list for a class identifier (CLSID) that matches a specified key name.
-Member of the
The key name to match. For more information about the format of key names, see the Remarks section of
Receives a CLSID from the preferred list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| No CLSID matching this key was found. |
Gets a class identifier (CLSID) from the preferred list, specified by index value.
-Member of the
The zero-based index of the CLSID to retrieve.
Receives the key name associated with the CLSID. The caller must free the memory for the returned string by calling the CoTaskMemFree function. For more information about the format of key names, see the Remarks section of
Receives the CLSID at the specified index.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The index parameter is out of range. |
Adds a class identifier (CLSID) to the preferred list or removes a CLSID from the list.
-Member of the
The key name for the CLSID. For more information about the format of key names, see the Remarks section of
The CLSID to add to the list. If this parameter is
If this method succeeds, it returns
The preferred list is global to the caller's process. Calling this method does not affect the list in other processes.
-Queries whether a class identifier (CLSID) appears in the blocked list.
-Member of the
The CLSID to search for.
The method returns an
Return code | Description |
---|---|
| The specified CLSID appears in the blocked list. |
| Invalid argument. |
| The specified CLSID is not in the blocked list. |
Gets a class identifier (CLSID) from the blocked list.
-Member of the
The zero-based index of the CLSID to retrieve.
Receives the CLSID at the specified index.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The index parameter is out of range. |
Adds a class identifier (CLSID) to the blocked list, or removes a CLSID from the list.
-Member of the
The CLSID to add or remove.
Specifies whether to add or remove the CLSID. If the value is TRUE, the method adds the CLSID to the blocked list. Otherwise, the method removes it from the list.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
The blocked list is global to the caller's process. Calling this method does not affect the list in other processes.
-Controls how media sources and transforms are enumerated in Microsoft Media Foundation.
This interface extends the
To get a reference to this interface, call
Sets the policy for which media sources and transforms are enumerated.
-Sets the policy for which media sources and transforms are enumerated.
-A value from the
If this method succeeds, it returns
Note: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Represents a media item. A media item is an abstraction for a source of media data, such as a video file. Use this interface to get information about the source, or to change certain playback settings, such as the start and stop times. To get a reference to this interface, call one of the following methods:
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the MFPlay player object that created the media item.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the object that was used to create the media item.
-The object reference is set if the application uses
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the application-defined value stored in the media item.
-You can assign this value when you first create the media item, by specifying it in the dwUserData parameter of the
This method can be called after the player object is shut down.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains protected content.
Note: Currently
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the number of streams (audio, video, and other) in the media item.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets various flags that describe the media item.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a property store that contains metadata for the source, such as author or title.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the MFPlay player object that created the media item.
-If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the URL that was used to create the media item.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| No URL is associated with this media item. |
| The |
This method applies when the application calls
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the object that was used to create the media item.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media item was created from a URL, not from an object. |
| The |
The object reference is set if the application uses
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the application-defined value stored in the media item.
-If this method succeeds, it returns
You can assign this value when you first create the media item, by specifying it in the dwUserData parameter of the
This method can be called after the player object is shut down.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Stores an application-defined value in the media item.
-This method can return one of these values.
This method can be called after the player object is shut down.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the start and stop times for the media item.
-If this method succeeds, it returns
The pguidStartPositionType and pguidStopPositionType parameters receive the units of time that are used. Currently, the only supported value is MFP_POSITIONTYPE_100NS.
Value | Description |
---|---|
MFP_POSITIONTYPE_100NS | 100-nanosecond units. The time parameter (pvStartValue or pvStopValue) uses the following data type:
|
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the start and stop time for the media item.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid start or stop time. Any of the following can cause this error:
|
By default, a media item plays from the beginning to the end of the file. This method adjusts the start time and/or the stop time:
The pguidStartPositionType and pguidStopPositionType parameters give the units of time that are used. Currently, the only supported value is MFP_POSITIONTYPE_100NS.
Value | Description |
---|---|
MFP_POSITIONTYPE_100NS | 100-nanosecond units. The time parameter (pvStartValue or pvStopValue) uses the following data type:
To clear a previously set time, use an empty |
The adjusted start and stop times are used the next time that
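As a sketch of setting start/stop bounds in the 100-nanosecond units described above: play a media item from 10 seconds to 30 seconds. pItem is assumed to be a valid IMFPMediaItem pointer.

```cpp
#include <propvarutil.h>

// Bound playback to [10 s, 30 s], expressed in 100-ns units (VT_I8).
PROPVARIANT varStart, varStop;
InitPropVariantFromInt64(100000000LL, &varStart);  // 10 s = 10 * 10^7 hns
InitPropVariantFromInt64(300000000LL, &varStop);   // 30 s

HRESULT hr = pItem->SetStartStopPosition(
    &MFP_POSITIONTYPE_100NS, &varStart,
    &MFP_POSITIONTYPE_100NS, &varStop);

PropVariantClear(&varStart);
PropVariantClear(&varStop);
```

To clear a previously set bound, an empty PROPVARIANT (VT_EMPTY) is passed instead, as the table above notes.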
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains a video stream.
-If this method succeeds, it returns
To select or deselect streams before playback starts, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains an audio stream.
-If this method succeeds, it returns
To select or deselect streams before playback starts, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the media item contains protected content.
Note: Currently
If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the duration of the media item.
-If this method succeeds, it returns
The method returns the total duration of the content, regardless of any values set through
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the number of streams (audio, video, and other) in the media item.
-If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether a stream is selected to play.
-If this method succeeds, it returns
To select or deselect a stream, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Selects or deselects a stream.
-If this method succeeds, it returns
You can use this method to change which streams are selected. The change goes into effect the next time that
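A common use of stream selection is audio-only playback: deselect everything except the first audio stream. This sketch assumes pItem is a valid IMFPMediaItem pointer; MF_MT_MAJOR_TYPE is the standard attribute identifying a stream's major type.

```cpp
// Keep only the first audio stream selected; deselect all other streams.
DWORD cStreams = 0;
HRESULT hr = pItem->GetNumberOfStreams(&cStreams);
BOOL bKeptAudio = FALSE;
for (DWORD i = 0; SUCCEEDED(hr) && i < cStreams; i++)
{
    PROPVARIANT var;
    BOOL bSelect = FALSE;
    hr = pItem->GetStreamAttribute(i, MF_MT_MAJOR_TYPE, &var);
    if (SUCCEEDED(hr))
    {
        // The major type is stored as a GUID (VT_CLSID).
        if (!bKeptAudio && var.vt == VT_CLSID && *var.puuid == MFMediaType_Audio)
        {
            bSelect = TRUE;
            bKeptAudio = TRUE;
        }
        PropVariantClear(&var);
        hr = pItem->SetStreamSelection(i, bSelect);
    }
}
```

As the remarks above note, the new selection takes effect only when the media item is (re)set on the player.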
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries the media item for a stream attribute.
-If this method succeeds, it returns
Stream attributes describe an individual stream (audio, video, or other) within the presentation. To get an attribute that applies to the entire presentation, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries the media item for a presentation attribute.
-If this method succeeds, it returns
Presentation attributes describe the presentation as a whole. To get an attribute that applies to an individual stream within the presentation, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets various flags that describe the media item.
-If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets a media sink for the media item. A media sink is an object that consumes the data from one or more streams.
-If this method succeeds, it returns
By default, the MFPlay player object renders audio streams to the Streaming Audio Renderer (SAR) and video streams to the Enhanced Video Renderer (EVR). You can use the SetStreamSink method to provide a different media sink for an audio or video stream, or to support stream types other than audio and video. You can also use it to configure the SAR or EVR before they are used.
Call this method before calling
To reset the media item to use the default media sink, set pMediaSink to
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a property store that contains metadata for the source, such as author or title.
-If this method succeeds, it returns
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains methods to play media files.
The MFPlay player object exposes this interface. To get a reference to this interface, call
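Creating the player object and starting playback can be sketched with MFPCreateMediaPlayer. The URL is a placeholder, hwndVideo is assumed to be a valid window handle, and no callback is supplied, so the application receives no event notifications.

```cpp
#include <mfplay.h>
#pragma comment(lib, "mfplay.lib")

// Create the (deprecated) MFPlay player and begin playback of a file.
HRESULT PlayFile(HWND hwndVideo, IMFPMediaPlayer **ppPlayer)
{
    return MFPCreateMediaPlayer(
        L"video.mp4",      // placeholder URL or file path
        TRUE,              // start playback as soon as the item is ready
        MFP_OPTION_NONE,   // default creation options
        NULL,              // no IMFPMediaPlayerCallback
        hwndVideo,         // window used for video display
        ppPlayer);
}
```

A production application would normally pass an IMFPMediaPlayerCallback implementation so it can respond to the asynchronous events described for Play, Pause, and Stop below.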
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback rate.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback state of the MFPlay player object.
-This method can be called after the player object has been shut down.
Many of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the current media item.
-The
The previous remark also applies to setting the media item in the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio volume.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio balance.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the audio is muted.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the video source rectangle.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current aspect-ratio correction mode. This mode controls whether the aspect ratio of the video is preserved during playback.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the window where the video is displayed.
The video window is specified when you first call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current color of the video border. The border color is used to letterbox the video.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Starts playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method completes asynchronously. When the operation completes, the application's
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Pauses playback. While playback is paused, the most recent video frame is displayed, and audio is silent.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method completes asynchronously. When the operation completes, the application's
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Stops playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method completes asynchronously. When the operation completes, the application's
The current media item is still valid. After playback stops, the playback position resets to the beginning of the current media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Steps forward one video frame.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| Cannot frame step. Reasons for this error code include: |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
| The media source does not support frame stepping, or the current playback rate is negative. |

This method completes asynchronously. When the operation completes, the application's
The player object does not support frame stepping during reverse playback (that is, while the playback rate is negative).
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the playback position.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| The value of pvPositionValue is not valid. |
| No media item has been queued. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

If you call this method while playback is stopped, the new position takes effect after playback resumes.
This method completes asynchronously. When the operation completes, the application's
If playback was started before SetPosition is called, playback resumes at the new position. If playback was paused, the video is refreshed to display the current frame at the new position.
If you make two consecutive calls to SetPosition with guidPositionType equal to MFP_POSITIONTYPE_100NS, and the second call is made before the first call has completed, the second call supersedes the first. The status code for the superseded call is set to S_FALSE in the event data for that call. This behavior prevents excessive latency from repeated calls to SetPosition, as each call may force the media source to perform a relatively lengthy seek operation.
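The "second call supersedes the first" behavior can be modeled as a latest-request-wins slot. The sketch below is a hypothetical illustration of that bookkeeping only, not MFPlay's actual implementation; the type and member names are invented.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical model of coalescing pending 100-ns seek requests: a second
// request issued before the first completes replaces it, and the replaced
// request would be reported with status S_FALSE.
struct SeekCoalescer {
    std::optional<int64_t> pending;  // pending seek position, in 100-ns units

    // Queue a seek; returns true if it superseded an earlier pending request.
    bool Request(int64_t position100ns) {
        bool superseded = pending.has_value();
        pending = position100ns;
        return superseded;
    }

    // The media source services only the most recent pending request.
    std::optional<int64_t> Take() {
        std::optional<int64_t> p = pending;
        pending.reset();
        return p;
    }
};
```

This is why rapid scrub-bar dragging does not queue up one lengthy seek per mouse move: only the newest target position reaches the media source.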
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback position.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| No media item has been queued. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

The playback position is calculated relative to the start time of the media item, which can be specified by calling
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the playback duration of the current media item.
This method can return one of these values.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The media source does not have a duration. This error can occur with a live source, such as a video camera. |
| There is no current media item. |

This method calculates the playback duration, taking into account the start and stop times for the media item. To set the start and stop times, call
For example, suppose that you load a 30-second audio file and set the start time equal to 2 seconds and stop time equal to 10 seconds. The
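The start/stop arithmetic in the example above is simple subtraction in the 100-nanosecond units Media Foundation uses: a 2-second start and a 10-second stop yield an 8-second duration. The helper below is illustrative only, not part of the MFPlay API.

```cpp
#include <cstdint>

// Media Foundation expresses times in 100-nanosecond units.
constexpr int64_t kHundredNsPerSecond = 10'000'000;

// Effective playback duration of a media item with explicit start/stop times.
constexpr int64_t EffectiveDuration100ns(int64_t start100ns, int64_t stop100ns) {
    return stop100ns - start100ns;
}
```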
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the playback rate.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The flRate parameter is zero. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method completes asynchronously. When the operation completes, the application's
The method sets the nearest supported rate, which will depend on the underlying media source. For example, if flRate is 50 and the source's maximum rate is 8× normal rate, the method will set the rate to 8.0. The actual rate is indicated in the event data for the
To find the range of supported rates, call
This method does not support playback rates of zero, although Media Foundation defines a meaning for zero rates in some other contexts.
The new rate applies only to the current media item. Setting a new media item resets the playback rate to 1.0.
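Within a continuous supported range, "nearest supported rate" reduces to clamping the request to the range's bounds, as in the 50 → 8.0 example above. The function below is a sketch of that behavior under the assumption of a continuous range; the real bounds come from the player's supported-rates query, and the names here are stand-ins.

```cpp
#include <algorithm>

// Illustrative only: snap a requested playback rate into the media source's
// supported range. E.g. requesting 50.0 against a forward range of
// [0.0, 8.0] yields 8.0, matching the remarks above.
float ClampToSupportedRate(float requested, float minRate, float maxRate) {
    return std::clamp(requested, minRate, maxRate);
}
```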
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback rate.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the range of supported playback rates.
This method can return one of these values.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not support playback in the requested direction (either forward or reverse). |

Playback rates are expressed as a ratio of the current rate to the normal rate. For example, 1.0 indicates normal playback speed, 0.5 indicates half speed, and 2.0 indicates twice normal speed. Positive values indicate forward playback, and negative values indicate reverse playback.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current playback state of the MFPlay player object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method can be called after the player object has been shut down.
Many of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a media item from a URL.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| Invalid request. This error can occur when fSync is |
MF_E_SHUTDOWN | The object's Shutdown method was called. |
| Unsupported protocol. |

This method does not queue the media item for playback. To queue the item for playback, call
The CreateMediaItemFromURL method can be called either synchronously or asynchronously:
The callback interface is set when you first call
If you make multiple asynchronous calls to CreateMediaItemFromURL, they are not guaranteed to complete in the same order. Use the dwUserData parameter to match created media items with pending requests.
Currently, this method returns
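Because asynchronous creation calls may complete out of order, the application typically keys each pending request by the dwUserData value it passed in and matches the completed item back to that key. The sketch below illustrates that bookkeeping with simplified stand-in types (a URL string in place of the real COM media-item interface); it is not part of the MFPlay API.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical bookkeeping for out-of-order completions: each request is
// keyed by the dwUserData value passed to the creation call.
struct PendingRequests {
    std::unordered_map<uintptr_t, std::string> urlByUserData;

    void Issue(uintptr_t dwUserData, std::string url) {
        urlByUserData.emplace(dwUserData, std::move(url));
    }

    // Called from the completion event: recover which request this
    // completion corresponds to, regardless of completion order.
    std::string Complete(uintptr_t dwUserData) {
        auto it = urlByUserData.find(dwUserData);
        std::string url = it->second;
        urlByUserData.erase(it);
        return url;
    }
};
```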
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Creates a media item from an object.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| Invalid request. This error can occur when fSync is |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

The pIUnknownObj parameter must specify one of the following:
This method does not queue the media item for playback. To queue the item for playback, call
The CreateMediaItemFromObject method can be called either synchronously or asynchronously:
The callback interface is set when you first call
If you make multiple asynchronous calls to CreateMediaItemFromObject, they are not guaranteed to complete in the same order. Use the dwUserData parameter to match created media items with pending requests.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queues a media item for playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
| The media item contains protected content. MFPlay currently does not support protected content. |
| No audio playback device was found. This error can occur if the media source contains audio, but no audio playback devices are available on the system. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method completes asynchronously. When the operation completes, the application's
To create a media item, call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Clears the current media item.
Note: This method is currently not implemented. If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method stops playback and releases the player object's references to the current media item.
This method completes asynchronously. When the operation completes, the application's
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets a reference to the current media item.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| There is no current media item. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

The
The previous remark also applies to setting the media item in the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio volume.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the audio volume.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The flVolume parameter is invalid. |

If you call this method before playback starts, the setting is applied after playback starts.
This method does not change the master volume level for the player's audio session. Instead, it adjusts the per-channel volume levels for audio stream(s) that belong to the current media item. Other streams in the audio session are not affected. For more information, see Managing the Audio Session.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current audio balance.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the audio balance.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The flBalance parameter is invalid. |

If you call this method before playback starts, the setting is applied when playback starts.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Queries whether the audio is muted.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Mutes or unmutes the audio.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If you call this method before playback starts, the setting is applied after playback starts.
This method does not mute the entire audio session to which the player belongs. It mutes only the streams from the current media item. Other streams in the audio session are not affected. For more information, see Managing the Audio Session.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the size and aspect ratio of the video. These values are computed before any scaling is done to fit the video into the destination window.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

At least one parameter must be non-NULL.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the range of video sizes that can be displayed without significantly degrading performance or image quality.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

At least one parameter must be non-NULL.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the video source rectangle.
MFPlay clips the video to this rectangle and stretches the rectangle to fill the video window.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

MFPlay stretches the source rectangle to fill the entire video window. By default, MFPlay maintains the source's correct aspect ratio, letterboxing if needed. The letterbox color is controlled by the
This method fails if no media item is currently set, or if the current media item does not contain video.
To set the video position before playback starts, call this method inside your event handler for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the video source rectangle.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Specifies whether the aspect ratio of the video is preserved during playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method fails if no media item is currently set, or if the current media item does not contain video.
To set the aspect-ratio mode before playback starts, call this method inside your event handler for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current aspect-ratio correction mode. This mode controls whether the aspect ratio of the video is preserved during playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the window where the video is displayed.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The video window is specified when you first call
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Updates the video frame.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

Call this method when your application's video playback window receives either a WM_PAINT or WM_SIZE message. This method performs two functions:
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Sets the color for the video border. The border color is used to letterbox the video.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

This method fails if no media item is currently set, or if the current media item does not contain video.
To set the border color before playback starts, call this method inside your event handler for the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Gets the current color of the video border. The border color is used to letterbox the video.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The current media item does not contain video. |
MF_E_SHUTDOWN | The object's Shutdown method was called. |

Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Applies an audio or video effect to playback.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| This effect was already added. |

The object specified in the pEffect parameter can implement either a video effect or an audio effect. The effect is applied to any media items set after the method is called. It is not applied to the current media item.
For each media item, the effect is applied to the first selected stream of the matching type (audio or video). If a media item has two selected streams of the same type, the second stream does not receive the effect. The effect is ignored if the media item does not contain a stream that matches the effect type. For example, if you set a video effect and play a file that contains just audio, the video effect is ignored, although no error is raised.
The effect is applied to all subsequent media items, until the application removes the effect. To remove an effect, call
If you set multiple effects of the same type (audio or video), they are applied in the same order in which you call InsertEffect.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Removes an effect that was added with the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The effect was not found. |

The change applies to the next media item that is set on the player. The effect is not removed from the current media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Removes all effects that were added with the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The change applies to the next media item that is set on the player. The effects are not removed from the current media item.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Shuts down the MFPlay player object and releases any resources the object is using.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
After this method is called, most
The player object automatically shuts itself down when its reference count reaches zero. You can use the Shutdown method to shut down the player before all of the references have been released.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Callback interface for the
To set the callback, pass an
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Called by the MFPlay player object to notify the application of a playback event.
The specific type of playback event is given in the eEventType member of the
It is safe to call
Enables a media source to receive a reference to the
If a media source exposes this interface, the Protected Media Path (PMP) Media Session calls SetPMPHost with a reference to the
Provides a reference to the
The
Provides a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The
Provides a mechanism for a media source to implement content protection functionality in a Windows Store apps.
When to implement: A media source implements
Sets a reference to the
Sets a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables a media source in the application process to create objects in the protected media path (PMP) process.
This interface is used when a media source resides in the application process but the Media Session resides in a PMP process. The media source can use this interface to create objects in the PMP process. For example, to play DRM-protected content, the media source typically must create an input trust authority (ITA) in the PMP process.
To use this interface, the media source implements the
You can also get a reference to this interface by calling
Blocks the protected media path (PMP) process from ending.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
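The lock/unlock pairing described above maps naturally onto a scope guard, so that every lock is matched by exactly one unlock even on early returns. The sketch below is a hypothetical illustration with an invented stand-in type, not the real COM interface.

```cpp
// Hypothetical stand-in for a PMP-host-style interface with a process
// lock count, as described in the remarks above.
struct FakePMPHost {
    int lockCount = 0;
    void LockProcess()   { ++lockCount; }  // blocks the PMP process from ending
    void UnlockProcess() { --lockCount; }  // must balance every LockProcess call
};

// RAII guard: acquires the lock on construction, releases it on destruction,
// guaranteeing the one-unlock-per-lock pairing the documentation requires.
class PMPProcessGuard {
    FakePMPHost& host_;
public:
    explicit PMPProcessGuard(FakePMPHost& host) : host_(host) { host_.LockProcess(); }
    ~PMPProcessGuard() { host_.UnlockProcess(); }
    PMPProcessGuard(const PMPProcessGuard&) = delete;
    PMPProcessGuard& operator=(const PMPProcessGuard&) = delete;
};
```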
Creates an object in the protected media path (PMP) process, from a CLSID.
The CLSID of the object to create.
A reference to the
The interface identifier (IID) of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
You can use the pStream parameter to initialize the object after it is created.
Allows a media source to create a Windows Runtime object in the Protected Media Path (PMP) process.
Blocks the protected media path (PMP) process from ending.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Creates a Windows Runtime object in the protected media path (PMP) process.
ID of the object to create.
Data to be passed to the object by way of an IPersistStream.
The interface identifier (IID) of the interface to retrieve.
Receives a reference to the created object.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Enables two instances of the Media Session to share the same protected media path (PMP) process.
If your application creates more than one instance of the Media Session, you can use this interface to share the same PMP process among several instances. This can be more efficient than re-creating the PMP process each time.
Use this interface as follows:
Blocks the protected media path (PMP) process from ending.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
When this method is called, it increments the lock count on the PMP process. For every call to this method, the application should make a corresponding call to
Decrements the lock count on the protected media path (PMP) process. Call this method once for each call to
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Creates an object in the protected media path (PMP) process.
CLSID of the object to create.
Interface identifier of the interface to retrieve.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Represents a presentation clock, which is used to schedule when samples are rendered and to synchronize multiple streams.
To create a new instance of the presentation clock, call the
To get the presentation clock from the Media Session, call
Retrieves the clock's presentation time source.
Retrieves the latest clock time.
This method does not attempt to smooth out jitter or otherwise account for any inaccuracies in the clock time.
Sets the time source for the presentation clock. The time source is the object that drives the clock by providing the current time.
Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| The time source does not have a frequency of 10 MHz. |
| The time source has not been initialized. |
The presentation clock cannot start until it has a time source.
The time source is automatically registered to receive state change notifications from the clock, through the time source's
The time source must have a frequency of 10 MHz. See
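A 10 MHz time source ticks once every 100 nanoseconds, which is why Media Foundation clock times are expressed in 100-ns units throughout these interfaces. The conversion helpers below are illustrative only, not part of the API.

```cpp
#include <cstdint>

// A 10 MHz clock produces 10,000,000 ticks per second, i.e. one tick
// every 100 nanoseconds.
constexpr int64_t kTicksPerSecond = 10'000'000;

constexpr int64_t SecondsToTicks(int64_t seconds) {
    return seconds * kTicksPerSecond;
}

constexpr int64_t TicksToMilliseconds(int64_t ticks) {
    return ticks / 10'000;  // 10,000 ticks of 100 ns each per millisecond
}
```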
Retrieves the clock's presentation time source.
Receives a reference to the time source's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_CLOCK_NO_TIME_SOURCE | No time source was set on this clock. |
Retrieves the latest clock time.
Receives the latest clock time, in 100-nanosecond units. The time is relative to when the clock was last started.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_CLOCK_NO_TIME_SOURCE | The clock does not have a presentation time source. Call |

This method does not attempt to smooth out jitter or otherwise account for any inaccuracies in the clock time.
Registers an object to be notified whenever the clock starts, stops, or pauses, or changes rate.
Pointer to the object's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Before releasing the object, call
Unregisters an object that is receiving state-change notifications from the clock.
Pointer to the object's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
Starts the presentation clock.
Initial starting time, in 100-nanosecond units. At the time the Start method is called, the clock's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_CLOCK_NO_TIME_SOURCE | No time source was set on this clock. |
This method is valid in all states (stopped, paused, or running).
If the clock is paused and restarted from the same position (llClockStartOffset is PRESENTATION_CURRENT_POSITION), the presentation clock sends an
The presentation clock initiates the state change by calling OnClockStart or OnClockRestart on the clock's time source. This call is made synchronously. If it fails, the state change does not occur. If the call succeeds, the state changes, and the clock notifies the other state-change subscribers by calling their OnClockStart or OnClockRestart methods. These calls are made asynchronously.
If the clock is already running, calling Start again has the effect of seeking the clock to the new StartOffset position.
Stops the presentation clock. While the clock is stopped, the clock time does not advance, and the clock's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_CLOCK_NO_TIME_SOURCE | No time source was set on this clock. |
MF_E_CLOCK_STATE_ALREADY_SET | The clock is already stopped. |
This method is valid when the clock is running or paused.
The presentation clock initiates the state change by calling
Pauses the presentation clock. While the clock is paused, the clock time does not advance, and the clock's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_CLOCK_NO_TIME_SOURCE | No time source was set on this clock. |
MF_E_CLOCK_STATE_ALREADY_SET | The clock is already paused. |
| The clock is stopped. This request is not valid when the clock is stopped. |
This method is valid when the clock is running. It is not valid when the clock is paused or stopped.
The presentation clock initiates the state change by calling
Describes the details of a presentation. A presentation is a set of related media streams that share a common presentation time.
Presentation descriptors are used to configure media sources and some media sinks. To get the presentation descriptor from a media source, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of stream descriptors in the presentation. Each stream descriptor contains information about one stream in the media source. To retrieve a stream descriptor, call the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of stream descriptors in the presentation. Each stream descriptor contains information about one stream in the media source. To retrieve a stream descriptor, call the
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a stream descriptor for a stream in the presentation. The stream descriptor contains information about the stream.
Zero-based index of the stream. To find the number of streams in the presentation, call the
Receives a Boolean value. The value is TRUE if the stream is currently selected, or FALSE otherwise.
Receives a reference to the stream descriptor's
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Selects a stream in the presentation.
The stream number to select, indexed from zero. To find the number of streams in the presentation, call
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| dwDescriptorIndex is out of range. |

If a stream is selected, the media source will generate data for that stream. The media source will not generate data for deselected streams. To deselect a stream, call
To query whether a stream is selected, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Deselects a stream in the presentation.
The stream number to deselect, indexed from zero. To find the number of streams in the presentation, call the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Return code | Description |
---|---|
S_OK | The method succeeded. |
| dwDescriptorIndex is out of range. |
If a stream is deselected, no data is generated for that stream. To select the stream again, call
To query whether a stream is selected, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
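The SelectStream/DeselectStream semantics above reduce to a per-stream selection flag plus index validation. The toy model below mirrors that behavior for illustration; it is an invented sketch, not the real IMFPresentationDescriptor implementation.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Toy model: the media source generates data only for selected streams,
// and an out-of-range stream index is an error.
class StreamSelection {
    std::vector<bool> selected_;

    void Check(std::size_t i) const {
        if (i >= selected_.size())
            throw std::out_of_range("dwDescriptorIndex is out of range");
    }

public:
    explicit StreamSelection(std::size_t streamCount)
        : selected_(streamCount, false) {}

    void Select(std::size_t i)   { Check(i); selected_[i] = true; }
    void Deselect(std::size_t i) { Check(i); selected_[i] = false; }
    bool IsSelected(std::size_t i) const { Check(i); return selected_[i]; }
};
```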
Creates a copy of this presentation descriptor.
Receives a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
This method performs a shallow copy of the presentation descriptor. The stream descriptors are not cloned. Therefore, use caution when modifying the presentation descriptor or its stream descriptors.
If the original presentation descriptor is from a media source, do not modify the presentation descriptor unless the source is stopped. If you use the presentation descriptor to configure a media sink, do not modify the presentation descriptor after the sink is configured.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a stream descriptor for a stream in the presentation. The stream descriptor contains information about the stream.
-Zero-based index of the stream. To find the number of streams in the presentation, call the
Receives a Boolean value. The value is TRUE if the stream is currently selected, or
Receives a reference to the stream descriptor's
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Provides the clock times for the presentation clock.
-This interface is implemented by presentation time sources. A presentation time source is an object that provides the clock time for the presentation clock. For example, the audio renderer is a presentation time source. The rate at which the audio renderer consumes audio samples determines the clock time. If the audio format is 44100 samples per second, the audio renderer will report that one second has passed for every 44100 audio samples it plays. In this case, the timing is provided by the sound card.
To set the presentation time source on the presentation clock, call
A presentation time source must also implement the
Media Foundation provides a presentation time source that is based on the system clock. To create this object, call the
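The audio-renderer example above can be sketched as a small model (illustrative Python only, not the Media Foundation API): the time source derives elapsed clock time, in 100-nanosecond units, from the number of samples it has consumed.

```python
HNS_PER_SECOND = 10_000_000  # Media Foundation clock times use 100-ns units

def clock_time_from_samples(samples_played: int, sample_rate: int = 44100) -> int:
    """Elapsed presentation time, in 100-nanosecond units, derived from
    the number of audio samples the renderer has played."""
    return samples_played * HNS_PER_SECOND // sample_rate

# After playing 44100 samples at 44.1 kHz, the time source reports one second.
assert clock_time_from_samples(44100) == HNS_PER_SECOND
```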
Retrieves the underlying clock that the presentation time source uses to generate its clock times.
-A presentation time source must support stopping, starting, pausing, and rate changes. However, in many cases the time source derives its clock times from a hardware clock or other device. The underlying clock is always running, and might not support rate changes.
Optionally, a time source can expose the underlying clock by implementing this method. The underlying clock is always running, even when the presentation time source is paused or stopped. (Therefore, the underlying clock returns the
The underlying clock is useful if you want to make decisions based on the clock times while the presentation clock is stopped or paused.
If the time source does not expose an underlying clock, the method returns
Retrieves the underlying clock that the presentation time source uses to generate its clock times.
-Receives a reference to the clock's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| This time source does not expose an underlying clock. |
A presentation time source must support stopping, starting, pausing, and rate changes. However, in many cases the time source derives its clock times from a hardware clock or other device. The underlying clock is always running, and might not support rate changes.
Optionally, a time source can expose the underlying clock by implementing this method. The underlying clock is always running, even when the presentation time source is paused or stopped. (Therefore, the underlying clock returns the
The underlying clock is useful if you want to make decisions based on the clock times while the presentation clock is stopped or paused.
If the time source does not expose an underlying clock, the method returns
Provides a method that allows content protection systems to perform a handshake with the protected environment. This is needed because the CreateFile and DeviceIoControl APIs are not available to Windows Store apps.
-See
Allows content protection systems to access the protected environment.
-The length in bytes of the input data.
A reference to the input data.
The length in bytes of the output data.
A reference to the output data.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
See
Gets the Global Revocation List (GRL).
-The length of the data returned in output.
Receives the contents of the global revocation list file.
If this method succeeds, it returns
Allows reading of the system Global Revocation List (GRL).
-Enables the quality manager to adjust the audio or video quality of a component in the pipeline.
This interface is exposed by pipeline components that can adjust their quality. Typically it is exposed by decoders and stream sinks. For example, the enhanced video renderer (EVR) implements this interface. However, media sources can also implement this interface.
To get a reference to this interface from a media source, call
The quality manager typically obtains this interface when the quality manager's
Retrieves the current drop mode.
-
Retrieves the current quality level.
-
Sets the drop mode. In drop mode, a component drops samples, more or less aggressively depending on the level of the drop mode.
-Requested drop mode, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The component does not support the specified mode or any higher modes. |
If this method is called on a media source, the media source might switch between thinned and non-thinned output. If that occurs, the affected streams will send an
Sets the quality level. The quality level determines how the component consumes or produces samples.
-Requested quality level, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The component does not support the specified quality level or any levels below it. |
Retrieves the current drop mode.
-Receives the drop mode, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the current quality level.
-Receives the quality level, specified as a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Drops samples over a specified interval of time.
-Amount of time to drop, in 100-nanosecond units. This value is always absolute. If the method is called multiple times, do not add the times from previous calls.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support this method. |
Ideally the quality manager can prevent a renderer from falling behind. But if this does occur, then simply lowering quality does not guarantee the renderer will ever catch up. As a result, audio and video might fall out of sync. To correct this problem, the quality manager can call DropTime to request that the renderer drop samples quickly over a specified time interval. After that period, the renderer stops dropping samples.
This method is primarily intended for the video renderer. Dropped audio samples cause audio glitching, which is not desirable.
If a component does not support this method, it should return
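The "always absolute" rule for the drop interval can be modeled as follows (an illustrative Python sketch; `DropState` and its members are hypothetical, not part of the API): each call replaces the pending drop interval rather than extending it.

```python
class DropState:
    """Hypothetical model of a renderer honoring DropTime semantics."""
    def __init__(self):
        self.drop_until_hns = 0  # deadline, in 100-ns units

    def drop_time(self, now_hns: int, amount_hns: int) -> None:
        # The amount is absolute: a later call does NOT add to an earlier one.
        self.drop_until_hns = now_hns + amount_hns

    def should_drop(self, sample_time_hns: int) -> bool:
        # Samples timestamped inside the drop interval are dropped quickly.
        return sample_time_hns < self.drop_until_hns
```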
Enables a pipeline object to adjust its own audio or video quality, in response to quality messages.
-This interface enables a pipeline object to respond to quality messages from the media sink. Currently, it is supported only for video decoders.
If a video decoder exposes
If the decoder exposes
The preceding remarks apply to the default implementation of the quality manager; custom quality managers can implement other behaviors.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Forwards an
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Queries an object for the number of quality modes it supports. Quality modes are used to adjust the trade-off between quality and speed when rendering audio or video.
The default presenter for the enhanced video renderer (EVR) implements this interface. The EVR uses the interface to respond to quality messages from the quality manager.
-Gets the maximum drop mode. A higher drop mode means that the object will, if needed, drop samples more aggressively to match the presentation clock.
-To get the current drop mode, call the
Gets the minimum quality level that is supported by the component.
-To get the current quality level, call the
Gets the maximum drop mode. A higher drop mode means that the object will, if needed, drop samples more aggressively to match the presentation clock.
-Receives the maximum drop mode, specified as a member of the
If this method succeeds, it returns
To get the current drop mode, call the
Gets the minimum quality level that is supported by the component.
-Receives the minimum quality level, specified as a member of the
If this method succeeds, it returns
To get the current quality level, call the
Adjusts playback quality. This interface is exposed by the quality manager.
-Media Foundation provides a default quality manager that is tuned for playback. Applications can provide a custom quality manager to the Media Session by setting the
Called when the Media Session is about to start playing a new topology.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
In a typical quality manager this method does the following:
Enumerates the nodes in the topology.
Calls
Queries for the
The quality manager can then use the
Called when the Media Session selects a presentation clock.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Called when the media processor is about to deliver an input sample to a pipeline component.
-Pointer to the
Index of the input stream on the topology node.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is called for every sample passing through every pipeline component. Therefore, the method must return quickly to avoid introducing too much latency into the pipeline.
-
Called after the media processor gets an output sample from a pipeline component.
-Pointer to the
Index of the output stream on the topology node.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is called for every sample passing through every pipeline component. Therefore, the method must return quickly to avoid introducing too much latency into the pipeline.
-
Called when a pipeline component sends an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Called when the Media Session is shutting down.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The quality manager should release all references to the Media Session when this method is called.
-Gets or sets the playback rate.
-Objects can expose this interface as a service. To obtain a reference to the interface, call
For more information, see About Rate Control.
To discover the playback rates that an object supports, use the
Sets the playback rate.
-If TRUE, the media streams are thinned. Otherwise, the streams are not thinned. For media sources and demultiplexers, the object must thin the streams when this parameter is TRUE. For downstream transforms, such as decoders and multiplexers, this parameter is informative; it notifies the object that the input streams are thinned. For information, see About Rate Control.
The requested playback rate. Positive values indicate forward playback, negative values indicate reverse playback, and zero indicates scrubbing (the source delivers a single frame).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
| The object does not support the requested playback rate. |
| The object cannot change to the new rate while in the running state. |
The Media Session prevents some transitions between rate boundaries, depending on the current playback state:
Playback State | Forward/Reverse | Forward/Zero | Reverse/Zero |
---|---|---|---|
Running | No | No | No |
Paused | No | Yes | No |
Stopped | Yes | Yes | Yes |
If the transition is not supported, the method returns
When a media source completes a call to SetRate, it sends the
If a media source switches between thinned and non-thinned playback, the streams send an
When the Media Session completes a call to SetRate, it sends the
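The rate-transition table above can be encoded directly (illustrative Python, not part of the API); rows are keyed by (playback state, boundary crossing), and `True` means the transition is allowed:

```python
# True = transition allowed, per the table in the SetRate remarks.
RATE_TRANSITIONS = {
    ("Running", "Forward/Reverse"): False,
    ("Running", "Forward/Zero"):    False,
    ("Running", "Reverse/Zero"):    False,
    ("Paused",  "Forward/Reverse"): False,
    ("Paused",  "Forward/Zero"):    True,
    ("Paused",  "Reverse/Zero"):    False,
    ("Stopped", "Forward/Reverse"): True,
    ("Stopped", "Forward/Zero"):    True,
    ("Stopped", "Reverse/Zero"):    True,
}

def transition_allowed(state: str, crossing: str) -> bool:
    """Whether the Media Session permits this rate-boundary crossing."""
    return RATE_TRANSITIONS[(state, crossing)]
```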
Gets the current playback rate.
-Receives the current playback rate.
Receives the value TRUE if the stream is currently being thinned. If the object does not support thinning, this parameter always receives the value
Queries the range of playback rates that are supported, including reverse playback.
To get a reference to this interface, call
Applications can use this interface to discover the fastest and slowest playback rates that are possible, and to query whether a given playback rate is supported. Applications obtain this interface from the Media Session. Internally, the Media Session queries the objects in the pipeline. For more information, see How to Determine Supported Rates.
To get the current playback rate and to change the playback rate, use the
Playback rates are expressed as a ratio of the normal playback rate. Reverse playback is expressed as a negative rate. Playback is either thinned or non-thinned. In thinned playback, some of the source data is skipped (typically delta frames). In non-thinned playback, all of the source data is rendered.
You might need to implement this interface if you are writing a pipeline object (media source, transform, or media sink). For more information, see Implementing Rate Control.
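Because a rate is a ratio of normal speed, the presentation position simply advances by the wall-clock delta scaled by the rate; a sketch (illustrative Python, not the API):

```python
def advance_position(pos_hns: int, elapsed_hns: int, rate: float) -> int:
    """New presentation position (100-ns units) after `elapsed_hns` of
    wall-clock time at the given rate; negative rates play in reverse."""
    return pos_hns + int(elapsed_hns * rate)

# Rate 2.0 plays twice as fast; -1.0 plays backward at normal speed.
assert advance_position(0, 10_000_000, 2.0) == 20_000_000
```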
-
Retrieves the slowest playback rate supported by the object.
-Specifies whether to query for the slowest forward playback rate or reverse playback rate. The value is a member of the
If TRUE, the method retrieves the slowest thinned playback rate. Otherwise, the method retrieves the slowest non-thinned playback rate. For information about thinning, see About Rate Control.
Receives the slowest playback rate that the object supports.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
The value returned in plfRate represents a lower bound. Playback at this rate is not guaranteed. Call
If eDirection is
Gets the fastest playback rate supported by the object.
-Specifies whether to query for the fastest forward playback rate or reverse playback rate. The value is a member of the
If TRUE, the method retrieves the fastest thinned playback rate. Otherwise, the method retrieves the fastest non-thinned playback rate. For information about thinning, see About Rate Control.
Receives the fastest playback rate that the object supports.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The object does not support reverse playback. |
| The object does not support thinning. |
For some formats (such as ASF), thinning means dropping all frames that are not I-frames. If a component produces stream data, such as a media source or a demultiplexer, it should pay attention to the fThin parameter and return
If the component processes or receives a stream (most transforms or media sinks), it may ignore this parameter if it does not care whether the stream is thinned. In the Media Session's implementation of rate support, if the transforms do not explicitly support reverse playback, the Media Session will attempt to play back in reverse with thinning, but not without thinning. Therefore, most applications will set fThin to TRUE when using the Media Session for reverse playback.
If eDirection is
Queries whether the object supports a specified playback rate.
-If TRUE, the method queries whether the object supports the playback rate with thinning. Otherwise, the method queries whether the object supports the playback rate without thinning. For information about thinning, see About Rate Control.
The playback rate to query.
If the object does not support the playback rate given in flRate, this parameter receives the closest supported playback rate. If the method returns
The method returns an
Return code | Description |
---|---|
| The object supports the specified rate. |
| The object does not support reverse playback. |
| The object does not support thinning. |
| The object does not support the specified rate. |
Creates an instance of either the sink writer or the source reader.
-To get a reference to this interface, call the CoCreateInstance function. The CLSID is CLSID_MFReadWriteClassFactory. Call the
As an alternative to using this interface, you can call any of the following functions:
Internally, these functions use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates an instance of the sink writer or source reader, given a URL.
-The CLSID of the object to create.
Value | Meaning |
---|---|
| Create the sink writer. The ppvObject parameter receives an |
| Create the source reader. The ppvObject parameter receives an |
A null-terminated string that contains a URL. If clsid is CLSID_MFSinkWriter, the URL specifies the name of the output file. The sink writer creates a new file with this name. If clsid is CLSID_MFSourceReader, the URL specifies the input file for the source reader.
A reference to the
This parameter can be
The IID of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Creates an instance of the sink writer or source reader, given an
The CLSID of the object to create.
Value | Meaning |
---|---|
| Create the sink writer. The ppvObject parameter receives an |
| Create the source reader. The ppvObject parameter receives an |
A reference to the
Value | Meaning |
---|---|
Pointer to a byte stream. If clsid is CLSID_MFSinkWriter, the sink writer writes data to this byte stream. If clsid is CLSID_MFSourceReader, this byte stream provides the source data for the source reader. | |
Pointer to a media sink. Applies only when clsid is CLSID_MFSinkWriter. | |
Pointer to a media source. Applies only when clsid is CLSID_MFSourceReader. |
A reference to the
This parameter can be
The IID of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Notifies a pipeline object to register itself with the Multimedia Class Scheduler Service (MMCSS).
Any pipeline object that creates worker threads should implement this interface.
-Media Foundation provides a mechanism for applications to associate branches in the topology with MMCSS tasks. A topology branch is defined by a source node in the topology and all of the nodes downstream from it. An application registers a topology branch with MMCSS by setting the
When the application registers a topology branch with MMCSS, the Media Session queries every pipeline object in that branch for the
When the application unregisters the topology branch, the Media Session calls UnregisterThreads.
If a pipeline object creates its own worker threads but does not implement this interface, it can cause priority inversions in the Media Foundation pipeline, because high-priority processing threads might be blocked while waiting for the component to process data on a thread with lower priority.
Pipeline objects that do not create worker threads do not need to implement this interface.
In Windows 8, this interface is extended with
Specifies the work queue for the topology branch that contains this object.
- An application can register a branch of the topology to use a private work queue. The Media Session notifies any pipeline object that supports
When the application unregisters the topology branch, the Media Session calls SetWorkQueue again with the value
Notifies the object to register its worker threads with the Multimedia Class Scheduler Service (MMCSS).
-The MMCSS task identifier.
The name of the MMCSS task.
If this method succeeds, it returns
The object's worker threads should register themselves with MMCSS by calling AvSetMmThreadCharacteristics, using the task name and identifier specified in this method.
-Notifies the object to unregister its worker threads from the Multimedia Class Scheduler Service (MMCSS).
-If this method succeeds, it returns
The object's worker threads should unregister themselves from MMCSS by calling AvRevertMmThreadCharacteristics.
-Specifies the work queue for the topology branch that contains this object.
-The identifier of the work queue, or the value
If this method succeeds, it returns
An application can register a branch of the topology to use a private work queue. The Media Session notifies any pipeline object that supports
When the application unregisters the topology branch, the Media Session calls SetWorkQueue again with the value
Notifies a pipeline object to register itself with the Multimedia Class Scheduler Service (MMCSS).
This interface is a replacement for the
Notifies the object to register its worker threads with the Multimedia Class Scheduler Service (MMCSS).
-The MMCSS task identifier. If the value is zero on input, the object should create a new MMCSS task group. See Remarks.
The name of the MMCSS task.
The base priority of the thread.
If this method succeeds, it returns
If the object does not create worker threads, the method should simply return
Otherwise, if the value of *pdwTaskIndex is zero on input, the object should create a new MMCSS task group, register its worker threads with that task, and set *pdwTaskIndex equal to the task identifier.
If the value of *pdwTaskIndex is nonzero on input, the parameter contains an existing MMCSS task identifier. In that case, all worker threads of the object should register themselves for that task by calling AvSetMmThreadCharacteristics.
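The zero/nonzero contract for the task index can be sketched as follows (illustrative Python; `create_task_group` stands in for the real MMCSS registration call and is hypothetical):

```python
def register_threads_ex(task_index: int, create_task_group) -> int:
    """Return the MMCSS task identifier the object's worker threads join."""
    if task_index == 0:
        # Zero on input: create a new task group and hand its identifier
        # back to the caller through the in/out parameter.
        return create_task_group()
    # Nonzero on input: an existing task id; all worker threads join it.
    return task_index
```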
Notifies the object to unregister its worker threads from the Multimedia Class Scheduler Service (MMCSS).
-If this method succeeds, it returns
Specifies the work queue that this object should use for asynchronous work items.
-The work queue identifier.
The base priority for work items.
If this method succeeds, it returns
The object should use the values of dwMultithreadedWorkQueueId and lWorkItemBasePriority when it queues new work items. Use the
Used by the Microsoft Media Foundation proxy/stub DLL to marshal certain asynchronous method calls across process boundaries.
Applications do not use or implement this interface.
-Modifies a topology for use in a Terminal Services environment.
-To use this interface, do the following:
The application must call UpdateTopology before calling
Modifies a topology for use in a Terminal Services environment.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
If the application is running in a Terminal Services client session, call this method before calling
Retrieves a reference to the remote object for which this object is a proxy.
-
Retrieves a reference to the remote object for which this object is a proxy.
-Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves a reference to the object that is hosting this proxy.
-Interface identifier (IID) of the requested interface.
Receives a reference to the requested interface. The caller must release the interface.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets and retrieves Synchronized Accessible Media Interchange (SAMI) styles on the SAMI Media Source.
- To get a reference to this interface, call
Gets the number of styles defined in the SAMI file.
-Gets a list of the style names defined in the SAMI file.
-Gets the number of styles defined in the SAMI file.
-Receives the number of SAMI styles in the file.
If this method succeeds, it returns
Gets a list of the style names defined in the SAMI file.
-Pointer to a
If this method succeeds, it returns
Sets the current style on the SAMI media source.
-Pointer to a null-terminated string containing the name of the style. To clear the current style, pass an empty string (""). To get the list of style names, call
If this method succeeds, it returns
Gets the current style from the SAMI media source.
-Receives a reference to a null-terminated string that contains the name of the style. If no style is currently set, the method returns an empty string. The caller must free the memory for the string by calling CoTaskMemFree.
If this method succeeds, it returns
Represents a media sample, which is a container object for media data. For video, a sample typically contains one video frame. For audio data, a sample typically contains multiple audio samples, rather than a single sample of audio.
A media sample contains zero or more buffers. Each buffer manages a block of memory, and is represented by the
To create a new media sample, call
When you call CopyAllItems, inherited from the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To get attributes from a sample, use the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the presentation time of the sample.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the duration of the sample.
-If the sample contains more than one buffer, the duration includes the data from all of the buffers.
If the retrieved duration is zero, or if the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of buffers in the sample.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To get attributes from a sample, use the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets flags associated with the sample.
Currently no flags are defined. Instead, metadata for samples is defined using attributes. To set attributes on a sample, use the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the presentation time of the sample.
-Receives the presentation time, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sample does not have a presentation time. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the presentation time of the sample.
-The presentation time, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Some pipeline components require samples that have time stamps. Generally the component that generates the data for the sample also sets the time stamp. The Media Session might modify the time stamps.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the duration of the sample.
-Receives the duration, in 100-nanosecond units.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sample does not have a specified duration. |
If the sample contains more than one buffer, the duration includes the data from all of the buffers.
If the retrieved duration is zero, or if the method returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Sets the duration of the sample.
-Duration of the sample, in 100-nanosecond units.
If this method succeeds, it returns
This method succeeds if the duration is negative, although negative durations are probably not valid for most types of data. It is the responsibility of the object that consumes the sample to validate the duration.
The duration can also be zero. This might be valid for some types of data. For example, the sample might contain stream metadata with no buffers.
Until this method is called, the
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the number of buffers in the sample.
-Receives the number of buffers in the sample. A sample might contain zero buffers.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Gets a buffer from the sample, by index.
Note: In most cases, it is safer to use the
A sample might contain more than one buffer. Use the GetBufferByIndex method to enumerate the individual buffers.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Converts a sample with multiple buffers into a sample with a single buffer.
-Receives a reference to the
If the sample contains more than one buffer, this method copies the data from the original buffers into a new buffer, and replaces the original buffer list with the new buffer. The new buffer is returned in the ppBuffer parameter.
If the sample contains a single buffer, this method returns a reference to the original buffer. In typical use, most samples do not contain multiple buffers.
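The buffer-list behavior described above (total length as a sum over buffers; conversion collapsing multiple buffers into one) can be modeled as a toy sketch (illustrative Python, not the COM interface):

```python
class Sample:
    """Toy model of a media sample holding zero or more buffers."""
    def __init__(self, buffers=()):
        self.buffers = [bytes(b) for b in buffers]

    def total_length(self) -> int:
        # GetTotalLength: the sum of the valid data in all buffers.
        return sum(len(b) for b in self.buffers)

    def convert_to_contiguous(self) -> bytes:
        # ConvertToContiguousBuffer: concatenate the buffers in order and
        # replace the original buffer list with the single new buffer.
        if len(self.buffers) > 1:
            self.buffers = [b"".join(self.buffers)]
        return self.buffers[0] if self.buffers else b""
```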
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Adds a buffer to the end of the list of buffers in the sample.
-Pointer to the buffer's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
For uncompressed video data, each buffer should contain a single video frame, and samples should not contain multiple frames. In general, storing multiple buffers in a sample is discouraged.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes a buffer at a specified index from the sample.
-Index of the buffer. To find the number of buffers in the sample, call
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Removes all of the buffers from the sample.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Copies the sample data to a buffer. This method concatenates the valid data from all of the buffers of the sample, in order.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| |
| The buffer is not large enough to contain the data. |
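The copy semantics described above, including the too-small-buffer failure, can be modeled in a few lines of Python (an illustrative sketch; the names and the exception are stand-ins for the real method and its error code):

```python
def copy_to_buffer(buffers, dest):
    """Concatenate the valid data from all buffers, in order, into the
    caller-supplied destination, mirroring the behavior described above.
    Raises ValueError where the real method reports a too-small buffer."""
    data = b"".join(buffers)          # valid data from every buffer, in order
    if len(data) > len(dest):
        raise ValueError("destination buffer too small")
    dest[:len(data)] = data
    return len(data)                  # total length of the copied data
```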
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Callback interface to get media data from the sample-grabber sink.
-The sample-grabber sink enables an application to get data from the Media Foundation pipeline without implementing a custom media sink. To use the sample-grabber sink, the application must perform the following steps:
Implement the
Call
Create a topology that includes an output node with the sink's
Pass this topology to the Media Session.
During playback, the sample-grabber sink calls methods on the application's callback.
You cannot use the sample-grabber sink to get protected content.
-Extends the
This callback interface is used with the sample-grabber sink. It extends the
The OnProcessSampleEx method adds a parameter that contains the attributes for the media sample. You can use the attributes to get information about the sample, such as field dominance and telecine flags.
To use this interface, do the following:
Begins an asynchronous request to write a media sample to the stream.
-When the sample has been written to the stream, the callback object's
Begins an asynchronous request to write a media sample to the stream.
-A reference to the
A reference to the
A reference to the
If this method succeeds, it returns
When the sample has been written to the stream, the callback object's
Completes an asynchronous request to write a media sample to the stream.
-A reference to the
If this method succeeds, it returns
Call this method when the
Provides encryption for media data inside the protected media path (PMP).
-
Retrieves the version of sample protection that the component implements on input.
-
Retrieves the version of sample protection that the component implements on output.
-
Retrieves the version of sample protection that the component implements on input.
-Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the version of sample protection that the component implements on output.
-Receives a member of the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the sample protection certificate.
-Specifies the version number of the sample protection scheme for which to receive a certificate. The version number is specified as a
Receives a reference to a buffer containing the certificate. The caller must free the memory for the buffer by calling CoTaskMemFree.
Receives the size of the ppCert buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
For certain version numbers of sample protection, the downstream component must provide a certificate. Components that do not support these version numbers can return E_NOTIMPL.
-
Retrieves initialization information for sample protection from the upstream component.
-Specifies the version number of the sample protection scheme. The version number is specified as a
Identifier of the output stream. The identifier corresponds to the output stream identifier returned by the
Pointer to a certificate provided by the downstream component.
Size of the certificate, in bytes.
Receives a reference to a buffer that contains the initialization information for the downstream component. The caller must free the memory for the buffer by calling CoTaskMemFree.
Receives the size of the ppbSeed buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
This method must be implemented by the upstream component. The method fails if the component does not support the requested sample protection version. Downstream components do not implement this method and should return E_NOTIMPL.
-
Initializes sample protection on the downstream component.
-Specifies the version number of the sample protection scheme. The version number is specified as a
Identifier of the input stream. The identifier corresponds to the output stream identifier returned by the
Pointer to a buffer that contains the initialization data provided by the upstream component. To retrieve this buffer, call
Size of the pbSeed buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Persists media data from a source byte stream to an application-provided byte stream.
The byte stream used for HTTP download implements this interface. To get a reference to this interface, call
Retrieves the percentage of content saved to the provided byte stream.
-
Begins saving a Windows Media file to the application's byte stream.
-Pointer to the
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
When the operation completes, the callback object's
Completes the operation started by
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Cancels the operation started by
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Retrieves the percentage of content saved to the provided byte stream.
-Receives the percentage of completion.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Begins an asynchronous request to create an object from a URL.
When the Source Resolver creates a media source from a URL, it passes the request to a scheme handler. The scheme handler might create a media source directly from the URL, or it might return a byte stream. If it returns a byte stream, the source resolver uses a byte-stream handler to create the media source from the byte stream.
-The dwFlags parameter must contain the
If the
The following table summarizes the behavior of these two flags when passed to this method:
Flag | Object created |
---|---|
Media source or byte stream | |
Byte stream |
The
When the operation completes, the scheme handler calls the
Begins an asynchronous request to create an object from a URL.
When the Source Resolver creates a media source from a URL, it passes the request to a scheme handler. The scheme handler might create a media source directly from the URL, or it might return a byte stream. If it returns a byte stream, the source resolver uses a byte-stream handler to create the media source from the byte stream.
- The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Cannot open the URL with the requested access (read or write). |
| Unsupported byte stream type. |
The dwFlags parameter must contain the
If the
The following table summarizes the behavior of these two flags when passed to this method:
Flag | Object created |
---|---|
Media source or byte stream | |
Byte stream |
The
When the operation completes, the scheme handler calls the
Completes an asynchronous request to create an object from a URL.
-Pointer to the
Receives a member of the
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. |
Call this method from inside the
Cancels the current request to create an object from a URL.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
You can use this method to cancel a previous call to BeginCreateObject. Because that method is asynchronous, however, it might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
The operation cannot be canceled if BeginCreateObject returns
Establishes a one-way secure channel between two objects.
-
Retrieves the client's certificate.
-Receives a reference to a buffer allocated by the object. The buffer contains the client's certificate. The caller must release the buffer by calling CoTaskMemFree.
Receives the size of the ppCert buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Passes the encrypted session key to the client.
-Pointer to a buffer that contains the encrypted session key. This parameter can be
Size of the pbEncryptedSessionKey buffer, in bytes.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
For a particular seek position, gets the two nearest key frames.
-If an application seeks to a non-key frame, the decoder must start decoding from the previous key frame. This can increase latency, because several frames might get decoded before the requested frame is reached. To reduce latency, an application can call this method to find the two key frames that are closest to the desired time, and then seek to one of those key frames.
-For a particular seek position, gets the two nearest key frames.
-A reference to a
The seek position. The units for this parameter are specified by pguidTimeFormat.
Receives the position of the nearest key frame that appears earlier than pvarStartPosition. The units for this parameter are specified by pguidTimeFormat.
Receives the position of the nearest key frame that appears later than pvarStartPosition. The units for this parameter are specified by pguidTimeFormat.
This method can return one of these values.
Return code | Description |
---|---|
| The method succeeded. |
| The time format specified in pguidTimeFormat is not supported. |
If an application seeks to a non-key frame, the decoder must start decoding from the previous key frame. This can increase latency, because several frames might get decoded before the requested frame is reached. To reduce latency, an application can call this method to find the two key frames that are closest to the desired time, and then seek to one of those key frames.
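The previous/next key-frame lookup can be sketched with a sorted list of key-frame positions (an illustrative Python model, not the actual interface; the real method obtains these values from the media source):

```python
import bisect

def nearest_key_frames(key_frames, position):
    """Return (previous, next): the nearest key frame at or before
    `position` and the nearest one after it. `key_frames` is a sorted
    list of positions in the caller's time format."""
    i = bisect.bisect_right(key_frames, position)
    previous = key_frames[i - 1] if i > 0 else None
    nxt = key_frames[i] if i < len(key_frames) else None
    return previous, nxt
```

Seeking to `previous` lets the decoder start exactly on a key frame instead of decoding forward through several intermediate frames.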
-Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Called by the media pipeline to get information about a transform provided by the sensor transform.
-The index of the transform for which information is being requested. In the current release, this value will always be 0.
Gets the identifier for the transform.
The attribute store to be populated.
A collection of
If this method succeeds, it returns
Implemented by the Sequencer Source. The sequencer source enables an application to create a sequence of topologies. To create the sequencer source, call
Adds a topology to the end of the queue.
-Pointer to the
A combination of flags from the
Receives the sequencer element identifier for this topology.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The source topology node is missing one of the following attributes: |
The sequencer plays topologies in the order they are queued. You can queue as many topologies as you want to preroll.
The application must indicate to the sequencer when it has queued the last topology on the Media Session. To specify the last topology, set the SequencerTopologyFlags_Last flag in the dwFlags parameter when you append the topology. The sequencer uses this information to end playback with the pipeline. Otherwise, the sequencer waits indefinitely for a new topology to be queued.
-
Deletes a topology from the queue.
-The sequencer element identifier of the topology to delete.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Maps a presentation descriptor to its associated sequencer element identifier and the topology it represents.
-Pointer to the
Receives the sequencer element identifier. This value is assigned by the sequencer source when the application calls
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The presentation descriptor is not valid. |
| This segment was canceled. |
The topology returned in ppTopology is the original topology that the application specified in AppendTopology. The source nodes in this topology contain references to the native sources. Do not queue this topology on the Media Session. Instead, call
Updates a topology in the queue.
-Sequencer element identifier of the topology to update.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The sequencer source has been shut down. |
This method is asynchronous. When the operation is completed, the sequencer source sends an
Updates the flags for a topology in the queue.
-Sequencer element identifier of the topology to update.
Bitwise OR of flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Queries an object for a specified service interface.
-A service is an interface that is exposed by one object but might be implemented by another object. The GetService method is equivalent to QueryInterface, with the following difference: when QueryInterface retrieves a reference to an interface, it is guaranteed that you can query the returned interface and get back the original interface. The GetService method does not make this guarantee, because the retrieved interface might be implemented by a separate object.
The
Retrieves a service interface.
-The service identifier (SID) of the service. For a list of service identifiers, see Service Interfaces.
The interface identifier (IID) of the interface being requested.
Receives the interface reference. The caller must release the interface.
Applies to: desktop apps | Metro style apps
Retrieves a service interface.
-The service identifier (SID) of the service. For a list of service identifiers, see Service Interfaces.
Exposed by some Media Foundation objects that must be explicitly shut down.
-The following types of object expose
Any component that creates one of these objects is responsible for calling Shutdown on the object before releasing the object. Typically, applications do not create any of these objects directly, so it is not usually necessary to use this interface in an application.
To obtain a reference to this interface, call QueryInterface on the object.
If you are implementing a custom object, your object can expose this interface, but only if you can guarantee that your application will call Shutdown.
Media sources, media sinks, and synchronous MFTs should not implement this interface, because the Media Foundation pipeline will not call Shutdown on these objects. Asynchronous MFTs must implement this interface.
This interface is not related to the
Some Media Foundation interfaces define a Shutdown method, which serves the same purpose as
Queries the status of an earlier call to the
Until Shutdown is called, the GetShutdownStatus method returns
If an object's Shutdown method is asynchronous, pStatus might receive the value
Shuts down a Media Foundation object and releases all resources associated with the object.
-If this method succeeds, it returns
The
Queries the status of an earlier call to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| The Shutdown method has not been called on this object. |
Until Shutdown is called, the GetShutdownStatus method returns
If an object's Shutdown method is asynchronous, pStatus might receive the value
Provides a method that allows content protection systems to get the procedure address of a function in the signed library. This method provides the same functionality as GetProcAddress which is not available to Windows Store apps.
-See
Gets the procedure address of the specified function in the signed library.
-The entry point name in the DLL that specifies the function.
Receives the address of the entry point.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
See
Controls the master volume level of the audio session associated with the streaming audio renderer (SAR) and the audio capture source.
The SAR and the audio capture source expose this interface as a service. To get a reference to the interface, call
To control the volume levels of individual channels, use the
Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation). For each channel, the attenuation level is the product of:
The master volume level of the audio session.
The volume level of the channel.
For example, if the master volume is 0.8 and the channel volume is 0.5, the attenuation for that channel is 0.8 × 0.5 = 0.4. Volume levels can exceed 1.0 (positive gain), but the audio engine clips any audio samples that exceed zero decibels. To change the volume level of individual channels, use the
Use the following formula to convert the volume level to the decibel (dB) scale:
Attenuation (dB) = 20 * log10(Level)
For example, a volume level of 0.50 represents 6.02 dB of attenuation.
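The two calculations above, the per-channel product and the decibel conversion, can be checked with a short Python sketch (the helper names are hypothetical, not part of the API):

```python
import math

def channel_attenuation(master, channel):
    # Per-channel attenuation is the product of the session's master
    # level and the channel's own level, e.g. 0.8 * 0.5 = 0.4.
    return master * channel

def to_decibels(level):
    # Attenuation (dB) = 20 * log10(Level); 0.50 is about -6.02 dB.
    return 20 * math.log10(level)
```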
-
Retrieves the master volume level.
-If an external event changes the master volume, the audio renderer sends an
Queries whether the audio is muted.
-Calling
Sets the master volume level.
-Volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
Events outside of the application can change the master volume level. For example, the user can change the volume from the system volume-control program (SndVol). If an external event changes the master volume, the audio renderer sends an
Retrieves the master volume level.
-Receives the volume level. Volume is expressed as an attenuation level, where 0.0 indicates silence and 1.0 indicates full volume (no attenuation).
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
If an external event changes the master volume, the audio renderer sends an
Mutes or unmutes the audio.
-Specify TRUE to mute the audio, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
This method does not change the volume level returned by the
Queries whether the audio is muted.
-Receives a Boolean value. If TRUE, the audio is muted; otherwise, the audio is not muted.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The audio renderer is not initialized. |
| The audio renderer was removed from the pipeline. |
Calling
Implemented by the Microsoft Media Foundation sink writer object.
-To create the sink writer, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Adds a stream to the sink writer.
-A reference to the
Receives the zero-based index of the new stream.
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Sets the input format for a stream on the sink writer.
-The zero-based index of the stream. The index is received by the pdwStreamIndex parameter of the
A reference to the
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The underlying media sink does not support the format, no conversion is possible, or a dynamic format change is not possible. |
| The dwStreamIndex parameter is invalid. |
| Could not find an encoder for the encoded format. |
The input format does not have to match the target format that is written to the media sink. If the formats do not match, the method attempts to load an encoder that can encode from the input format to the target format.
After streaming begins, that is, after the first call to
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Initializes the sink writer for writing.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
Call this method after you configure the input streams and before you send any data to the sink writer.
You must call BeginWriting before calling any of the following methods:
The underlying media sink must have at least one input stream. Otherwise, BeginWriting returns
If BeginWriting succeeds, any further calls to BeginWriting return
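The call-ordering rules above (no samples before BeginWriting, no second call to BeginWriting) can be captured in a toy state machine (an illustrative Python model, not the sink writer itself; the exceptions stand in for the error codes):

```python
class SinkWriterModel:
    """Toy model of the ordering rules: writing a sample before
    BeginWriting fails, and calling BeginWriting twice fails."""
    def __init__(self):
        self.writing = False
        self.samples = []

    def begin_writing(self):
        if self.writing:
            raise RuntimeError("BeginWriting already called")
        self.writing = True

    def write_sample(self, stream_index, sample):
        if not self.writing:
            raise RuntimeError("call BeginWriting first")
        self.samples.append((stream_index, sample))
```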
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Delivers a sample to the sink writer.
-The zero-based index of the stream for this sample.
A reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
You must call
By default, the sink writer limits the rate of incoming data by blocking the calling thread inside the WriteSample method. This prevents the application from delivering samples too quickly. To disable this behavior, set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Indicates a gap in an input stream.
-The zero-based index of the stream.
The position in the stream where the gap in the data occurs. The value is given in 100-nanosecond units, relative to the start of the stream.
If this method succeeds, it returns
For video, call this method once for each missing frame. For audio, call this method at least once per second during a gap in the audio. Set the
Internally, this method calls
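For the once-per-missing-frame guidance above, the tick positions in 100-nanosecond units can be computed like this (an illustrative helper, not part of the API; the frame rate and gap are example inputs):

```python
def gap_ticks(gap_start_seconds, gap_seconds, fps):
    """Timestamps, in 100-ns units relative to the stream start, at
    which to signal one gap per missing video frame."""
    frame = 10_000_000 // fps                 # frame duration in 100-ns units
    start = int(gap_start_seconds * 10_000_000)
    missing = int(gap_seconds * fps)          # number of missing frames
    return [start + i * frame for i in range(missing)]
```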
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Places a marker in the specified stream.
-The zero-based index of the stream.
Pointer to an application-defined value. The value of this parameter is returned to the caller in the pvContext parameter of the caller's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
To use this method, you must provide an asynchronous callback when you create the sink writer. Otherwise, the method returns
Markers provide a way to be notified when the media sink consumes all of the samples in a stream up to a certain point. The media sink does not process the marker until it has processed all of the samples that came before the marker. When the media sink processes the marker, the sink writer calls the application's OnMarker method. When the callback is invoked, you know that the sink has consumed all of the previous samples for that stream.
For example, to change the format midstream, call PlaceMarker at the point where the format changes. When OnMarker is called, it is safe to call
Internally, this method calls
Note: The pvContext parameter of the
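The marker ordering described above, where a marker is not processed until every earlier sample has been consumed, can be modeled with a FIFO queue (illustrative Python only; the class and method names are hypothetical):

```python
from collections import deque

class StreamQueueModel:
    """Toy model: items are consumed strictly in order, so a marker
    'fires' only after all samples queued before it."""
    def __init__(self):
        self.queue = deque()
        self.fired = []          # marker contexts, in firing order

    def write_sample(self, sample):
        self.queue.append(("sample", sample))

    def place_marker(self, context):
        self.queue.append(("marker", context))

    def process_all(self):
        consumed = []
        while self.queue:
            kind, value = self.queue.popleft()
            if kind == "sample":
                consumed.append(value)
            else:
                self.fired.append(value)   # OnMarker would be invoked here
        return consumed
```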
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Notifies the media sink that a stream has reached the end of a segment.
-The zero-based index of a stream, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
You must call
This method sends an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Flushes one or more streams.
-The zero-based index of the stream to flush, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The request is invalid. |
You must call
For each stream that is flushed, the sink writer drops all pending samples, flushes the encoder, and sends an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Completes all writing operations on the sink writer.
-If this method succeeds, it returns
Call this method after you send all of the input samples to the sink writer. The method performs any operations needed to create the final output from the media sink.
If you provide a callback interface when you create the sink writer, this method completes asynchronously. When the operation completes, the
Internally, this method calls
After this method is called, the following methods will fail:
If you do not call Finalize, the output from the media sink might be incomplete or invalid. For example, required file headers might be missing from the output file.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Queries the underlying media sink or encoder for an interface.
-The zero-based index of a stream to query, or
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
If this method succeeds, it returns
If the dwStreamIndex parameter equals
If the input and output types of the sink are identical and compressed, it's possible that no encoding is required and the video encoder will not be instantiated. In that case, GetServiceForStream will return
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets statistics about the performance of the sink writer.
-The zero-based index of a stream to query, or
A reference to an
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid stream number. |
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Callback interface for the Microsoft Media Foundation sink writer.
-Set the callback reference by setting the
The callback methods can be called from any thread, so an object that implements this interface must be thread-safe.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Extends the
This interface provides a mechanism for apps that use
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when the transform chain in the
Returns an
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when an asynchronous error occurs with the
Returns an
Provides additional functionality on the sink writer for dynamically changing the media type and encoder configuration.
-The Sink Writer implements this interface in Windows 8.1. To get a reference to this interface, call QueryInterface on the
Dynamically changes the target media type that Sink Writer is encoding to.
-Specifies the stream index.
The new media format to encode to.
The new set of encoding parameters to configure the encoder with. If not specified, previously provided parameters will be used.
If this method succeeds, it returns
The new media type must be supported by the media sink being used and by the encoder MFTs installed on the system. -
-Dynamically updates the encoder configuration with a collection of new encoder settings.
-Specifies the stream index.
A set of encoding parameters to configure the encoder with.
If this method succeeds, it returns
The encoder will be configured with these settings after all previously queued input media samples have been sent to it through
Extends the
The Sink Writer implements this interface in Windows 8. To get a reference to this interface, call QueryInterface on the Sink Writer.
-Gets a reference to a Media Foundation transform (MFT) for a specified stream.
-The zero-based index of a stream.
The zero-based index of the MFT to retrieve.
Receives a reference to a
Receives a reference to the
If this method succeeds, it returns
Represents a buffer which contains media data for a
Gets a value that indicates if Append, AppendByteStream, or Remove is in process.
-Gets the buffered time range.
-Gets or sets the timestamp offset for media segments appended to the
Gets or sets the timestamp for the start of the append window.
-Gets or sets the timestamp for the end of the append window.
-Gets a value that indicates if Append, AppendByteStream, or Remove is in process.
-true if Append, AppendByteStream, or Remove is in process; otherwise, false.
Gets the buffered time range.
-The buffered time range.
If this method succeeds, it returns
Gets the timestamp offset for media segments appended to the
The timestamp offset.
Sets the timestamp offset for media segments appended to the
If this method succeeds, it returns
Gets the timestamp for the start of the append window.
-The timestamp for the start of the append window.
Sets the timestamp for the start of the append window.
-The timestamp for the start of the append window.
If this method succeeds, it returns
Gets the timestamp for the end of the append window.
-The timestamp for the end of the append window.
Sets the timestamp for the end of the append window.
-The timestamp for the end of the append window.
Appends the specified media segment to the
If this method succeeds, it returns
Appends the media segment from the specified byte stream to the
If this method succeeds, it returns
Aborts the processing of the current media segment.
-If this method succeeds, it returns
Removes the media segments defined by the specified time range from the
If this method succeeds, it returns
Represents a collection of
Gets the number of
Gets the number of
The number of source buffers in the list.
Gets the
The source buffer.
Provides functionality for raising events associated with
Used to indicate that the source buffer has started updating.
-Used to indicate that the source buffer has been aborted.
-Used to indicate that an error has occurred with the source buffer.
-Used to indicate that the source buffer is updating.
-Used to indicate that the source buffer has finished updating.
-Callback interface to receive notifications from a network source on the progress of an asynchronous open operation.
-
Called by the network source when the open operation begins or ends.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The network source calls this method with the following event types.
For more information, see How to Get Events from the Network Source.
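The return codes throughout this reference are HRESULT values, where zero or positive values indicate success and negative values indicate failure. A minimal sketch of that convention (the `Hresult`, `Succeeded`, and `Failed` names are illustrative stand-ins; the real definitions live in `<winerror.h>` as the SUCCEEDED/FAILED macros):

```cpp
#include <cassert>
#include <cstdint>

// HRESULTs encode success in the sign bit: zero or positive means success
// (S_OK is 0), negative means failure (E_FAIL is 0x80004005).
using Hresult = int32_t;
constexpr Hresult kS_OK = 0;
constexpr Hresult kE_FAIL = static_cast<Hresult>(0x80004005u); // negative as int32_t

// Mirrors the SUCCEEDED/FAILED macros from <winerror.h>.
constexpr bool Succeeded(Hresult hr) { return hr >= 0; }
constexpr bool Failed(Hresult hr) { return hr < 0; }
```

This is why the documentation's tables list both a success row ("The method succeeded.") and distinct failure rows: each failure is a different negative HRESULT.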
-Implemented by the Microsoft Media Foundation source reader object.
-To create the source reader, call one of the following functions:
Alternatively, use the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
In Windows 8, this interface is extended with
Queries whether a stream is selected.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
Receives TRUE if the stream is selected and will generate data. Receives
If this method succeeds, it returns
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Selects or deselects one or more streams.
-The stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
Specify TRUE to select streams or
If this method succeeds, it returns
There are two common uses for this method:
For an example of deselecting a stream, see Tutorial: Decoding Audio.
If a stream is deselected, the
Stream selection does not affect how the source reader loads or unloads decoders in memory. In particular, deselecting a stream does not force the source reader to unload the decoder for that stream.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets a format that is supported natively by the media source.
-Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method queries the underlying media source for its native output format. Potentially, each source stream can produce more than one output format. Use the dwMediaTypeIndex parameter to loop through the available formats. Generally, file sources offer just one format per stream, but capture devices might offer several formats.
The method returns a copy of the media type, so it is safe to modify the object received in the ppMediaType parameter.
To set the output type for a stream, call the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
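The enumeration pattern described above (looping dwMediaTypeIndex through the available formats) can be sketched with a mock. `MockReader`, `kOk`, and `kNoMoreTypes` below are illustrative stand-ins, not real Media Foundation names; the real call is GetNativeMediaType returning MF_E_NO_MORE_TYPES when the index runs past the last format:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-ins for HRESULT codes.
constexpr long kOk = 0;
constexpr long kNoMoreTypes = 1; // stands in for MF_E_NO_MORE_TYPES

// Mock stream offering several native formats, the way a capture device might.
struct MockReader {
    std::vector<std::string> formats{"NV12", "YUY2", "RGB32"};

    // Mirrors GetNativeMediaType(dwStreamIndex, dwMediaTypeIndex, &type):
    // fails once the media-type index runs past the available formats.
    long GetNativeMediaType(unsigned stream, unsigned typeIndex,
                            std::string* out) const {
        (void)stream;
        if (typeIndex >= formats.size()) return kNoMoreTypes;
        *out = formats[typeIndex];
        return kOk;
    }
};

// Loop dwMediaTypeIndex from 0 until the source reports no more types.
std::vector<std::string> EnumerateNativeTypes(const MockReader& reader,
                                              unsigned stream) {
    std::vector<std::string> result;
    std::string type;
    for (unsigned i = 0; reader.GetNativeMediaType(stream, i, &type) == kOk; ++i)
        result.push_back(type);
    return result;
}
```

As the remarks note, file sources typically yield a single format per stream, so this loop usually matters most for capture devices.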
-Gets the current media type for a stream.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
Receives a reference to the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
| One or more sample requests are still pending. |
| The dwStreamIndex parameter is invalid. |
| Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
Audio resampling support was added to the source reader with Windows 8. In versions of Windows prior to Windows 8, the source reader does not support audio resampling. If you need to resample the audio in versions of Windows earlier than Windows 8, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Seeks to a new position in the media source.
-A
Value | Meaning |
---|---|
| 100-nanosecond units. |
Some media sources might support additional values.
The position from which playback will be started. The units are specified by the guidTimeFormat parameter. If the guidTimeFormat parameter is GUID_NULL, set the variant type to VT_I8.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more sample requests are still pending. |
The SetCurrentPosition method does not guarantee exact seeking. The accuracy of the seek depends on the media content. If the media content contains a video stream, the SetCurrentPosition method typically seeks to the nearest key frame before the desired position. The distance between key frames depends on several factors, including the encoder implementation, the video content, and the particular encoding settings used to encode the content. The distance between key frames can vary within a single video file (for example, depending on scene complexity).
After seeking, the application should call
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
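Positions and timestamps throughout this API are expressed in 100-nanosecond units (10,000,000 units per second). A small conversion helper (the function names here are illustrative, not part of the API):

```cpp
#include <cassert>
#include <cstdint>

// Media Foundation time values count 100-nanosecond "ticks":
// 1 second = 10,000,000 ticks.
constexpr int64_t kTicksPerSecond = 10'000'000;

// Convert a position in seconds to the 100-ns units expected by
// methods such as SetCurrentPosition.
constexpr int64_t SecondsToTicks(double seconds) {
    return static_cast<int64_t>(seconds * kTicksPerSecond);
}

// Convert a 100-ns timestamp (e.g. from ReadSample) back to seconds.
constexpr double TicksToSeconds(int64_t ticks) {
    return static_cast<double>(ticks) / kTicksPerSecond;
}
```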
-Reads the next sample from the media source.
-The stream to pull data from. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| Get the next available sample, regardless of which stream. |
A bitwise OR of zero or more flags from the
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the
Receives the time stamp of the sample, or the time of the stream event indicated in pdwStreamFlags. The time is given in 100-nanosecond units.
Receives a reference to the
If the requested stream is not selected, the return code is
This method can complete synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
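The synchronous usage described above (call ReadSample in a loop, checking the returned stream flags) can be sketched with a mock. `MockSyncReader` and `kEndOfStream` are illustrative stand-ins for the real reader and the MF_SOURCE_READERF_ENDOFSTREAM flag; note that a call can return flags without returning a sample:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <string>

// Illustrative stand-in for the MF_SOURCE_READERF_ENDOFSTREAM flag.
constexpr uint32_t kEndOfStream = 0x1;

// Mock synchronous reader: yields queued samples, then signals end of stream.
struct MockSyncReader {
    std::deque<std::string> samples{"s0", "s1", "s2"};

    // Mirrors ReadSample's [out] parameters: stream flags plus an
    // optional sample (empty when the call carries only flags).
    void ReadSample(uint32_t* flags, std::string* sample) {
        if (samples.empty()) {
            *flags = kEndOfStream;
            sample->clear();
            return;
        }
        *flags = 0;
        *sample = samples.front();
        samples.pop_front();
    }
};

// Typical synchronous loop: read until the end-of-stream flag appears,
// skipping flags-only calls that deliver no sample.
int DrainAll(MockSyncReader reader) {
    int count = 0;
    for (;;) {
        uint32_t flags = 0;
        std::string sample;
        reader.ReadSample(&flags, &sample);
        if (!sample.empty()) ++count; // flags-only calls carry no sample
        if (flags & kEndOfStream) break;
    }
    return count;
}
```

In the real asynchronous mode the same flag checking happens inside the application's OnReadSample callback rather than in a loop.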
Flushes one or more streams.
-The stream to flush. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
If this method succeeds, it returns
The Flush method discards all queued samples and cancels all pending sample requests.
This method can complete either synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In synchronous mode, the method blocks until the operation is complete.
In asynchronous mode, the application's
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Queries the underlying media source or decoder for an interface.
-The stream or object to query. If the value is
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Gets an attribute from the underlying media source.
-The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
A reference to a
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Pointer to the
Call CoInitialize(Ex) and
By default, when the application releases the source reader, the source reader shuts down the media source by calling
To change this default behavior, set the
When using the Source Reader, do not call any of the following methods on the media source:
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
Windows Phone 8.1: This API is supported.
-A reference to the
Pointer to the
Call CoInitialize(Ex) and
Internally, the source reader calls the
This function is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets a format that is supported natively by the media source.
-Specifies which stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the media type to retrieve.
Receives a reference to the
This method queries the underlying media source for its native output format. Potentially, each source stream can produce more than one output format. Use the dwMediaTypeIndex parameter to loop through the available formats. Generally, file sources offer just one format per stream, but capture devices might offer several formats.
The method returns a copy of the media type, so it is safe to modify the object received in the ppMediaType parameter.
To set the output type for a stream, call the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Selects or deselects one or more streams.
-The stream to set. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
Specify TRUE to select streams or
If this method succeeds, it returns
There are two common uses for this method:
For an example of deselecting a stream, see Tutorial: Decoding Audio.
If a stream is deselected, the
Stream selection does not affect how the source reader loads or unloads decoders in memory. In particular, deselecting a stream does not force the source reader to unload the decoder for that stream.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
| One or more sample requests are still pending. |
| The dwStreamIndex parameter is invalid. |
| Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
The source reader does not support audio resampling. If you need to resample the audio, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Sets the media type for a stream.
This media type defines the format that the Source Reader produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one decoder was found for the native stream type, but the type specified by pMediaType was rejected. |
| One or more sample requests are still pending. |
| The dwStreamIndex parameter is invalid. |
| Could not find a decoder for the native stream type. |
For each stream, you can set the media type to any of the following:
The source reader does not support audio resampling. If you need to resample the audio, you can use the Audio Resampler DSP.
If you set the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Seeks to a new position in the media source.
-The SetCurrentPosition method does not guarantee exact seeking. The accuracy of the seek depends on the media content. If the media content contains a video stream, the SetCurrentPosition method typically seeks to the nearest key frame before the desired position. The distance between key frames depends on several factors, including the encoder implementation, the video content, and the particular encoding settings used to encode the content. The distance between key frames can vary within a single video file (for example, depending on scene complexity).
After seeking, the application should call
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets the current media type for a stream.
-The stream to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
Receives a reference to the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Reads the next sample from the media source.
-The stream to pull data from. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| Get the next available sample, regardless of which stream. |
A bitwise OR of zero or more flags from the
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the
Receives the time stamp of the sample, or the time of the stream event indicated in pdwStreamFlags. The time is given in 100-nanosecond units.
Receives a reference to the
If the requested stream is not selected, the return code is MF_E_INVALIDREQUEST. See
This method can complete synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In asynchronous mode, all of the [out] parameters must be NULL.
In synchronous mode, if the dwStreamIndex parameter is
This method can return flags in the pdwStreamFlags parameter without returning a media sample in ppSample. Therefore, the ppSample parameter can receive a
If there is a gap in the stream, pdwStreamFlags receives the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Flushes one or more streams.
-The stream to flush. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| All streams. |
If this method succeeds, it returns
The Flush method discards all queued samples and cancels all pending sample requests.
This method can complete either synchronously or asynchronously. If you provide a callback reference when you create the source reader, the method is asynchronous. Otherwise, the method is synchronous. For more information about setting the callback reference, see
In synchronous mode, the method blocks until the operation is complete.
In asynchronous mode, the application's
Note: In Windows 7, there was a bug in the implementation of this method that caused OnFlush to be called before the flush operation completed. A hotfix is available that fixes this bug. For more information, see http://support.microsoft.com/kb/979567.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Queries the underlying media source or decoder for an interface.
-The stream or object to query. If the value is
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A service identifier
The interface identifier (IID) of the interface being requested.
Receives a reference to the requested interface. The caller must release the interface.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets an attribute from the underlying media source.
-The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Applies to: desktop apps | Metro style apps
Gets an attribute from the underlying media source.
-The stream or object to query. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
| The media source. |
A
Otherwise, if the dwStreamIndex parameter specifies a stream, guidAttribute specifies a stream descriptor attribute. For a list of values, see Stream Descriptor Attributes.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Callback interface for the Microsoft Media Foundation source reader.
-Use the
The callback methods can be called from any thread, so an object that implements this interface must be thread-safe.
If you do not specify a callback reference, the source reader operates synchronously.
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
The pSample parameter might be
If there is a gap in the stream, dwStreamFlags contains the
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the
Returns an
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-Called when the source reader receives certain events from the media source.
-For stream events, the value is the zero-based index of the stream that sent the event. For source events, the value is
A reference to the
Returns an
In the current implementation, the source reader uses this method to forward the following events to the application:
This interface is available on Windows Vista if Platform Update Supplement for Windows Vista is installed.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Extends the
This interface provides a mechanism for apps that use
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when the transform chain in the
Returns an
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when an asynchronous error occurs with the
Returns an
Extends the
The Source Reader implements this interface in Windows 8. To get a reference to this interface, call QueryInterface on the Source Reader.
-Sets the native media type for a stream on the media source.
-A reference to the
Receives a bitwise OR of zero or more of the following flags.
Value | Meaning |
---|---|
| All effects were removed from the stream. |
| The current output type changed. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
This method sets the output type that is produced by the media source. Unlike the
In asynchronous mode, this method fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
This method can trigger a change in the output format for the stream. If so, the
This method is useful with audio and video capture devices, because a device might support several output formats. This method enables the application to choose the device format before decoders and other transforms are added.
-Adds a transform, such as an audio or video effect, to a stream.
-The stream to configure. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
A reference to one of the following:
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The transform does not support the current stream format, and no conversion was possible. See Remarks for more information. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
This method attempts to add the transform at the end of the current processing chain.
To use this method, make the following sequence of calls:
The AddTransformForStream method will not insert a decoder into the processing chain. If the native stream format is encoded, and the transform requires an uncompressed format, call SetCurrentMediaType to set the uncompressed format (step 1 in the previous list). However, the method will insert a video processor to convert between RGB and YUV formats, if required.
The method fails if the source reader was configured with the
In asynchronous mode, the method also fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
You can add a transform at any time during streaming. However, the method does not flush or drain the pipeline before inserting the transform. Therefore, if data is already in the pipeline, the next sample is not guaranteed to have the transform applied.
-Removes all of the Media Foundation transforms (MFTs) for a specified stream, with the exception of the decoder.
-The stream for which to remove the MFTs. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| Invalid request. |
| The dwStreamIndex parameter is invalid. |
Calling this method can reset the current output type for the stream. To get the new output type, call
In asynchronous mode, this method fails if a sample request is pending. In that case, wait for the OnReadSample callback to be invoked before calling the method. For more information about using the Source Reader in asynchronous mode, see
Gets a reference to a Media Foundation transform (MFT) for a specified stream.
-The stream to query for the MFT. The value can be any of the following.
Value | Meaning |
---|---|
| The zero-based index of a stream. |
| The first video stream. |
| The first audio stream. |
The zero-based index of the MFT to retrieve.
Receives a
Receives a reference to the
This method can return one of these values.
Return code | Description |
---|---|
| Success. |
| The dwTransformIndex parameter is out of range. |
| The dwStreamIndex parameter is invalid. |
You can use this method to configure an MFT after it is inserted into the processing chain. Do not use the reference returned in ppTransform to set media types on the MFT or to process data. In particular, calling any of the following
If a decoder is present, it appears at index position zero.
To avoid losing any data, you should drain the source reader before calling this method. For more information, see Draining the Data Pipeline.
-Creates a media source from a URL or a byte stream. The Source Resolver implements this interface. To create the source resolver, call
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags. See remarks below.
Pointer to the
Receives a member of the
Receives a reference to the object's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| The URL scheme is not supported. |
The dwFlags parameter must contain either the
It is recommended that you do not set
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| This byte stream is not supported. |
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Begins an asynchronous request to create a media source or a byte stream from a URL.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives an
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| The URL scheme is not supported. |
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
When the operation completes, the source resolver calls the
The usage of the pProps parameter depends on the implementation of the media source.
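The Begin/End calling pattern described above can be sketched with a mock. `MockResolver` and `ResolveSync` are illustrative stand-ins, not real Media Foundation types; in particular, this sketch invokes the callback immediately, whereas the real resolver calls IMFAsyncCallback::Invoke from a worker thread and passes an IMFAsyncResult that the application hands to EndCreateObjectFromURL:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Minimal mock of the Begin.../callback/End... pattern used by
// BeginCreateObjectFromURL and EndCreateObjectFromURL.
struct MockResolver {
    std::string pending;

    // Begin...: record the request and (for this sketch) invoke the
    // callback inline; a real resolver invokes it asynchronously.
    void BeginCreateObjectFromURL(const std::string& url,
                                  const std::function<void(MockResolver&)>& onInvoke) {
        pending = url;
        onInvoke(*this); // stands in for IMFAsyncCallback::Invoke
    }

    // End...: called from inside the callback to collect the result.
    std::string EndCreateObjectFromURL() { return "source:" + pending; }
};

// Demonstrates the call sequence an application follows.
std::string ResolveSync(const std::string& url) {
    MockResolver resolver;
    std::string result;
    resolver.BeginCreateObjectFromURL(url, [&](MockResolver& r) {
        result = r.EndCreateObjectFromURL(); // complete the request in the callback
    });
    return result;
}
```

The same pairing applies to BeginCreateObjectFromByteStream/EndCreateObjectFromByteStream, and CancelObjectCreation may race with completion, which is why the callback can still fire after a cancel.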
-Completes an asynchronous request to create an object from a URL.
- Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The operation was canceled. |
Call this method from inside your application's
Begins an asynchronous request to create a media source from a byte stream.
-A reference to the byte stream's
A null-terminated string that contains the original URL of the byte stream. This parameter can be
A bitwise OR of one or more flags. See Source Resolver Flags.
A reference to the
Receives an
A reference to the
A pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The dwFlags parameter contains mutually exclusive flags. |
| The byte stream is not supported. |
| The byte stream does not support seeking. |
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
When the operation completes, the source resolver calls the
Completes an asynchronous request to create a media source from a byte stream.
-Pointer to the
Receives a member of the
Receives a reference to the media source's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The application canceled the operation. |
Call this method from inside your application's
Cancels an asynchronous request to create an object.
- Pointer to the
If this method succeeds, it returns
You can use this method to cancel a previous call to BeginCreateObjectFromByteStream or BeginCreateObjectFromURL. Because these methods are asynchronous, however, they might be completed before the operation can be canceled. Therefore, your callback might still be invoked after you call this method.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
Receives a member of the
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source or a byte stream from a URL. This method is synchronous.
-Null-terminated string that contains the URL to resolve.
Bitwise OR of one or more flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
The dwFlags parameter must contain either the
For local files, you can pass the file name in the pwszURL parameter; the file: scheme is not required.
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Receives a member of the
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
-Applies to: desktop apps | Metro style apps
Creates a media source from a byte stream. This method is synchronous.
- Pointer to the byte stream's
Null-terminated string that contains the URL of the byte stream. The URL is optional and can be
Bitwise OR of flags. See Source Resolver Flags.
Pointer to the
Receives a member of the
The dwFlags parameter must contain the
The source resolver attempts to find one or more byte-stream handlers for the byte stream, based on the file name extension of the URL, or the MIME type of the byte stream (or both). The URL is specified in the optional pwszURL parameter, and the MIME type may be specified in the
Note: This method cannot be called remotely.
-Implemented by a client and called by Microsoft Media Foundation to get the client Secure Sockets Layer (SSL) certificate requested by the server.
In most HTTPS connections the server provides a certificate so that the client can verify the identity of the server. However, in certain cases the server might want to verify the identity of the client by requesting the client to send a certificate. For this scenario, a client application must provide a mechanism for Media Foundation to retrieve the client-side certificate while opening an HTTPS URL with the source resolver or the scheme handler. The application must implement
If the
Gets the client SSL certificate synchronously.
-Pointer to a string that contains the URL for which a client-side SSL certificate is required. Media Foundation can resolve the scheme and send the request to the server.
Pointer to the buffer that stores the certificate. The caller must free the buffer by calling CoTaskMemFree.
Pointer to a DWORD variable that receives the number of bytes required to hold the certificate data in the buffer pointed by *ppbData.
If this method succeeds, it returns
Starts an asynchronous call to get the client SSL certificate.
-A null-terminated string that contains the URL for which a client-side SSL certificate is required. Media Foundation can resolve the scheme and send the request to the server.
A reference to the
A reference to the
If this method succeeds, it returns
When the operation completes, the callback object's
Completes an asynchronous request to get the client SSL certificate.
-A reference to the
Receives a reference to the buffer that stores the certificate. The caller must free the buffer by calling CoTaskMemFree.
Receives the size of the ppbData buffer, in bytes.
If this method succeeds, it returns
Call this method after the
Indicates whether the server SSL certificate must be verified by the caller, Media Foundation, or the
Pointer to a string that contains the URL that is sent to the server.
Pointer to a
Pointer to a
If this method succeeds, it returns
Called by Media Foundation when the server SSL certificate has been received; indicates whether the server certificate is accepted.
-Pointer to a string that contains the URL used to send the request to the server, and for which a server-side SSL certificate has been received.
Pointer to a buffer that contains the server SSL certificate.
Pointer to a DWORD variable that indicates the size of pbData in bytes.
Pointer to a
If this method succeeds, it returns
Gets information about one stream in a media source.
-A presentation descriptor contains one or more stream descriptors. To get the stream descriptors from a presentation descriptor, call
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an identifier for the stream.
-The stream identifier uniquely identifies a stream within a presentation. It does not change throughout the lifetime of the stream. For example, if the presentation changes while the source is running, the index number of the stream may change, but the stream identifier does not.
In general, stream identifiers do not have a specific meaning, other than to identify the stream. Some media sources may assign stream identifiers based on meaningful values, such as packet identifiers, but this depends on the implementation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type handler for the stream. The media type handler can be used to enumerate supported media types for the stream, get the current media type, and set the media type.
-This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves an identifier for the stream.
-Receives the stream identifier.
If this method succeeds, it returns
The stream identifier uniquely identifies a stream within a presentation. It does not change throughout the lifetime of the stream. For example, if the presentation changes while the source is running, the index number of the stream may change, but the stream identifier does not.
In general, stream identifiers do not have a specific meaning, other than to identify the stream. Some media sources may assign stream identifiers based on meaningful values, such as packet identifiers, but this depends on the implementation.
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Retrieves a media type handler for the stream. The media type handler can be used to enumerate supported media types for the stream, get the current media type, and set the media type.
-Receives a reference to the
If this method succeeds, it returns
This interface is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Passes configuration information to the media sinks that are used for streaming the content. Optionally, this interface is supported by media sinks. The built-in ASF streaming media sink and the MP3 media sink implement this interface.
-Called by the streaming media client before the Media Session starts streaming to specify the byte offset or the time offset.
-A Boolean value that specifies whether qwSeekOffset gives a byte offset or a time offset.
Value | Meaning |
---|---|
| The qwSeekOffset parameter specifies a byte offset. |
| The qwSeekOffset parameter specifies the time position in 100-nanosecond units. |
A byte offset or a time offset, depending on the value passed in fSeekOffsetIsByteOffset. Time offsets are specified in 100-nanosecond units.
If this method succeeds, it returns
Represents a stream on a media sink object.
-
Retrieves the media sink that owns this stream sink.
-
Retrieves the stream identifier for this stream sink.
-
Retrieves the media sink that owns this stream sink.
-Receives a reference to the media sink's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
Retrieves the stream identifier for this stream sink.
-Receives the stream identifier. If this stream sink was added by calling
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
Delivers a sample to the stream. The media sink processes the sample.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink is in the wrong state to receive a sample. For example, preroll is complete but the presentation clock has not started yet. |
| The sample has an invalid time stamp. See Remarks. |
| The media sink is paused or stopped and cannot process the sample. |
| The presentation clock was not set. Call |
| The sample does not have a time stamp. |
| The stream sink has not been initialized. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
Call this method when the stream sink sends an
This method can return
Negative time stamps.
Time stamps that jump backward (within the same stream).
The time stamps for one stream have drifted too far from the time stamps on another stream within the same media sink (for example, an archive sink that multiplexes the streams).
Not every media sink returns an error code in these situations.
-
Places a marker in the stream.
- Specifies the marker type, as a member of the
Optional reference to a
Optional reference to a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
This method causes the stream sink to send an
Causes the stream sink to drop any samples that it has received and has not rendered yet.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The stream sink has not been initialized yet. You might need to set a media type. |
| The media sink's Shutdown method has been called. |
| This stream was removed from the media sink and is no longer valid. |
If any samples are still queued from previous calls to the
Any pending marker events from the
This method is synchronous. It does not return until the sink has discarded all pending samples.
-Provides a method that retrieves system ID data.
-Retrieves system ID data.
-The size in bytes of the returned data.
Receives the returned data. The caller must free this buffer by calling CoTaskMemFree.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Sets up the
If this method succeeds, it returns
Converts between Society of Motion Picture and Television Engineers (SMPTE) time codes and 100-nanosecond time units.
-If an object supports this interface, it must expose the interface as a service. To get a reference to the interface, call
The Advanced Streaming Format (ASF) media source exposes this interface.
-Starts an asynchronous call to convert Society of Motion Picture and Television Engineers (SMPTE) time code to 100-nanosecond units.
-Time in SMPTE time code to convert. The vt member of the
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| pPropVarTimecode is not VT_I8. |
| The object's Shutdown method was called. |
| The byte stream is not seekable. The time code cannot be read from the end of the byte stream. |
When the asynchronous method completes, the callback object's
The value of pPropVarTimecode is a 64-bit unsigned value typed as a LONGLONG. The upper DWORD contains the range. (A range is a continuous series of time codes.) The lower DWORD contains the time code in the form of a hexadecimal number 0xhhmmssff, where each two-digit pair (hh, mm, ss, ff) is read as a decimal value.
void CreateTimeCode(
    DWORD dwFrames,
    DWORD dwSeconds,
    DWORD dwMinutes,
    DWORD dwHours,
    DWORD dwRange,
    PROPVARIANT *pvar
    )
{
    ULONGLONG ullTimecode = ((ULONGLONG)dwRange) << 32;
    ullTimecode += dwFrames % 10;
    ullTimecode += (( (ULONGLONG)dwFrames ) / 10) << 4;
    ullTimecode += (( (ULONGLONG)dwSeconds ) % 10) << 8;
    ullTimecode += (( (ULONGLONG)dwSeconds ) / 10) << 12;
    ullTimecode += (( (ULONGLONG)dwMinutes ) % 10) << 16;
    ullTimecode += (( (ULONGLONG)dwMinutes ) / 10) << 20;
    ullTimecode += (( (ULONGLONG)dwHours ) % 10) << 24;
    ullTimecode += (( (ULONGLONG)dwHours ) / 10) << 28;

    pvar->vt = VT_I8;
    pvar->hVal.QuadPart = (LONGLONG)ullTimecode;
}
Completes an asynchronous request to convert time in Society of Motion Picture and Television Engineers (SMPTE) time code to 100-nanosecond units.
-Pointer to the
Receives the converted time.
If this method succeeds, it returns
Call this method after the
Starts an asynchronous call to convert time in 100-nanosecond units to Society of Motion Picture and Television Engineers (SMPTE) time code.
-The time to convert, in 100-nanosecond units.
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The object's Shutdown method was called. |
| The byte stream is not seekable. The time code cannot be read from the end of the byte stream. |
When the asynchronous method completes, the callback object's
Completes an asynchronous request to convert time in 100-nanosecond units to Society of Motion Picture and Television Engineers (SMPTE) time code.
-A reference to the
A reference to a
If this method succeeds, it returns
Call this method after the
The value of pPropVarTimecode is a 64-bit unsigned value typed as a LONGLONG. The upper DWORD contains the range. (A range is a continuous series of time codes.) The lower DWORD contains the time code in the form of a hexadecimal number 0xhhmmssff, where each two-digit pair (hh, mm, ss, ff) is read as a decimal value.
-
HRESULT ParseTimeCode(
    const PROPVARIANT& var,
    DWORD *pdwRange,
    DWORD *pdwFrames,
    DWORD *pdwSeconds,
    DWORD *pdwMinutes,
    DWORD *pdwHours
    )
{
    if (var.vt != VT_I8)
    {
        return E_INVALIDARG;
    }

    ULONGLONG ullTimeCode = (ULONGLONG)var.hVal.QuadPart;
    DWORD dwTimecode = (DWORD)(ullTimeCode & 0xFFFFFFFF);

    *pdwRange = (DWORD)(ullTimeCode >> 32);

    *pdwFrames  =   dwTimecode & 0x0000000F;
    *pdwFrames  += (( dwTimecode & 0x000000F0) >> 4 ) * 10;
    *pdwSeconds =  ( dwTimecode & 0x00000F00) >> 8;
    *pdwSeconds += (( dwTimecode & 0x0000F000) >> 12 ) * 10;
    *pdwMinutes =  ( dwTimecode & 0x000F0000) >> 16;
    *pdwMinutes += (( dwTimecode & 0x00F00000) >> 20 ) * 10;
    *pdwHours   =  ( dwTimecode & 0x0F000000) >> 24;
    *pdwHours   += (( dwTimecode & 0xF0000000) >> 28 ) * 10;

    return S_OK;
}
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A timed-text object represents a component of timed text.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the offset to the cue time.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Retrieves a list of all timed-text tracks registered with the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of active timed-text tracks in the timed-text component.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of all the timed-text tracks in the timed-text component.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of the timed-metadata tracks in the timed-text component.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables or disables inband mode.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Determines whether inband mode is enabled.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Registers a timed-text notify object.
-A reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Selects or deselects a track of text in the timed-text component.
-The identifier of the track to select.
Specifies whether to select or deselect a track of text. Specify TRUE to select the track or
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Adds a timed-text data source.
-A reference to the
Null-terminated wide-character string that contains the label of the data source.
Null-terminated wide-character string that contains the language of the data source.
A
Specifies whether to add the default data source. Specify TRUE to add the default data source or
Receives a reference to the unique identifier for the added track.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Adds a timed-text data source from the specified URL.
-The URL of the timed-text data source.
Null-terminated wide-character string that contains the label of the data source.
Null-terminated wide-character string that contains the language of the data source.
A
Specifies whether to add the default data source. Specify TRUE to add the default data source or
Receives a reference to the unique identifier for the added track.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Removes the timed-text track with the specified identifier.
-The identifier of the track to remove.
If this method succeeds, it returns
Get the identifier for a track by calling GetId.
When a track is removed, all buffered data from the track is also removed.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the offset to the cue time.
-A reference to a variable that receives the offset to the cue time.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Sets the offset to the cue time.
-The offset to the cue time.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Retrieves a list of all timed-text tracks registered with the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of active timed-text tracks in the timed-text component.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of all the timed-text tracks in the timed-text component.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the list of the timed-metadata tracks in the timed-text component.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables or disables inband mode.
- Specifies whether inband mode is enabled. If TRUE, inband mode is enabled. If
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Determines whether inband mode is enabled.
-Returns whether inband mode is enabled. If TRUE, inband mode is enabled. If
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents the data content of a timed-text object.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text object.
-A reference to a memory block that receives a reference to the data content of the timed-text object.
A reference to a variable that receives the length in bytes of the data content.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of a timed-text cue.
-The identifier retrieved by this method is dynamically generated by the system and is guaranteed to uniquely identify a cue within a single timed-text track. It is not guaranteed to be unique across tracks. If a cue already has an identifier that is provided in the text-track data format, this ID can be retrieved by calling GetOriginalId.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the kind of timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the start time of the cue in the track.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the duration time of the cue in the track.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets info about the display region of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets info about the style of the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the number of lines of text in the timed-text cue.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of a timed-text cue.
-The identifier of a timed-text cue.
The identifier retrieved by this method is dynamically generated by the system and is guaranteed to uniquely identify a cue within a single timed-text track. It is not guaranteed to be unique across tracks. If a cue already has an identifier that is provided in the text-track data format, this ID can be retrieved by calling GetOriginalId.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the cue identifier that is provided in the text-track data format, if available.
-The cue identifier that is provided in the text-track data format.
If this method succeeds, it returns
This method retrieves an identifier for the cue that is included in the source data, if one was specified. The system dynamically generates identifiers for cues that are guaranteed to be unique within a single timed-text track. To obtain this system-generated ID, call GetId.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the kind of timed-text cue.
-Returns a
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the start time of the cue in the track.
-Returns the start time of the cue in the track.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the duration time of the cue in the track.
-Returns the duration time of the cue in the track.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the identifier of the timed-text cue.
-Returns the identifier of the timed-text cue.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the data content of the timed-text cue.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets info about the display region of the timed-text cue.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets info about the style of the timed-text cue.
-A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the number of lines of text in the timed-text cue.
-Returns the number of lines of text.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets a line of text in the cue from the index of the line.
-The index of the line of text in the cue to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a block of formatted timed-text.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the number of subformats in the formatted timed-text object.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the text in the formatted timed-text object.
-A reference to a variable that receives the null-terminated wide-character string that contains the text.
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets the number of subformats in the formatted timed-text object.
-Returns the number of subformats.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Gets a subformat in the formatted timed-text object.
-The index of the subformat in the formatted timed-text object.
A reference to a variable that receives the first character of the subformat.
A reference to a variable that receives the length, in characters, of the subformat.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Interface that defines callbacks for Microsoft Media Foundation Timed Text notifications.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Called when a text track is added
-The identifier of the track that was added.
Called when a text track is removed.
-The identifier of the track that was removed.
Called when a track is selected or deselected.
-The identifier of the track that was selected or deselected.
TRUE if the track was selected.
Called when an error occurs in a text track.
-An
The extended error code for the last error.
The identifier of the track on which the error occurred.
Called when a cue event occurs in a text track.
-A value specifying the type of event that has occurred.
The current time when the cue event occurred.
The
Resets the timed-text-notify object.
Represents the display region of a timed-text object.
Gets the background color of the region.
Gets the writing mode of the region.
Gets the display alignment of the region.
Determines whether a clip of text overflowed the region.
Determines whether the word wrap feature is enabled in the region.
Gets the Z-index (depth) of the region.
Gets the scroll mode of the region.
Gets the name of the region.
-A reference to a variable that receives the null-terminated wide-character string that contains the name of the region.
If this method succeeds, it returns S_OK.
Gets the position of the region.
-A reference to a variable that receives the X-coordinate of the position.
A reference to a variable that receives the Y-coordinate of the position.
A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the extent of the region.
-A reference to a variable that receives the width of the region.
A reference to a variable that receives the height of the region.
A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the background color of the region.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the writing mode of the region.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the display alignment of the region.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the height of each line of text in the region.
-A reference to a variable that receives the height of each line of text in the region.
A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Determines whether a clip of text overflowed the region.
-A reference to a variable that receives a value that specifies whether a clip of text overflowed the region. The variable specifies TRUE if the clip overflowed; otherwise, FALSE.
If this method succeeds, it returns S_OK.
Gets the padding that surrounds the region.
-A reference to a variable that receives the padding before the start of the region.
A reference to a variable that receives the start of the region.
A reference to a variable that receives the padding after the end of the region.
A reference to a variable that receives the end of the region.
A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Determines whether the word wrap feature is enabled in the region.
-A reference to a variable that receives a value that specifies whether the word wrap feature is enabled in the region. The variable specifies TRUE if word wrap is enabled; otherwise, FALSE.
If this method succeeds, it returns S_OK.
Gets the Z-index (depth) of the region.
-A reference to a variable that receives the Z-index (depth) of the region.
If this method succeeds, it returns S_OK.
Gets the scroll mode of the region.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the color of the timed-text style.
Determines whether the timed-text style is external.
Gets the color of the timed-text style.
Gets the background color of the timed-text style.
Determines whether the style of timed text always shows the background.
Gets the font style of the timed-text style.
Determines whether the style of timed text is bold.
Determines whether the right to left writing mode of the timed-text style is enabled.
Gets the text alignment of the timed-text style.
Gets how text is decorated for the timed-text style.
Gets the name of the timed-text style.
-A reference to a variable that receives the null-terminated wide-character string that contains the name of the style.
If this method succeeds, it returns S_OK.
Determines whether the timed-text style is external.
-Returns whether the timed-text style is external. If TRUE, the timed-text style is external; otherwise, FALSE.
Gets the font family of the timed-text style.
-A reference to a variable that receives the null-terminated wide-character string that contains the font family of the style.
If this method succeeds, it returns S_OK.
Gets the font size of the timed-text style.
-A reference to a variable that receives the font size of the timed-text style.
A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the color of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets the background color of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Determines whether the style of timed text always shows the background.
-A reference to a variable that receives a value that specifies whether the style of timed text always shows the background. The variable specifies TRUE if the background is always shown; otherwise, FALSE.
If this method succeeds, it returns S_OK.
Gets the font style of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Determines whether the style of timed text is bold.
-A reference to a variable that receives a value that specifies whether the style of timed text is bold. The variable specifies TRUE if the style is bold; otherwise, FALSE.
If this method succeeds, it returns S_OK.
Determines whether the right to left writing mode of the timed-text style is enabled.
-A reference to a variable that receives a value that specifies whether the right to left writing mode is enabled. The variable specifies TRUE if the right to left writing mode is enabled; otherwise, FALSE.
If this method succeeds, it returns S_OK.
Gets the text alignment of the timed-text style.
-A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Gets how text is decorated for the timed-text style.
-A reference to a variable that receives a combination of
If this method succeeds, it returns S_OK.
Gets the text outline for the timed-text style.
-A reference to a variable that receives a
A reference to a variable that receives the thickness.
A reference to a variable that receives the blur radius.
A reference to a variable that receives a
If this method succeeds, it returns S_OK.
Represents a track of timed text.
Gets the identifier of the track of timed text.
Sets the label of a timed-text track.
Gets the kind of timed-text track.
Determines whether the timed-text track is inband.
Determines whether the timed-text track is active.
Gets a value indicating the error type of the latest error associated with the track.
Gets the extended error code for the latest error associated with the track.
-If the most recent error was associated with a track, this value will be the same
Gets a
Gets the identifier of the track of timed text.
-Returns the identifier of the track.
Gets the label of the track.
-A reference to a variable that receives the null-terminated wide-character string that contains the label of the track.
If this method succeeds, it returns S_OK.
Sets the label of a timed-text track.
-A reference to a null-terminated wide-character string that contains the label of the track.
If this method succeeds, it returns S_OK.
Gets the language of the track.
-A reference to a variable that receives the null-terminated wide-character string that contains the language of the track.
If this method succeeds, it returns S_OK.
Gets the kind of timed-text track.
-Returns a
Determines whether the timed-text track is inband.
-Returns whether the timed-text track is inband. If TRUE, the timed-text track is inband; otherwise, FALSE.
Gets the in-band metadata of the track.
-A reference to a variable that receives the null-terminated wide-character string that contains the in-band metadata of the track.
If this method succeeds, it returns S_OK.
Determines whether the timed-text track is active.
-Returns whether the timed-text track is active. If TRUE, the timed-text track is active; otherwise, FALSE.
Gets a value indicating the error type of the latest error associated with the track.
-A value indicating the error type of the latest error associated with the track.
Gets the extended error code for the latest error associated with the track.
-The extended error code for the latest error associated with the track.
If the most recent error was associated with a track, this value will be the same
Gets a
A
If this method succeeds, it returns S_OK.
Represents a list of timed-text tracks.
Gets the length, in tracks, of the timed-text-track list.
Gets the length, in tracks, of the timed-text-track list.
-Returns the length, in tracks, of the timed-text-track list.
Gets a text track in the list from the index of the track.
-The index of the track in the list to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns S_OK.
Gets a text track in the list from the identifier of the track.
-The identifier of the track in the list to retrieve.
A reference to a memory block that receives a reference to the
If this method succeeds, it returns S_OK.
Provides a timer that invokes a callback at a specified time.
-The presentation clock exposes this interface. To get a reference to the interface, call QueryInterface.
-
Sets a timer that invokes a callback at the specified time.
-Bitwise OR of zero or more flags from the
The time at which the timer should fire, in units of the clock's frequency. The time is either absolute or relative to the current time, depending on the value of dwFlags.
Pointer to the
Pointer to the
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_SHUTDOWN | The clock was shut down. |
MF_S_CLOCK_STOPPED | The method succeeded, but the clock is stopped. |
If the clock is stopped, the method returns MF_S_CLOCK_STOPPED. The callback will not be invoked until the clock is started.
-
Cancels a timer that was set using the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
Because the timer is dispatched asynchronously, the application's timer callback might get invoked even if this method succeeds.
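The timer contract above (SetTimer handing back a cancellation key, MF_S_CLOCK_STOPPED when the clock is stopped, and best-effort cancellation) can be sketched with a portable mock. Everything here is an illustrative stand-in: the MockTimer class, the integer cancellation keys, and the placeholder HRESULT values are assumptions for the sketch, not the real IMFTimer implementation.

```cpp
#include <functional>
#include <map>

// Portable mock of the timer contract described in the text above.
typedef long HRESULT;
static const HRESULT S_OK = 0;
static const HRESULT MF_S_CLOCK_STOPPED = 1;  // placeholder value, see mferror.h
static const HRESULT MF_E_SHUTDOWN = -1;      // placeholder value, see mferror.h

class MockTimer {
public:
    bool clockRunning = false;
    bool isShutdown = false;
    std::map<int, std::function<void()>> pending;  // cancel key -> callback
    int nextKey = 1;

    // Mirrors IMFTimer::SetTimer: schedules a callback and hands back a key
    // that can later be passed to CancelTimer. When the clock is stopped the
    // call still succeeds, but with MF_S_CLOCK_STOPPED, and the callback
    // stays pending until the clock is started.
    HRESULT SetTimer(std::function<void()> callback, int* cancelKey) {
        if (isShutdown) return MF_E_SHUTDOWN;
        *cancelKey = nextKey++;
        pending[*cancelKey] = callback;
        return clockRunning ? S_OK : MF_S_CLOCK_STOPPED;
    }

    // Mirrors IMFTimer::CancelTimer: best-effort, because a callback that
    // was already dispatched asynchronously cannot be recalled.
    HRESULT CancelTimer(int cancelKey) {
        pending.erase(cancelKey);
        return S_OK;
    }
};
```

The mock makes the race in the last remark concrete: between the moment the caller decides to cancel and the moment `CancelTimer` runs, the dispatcher may already have invoked the callback, so callbacks must tolerate firing after a "successful" cancellation.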
-Creates a fully loaded topology from the input partial topology.
-This method creates any intermediate transforms that are needed to complete the topology. It also sets the input and output media types on all of the objects in the topology. If the method succeeds, the full topology is returned in the ppOutputTopo parameter.
You can use the pCurrentTopo parameter to provide a full topology that was previously loaded. If this topology contains objects that are needed in the new topology, the topology loader can re-use them without creating them again. This caching can potentially make the process faster. The objects from pCurrentTopo will not be reconfigured, so you can specify a topology that is actively streaming data. For example, while a topology is still running, you can pre-load the next topology.
Before calling this method, you must ensure that the output nodes in the partial topology have valid
Creates a fully loaded topology from the input partial topology.
-A reference to the
Receives a reference to the
A reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| One or more output nodes contain |
This method creates any intermediate transforms that are needed to complete the topology. It also sets the input and output media types on all of the objects in the topology. If the method succeeds, the full topology is returned in the ppOutputTopo parameter.
You can use the pCurrentTopo parameter to provide a full topology that was previously loaded. If this topology contains objects that are needed in the new topology, the topology loader can re-use them without creating them again. This caching can potentially make the process faster. The objects from pCurrentTopo will not be reconfigured, so you can specify a topology that is actively streaming data. For example, while a topology is still running, you can pre-load the next topology.
Before calling this method, you must ensure that the output nodes in the partial topology have valid
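The caching behavior described above can be illustrated with a small mock: objects already resolved in a previously loaded topology are reused by node identifier instead of being created again. The Loader and ResolvedObject types and the container-based representation are assumptions for illustration only, not the real IMFTopoLoader interface.

```cpp
#include <map>
#include <vector>

// Portable sketch of the topology-loader caching described in the text:
// nodes that already have a resolved object in the previous topology are
// reused as-is (not reconfigured); only the rest are newly created.
struct ResolvedObject { int buildCount; };

class Loader {
public:
    // Maps a node identifier to the object resolved for that node.
    using Topo = std::map<long long, ResolvedObject*>;

    // Resolve every node in `partial`; reuse matching objects from
    // `previous` (which may be empty) rather than creating them again.
    Topo Load(const std::vector<long long>& partial, const Topo& previous) {
        Topo full;
        for (long long id : partial) {
            auto it = previous.find(id);
            if (it != previous.end()) {
                full[id] = it->second;            // reused, not reconfigured
            } else {
                full[id] = new ResolvedObject{1}; // newly created (leaked in
                                                  // this tiny sketch)
            }
        }
        return full;
    }
};
```

This mirrors the pre-loading pattern in the remarks: while one topology streams, the next can be resolved against it, and any shared objects carry over untouched.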
Represents a topology. A topology describes a collection of media sources, sinks, and transforms that are connected in a certain order. These objects are represented within the topology by topology nodes, which expose the
To create a topology, call
Gets the identifier of the topology.
-Gets the number of nodes in the topology.
-Gets the source nodes in the topology.
-Gets the output nodes in the topology.
-Gets the identifier of the topology.
-Receives the identifier, as a TOPOID value.
If this method succeeds, it returns S_OK.
Adds a node to the topology.
-Pointer to the node's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | pNode is invalid, possibly because the node already exists in the topology. |
Removes a node from the topology.
-Pointer to the node's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The specified node is not a member of this topology. |
This method does not destroy the node, so the
The method breaks any connections between the specified node and other nodes.
-Gets the number of nodes in the topology.
-Receives the number of nodes.
If this method succeeds, it returns S_OK.
Gets a node in the topology, specified by index.
- The zero-based index of the node. To get the number of nodes in the topology, call
Receives a reference to the node's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The index is less than zero. |
MF_E_NOT_FOUND | No node can be found at the index wIndex. |
Removes all nodes from the topology.
-The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
You do not need to clear a topology before disposing of it. The Clear method is called automatically when the topology is destroyed.
-Converts this topology into a copy of another topology.
- A reference to the
If this method succeeds, it returns S_OK.
This method does the following:
Gets a node in the topology, specified by node identifier.
- The identifier of the node to retrieve. To get a node's identifier, call
Receives a reference to the node's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_NOT_FOUND | The topology does not contain a node with this identifier. |
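The node-management contract described above (AddNode rejecting a node already in the topology, RemoveNode failing for non-members without destroying the node, and lookup by a node identifier that is unique within the topology) can be sketched with a portable mock. The Topology and Node types and the placeholder HRESULT values are illustrative stand-ins, not the real COM interfaces.

```cpp
#include <algorithm>
#include <vector>

// Portable mock of the IMFTopology node-management contract described above.
typedef long HRESULT;
static const HRESULT S_OK = 0;
static const HRESULT E_INVALIDARG = -2;    // placeholder value
static const HRESULT MF_E_NOT_FOUND = -3;  // placeholder value

struct Node { unsigned long long id; };

class Topology {
    std::vector<Node*> nodes;
public:
    // Fails if the node is already a member of the topology.
    HRESULT AddNode(Node* n) {
        if (std::find(nodes.begin(), nodes.end(), n) != nodes.end())
            return E_INVALIDARG;
        nodes.push_back(n);
        return S_OK;
    }
    // Fails if the node is not a member; the node itself is not destroyed,
    // it is only detached from the topology.
    HRESULT RemoveNode(Node* n) {
        auto it = std::find(nodes.begin(), nodes.end(), n);
        if (it == nodes.end()) return E_INVALIDARG;
        nodes.erase(it);
        return S_OK;
    }
    HRESULT GetNodeCount(unsigned short* count) {
        *count = static_cast<unsigned short>(nodes.size());
        return S_OK;
    }
    // Node identifiers are unique within a topology, so lookup by
    // identifier returns at most one node.
    HRESULT GetNodeByID(unsigned long long id, Node** out) {
        for (Node* n : nodes)
            if (n->id == id) { *out = n; return S_OK; }
        return MF_E_NOT_FOUND;
    }
};
```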
Gets the source nodes in the topology.
-Receives a reference to the
If this method succeeds, it returns S_OK.
Gets the output nodes in the topology.
- Receives a reference to the
If this method succeeds, it returns S_OK.
Represents a node in a topology. The following node types are supported:
To create a new node, call the
Sets the object associated with this node.
-All node types support this method, but the object reference is not used by every node type.
Node type | Object reference |
---|---|
Source node. | Not used. |
Transform node. | |
Output node. | |
Tee node. | Not used. |
If the object supports
Gets the object associated with this node.
-
Retrieves the node type.
-Retrieves or sets the identifier of the node.
-When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To find a node in a topology by its identifier, call
Retrieves the number of input streams that currently exist on this node.
-The input streams may or may not be connected to output streams on other nodes. To get the node that is connected to a specified input stream, call
The
Retrieves the number of output streams that currently exist on this node.
-The output streams may or may not be connected to input streams on other nodes. To get the node that is connected to a specific output stream on this node, call
The
Sets the object associated with this node.
-A reference to the object's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
All node types support this method, but the object reference is not used by every node type.
Node type | Object reference |
---|---|
Source node. | Not used. |
Transform node. | |
Output node. | |
Tee node. | Not used. |
If the object supports
Gets the object associated with this node.
- Receives a reference to the object's
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_FAIL | There is no object associated with this node. |
Retrieves the node type.
-Receives the node type, specified as a member of the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
Retrieves the identifier of the node.
-Receives the identifier.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
When a node is first created, it is assigned an identifier. Node identifiers are unique within a topology, but can be reused across several topologies. The topology loader uses the identifier to look up nodes in the previous topology, so that it can reuse objects from the previous topology.
To find a node in a topology by its identifier, call
Sets the identifier for the node.
-The identifier for the node.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| The TOPOID has already been set for this object. |
When a node is first created, it is assigned an identifier. Typically there is no reason for an application to override the identifier. Within a topology, each node identifier should be unique.
-
Retrieves the number of input streams that currently exist on this node.
-Receives the number of input streams.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
The input streams may or may not be connected to output streams on other nodes. To get the node that is connected to a specified input stream, call
The
Retrieves the number of output streams that currently exist on this node.
-Receives the number of output streams.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
The output streams may or may not be connected to input streams on other nodes. To get the node that is connected to a specific output stream on this node, call
The
Connects an output stream from this node to the input stream of another node.
-Zero-based index of the output stream on this node.
Pointer to the
Zero-based index of the input stream on the other node.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_FAIL | The method failed. |
E_INVALIDARG | Invalid parameter. |
Node connections represent data flow from one node to the next. The streams are logical, and are specified by index.
If the node is already connected at the specified output, the method breaks the existing connection. If dwOutputIndex or dwInputIndexOnDownstreamNode specify streams that do not exist yet, the method adds as many streams as needed.
This method checks for certain invalid conditions:
An output node cannot have any output connections. If you call this method on an output node, the method returns E_FAIL.
A node cannot be connected to itself. If pDownstreamNode specifies the same node as the method call, the method returns E_INVALIDARG.
However, if the method succeeds, it does not guarantee that the node connection is valid. It is possible to create a partial topology that the topology loader cannot resolve. If so, the
To break an existing node connection, call
Disconnects an output stream on this node.
-Zero-based index of the output stream to disconnect.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The dwOutputIndex parameter is out of range. |
MF_E_NOT_FOUND | The specified output stream is not connected to another node. |
If the specified output stream is connected to another node, this method breaks the connection.
-
Retrieves the node that is connected to a specified input stream on this node.
-Zero-based index of an input stream on this node.
Receives a reference to the
Receives the index of the output stream that is connected to this node's input stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The index is out of range. |
MF_E_NOT_FOUND | The specified input stream is not connected to another node. |
Retrieves the node that is connected to a specified output stream on this node.
-Zero-based index of an output stream on this node.
Receives a reference to the
Receives the index of the input stream that is connected to this node's output stream.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | The index is out of range. |
MF_E_NOT_FOUND | The specified output stream is not connected to another node. |
Sets the preferred media type for an output stream on this node.
-Zero-based index of the output stream.
Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | This node is an output node. |
The preferred type is a hint for the topology loader.
Do not call this method after loading a topology or setting a topology on the Media Session. Changing the preferred type on a running topology can cause connection errors.
If no output stream exists at the specified index, the method creates new streams up to and including the specified index number.
Output nodes cannot have outputs. If this method is called on an output node, it returns E_NOTIMPL.
-
Retrieves the preferred media type for an output stream on this node.
-Zero-based index of the output stream.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| This node does not have a preferred output type. |
| Invalid stream index. |
E_NOTIMPL | This node is an output node. |
Output nodes cannot have outputs. If this method is called on an output node, it returns E_NOTIMPL.
The preferred output type provides a hint to the topology loader. In a fully resolved topology, there is no guarantee that every topology node will have a preferred output type. To get the actual media type for a node, you must get a reference to the node's underlying object. (For more information, see
Sets the preferred media type for an input stream on this node.
-Zero-based index of the input stream.
Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | This node is a source node. |
The preferred type is a hint for the topology loader.
Do not call this method after loading a topology or setting a topology on the Media Session. Changing the preferred type on a running topology can cause connection errors.
If no input stream exists at the specified index, the method creates new streams up to and including the specified index number.
Source nodes cannot have inputs. If this method is called on a source node, it returns E_NOTIMPL.
-
Retrieves the preferred media type for an input stream on this node.
-Zero-based index of the input stream.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| This node does not have a preferred input type. |
| Invalid stream index. |
E_NOTIMPL | This node is a source node. |
Source nodes cannot have inputs. If this method is called on a source node, it returns E_NOTIMPL.
The preferred input type provides a hint to the topology loader. In a fully resolved topology, there is no guarantee that every topology node will have a preferred input type. To get the actual media type for a node, you must get a reference to the node's underlying object. (For more information, see
Copies the data from another topology node into this node.
- A reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| The node types do not match. |
The two nodes must have the same node type. To get the node type, call
This method copies the object reference, preferred types, and attributes from pNode to this node. It also copies the TOPOID that uniquely identifies each node in a topology. It does not duplicate any of the connections from pNode to other nodes.
The purpose of this method is to copy nodes from one topology to another. Do not use it to duplicate nodes within the same topology.
-Updates the attributes of one or more nodes in the Media Session's current topology.
The Media Session exposes this interface as a service. To get a reference to the interface, call
Currently the only attribute that can be updated is the
Updates the attributes of one or more nodes in the current topology.
-Reserved.
The number of elements in the pUpdates array.
Pointer to an array of
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
Currently the only attribute that can be updated is the
Enables a custom video mixer or video presenter to get interface references from the Enhanced Video Renderer (EVR). The mixer can also use this interface to get interface references from the presenter, and the presenter can use it to get interface references from the mixer.
To use this interface, implement the
Retrieves an interface from the enhanced video renderer (EVR), or from the video mixer or video presenter.
-Specifies the scope of the search. Currently this parameter is ignored. Use the value
Reserved, must be zero.
Service
Interface identifier of the requested interface.
Array of interface references. If the method succeeds, each member of the array contains either a valid interface reference or
Pointer to a value that specifies the size of the ppvObjects array. The value must be at least 1. In the current implementation, there is no reason to specify an array size larger than one element. The value is not changed on output.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
E_NOINTERFACE | The requested interface is not available. |
MF_E_NOTACCEPTING | The method was not called from inside the InitServicePointers method. |
MF_E_UNSUPPORTED_SERVICE | The object does not support the specified service. |
This method can be called only from inside the
The presenter can use this method to query the EVR and the mixer. The mixer can use it to query the EVR and the presenter. Which objects are queried depends on the caller and the service
Caller | Service | Objects queried |
---|---|---|
Presenter | MR_VIDEO_RENDER_SERVICE | EVR |
Presenter | MR_VIDEO_MIXER_SERVICE | Mixer |
Mixer | MR_VIDEO_RENDER_SERVICE | Presenter and EVR |
The following interfaces are available from the EVR:
IMediaEventSink. This interface is documented in the DirectShow SDK documentation.
The following interfaces are available from the mixer:
Initializes a video mixer or presenter. This interface is implemented by mixers and presenters, and enables them to query the enhanced video renderer (EVR) for interface references.
-When the EVR loads the video mixer and the video presenter, the EVR queries the object for this interface and calls InitServicePointers. Inside the InitServicePointers method, the object can query the EVR for interface references.
-
Signals the mixer or presenter to query the enhanced video renderer (EVR) for interface references.
-Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
The
When the EVR calls
Signals the object to release the interface references obtained from the enhanced video renderer (EVR).
-The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
After this method is called, any interface references obtained during the previous call to
Tracks the reference counts on a video media sample. Video samples created by the
Use this interface to determine whether it is safe to delete or re-use the buffer contained in a sample. One object assigns itself as the owner of the video sample by calling SetAllocator. When all objects release their reference counts on the sample, the owner's callback method is invoked.
-
Sets the owner for the sample.
-Pointer to the
Pointer to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
| The owner was already set. This method cannot be called twice on the sample. |
When this method is called, the sample holds an additional reference count on itself. When every other object releases its reference counts on the sample, the sample invokes the pSampleAllocator callback method. To get a reference to the sample, call
After the callback is invoked, the sample clears the callback. To reinstate the callback, you must call SetAllocator again.
It is safe to pass in the sample's
Implemented by the transcode profile object.
The transcode profile stores configuration settings that the topology builder uses to generate the transcode topology for the output file. These configuration settings are specified by the caller and include audio and video stream properties, encoder settings, and container settings that are specified by the caller.
To create the transcode profile object, call
Gets or sets the audio stream settings that are currently set in the transcode profile.
-If there are no audio attributes set in the transcode profile, the call to GetAudioAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Gets or sets the video stream settings that are currently set in the transcode profile.
-If there are no video attributes set in the transcode profile, the GetVideoAttributes method succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Gets or sets the container settings that are currently set in the transcode profile.
-If there are no container attributes set in the transcode profile, the call to GetContainerAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets audio stream configuration settings in the transcode profile.
To get a list of compatible audio media types supported by the Media Foundation transform (MFT) encoder, call
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the audio stream settings that are currently set in the transcode profile.
-Receives a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If there are no audio attributes set in the transcode profile, the call to GetAudioAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets video stream configuration settings in the transcode profile.
For example code, see
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the video stream settings that are currently set in the transcode profile.
-Receives a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If there are no video attributes set in the transcode profile, the GetVideoAttributes method succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets container configuration settings in the transcode profile.
For example code, see
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Gets the container settings that are currently set in the transcode profile.
-Receives a reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If there are no container attributes set in the transcode profile, the call to GetContainerAttributes succeeds and ppAttrs receives
To get a specific attribute value, the caller must call the appropriate
Sets the name of the encoded output file.
-The media sink will create a local file with the specified file name.
Alternately, you can call
Sets the name of the encoded output file.
-The media sink will create a local file with the specified file name.
Alternately, you can call
Sets an output byte stream for the transcode media sink.
-Call this method to provide a writeable byte stream that will receive the transcoded data.
Alternatively, you can provide the name of an output file, by calling
The pByteStreamActivate parameter must specify an activation object that creates a writeable byte stream. Internally, the transcode media sink calls
IMFByteStream *pByteStream = NULL;
hr = pByteStreamActivate->ActivateObject(IID_IMFByteStream, (void**)&pByteStream);
Currently, Microsoft Media Foundation does not provide any byte-stream activation objects. To use this method, an application must provide a custom implementation of
Sets the transcoding profile on the transcode sink activation object.
-Before calling this method, initialize the profile object as follows:
Gets the media types for the audio and video streams specified in the transcode profile.
-Before calling this method, call
Sets the name of the encoded output file.
-Pointer to a null-terminated string that contains the name of the output file.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The media sink will create a local file with the specified file name.
Alternately, you can call
Sets an output byte stream for the transcode media sink.
-A reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Call this method to provide a writeable byte stream that will receive the transcoded data.
Alternatively, you can provide the name of an output file, by calling
The pByteStreamActivate parameter must specify an activation object that creates a writeable byte stream. Internally, the transcode media sink calls
IMFByteStream *pByteStream = NULL;
hr = pByteStreamActivate->ActivateObject(IID_IMFByteStream, (void**)&pByteStream);
Currently, Microsoft Media Foundation does not provide any byte-stream activation objects. To use this method, an application must provide a custom implementation of
Sets the transcoding profile on the transcode sink activation object.
-A reference to the
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Before calling this method, initialize the profile object as follows:
Gets the media types for the audio and video streams specified in the transcode profile.
-A reference to an
If the method succeeds, the method assigns
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Before calling this method, call
Implemented by all Media Foundation Transforms (MFTs).
-Gets the global attribute store for this Media Foundation transform (MFT).
- Use the
Implementation of this method is optional unless the MFT needs to support a particular set of attributes. Exception: Hardware-based MFTs must implement this method. See Hardware MFTs.
-Queries whether the Media Foundation transform (MFT) is ready to produce output data.
- If the method returns the
MFTs are not required to implement this method. If the method returns E_NOTIMPL, you must call ProcessOutput to determine whether the transform has output data.
If the MFT has more than one output stream, but it does not produce samples at the same time for each stream, it can set the
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStatus. See Creating Hybrid DMO/MFT Objects.
-Gets the minimum and maximum number of input and output streams for this Media Foundation transform (MFT).
-Receives the minimum number of input streams.
Receives the maximum number of input streams. If there is no maximum, receives the value MFT_STREAMS_UNLIMITED.
Receives the minimum number of output streams.
Receives the maximum number of output streams. If there is no maximum, receives the value MFT_STREAMS_UNLIMITED.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
If the MFT has a fixed number of streams, the minimum and maximum values are the same.
It is not recommended to create an MFT that supports zero inputs or zero outputs. An MFT with no inputs or no outputs may not be compatible with the rest of the Media Foundation pipeline. You should create a Media Foundation sink or source for this purpose instead.
When an MFT is first created, it is not guaranteed to have the minimum number of streams. To find the actual number of streams, call
This method should not be called with
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamLimits. See Creating Hybrid DMO/MFT Objects.
-Gets the current number of input and output streams on this Media Foundation transform (MFT).
-Receives the number of input streams.
Receives the number of output streams.
If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
The number of streams includes unselected streams; that is, streams with no media type or a
This method should not be called with
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamCount. See Creating Hybrid DMO/MFT Objects.
-Gets the stream identifiers for the input and output streams on this Media Foundation transform (MFT).
-Number of elements in the pdwInputIDs array.
Pointer to an array allocated by the caller. The method fills the array with the input stream identifiers. The array size must be at least equal to the number of input streams. To get the number of input streams, call
If the caller passes an array that is larger than the number of input streams, the MFT must not write values into the extra array entries.
Number of elements in the pdwOutputIDs array.
Pointer to an array allocated by the caller. The method fills the array with the output stream identifiers. The array size must be at least equal to the number of output streams. To get the number of output streams, call GetStreamCount.
If the caller passes an array that is larger than the number of output streams, the MFT must not write values into the extra array entries.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | Not implemented. See Remarks. |
MF_E_BUFFERTOOSMALL | One or both of the arrays is too small. |
Stream identifiers are necessary because some MFTs can add or remove streams, so the index of a stream may not be unique. Therefore,
This method can return E_NOTIMPL if both of the following conditions are true:
This method must be implemented if any of the following conditions is true:
All input stream identifiers must be unique within an MFT, and all output stream identifiers must be unique. However, an input stream and an output stream can share the same identifier.
If the client adds an input stream, the client assigns the identifier, so the MFT must allow arbitrary identifiers, as long as they are unique. If the MFT creates an output stream, the MFT assigns the identifier.
By convention, if an MFT has exactly one fixed input stream and one fixed output stream, it should assign the identifier 0 to both streams.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetStreamIDs. See Creating Hybrid DMO/MFT Objects.
-Gets the buffer requirements and other information for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to an
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream identifier. |
It is valid to call this method before setting the media types.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputStreamInfo. See Creating Hybrid DMO/MFT Objects.
-Gets the buffer requirements and other information for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Pointer to an
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream number. |
It is valid to call this method before setting the media types.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStreamInfo. See Creating Hybrid DMO/MFT Objects.
-Gets the global attribute store for this Media Foundation transform (MFT).
- Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | The MFT does not support attributes. |
Use the
Implementation of this method is optional unless the MFT needs to support a particular set of attributes. Exception: Hardware-based MFTs must implement this method. See Hardware MFTs.
-Gets the attribute store for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | The MFT does not support input stream attributes. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream identifier. |
Implementation of this method is optional unless the MFT needs to support a particular set of attributes.
To get the attribute store for the entire MFT, call
Gets the attribute store for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | The MFT does not support output stream attributes. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream identifier. |
Implementation of this method is optional unless the MFT needs to support a particular set of attributes.
To get the attribute store for the entire MFT, call
Removes an input stream from this Media Foundation transform (MFT).
-Identifier of the input stream to remove.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | The transform has a fixed number of input streams. |
| The stream is not removable, or the transform currently has the minimum number of input streams it can support. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream identifier. |
MF_E_TRANSFORM_INPUT_REMAINING | The transform has unprocessed input buffers for the specified stream. |
If the transform has a fixed number of input streams, the method returns E_NOTIMPL.
An MFT might support this method but not allow certain input streams to be removed. If an input stream can be removed, the
If the transform still has unprocessed input for that stream, the method might succeed or it might return MF_E_TRANSFORM_INPUT_REMAINING.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTDeleteInputStream. See Creating Hybrid DMO/MFT Objects.
-Adds one or more new input streams to this Media Foundation transform (MFT).
-Number of streams to add.
Array of stream identifiers. The new stream identifiers must not match any existing input streams.
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_INVALIDARG | Invalid argument. |
E_NOTIMPL | The MFT has a fixed number of input streams. |
If the new streams exceed the maximum number of input streams for this transform, the method returns E_INVALIDARG. To find the maximum number of input streams, call
If any of the new stream identifiers conflicts with an existing input stream, the method returns E_INVALIDARG.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTAddInputStreams. See Creating Hybrid DMO/MFT Objects.
-Gets an available media type for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Index of the media type to retrieve. Media types are indexed from zero and returned in approximate order of preference.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | The MFT does not have a list of available input types. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream identifier. |
MF_E_NO_MORE_TYPES | The dwTypeIndex parameter is out of range. |
MF_E_TRANSFORM_TYPE_NOT_SET | You must set the output types before setting the input types. |
The MFT defines a list of available media types for each input stream and orders them by preference. This method enumerates the available media types for an input stream. To enumerate the available types, increment dwTypeIndex until the method returns MF_E_NO_MORE_TYPES.
Setting the media type on one stream might change the available types for another stream, or change the preference order. However, an MFT is not required to update the list of available types dynamically. The only guaranteed way to test whether you can set a particular input type is to call
In some cases, an MFT cannot return a list of input types until one or more output types are set. If so, the method returns
An MFT is not required to implement this method. However, most MFTs should implement this method, unless the supported types are simple and can be discovered through the
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputAvailableType. See Creating Hybrid DMO/MFT Objects.
For encoders, after the output type is set, GetInputAvailableType must return a list of input types that are compatible with the current output type. This means that all types returned by GetInputAvailableType after the output type is set must be valid types for SetInputType.
Encoders should reject input types if the attributes of the input media type and output media type do not match, such as resolution setting with
Gets an available media type for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Index of the media type to retrieve. Media types are indexed from zero and returned in approximate order of preference.
Receives a reference to the
The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.
Return code | Description |
---|---|
S_OK | The method succeeded. |
E_NOTIMPL | The MFT does not have a list of available output types. |
MF_E_INVALIDSTREAMNUMBER | Invalid stream identifier. |
MF_E_NO_MORE_TYPES | The dwTypeIndex parameter is out of range. |
MF_E_TRANSFORM_TYPE_NOT_SET | You must set the input types before setting the output types. |
The MFT defines a list of available media types for each output stream and orders them by preference. This method enumerates the available media types for an output stream. To enumerate the available types, increment dwTypeIndex until the method returns MF_E_NO_MORE_TYPES.
Setting the media type on one stream can change the available types for another stream (or change the preference order). However, an MFT is not required to update the list of available types dynamically. The only guaranteed way to test whether you can set a particular input type is to call
In some cases, an MFT cannot return a list of output types until one or more input types are set. If so, the method returns
An MFT is not required to implement this method. However, most MFTs should implement this method, unless the supported types are simple and can be discovered through the
This method can return a partial media type. A partial media type contains an incomplete description of a format, and is used to provide a hint to the caller. For example, a partial type might include just the major type and subtype GUIDs. However, after the client sets the input types on the MFT, the MFT should generally return at least one complete output type, which can be used without further modification. For more information, see Complete and Partial Media Types.
Some MFTs cannot provide an accurate list of output types until the MFT receives the first input sample. For example, the MFT might need to read the first packet header to deduce the format. An MFT should handle this situation as follows:
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputAvailableType. See Creating Hybrid DMO/MFT Objects.
-Sets, tests, or clears the media type for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to the
Zero or more flags from the _MFT_SET_TYPE_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The MFT cannot use the proposed media type. |
| Invalid stream identifier. |
| The proposed type is not valid. This error code indicates that the media type itself is not configured correctly; for example, it might contain mutually contradictory attributes. |
| The MFT cannot switch types while processing data. Try draining or flushing the MFT. |
| You must set the output types before setting the input types. |
| The MFT could not find a suitable DirectX Video Acceleration (DXVA) configuration. |
This method can be used to set, test without setting, or clear the media type:
Setting the media type on one stream may change the acceptable types on another stream.
An MFT may require the caller to set one or more output types before setting the input type. If so, the method returns
If the MFT supports DirectX Video Acceleration (DXVA) but is unable to find a suitable DXVA configuration (for example, if the graphics driver does not have the right capabilities), the method should return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetInputType. See Creating Hybrid DMO/MFT Objects.
-Sets, tests, or clears the media type for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Pointer to the
Zero or more flags from the _MFT_SET_TYPE_FLAGS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The transform cannot use the proposed media type. |
| Invalid stream identifier. |
| The proposed type is not valid. This error code indicates that the media type itself is not configured correctly; for example, it might contain mutually contradictory flags. |
| The MFT cannot switch types while processing data. Try draining or flushing the MFT. |
| You must set the input types before setting the output types. |
| The MFT could not find a suitable DirectX Video Acceleration (DXVA) configuration. |
This method can be used to set, test without setting, or clear the media type:
Setting the media type on one stream may change the acceptable types on another stream.
An MFT may require the caller to set one or more input types before setting the output type. If so, the method returns
If the MFT supports DirectX Video Acceleration (DXVA) but is unable to find a suitable DXVA configuration (for example, if the graphics driver does not have the right capabilities), the method should return
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetOutputType. See Creating Hybrid DMO/MFT Objects.
-Gets the current media type for an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The input media type has not been set. |
If the specified input stream does not yet have a media type, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputCurrentType. See Creating Hybrid DMO/MFT Objects.
-Gets the current media type for an output stream on this Media Foundation transform (MFT).
- Output stream identifier. To get the list of stream identifiers, call
Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The output media type has not been set. |
If the specified output stream does not yet have a media type, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputCurrentType. See Creating Hybrid DMO/MFT Objects.
-Queries whether an input stream on this Media Foundation transform (MFT) can accept more data.
- Input stream identifier. To get the list of stream identifiers, call
Receives a member of the _MFT_INPUT_STATUS_FLAGS enumeration, or zero. If the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
| The media type is not set on one or more streams. |
If the method returns the
Use this method to test whether the input stream is ready to accept more data, without incurring the overhead of allocating a new sample and calling ProcessInput.
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output (or both).
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetInputStatus. See Creating Hybrid DMO/MFT Objects.
-Queries whether the Media Foundation transform (MFT) is ready to produce output data.
- Receives a member of the _MFT_OUTPUT_STATUS_FLAGS enumeration, or zero. If the value is
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| The media type is not set on one or more streams. |
If the method returns the
MFTs are not required to implement this method. If the method returns E_NOTIMPL, you must call ProcessOutput to determine whether the transform has output data.
If the MFT has more than one output stream, but it does not produce samples at the same time for each stream, it can set the
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTGetOutputStatus. See Creating Hybrid DMO/MFT Objects.
-Sets the range of time stamps the client needs for output.
-Specifies the earliest time stamp. The Media Foundation transform (MFT) will accept input until it can produce an output sample that begins at this time; or until it can produce a sample that ends at this time or later. If there is no lower bound, use the value MFT_OUTPUT_BOUND_LOWER_UNBOUNDED.
Specifies the latest time stamp. The MFT will not produce an output sample with time stamps later than this time. If there is no upper bound, use the value MFT_OUTPUT_BOUND_UPPER_UNBOUNDED.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| The media type is not set on one or more streams. |
This method can be used to optimize preroll, especially in formats that have gaps between time stamps, or formats where the data must start on a sync point, such as MPEG-2. Calling this method is optional, and implementation of this method by an MFT is optional. If the MFT does not implement the method, the return value is E_NOTIMPL.
If an MFT implements this method, it must limit its output data to the range of times specified by hnsLowerBound and hnsUpperBound. The MFT discards any input data that is not needed to produce output within this range. If the sample boundaries do not exactly match the range, the MFT should split the output samples, if possible. Otherwise, the output samples can overlap the range.
For example, suppose the output range is 100 to 150 milliseconds (ms), and the output format is video with each frame lasting 33 ms. A sample with a time stamp of 67 ms overlaps the range (67 + 33 = 100) and is produced as output. A sample with a time stamp of 66 ms is discarded (66 + 33 = 99). Similarly, a sample with a time stamp of 150 ms is produced as output, but a sample with a time stamp of 151 ms is discarded.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTSetOutputBounds. See Creating Hybrid DMO/MFT Objects.
-Sends an event to an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Not implemented. |
| Invalid stream number. |
| The media type is not set on one or more streams. |
| The pipeline should not propagate the event. |
An MFT can handle sending the event downstream, or it can let the pipeline do this, as indicated by the return value:
To send the event downstream, the MFT adds the event to the collection object that is provided by the client in the pEvents member of the
Events must be serialized with the samples that come before and after them. Attach the event to the output sample that follows the event. (The pipeline will process the event first, and then the sample.) If an MFT holds back one or more samples between calls to
If an MFT does not hold back samples and does not need to examine any events, it can return E_NOTIMPL.
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessEvent. See Creating Hybrid DMO/MFT Objects.
-Sends a message to the Media Foundation transform (MFT).
- The message to send, specified as a member of the
Message parameter. The meaning of this parameter depends on the message type.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream number. Applies to the |
| The media type is not set on one or more streams. |
Before calling this method, set the media types on all input and output streams.
The MFT might ignore certain message types. If so, the method returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessMessage. See Creating Hybrid DMO/MFT Objects.
-Delivers data to an input stream on this Media Foundation transform (MFT).
- Input stream identifier. To get the list of stream identifiers, call
Pointer to the
Reserved. Must be zero.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid argument. |
| Invalid stream identifier. |
| The input sample requires a valid sample duration. To set the duration, call IMFSample::SetSampleDuration. Some MFTs require that input samples have valid durations; some MFTs do not. |
| The input sample requires a time stamp. To set the time stamp, call IMFSample::SetSampleTime. Some MFTs require that input samples have valid time stamps; some MFTs do not. |
| The transform cannot process more input at this time. |
| The media type is not set on one or more streams. |
| The media type is not supported for DirectX Video Acceleration (DXVA). A DXVA-enabled decoder might return this error code. |
Note: If you are converting a DirectX Media Object (DMO) to an MFT, be aware that S_FALSE is not a valid return code for ProcessInput.
In most cases, if the method succeeds, the MFT stores the sample and holds a reference count on the
If the MFT already has enough input data to produce an output sample, it does not accept new input data, and ProcessInput returns
An exception to this rule is the
An MFT can process the input data in the ProcessInput method. However, most MFTs wait until the client calls ProcessOutput.
After the client has set valid media types on all of the streams, the MFT should always be in one of two states: Able to accept more input, or able to produce more output. It should never be in both states or neither state. An MFT should only accept as much input as it needs to generate at least one output sample, at which point ProcessInput returns
If an MFT encounters a non-fatal error in the input data, it can simply drop the data and attempt to recover when it gets more input data. To request more input data, the MFT returns
If MFT_UNIQUE_METHOD_NAMES is defined before including mftransform.h, this method is renamed MFTProcessInput. See Creating Hybrid DMO/MFT Objects.
-Generates output from the current input data.
-Bitwise OR of zero or more flags from the _MFT_PROCESS_OUTPUT_FLAGS enumeration.
Number of elements in the pOutputSamples array. The value must be at least 1.
Pointer to an array of
Receives a bitwise OR of zero or more flags from the _MFT_PROCESS_OUTPUT_STATUS enumeration.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ProcessOutput method was called on an asynchronous MFT that was not expecting this method call. |
| Invalid stream identifier in the dwStreamID member of one or more |
| The transform cannot produce output data until it receives more input data. |
| The format has changed on an output stream, or there is a new preferred format, or there is a new output stream. |
| You must set the media type on one or more streams of the MFT. |
Note: If you are converting a DirectX Media Object (DMO) to an MFT, be aware that S_FALSE is not a valid return code for ProcessOutput.
The size of the pOutputSamples array must be equal to or greater than the number of selected output streams. The number of selected output streams equals the total number of output streams minus the number of deselected streams. A stream is deselected if it has the
This method generates output samples and can also generate events. If the method succeeds, at least one of the following conditions is true:
If MFT_UNIQUE_METHOD_NAMES is defined before including Mftransform.h, this method is renamed MFTProcessOutput. See Creating Hybrid DMO/MFT Objects.
-Implemented by components that provide input trust authorities (ITAs). This interface is used to get the ITA for each of the component's streams.
-
Retrieves the input trust authority (ITA) for a specified stream.
-The stream identifier for which the ITA is being requested.
The interface identifier (IID) of the interface being requested. Currently the only supported value is IID_IMFInputTrustAuthority.
Receives a reference to the ITA's
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The ITA does not expose the requested interface. |
Implemented by components that provide output trust authorities (OTAs). Any Media Foundation transform (MFT) or media sink that is designed to work within the protected media path (PMP) and also sends protected content outside the Media Foundation pipeline must implement this interface.
The policy engine uses this interface to negotiate what type of content protection should be applied to the content. Applications do not use this interface directly.
-If an MFT supports
Gets the number of output trust authorities (OTAs) provided by this trusted output. Each OTA reports a single action.
-
Queries whether this output is a policy sink, meaning it handles the rights and restrictions required by the input trust authority (ITA).
-A trusted output is generally considered to be a policy sink if it does not pass the media content that it receives anywhere else; or, if it does pass the media content elsewhere, either it protects the content using some proprietary method such as encryption, or it sufficiently devalues the content so as not to require protection.
-Gets the number of output trust authorities (OTAs) provided by this trusted output. Each OTA reports a single action.
-Receives the number of OTAs.
If this method succeeds, it returns
Gets an output trust authority (OTA), specified by index.
- Zero-based index of the OTA to retrieve. To get the number of OTAs provided by this object, call
Receives a reference to the
If this method succeeds, it returns
Queries whether this output is a policy sink, meaning it handles the rights and restrictions required by the input trust authority (ITA).
-Receives a Boolean value. If TRUE, this object is a policy sink. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
A trusted output is generally considered to be a policy sink if it does not pass the media content that it receives anywhere else; or, if it does pass the media content elsewhere, either it protects the content using some proprietary method such as encryption, or it sufficiently devalues the content so as not to require protection.
-Limits the effective video resolution.
-This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Limits the effective video resolution.
-This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Queries whether the plug-in has any transient vulnerabilities at this time.
-Receives a Boolean value. If TRUE, the plug-in has no transient vulnerabilities at the moment and can receive protected content. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method provides a way for the plug-in to report temporary conditions that would cause the input trust authority (ITA) to distrust the plug-in. For example, if an EVR presenter is in windowed mode, it is vulnerable to GDI screen captures.
To disable screen capture in Direct3D, the plug-in must do the following:
Create the Direct3D device in full-screen exclusive mode.
Specify the D3DCREATE_DISABLE_PRINTSCREEN flag when you create the device. For more information, see IDirect3D9::CreateDevice in the DirectX documentation.
In addition, the graphics adapter must support the Windows Display Driver Model (WDDM) and the Direct3D extensions for Windows Vista (sometimes called D3D9Ex or D3D9L).
If these conditions are met, the presenter can return TRUE in the pYes parameter. Otherwise, it should return
The EVR calls this method whenever the device changes. If the plug-in returns
This method should be used only to report transient conditions. A plug-in that is never in a trusted state should not implement the
Queries whether the plug-in can limit the effective video resolution.
-Receives a Boolean value. If TRUE, the plug-in can limit the effective video resolution. Otherwise, the plug-in cannot limit the video resolution. If the method fails, the EVR treats the value as
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Constriction is a protection mechanism that limits the effective resolution of the video frame to a specified maximum number of pixels.
Video constriction can be implemented by either the mixer or the presenter.
If the method returns TRUE, the EVR might call
Limits the effective video resolution.
-Maximum number of source pixels that may appear in the final video image, in thousands of pixels. If the value is zero, the video is disabled. If the value is MAXDWORD (0xFFFFFFFF), video constriction is removed and the video may be rendered at full resolution.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method limits the effective resolution of the video image. The actual resolution on the target device might be higher, due to stretching the image.
The EVR might call this method at any time if the
Enables or disables the ability of the plug-in to export the video image.
-Boolean value. Specify TRUE to disable image exporting, or
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
An EVR plug-in might expose a way for the application to get a copy of the video frames. For example, the standard EVR presenter implements
If the plug-in supports image exporting, this method enables or disables it. Before this method has been called for the first time, the EVR assumes that the mechanism is enabled.
If the plug-in does not support image exporting, this method should return
While image exporting is disabled, any associated export method, such as GetCurrentImage, should return
Returns the device identifier supported by a video renderer component. This interface is implemented by mixers and presenters for the enhanced video renderer (EVR). If you replace either of these components, the mixer and presenter must report the same device identifier.
-
Returns the identifier of the video device supported by an EVR mixer or presenter.
-If a mixer or presenter uses Direct3D 9, it must return the value IID_IDirect3DDevice9 in pDeviceID. The EVR's default mixer and presenter both return this value. If you write a custom mixer or presenter, it can return some other value. However, the mixer and presenter must use matching device identifiers.
-
Returns the identifier of the video device supported by an EVR mixer or presenter.
-Receives the device identifier. Generally, the value is IID_IDirect3DDevice9.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
If a mixer or presenter uses Direct3D 9, it must return the value IID_IDirect3DDevice9 in pDeviceID. The EVR's default mixer and presenter both return this value. If you write a custom mixer or presenter, it can return some other value. However, the mixer and presenter must use matching device identifiers.
-Controls how the Enhanced Video Renderer (EVR) displays video.
The EVR presenter implements this interface. To get a reference to the interface, call
If you implement a custom presenter for the EVR, the presenter can optionally expose this interface as a service.
-Queries how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
-Gets or sets the clipping window for the video.
-There is no default clipping window. The application must set the clipping window.
-Gets or sets the border color for the video.
-The border color is used for areas where the enhanced video renderer (EVR) does not draw any video.
The border color is not used for letterboxing. To get the letterbox color, call IMFVideoProcessor::GetBackgroundColor.
-Gets or sets various video rendering settings.
-Queries whether the enhanced video renderer (EVR) is currently in full-screen mode.
-Gets the size and aspect ratio of the video, prior to any stretching by the video renderer.
-Receives the size of the native video rectangle. This parameter can be
Receives the aspect ratio of the video. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one of the parameters must be non- |
| The video renderer has been shut down. |
If no media types have been set on any video streams, the method succeeds but all parameters are set to zero.
You can set pszVideo or pszARVideo to
Gets the range of sizes that the enhanced video renderer (EVR) can display without significantly degrading performance or image quality.
-Receives the minimum ideal size. This parameter can be
Receives the maximum ideal size. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
You can set pszMin or pszMax to
Sets the source and destination rectangles for the video.
-Pointer to an
Specifies the destination rectangle. This parameter can be
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
The source rectangle defines which portion of the video is displayed. It is specified in normalized coordinates. For more information, see
The destination rectangle defines a rectangle within the clipping window where the video appears. It is specified in pixels, relative to the client area of the window. To fill the entire window, set the destination rectangle to {0, 0, width, height}, where width and height are dimensions of the window client area. The default destination rectangle is {0, 0, 0, 0}.
To update just one of these rectangles, set the other parameter to
Before setting the destination rectangle (prcDest), you must set the video window by calling
Gets the source and destination rectangles for the video.
-Pointer to an
Receives the current destination rectangle.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| One or more required parameters are |
| The video renderer has been shut down. |
Specifies how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
-Bitwise OR of one or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid flags. |
| The video renderer has been shut down. |
Queries how the enhanced video renderer (EVR) handles the aspect ratio of the source video.
-Receives a bitwise OR of one or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
Sets the source and destination rectangles for the video.
-Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| At least one parameter must be non- |
| The video renderer has been shut down. |
The source rectangle defines which portion of the video is displayed. It is specified in normalized coordinates. For more information, see
The destination rectangle defines a rectangle within the clipping window where the video appears. It is specified in pixels, relative to the client area of the window. To fill the entire window, set the destination rectangle to {0, 0, width, height}, where width and height are dimensions of the window client area. The default destination rectangle is {0, 0, 0, 0}.
To update just one of these rectangles, set the other parameter to
Before setting the destination rectangle (prcDest), you must set the video window by calling
Gets the clipping window for the video.
-Receives a handle to the window where the enhanced video renderer (EVR) will draw the video.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
There is no default clipping window. The application must set the clipping window.
-
Repaints the current video frame. Call this method whenever the application receives a WM_PAINT message.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The EVR cannot repaint the frame at this time. This error can occur while the EVR is switching between full-screen and windowed mode. The caller can safely ignore this error. |
| The video renderer has been shut down. |
Gets a copy of the current image being displayed by the video renderer.
Pointer to a BITMAPINFOHEADER structure. Before calling the method, set the biSize member to sizeof(BITMAPINFOHEADER).
Receives a reference to a buffer that contains a packed Windows device-independent bitmap (DIB). The caller must free the memory for the bitmap by calling CoTaskMemFree.
Receives the size of the buffer returned in pDib, in bytes.
Receives the time stamp of the captured image.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The content is protected and the license does not permit capturing the image. |
| The video renderer has been shut down. |
This method can be called at any time. However, calling the method too frequently degrades the video playback performance.
This method retrieves a copy of the final composited image, which includes any substreams, alpha-blended bitmap, aspect ratio correction, background color, and so forth.
In windowed mode, the bitmap is the size of the destination rectangle specified in
Sets the border color for the video.
-Specifies the border color as a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
By default, if the video window straddles two monitors, the enhanced video renderer (EVR) clips the video to one monitor and draws the border color on the remaining portion of the window. (To change the clipping behavior, call
The border color is not used for letterboxing. To change the letterbox color, call IMFVideoProcessor::SetBackgroundColor.
-Gets the border color for the video.
-Receives the border color, as a
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
The border color is used for areas where the enhanced video renderer (EVR) does not draw any video.
The border color is not used for letterboxing. To get the letterbox color, call IMFVideoProcessor::GetBackgroundColor.
-
Sets various preferences related to video rendering.
-Bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid flags. |
| The video renderer has been shut down. |
Gets various video rendering settings.
-Receives a bitwise OR of zero or more flags from the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
[This API is not supported and may be altered or unavailable in the future. ]
Sets or unsets full-screen rendering mode.
To implement full-screen playback, an application should simply resize the video window to cover the entire area of the monitor. Also set the window to be a topmost window, so that the application receives all mouse-click messages. For more information about topmost windows, see the documentation for the SetWindowPos function.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
The default EVR presenter implements full-screen mode using Direct3D exclusive mode.
If you use this method to switch to full-screen mode, set the application window to be a topmost window and resize the window to cover the entire monitor. This ensures that the application window receives all mouse-click messages. Also set the keyboard focus to the application window. When you switch out of full-screen mode, restore the window's original size and position.
By default, the cursor is still visible in full-screen mode. To hide the cursor, call ShowCursor.
The transition to and from full-screen mode occurs asynchronously. To get the current mode, call
Queries whether the enhanced video renderer (EVR) is currently in full-screen mode.
-Receives a Boolean value. If TRUE, the EVR is in full-screen mode. If
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The EVR is currently switching between full-screen and windowed mode. |
Represents a description of a video format.
-If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
Represents a description of a video format.
-If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
Represents a description of a video format.
-If the major type of a media type is
Applications should avoid using this interface except when a method or function requires an
[This API is not supported and may be altered or unavailable in the future. Instead, applications should set the
Retrieves an alternative representation of the media type.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is equivalent to
Instead of calling this method, applications should set the
Controls how the Enhanced Video Renderer (EVR) mixes video substreams. Applications can use this interface to control video mixing during playback.
The EVR mixer implements this interface. To get a reference to the interface, call
If you implement a custom mixer for the EVR, the mixer can optionally expose this interface as a service.
-
Sets the z-order of a video stream.
-Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Z-order value. The z-order of the reference stream must be zero. The maximum z-order value is the number of streams minus one.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The value of dwZ is larger than the maximum z-order value. |
| Invalid z-order for this stream. For the reference stream, dwZ must be zero. For all other streams, dwZ must be greater than zero. |
| Invalid stream identifier. |
The EVR draws the video streams in the order of their z-order values, starting with zero. The reference stream must be first in the z-order, and the remaining streams can be in any order.
-
Retrieves the z-order of a video stream.
-Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Receives the z-order value.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
Sets the position of a video stream within the composition rectangle.
-Identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The coordinates of the bounding rectangle given in pnrcOutput are not valid. |
| Invalid stream identifier. |
The mixer draws each video stream inside a bounding rectangle that is specified relative to the final video image. This bounding rectangle is given in normalized coordinates. For more information, see
The coordinates of the bounding rectangle must fall within the range [0.0, 1.0]. Also, the X and Y coordinates of the upper-left corner cannot exceed the X and Y coordinates of the lower-right corner. In other words, the bounding rectangle must fit entirely within the composition rectangle and cannot be flipped vertically or horizontally.
The following diagram shows how the EVR mixes substreams.
The output rectangle for the stream is specified by calling SetStreamOutputRect. The source rectangle is specified by calling
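The constraints on the bounding rectangle can be sketched as a standalone validation routine. This uses a local stand-in for the MFVideoNormalizedRect structure and illustrates the rules described above; it is not the renderer's actual check:

```cpp
#include <cassert>

// Local stand-in for the MFVideoNormalizedRect structure (float fields
// in normalized [0.0, 1.0] coordinates).
struct NormalizedRect { float left, top, right, bottom; };

// Returns true if rc satisfies the documented constraints: all
// coordinates within [0,1], and the upper-left corner not past the
// lower-right corner (no horizontal or vertical flip).
bool IsValidOutputRect(const NormalizedRect& rc)
{
    auto inRange = [](float v) { return v >= 0.0f && v <= 1.0f; };
    return inRange(rc.left) && inRange(rc.top) &&
           inRange(rc.right) && inRange(rc.bottom) &&
           rc.left <= rc.right && rc.top <= rc.bottom;
}
```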
Retrieves the position of a video stream within the composition rectangle.
-The identifier of the stream. For the EVR media sink, the stream identifier is defined when the
Pointer to an
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid stream identifier. |
Controls preferences for video deinterlacing.
The default video mixer for the Enhanced Video Renderer (EVR) implements this interface.
To get a reference to the interface, call
Gets or sets the current preferences for video deinterlacing.
-Sets the preferences for video deinterlacing.
-Bitwise OR of zero or more flags from the
If this method succeeds, it returns
Gets the current preferences for video deinterlacing.
-Receives a bitwise OR of zero or more flags from the
If this method succeeds, it returns
Maps a position on an input video stream to the corresponding position on an output video stream.
To obtain a reference to this interface, call
Maps output image coordinates to input image coordinates. This method provides the reverse transformation for components that map coordinates on the input image to different coordinates on the output image.
-X-coordinate of the output image, normalized to the range [0...1].
Y-coordinate of the output image, normalized to the range [0...1].
Output stream index for the coordinate mapping.
Input stream index for the coordinate mapping.
Receives the mapped x-coordinate of the input image, normalized to the range [0...1].
Receives the mapped y-coordinate of the input image, normalized to the range [0...1].
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The video renderer has been shut down. |
In the following diagram, R(dest) is the destination rectangle for the video. You can obtain this rectangle by calling
The position of P relative to R(dest) in normalized coordinates is calculated as follows:
float xn = float(x + 0.5) / widthDest;
float yn = float(y + 0.5) / heightDest;
where widthDest and heightDest are the width and height of R(dest) in pixels.
To calculate the position of P relative to R1, call MapOutputCoordinateToInputStream as follows:
float x1 = 0, y1 = 0;
hr = pMap->MapOutputCoordinateToInputStream(xn, yn, 0, dwInputStreamIndex, &x1, &y1);
The values returned in x1 and y1 are normalized to the range [0...1]. To convert back to pixel coordinates, scale these values by the size of R1:
int scaledx = int(floor(x1 * widthR1));
int scaledy = int(floor(y1 * heightR1));
Note that x1 and y1 might fall outside the range [0...1] if P lies outside of R1.
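A minimal standalone sketch of the pixel/normalized conversions used above; the extents and pixel values in the test are illustrative, not from the API:

```cpp
#include <cassert>
#include <cmath>

// Maps a pixel coordinate to the normalized [0...1] range, sampling at
// the pixel center, as in the xn/yn computation above.
float Normalize(int coord, int extent)
{
    return (float(coord) + 0.5f) / float(extent);
}

// Converts a normalized coordinate back to a pixel coordinate within a
// rectangle of the given extent, as in the scaledx/scaledy computation.
int ToPixels(float normalized, int extent)
{
    return int(std::floor(normalized * float(extent)));
}
```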
-Represents a video presenter. A video presenter is an object that receives video frames, typically from a video mixer, and presents them in some way, typically by rendering them to the display. The enhanced video renderer (EVR) provides a default video presenter, and applications can implement custom presenters.
The video presenter receives video frames as soon as they are available from upstream. The video presenter is responsible for presenting frames at the correct time and for synchronizing with the presentation clock.
-Configures the Video Processor MFT.
-This interface controls how the Video Processor MFT generates output frames.
-Sets the border color.
-Sets the source rectangle. The source rectangle is the portion of the input frame that is blitted to the destination surface.
-See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Sets the destination rectangle. The destination rectangle is the portion of the output surface where the source rectangle is blitted.
-See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Specifies whether to flip the video image.
-Specifies whether to rotate the video to the correct orientation.
-The original orientation of the video is specified by the
If eRotation is
Specifies the amount of downsampling to perform on the output.
-Sets the border color.
-A reference to an
If this method succeeds, it returns
Sets the source rectangle. The source rectangle is the portion of the input frame that is blitted to the destination surface.
-A reference to a
If this method succeeds, it returns
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Sets the destination rectangle. The destination rectangle is the portion of the output surface where the source rectangle is blitted.
-A reference to a
If this method succeeds, it returns
See Video Processor MFT for info regarding source and destination rectangles in the Video Processor MFT.
-Specifies whether to flip the video image.
-An
If this method succeeds, it returns
Specifies whether to rotate the video to the correct orientation.
-A
If this method succeeds, it returns
The original orientation of the video is specified by the
If eRotation is
Specifies the amount of downsampling to perform on the output.
-The sampling size. To disable constriction, set this parameter to
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Configures the Video Processor MFT.
-This interface controls how the Video Processor MFT generates output frames.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Overrides the rotation operation that is performed in the video processor.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Returns the list of supported effects in the currently configured video processor.
-[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Overrides the rotation operation that is performed in the video processor.
-Rotation value in degrees. Typically, you can only use values from the
If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Enables effects that were implemented with IDirectXVideoProcessor::VideoProcessorBlt.
-If this method succeeds, it returns
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Returns the list of supported effects in the currently configured video processor.
-A combination of
If this method succeeds, it returns
Sets a new mixer or presenter for the Enhanced Video Renderer (EVR).
Both the EVR media sink and the DirectShow EVR filter implement this interface. To get a reference to the interface, call QueryInterface on the media sink or the filter. Do not use
The EVR activation object returned by the
Sets a new mixer or presenter for the enhanced video renderer (EVR).
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Either the mixer or the presenter is invalid. |
| The mixer and presenter cannot be replaced in the current state. (EVR media sink.) |
| The video renderer has been shut down. |
| One or more input pins are connected. (DirectShow EVR filter.) |
Call this method directly after creating the EVR, before you do any of the following:
Call
Call
Connect any pins on the EVR filter, or set any media types on EVR media sink.
The EVR filter returns VFW_E_WRONG_STATE if any of the filter's pins are connected. The EVR media sink returns
The device identifiers for the mixer and the presenter must match. The
If the video renderer is in the protected media path (PMP), the mixer and presenter objects must be certified safe components and pass any trust authority verification that is being enforced. Otherwise, this method will fail.
-Allocates video samples for a video media sink.
The stream sinks on the enhanced video renderer (EVR) expose this interface as a service. To obtain a reference to the interface, call
Specifies the Direct3D device manager for the video media sink to use.
-The media sink uses the Direct3D device manager to obtain a reference to the Direct3D device, which it uses to allocate Direct3D surfaces. The device manager enables multiple objects in the pipeline (such as a video renderer and a video decoder) to share the same Direct3D device.
-
Specifies the Direct3D device manager for the video media sink to use.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
The media sink uses the Direct3D device manager to obtain a reference to the Direct3D device, which it uses to allocate Direct3D surfaces. The device manager enables multiple objects in the pipeline (such as a video renderer and a video decoder) to share the same Direct3D device.
-
Releases all of the video samples that have been allocated.
-The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Specifies the number of samples to allocate and the media type for the samples.
-Number of samples to allocate.
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| Invalid media type. |
Gets a video sample from the allocator.
-Receives a reference to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The allocator was not initialized. Call |
| No samples are available. |
Enables an application to track video samples allocated by the enhanced video renderer (EVR).
The stream sinks on the EVR expose this interface as a service. To get a reference to the interface, call the
Sets the callback object that receives notification whenever a video sample is returned to the allocator.
-To get a video sample from the allocator, call the
The allocator holds at most one callback reference. Calling this method again replaces the previous callback reference.
-Sets the callback object that receives notification whenever a video sample is returned to the allocator.
-A reference to the
If this method succeeds, it returns
To get a video sample from the allocator, call the
The allocator holds at most one callback reference. Calling this method again replaces the previous callback reference.
-Gets the number of video samples that are currently available for use.
-Receives the number of available samples.
If this method succeeds, it returns
To get a video sample from the allocator, call the
Allocates video samples that contain Microsoft Direct3D 11 texture surfaces.
-You can use this interface to allocate Direct3D 11 video samples, rather than allocate the texture surfaces and media samples directly. To get a reference to this interface, call the
To allocate video samples, perform the following steps:
Initializes the video sample allocator object.
-The initial number of samples to allocate.
The maximum number of samples to allocate.
A reference to the
A reference to the
If this method succeeds, it returns
The callback for the
Called when a video sample is returned to the allocator.
-If this method succeeds, it returns
To get a video sample from the allocator, call the
The callback for the
Called when allocator samples are released for pruning by the allocator, or when the allocator is removed.
-The sample to be pruned.
If this method succeeds, it returns
Completes an asynchronous request to register the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
-Call this method when the
Registers the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
-A reference to the
A reference to the
If this method succeeds, it returns
Each source node in the topology defines one branch of the topology. The branch includes every topology node that receives data from that node. An application can assign each branch of a topology its own work queue and then associate those work queues with MMCSS tasks.
To use this method, perform the following steps.
The BeginRegisterTopologyWorkQueuesWithMMCSS method is asynchronous. When the operation completes, the callback object's
To unregister the topology work queues from MMCSS, call
Completes an asynchronous request to register the topology work queues with the Multimedia Class Scheduler Service (MMCSS).
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Unregisters the topology work queues from the Multimedia Class Scheduler Service (MMCSS).
-Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister the topology work queues from the Multimedia Class Scheduler Service (MMCSS).
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class for a specified branch of the current topology.
-Identifies the work queue assigned to this topology branch. The application defines this value by setting the
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| There is no work queue with the specified identifier. |
| The pwszClass buffer is too small to receive the class name. |
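The pwszClass/pcchClass pair follows the common two-call buffer pattern: call once with a null buffer to learn the required size, then allocate and call again. A hedged sketch of that pattern, where QueryClass is a hypothetical stand-in for the real method and "Playback" is an illustrative class name:

```cpp
#include <cassert>
#include <string>
#include <vector>
#include <cwchar>

// QueryClass is a hypothetical stand-in for a size-querying call such as
// GetTopologyWorkQueueMMCSSClass: it reports the required character count
// (including the terminating null) and copies the name only when the
// caller's buffer is large enough.
bool QueryClass(wchar_t* buffer, unsigned* size)
{
    static const std::wstring kClass = L"Playback"; // illustrative value
    unsigned needed = unsigned(kClass.size()) + 1;  // plus terminating null
    if (buffer == nullptr || *size < needed) { *size = needed; return false; }
    std::wcsncpy(buffer, kClass.c_str(), needed);
    *size = needed;
    return true;
}

// Typical two-call pattern: first call to learn the size, second to fetch.
std::wstring FetchClassName()
{
    unsigned size = 0;
    QueryClass(nullptr, &size);      // first call reports the required size
    std::vector<wchar_t> buf(size);
    QueryClass(buf.data(), &size);   // second call fills the buffer
    return std::wstring(buf.data());
}
```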
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier for a specified branch of the current topology.
-Identifies the work queue assigned to this topology branch. The application defines this value by setting the
Receives the task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Associates a platform work queue with a Multimedia Class Scheduler Service (MMCSS) task.
- The platform work queue to register with MMCSS. See Work Queue Identifiers. To register all of the standard work queues to the same MMCSS task, set this parameter to
The name of the MMCSS task to be performed.
The unique task identifier. To obtain a new task identifier, set this value to zero.
A reference to the
A reference to the
If this method succeeds, it returns
This method is asynchronous. When the operation completes, the callback object's
To unregister the work queue from the MMCSS class, call
Completes an asynchronous request to associate a platform work queue with a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The unique task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this function when the
To unregister the work queue from the MMCSS class, call
Unregisters a platform work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-Platform work queue to register with MMCSS. See
Pointer to the
Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
This method is asynchronous. When the operation completes, the callback object's
Completes an asynchronous request to unregister a platform work queue from a Multimedia Class Scheduler Service (MMCSS) task.
-Pointer to the
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Call this method when the
Retrieves the Multimedia Class Scheduler Service (MMCSS) class for a specified platform work queue.
-Platform work queue to query. See
Pointer to a buffer that receives the name of the MMCSS class. This parameter can be
On input, specifies the size of the pwszClass buffer, in characters. On output, receives the required size of the buffer, in characters. The size includes the terminating null character.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
| The pwszClass buffer is too small to receive the class name. |
Retrieves the Multimedia Class Scheduler Service (MMCSS) task identifier for a specified platform work queue.
-Platform work queue to query. See
Receives the task identifier.
The method returns an
Return code | Description |
---|---|
| The method succeeded. |
Extends the
This interface allows applications to control both platform and topology work queues.
The
Retrieves the Multimedia Class Scheduler Service (MMCSS) string associated with the given topology work queue.
-The id of the topology work queue.
Pointer to the buffer the work queue's MMCSS task id will be copied to.
If this method succeeds, it returns
Registers a platform work queue with Multimedia Class Scheduler Service (MMCSS) using the specified class and task id.
-The id of one of the standard platform work queues.
The MMCSS class which the work queue should be registered with.
The task id which the work queue should be registered with. If dwTaskId is 0, a new MMCSS bucket will be created.
The priority.
Standard callback used for async operations in Media Foundation.
Standard state used for async operations in Media Foundation.
If this method succeeds, it returns
Gets the Multimedia Class Scheduler Service (MMCSS) priority associated with the specified platform work queue.
-Topology work queue id for which the info will be returned.
Pointer to a buffer allocated by the caller that the work queue's MMCSS task id will be copied to.
Contains an image that is stored as metadata for a media source. This structure is used as the data item for the WM/Picture metadata attribute.
-The WM/Picture attribute is defined in the Windows Media Format SDK. The attribute contains a picture related to the content, such as album art.
To get this attribute from a media source, call
Image data.
This format differs from the WM_PICTURE structure used in the Windows Media Format SDK. The WM_PICTURE structure contains internal references to two strings and the image data. If the structure is copied, these references become invalid. The
Contains synchronized lyrics stored as metadata for a media source. This structure is used as the data item for the WM/Lyrics_Synchronised metadata attribute.
-The WM/Lyrics_Synchronised attribute is defined in the Windows Media Format SDK. The attribute contains lyrics synchronized to times in the source file.
To get this attribute from a media source, call
Null-terminated wide-character string that contains a description.
Lyric data. The format of the lyric data is described in the Windows Media Format SDK documentation.
This format differs from the WM_SYNCHRONISED_LYRICS structure used in the Windows Media Format SDK. The WM_SYNCHRONISED_LYRICS structure contains internal references to two strings and the lyric data. If the structure is copied, these references become invalid. The
Specifies the format of time stamps in the lyrics. This member is equivalent to the bTimeStampFormat member in the WM_SYNCHRONISED_LYRICS structure. The WM_SYNCHRONISED_LYRICS structure is documented in the Windows Media Format SDK.
Specifies the type of synchronized strings that are in the lyric data. This member is equivalent to the bContentType member in the WM_SYNCHRONISED_LYRICS structure.
Size, in bytes, of the lyric data.
Describes the indexing configuration for a stream and type of index.
-
Number of bytes used for each index entry. If the value is MFASFINDEXER_PER_ENTRY_BYTES_DYNAMIC, the index entries have variable size.
Optional text description of the index.
Indexing interval. The units of this value depend on the index type. A value of MFASFINDEXER_NO_FIXED_INTERVAL indicates that there is no fixed indexing interval.
Specifies an index for the ASF indexer object.
-The index object of an ASF file can contain a number of distinct indexes. Each index is identified by the type of index and the stream number. No ASF index object can contain more than one index for a particular combination of stream number and index type.
-The type of index. Currently this value must be GUID_NULL, which specifies time-based indexing.
The stream number to which this structure applies.
Contains statistics about the progress of the ASF multiplexer.
-Use
Number of frames written by the ASF multiplexer.
Number of frames dropped by the ASF multiplexer.
Describes a 4:4:4:4 Y'Cb'Cr' sample.
-Cr (chroma difference) value.
Cb (chroma difference) value.
Y (luma) value.
Alpha value.
Specifies the buffering parameters for a network byte stream.
-Size of the file, in bytes. If the total size is unknown, set this member to -1.
Size of the playable media data in the file, excluding any trailing data that is not useful for playback. If this value is unknown, set this member to -1.
Pointer to an array of
The number of elements in the prgBuckets array.
Amount of data to buffer from the network, in 100-nanosecond units. This value is in addition to the buffer windows defined in the prgBuckets member.
Amount of additional data to buffer when seeking, in 100-nanosecond units. This value reflects the fact that downloading must start from the previous key frame before the seek point. If the value is unknown, set this member to zero.
The playback duration of the file, in 100-nanosecond units. If the duration is unknown, set this member to zero.
Playback rate.
Specifies a range of bytes.
-The offset, in bytes, of the start of the range.
The offset, in bytes, of the end of the range.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A transform describing the location of a camera relative to other cameras or an established external reference.
-The Position value should be expressed in real-world coordinates in units of meters. The coordinate system of both position and orientation should be right-handed Cartesian as shown in the following diagram.
Important: The position and orientation are expressed as transforms toward the reference frame or origin. For example, a Position value of {-5, 0, 0} means that the origin is 5 meters to the left of the sensor, and therefore the sensor is 5 meters to the right of the origin. A sensor that is positioned 2 meters above the origin should specify a Position of {0, -2, 0} because that is the translation from the sensor to the origin.
If the sensor is aligned with the origin, the rotation is the identity quaternion and the forward vector is along the -Z axis {0, 0, -1}. If the sensor is rotated +30 degrees around the Y axis from the origin, then the Orientation value should be a rotation of -30 degrees around the Y axis, because it represents the rotation from the sensor to the origin.
-A reference
The transform position.
The transform rotation.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Describes the location of a camera relative to other cameras or an established external reference.
-The number of transforms in the CalibratedTransforms array.
The array of transforms in the extrinsic data.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a polynomial lens distortion model.
-The first radial distortion coefficient.
The second radial distortion coefficient.
The third radial distortion coefficient.
The first tangential distortion coefficient.
The second tangential distortion coefficient.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a pinhole camera model.
-For square pixels, the X and Y fields of the FocalLength should be the same.
The PrincipalPoint field is expressed in pixels, not in normalized coordinates. The origin [0,0] is the bottom, left corner of the image.
-The focal length of the camera.
The principal point of the camera.
This structure contains blob information for the EV compensation feedback for the photo captured.
-A KSCAMERA_EXTENDEDPROP_EVCOMP_XXX step flag.
The EV compensation value in units of the step specified.
The CapturedMetadataISOGains structure describes the blob format for MF_CAPTURE_METADATA_ISO_GAINS.
-The CapturedMetadataISOGains structure only describes the blob format for the MF_CAPTURE_METADATA_ISO_GAINS attribute. The metadata item structure for ISO gains (KSCAMERA_METADATA_ITEMHEADER + ISO gains metadata payload) is up to the driver and must be 8-byte aligned.
-This structure describes the blob format for the MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute.
-The MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute contains the white balance gains applied to R, G, B by the sensor or ISP when the preview frame was captured. These gains are unitless.
The CapturedMetadataWhiteBalanceGains structure only describes the blob format for the MF_CAPTURE_METADATA_WHITEBALANCE_GAINS attribute. The metadata item structure for white balance gains (KSCAMERA_METADATA_ITEMHEADER + white balance gains metadata payload) is up to the driver and must be 8-byte aligned.
-The R value of the blob.
The G value of the blob.
The B value of the blob.
Defines the properties of a clock.
- The interval at which the clock correlates its clock time with the system time, in 100-nanosecond units. If the value is zero, the correlation is made whenever the
The unique identifier of the underlying device that provides the time. If two clocks have the same unique identifier, they are based on the same device. If the underlying device is not shared between two clocks, the value can be GUID_NULL.
A bitwise OR of flags from the
The clock frequency in Hz. A value of MFCLOCK_FREQUENCY_HNS means that the clock has a frequency of 10 MHz (100-nanosecond ticks), which is the standard MFTIME time unit in Media Foundation. If the
The amount of inaccuracy that may be present on the clock, in parts per billion (ppb). For example, an inaccuracy of 50 ppb means the clock might drift up to 50 seconds per billion seconds of real time. If the tolerance is not known, the value is MFCLOCK_TOLERANCE_UNKNOWN. This constant is equal to 50 parts per million (ppm).
The amount of jitter that may be present, in 100-nanosecond units. Jitter is the variation in the frequency due to sampling the underlying clock. Jitter does not include inaccuracies caused by drift, which is reflected in the value of dwClockTolerance.
For clocks based on a single device, the minimum jitter is the length of the tick period (the inverse of the frequency). For example, if the frequency is 10 Hz, the jitter is 0.1 second, which is 1,000,000 in MFTIME units. This value reflects the fact that the clock might be sampled just before the next tick, resulting in a clock time that is one period less than the actual time. If the frequency is greater than 10 MHz, the jitter should be set to 1 (the minimum value).
If a clock's underlying hardware device does not directly time stamp the incoming data, the jitter also includes the time required to dispatch the driver's interrupt service routine (ISR). In that case, the expected jitter should include the following values:
Value | Meaning |
---|---|
| Jitter due to time stamping during the device driver's ISR. |
| Jitter due to time stamping during the deferred procedure call (DPC) processing. |
| Jitter due to dropping to normal thread execution before time stamping. |
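The minimum-jitter rule above (one tick period expressed in 100-nanosecond MFTIME units, clamped to 1 above 10 MHz) can be sketched as:

```cpp
#include <cassert>
#include <cstdint>

// Minimum jitter for a clock sampled at the given frequency, expressed in
// MFTIME units (100-nanosecond ticks): one tick period, clamped to 1 once
// the frequency reaches 10 MHz, the resolution of MFTIME itself.
uint64_t MinClockJitterHns(uint64_t frequencyHz)
{
    const uint64_t kHnsPerSecond = 10000000; // 100-ns units per second
    if (frequencyHz >= kHnsPerSecond) return 1;
    return kHnsPerSecond / frequencyHz;
}
```

For a 10 Hz clock this yields 1,000,000 MFTIME units (0.1 second), matching the example in the text.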
Contains information about the data that you want to provide as input to a protection system function.
-The identifier of the function that you need to run. This value is defined by the implementation of the protection system.
The size of the private data that the implementation of the security processor reserved. You can determine this value by calling the
The size of the data provided as input to the protection system function that you want to run.
Reserved.
The data to provide as input to the protection system function.
If the value of the PrivateDataByteCount member is greater than 0, bytes 0 through PrivateDataByteCount - 1 are reserved for use by the independent hardware vendor (IHV). Bytes PrivateDataByteCount through HWProtectionDataByteCount + PrivateDataByteCount - 1 contain the input data for the protection system function.
The protection system specification defines the format and size of the DRM function.
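A sketch of the input-buffer layout described above, with the first PrivateDataByteCount bytes left for the IHV and the function input following them. PackInput is an illustrative helper, not part of the API:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Builds the input byte array: bytes [0, privateBytes) are left zeroed
// for the IHV, and the caller's payload occupies the bytes from
// privateBytes through privateBytes + payload.size() - 1, matching the
// layout in the remarks.
std::vector<uint8_t> PackInput(size_t privateBytes,
                               const std::vector<uint8_t>& payload)
{
    std::vector<uint8_t> buf(privateBytes + payload.size(), 0);
    if (!payload.empty())
        std::memcpy(buf.data() + privateBytes, payload.data(), payload.size());
    return buf;
}
```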
Contains information about the data you received as output from a protection system function.
-The size of the private data that the implementation of the security processor reserves, in bytes. You can determine this value by calling the
The maximum size of data that the independent hardware vendor (IHV) can return in the output buffer, in bytes.
The size of the data that the IHV wrote to the output buffer, in bytes.
The result of the protection system function.
The number of 100-nanosecond units spent transporting the data.
The number of 100-nanosecond units spent running the protection system function.
The output of the protection system function.
If the value of the PrivateDataByteCount member is greater than 0, bytes 0 through PrivateDataByteCount - 1 are reserved for IHV use. Bytes PrivateDataByteCount through MaxHWProtectionDataByteCount + PrivateDataByteCount - 1 contain the region of the array into which the driver should return the output data from the protection system function.
The protection system specification defines the format and size of the function.
Advises the secure processor of the Multimedia Class Scheduler service (MMCSS) parameters so that real-time tasks can be scheduled at the expected priority.
-The identifier for the MMCSS task.
The name of the MMCSS task.
The base priority of the thread that runs the MMCSS task.
The
This structure is identical to the DirectShow
Major type
Subtype
If TRUE, samples are of a fixed size. This field is informational only. For audio, it is generally set to TRUE. For video, it is usually TRUE for uncompressed video and
If TRUE, samples are compressed using temporal (interframe) compression. (A value of TRUE indicates that not all frames are key frames.) This field is informational only.
Size of the sample in bytes. For compressed data, the value can be zero.
Format type | Format structure |
---|---|
| DVINFO |
| |
| |
| None. |
| |
| |
| |
Not used. Set to
Size of the format block of the media type.
Pointer to the format structure. The structure type is specified by the formattype member. The format structure must be present, unless formattype is GUID_NULL or FORMAT_None.
The FaceCharacterization structure describes the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute.
-The MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute contains the blink and facial expression state for the face ROIs identified in MF_CAPTURE_METADATA_FACEROIS. For a device that does not support blink or facial expression detection, this attribute should be omitted.
The facial expressions that can be detected are defined as follows:
#define MF_METADATAFACIALEXPRESSION_SMILE 0x00000001
The FaceCharacterizationBlobHeader and FaceCharacterization structures only describe the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute. The metadata item structure for the face characterizations (KSCAMERA_METADATA_ITEMHEADER + face characterizations metadata payload) is up to the driver and must be 8-byte aligned.
-0 indicates no blink for the left eye, 100 indicates definite blink for the left eye (0 - 100).
0 indicates no blink for the right eye, 100 indicates definite blink for the right eye (0 - 100).
A defined facial expression value.
0 indicates the defined facial expression was not detected, 100 indicates the defined facial expression was definitely detected (0 - 100).
The FaceCharacterizationBlobHeader structure describes the size and count information of the blob format for the MF_CAPTURE_METADATA_FACEROICHARACTERIZATIONS attribute.
-Size of this header + all following FaceCharacterization structures.
Number of FaceCharacterization structures in the blob. Must match the number of FaceRectInfo structures in FaceRectInfoBlobHeader.
The FaceRectInfo structure describes the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute.
-The MF_CAPTURE_METADATA_FACEROIS attribute contains the face rectangle info detected by the driver. By default, the driver\MFT0 should provide the face information on the preview stream. If the driver advertises the capability on other streams, the driver\MFT must provide the face info on the corresponding streams if the application enables face detection on those streams. When video stabilization is enabled on the driver, the face information should be provided post-video stabilization. The dominant face must be the first FaceRectInfo in the blob.
The FaceRectInfoBlobHeader and FaceRectInfo structures only describe the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute. The metadata item structure for face ROIs (KSCAMERA_METADATA_ITEMHEADER + face ROIs metadata payload) is up to the driver and must be 8-byte aligned.
-Relative coordinates on the frame that face detection is running (Q31 format).
Confidence level of the region being a face (0 - 100).
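The Q31 coordinates mentioned above are fixed-point values with 31 fractional bits. A minimal sketch of converting them to floats and mapping them onto a concrete frame size; the helper names and the 1920x1080 frame are illustrative, not from the API:

```python
def q31_to_float(raw: int) -> float:
    """Convert a signed Q31 fixed-point value to a float in [-1.0, 1.0)."""
    return raw / float(1 << 31)

def to_pixels(q31_x: int, q31_y: int, width: int, height: int):
    """Map a normalized Q31 coordinate pair onto a frame of the given size."""
    return q31_to_float(q31_x) * width, q31_to_float(q31_y) * height

# A raw value of 2^30 represents 0.5, i.e. the center of the frame axis:
center = to_pixels(1 << 30, 1 << 30, 1920, 1080)
```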
The FaceRectInfoBlobHeader structure describes the size and count information of the blob format for the MF_CAPTURE_METADATA_FACEROIS attribute.
-Size of this header + all following FaceRectInfo structures.
Number of FaceRectInfo structures in the blob.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A vector with two components.
-X component of the vector.
Y component of the vector.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A vector with three components.
-X component of the vector.
Y component of the vector.
Z component of the vector.
Contains coefficients used to transform multichannel audio into a smaller number of audio channels. This process is called fold-down.
-To specify this information in the media type, set the
The ASF media source supports fold-down from six channels (5.1 audio) to two channels (stereo). It gets the information from the g_wszFold6To2Channels3 attribute in the ASF header. This attribute is documented in the Windows Media Format SDK documentation.
-Size of the structure, in bytes.
Number of source channels.
Number of destination channels.
Specifies the assignment of audio channels to speaker positions in the transformed audio. This member is a bitwise OR of flags that define the speaker positions. For a list of valid flags, see
Array that contains the fold-down coefficients. The number of coefficients is cSrcChannels × cDstChannels. If the number of coefficients is less than the size of the array, the remaining elements in the array are ignored. For more information about how the coefficients are applied, see Windows Media Audio Professional Codec Features.
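As a sketch of how such coefficients apply, each destination channel is a weighted sum of the source channels. The row-major [source][destination] layout below is an assumption for illustration; the actual ordering is defined by the codec documentation:

```python
# Sketch: fold-down as a matrix product over one frame of samples.
def fold_down(frame, coeffs, n_src, n_dst):
    """frame: one sample per source channel; returns one sample per
    destination channel, weighted by the (assumed row-major) coefficients."""
    assert len(frame) == n_src and len(coeffs) >= n_src * n_dst
    return [sum(frame[s] * coeffs[s * n_dst + d] for s in range(n_src))
            for d in range(n_dst)]

# Illustrative 2-to-1 (stereo-to-mono) fold-down with equal weights:
mono = fold_down([0.4, 0.8], [0.5, 0.5], 2, 1)
```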
The HistogramBlobHeader structure describes the blob size and the number of histograms in the blob for the MF_CAPTURE_METADATA_HISTOGRAM attribute.
-Size of the entire histogram blob in bytes.
Number of histograms in the blob. Each histogram is identified by a HistogramHeader.
The HistogramDataHeader structure describes the blob format for the MF_CAPTURE_METADATA_HISTOGRAM attribute.
-Size in bytes of this header + all following histogram data.
Mask of the color channel for the histogram data.
1 if linear, 0 if nonlinear.
The HistogramGrid structure describes the blob format for MF_CAPTURE_METADATA_HISTOGRAM.
-Width of the sensor output that the histogram is collected from.
Height of the sensor output that the histogram is collected from.
Absolute coordinates of the region on the sensor output that the histogram is collected for.
The HistogramHeader structure describes the blob format for MF_CAPTURE_METADATA_HISTOGRAM.
-The MF_CAPTURE_METADATA_HISTOGRAM attribute contains a histogram when a preview frame is captured.
For the ChannelMasks field, the following bitmasks indicate the available channels in the histogram:
#define MF_HISTOGRAM_CHANNEL_Y 0x00000001
#define MF_HISTOGRAM_CHANNEL_R 0x00000002
#define MF_HISTOGRAM_CHANNEL_G 0x00000004
#define MF_HISTOGRAM_CHANNEL_B 0x00000008
#define MF_HISTOGRAM_CHANNEL_Cb 0x00000010
#define MF_HISTOGRAM_CHANNEL_Cr 0x00000020
Each blob can contain multiple histograms collected from different regions or different color spaces of the same frame. Each histogram in the blob is identified by its own HistogramHeader, and each histogram has its own associated region and sensor output size. For a full-frame histogram, the region will match the sensor output size specified in HistogramGrid.
Histogram data for all available channels is grouped under one histogram. Histogram data for each channel is identified by a HistogramDataHeader immediately preceding the data. ChannelMasks indicates which channels have histogram data, expressed as the bitwise OR of the supported MF_HISTOGRAM_CHANNEL_* bitmasks defined above. ChannelMask indicates which channel a given block of data is for, identified by exactly one of the MF_HISTOGRAM_CHANNEL_* bitmasks.
Histogram data is an array of ULONG values, each entry representing the number of pixels falling under the set of tonal values categorized by that bin. The data in the array runs from bin 0 to bin N-1, where N is the number of bins in the histogram, for example, HistogramBlobHeader.Bins.
For Windows 10, if KSPROPERTY_CAMERACONTROL_EXTENDED_HISTOGRAM is supported, at minimum a full-frame histogram with the Y channel must be provided, and it must be the first histogram in the histogram blob.
Note that HistogramBlobHeader, HistogramHeader, HistogramDataHeader and histogram data only describe the blob format for the MF_CAPTURE_METADATA_HISTOGRAM attribute. The metadata item structure for the histogram (KSCAMERA_METADATA_ITEMHEADER + all histogram metadata payload) is up to the driver and must be 8-byte aligned.
-Size of this header + (HistogramDataHeader + histogram data following) * number of channels available.
Number of bins in the histogram.
Color space that the histogram is collected from
Masks of the color channels that the histogram is collected for.
Grid that the histogram is collected from.
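Putting the layout together, the byte size of one histogram entry follows from its header fields. This sketch assumes ULONG is 4 bytes and that HistogramDataHeader is three ULONGs (Size, ChannelMask, Linear); the 28-byte HistogramHeader size in the example is likewise an assumption:

```python
# Sketch: sizing one histogram entry from its header fields.
# Assumptions (not stated above): ULONG = 4 bytes;
# sizeof(HistogramDataHeader) = 12 (Size, ChannelMask, Linear).
DATA_HEADER_SIZE = 12

def histogram_entry_size(header_size: int, bins: int, channel_masks: int) -> int:
    """Header + (data header + one ULONG per bin) for each set channel bit."""
    n_channels = bin(channel_masks).count("1")
    return header_size + n_channels * (DATA_HEADER_SIZE + bins * 4)

# Y, R, G and B channels (mask 0x0F), 256 bins, assumed 28-byte HistogramHeader:
size = histogram_entry_size(28, 256, 0x0F)
```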
Describes an action requested by an output trust authority (OTA). The request is sent to an input trust authority (ITA).
-Specifies the action as a member of the
Pointer to a buffer that contains a ticket object, provided by the OTA.
Size of the ticket object, in bytes.
Contains parameters for the
Specifies the buffering requirements of a file.
-This structure describes the buffering requirements for content encoded at the bit rate specified in the dwBitrate member. The msBufferWindow member indicates how much data should be buffered before starting playback. The size of the buffer in bytes is msBufferWindow × dwBitrate / 8000.
-Bit rate, in bits per second.
Size of the buffer window, in milliseconds.
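The buffer-size formula in the remarks can be checked with a one-liner (the function name is illustrative):

```python
def buffer_size_bytes(ms_buffer_window: int, bitrate_bps: int) -> int:
    """Bytes implied by the formula above: milliseconds x bits-per-second / 8000
    (divide by 1000 for seconds and by 8 for bits-to-bytes)."""
    return ms_buffer_window * bitrate_bps // 8000

# A 3-second buffer window at 128 kbps implies a 48,000-byte buffer:
size = buffer_size_bytes(3000, 128_000)
```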
The MetadataTimeStamps structure describes the blob format for the MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute.
-The MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute contains the time stamp information for the face ROIs identified in MF_CAPTURE_METADATA_FACEROIS. For a device that cannot provide the time stamp for face ROIs, this attribute should be omitted.
For the Flags field, the following bit flags indicate which time stamp is valid:
#define MF_METADATATIMESTAMPS_DEVICE 0x00000001 - #define MF_METADATATIMESTAMPS_PRESENTATION 0x00000002
MFT0 must set Flags to MF_METADATATIMESTAMPS_DEVICE and the appropriate QPC time for Device, if the driver provides the timestamp metadata for the face ROIs.
The MetadataTimeStamps structure only describes the blob format for the MF_CAPTURE_METADATA_FACEROITIMESTAMPS attribute. The metadata item structure for the timestamp (KSCAMERA_METADATA_ITEMHEADER + timestamp metadata payload) is up to the driver and must be 8-byte aligned.
-Bitwise OR of the MF_METADATATIMESTAMPS_* flags.
QPC time for the sample the face rectangle is derived from (in 100ns).
PTS for the sample the face rectangle is derived from (in 100ns).
Provides information on a screen-to-screen move and a dirty rectangle copy operation.
-A
A
Contains encoding statistics from the Digital Living Network Alliance (DLNA) media sink.
This structure is used with the
Contains format data for a binary stream in an Advanced Streaming Format (ASF) file.
-This structure is used with the
This structure corresponds to the first 60 bytes of the Type-Specific Data field of the Stream Properties Object, in files where the stream type is ASF_Binary_Media. For more information, see the ASF specification.
The Format Data field of the Type-Specific Data field is contained in the
Major media type. This value is the
Media subtype.
If TRUE, samples have a fixed size in bytes. Otherwise, samples have variable size.
If TRUE, the data in this stream uses temporal compression. Otherwise, samples are independent of each other.
If bFixedSizeSamples is TRUE, this member specifies the sample size in bytes. Otherwise, the value is ignored and should be 0.
Format type
Defines custom color primaries for a video source. The color primaries define how to convert colors from RGB color space to CIE XYZ color space.
-This structure is used with the
Red x-coordinate.
Red y-coordinate.
Green x-coordinate.
Green y-coordinate.
Blue x-coordinate.
Blue y-coordinate.
White point x-coordinate.
White point y-coordinate.
Contains the authentication information for the credential manager.
-The response code of the authentication challenge. For example, NS_E_PROXY_ACCESSDENIED.
Set this flag to TRUE if the currently logged on user's credentials should be used as the default credentials.
If TRUE, the authentication package will send unencrypted credentials over the network. Otherwise, the authentication package encrypts the credentials.
The original URL that requires authentication.
The name of the site or proxy that requires authentication.
The name of the realm for this authentication.
The name of the authentication package. For example, "Digest" or "MBS_BASIC".
The number of times that the credential manager should retry after authentication fails.
Specifies an offset as a fixed-point real number.
-The value of the number is value + (fract / 65536.0f).
-The fractional part of the number.
The integer part of the number.
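A quick worked example of the formula above (the helper name is illustrative):

```python
def offset_to_float(value: int, fract: int) -> float:
    """Fixed-point offset: integer part plus fract / 65536.0, per the formula
    given in the description."""
    return value + fract / 65536.0

# fract = 32768 is exactly half of 65536, so (2, 32768) represents 2.5:
x = offset_to_float(2, 32768)
```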
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If the flags member contains the
To cancel authentication, set fProceedWithAuthentication equal to
By default, MFPlay uses the network source's implementation of
Contains one palette entry in a color table.
-This union can be used to represent both RGB palettes and Y'Cb'Cr' palettes. The video format that defines the palette determines which union member should be used.
-
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
This event is not used to signal the failure of an asynchronous
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Contains information that is common to every type of MFPlay event.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Represents a pinhole camera intrinsic model for a specified resolution.
-The width for the pinhole camera intrinsic model.
The height for the pinhole camera intrinsic model.
The pinhole camera model.
The lens distortion model.
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
Contains zero or one pinhole camera intrinsic models that describe how to project a 3D point in the physical world onto the 2D image frame of a camera.
-The number of camera intrinsic models in the IntrinsicModels array.
The array of camera intrinsic models in the intrinsic data.
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Media items are created asynchronously. If multiple items are created, the operations can complete in any order, not necessarily in the same order as the method calls. You can use the dwUserData member to identify the items, if you have simultaneous requests pending.
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If one or more streams could not be connected to a media sink, the event property store contains the MFP_PKEY_StreamRenderingResults property. The value of the property is an array of
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
If MFEventType is
Property | Description |
---|---|
MFP_PKEY_StreamIndex | The index of the stream whose format changed. |
-Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
Important: Deprecated. This API may be removed from future releases of Windows. Applications should use the Media Session for playback.
Event structure for the
To get a reference to this structure, cast the pEventHeader parameter of the
[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]
A four-dimensional vector, used to represent a rotation.
-X component of the vector.
Y component of the vector.
Z component of the vector.
W component of the vector.
Represents a ratio.
-Numerator of the ratio.
Denominator of the ratio.
Defines a region of interest.
-The bounds of the region.
Specifies the quantization parameter delta for the specified region from the rest of the frame.
Contains information about a revoked component.
-Specifies the reason for the revocation. The following values are defined.
Value | Meaning |
---|---|
| A boot driver could not be verified. |
| A certificate in a trusted component's certificate chain was revoked. |
| The high-security certificate for authenticating the protected environment (PE) was revoked. The high-security certificate is typically used by ITAs that handle high-definition content and next-generation formats such as HD-DVD. |
| A certificate's extended key usage (EKU) object is invalid. |
| The root certificate is not valid. |
| The low-security certificate for authenticating the PE was revoked. The low-security certificate is typically used by ITAs that handle standard-definition content and current-generation formats. |
| A trusted component was revoked. |
| The GRL was not found. |
| Could not load the global revocation list (GRL). |
| The GRL signature is invalid. |
| A certificate chain was not well-formed, or a boot driver is unsigned or is signed with an untrusted certificate. |
| A component was signed by a test certificate. |
In addition, one of the following flags might be present, indicating the type of component that failed to load.
Value | Meaning |
---|---|
| User-mode component. |
| Kernel-mode component. |
Contains a hash of the file header.
Contains a hash of the public key in the component's certificate.
File name of the revoked component.
Contains information about one or more revoked components.
-Revocation information version.
Number of elements in the pRRComponents array.
Array of
Contains statistics about the performance of the sink writer.
-The size of the structure, in bytes.
The time stamp of the most recent sample given to the sink writer. The sink writer updates this value each time the application calls
The time stamp of the most recent sample to be encoded. The sink writer updates this value whenever it calls
The time stamp of the most recent sample given to the media sink. The sink writer updates this value whenever it calls
The time stamp of the most recent stream tick. The sink writer updates this value whenever the application calls
The system time of the most recent sample request from the media sink. The sink writer updates this value whenever it receives an
The number of samples received.
The number of samples encoded.
The number of samples given to the media sink.
The number of stream ticks received.
The amount of data, in bytes, currently waiting to be processed.
The total amount of data, in bytes, that has been sent to the media sink.
The number of pending sample requests.
The average rate, in media samples per 100-nanoseconds, at which the application sent samples to the sink writer.
The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the encoder.
The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the media sink.
Not for application use.
-This structure is used internally by the Microsoft Media Foundation AVStream proxy.
-Reserved.
Reserved.
Contains information about an input stream on a Media Foundation transform (MFT). To get these values, call
Before the media types are set, the only values that should be considered valid are the
The
The
After you set a media type on all of the input and output streams (not including optional streams), all of the values returned by the GetInputStreamInfo method are valid. They might change if you set different media types.
-Specifies a new attribute value for a topology node.
- Due to an error in the structure declaration, the u64 member is declared as a 32-bit integer, not a 64-bit integer. Therefore, any 64-bit value passed to the
The identifier of the topology node to update. To get the identifier of a topology node, call
Attribute type, specified as a member of the
Attribute value (unsigned 32-bit integer). This member is used when attrType equals
Attribute value (unsigned 32-bit integer). This member is used when attrType equals
Attribute value (floating point). This member is used when attrType equals
Contains information about an output buffer for a Media Foundation transform. This structure is used in the
You must provide an
MFTs can support two different allocation models for output samples:
To find which model the MFT supports for a given output stream, call
Flag | Allocation Model |
---|---|
The MFT allocates the output samples for the stream. Set pSample to | |
The MFT supports both allocation models. | |
Neither (default) | The client must allocate the output samples for the stream. |
The behavior of ProcessOutput depends on the initial value of pSample and the value of the dwFlags parameter in the ProcessOutput method.
If pSample is
Restriction: This output stream must have the
If pSample is
Restriction: This output stream must have the
If pSample is non-
Restriction: This output stream must not have the
Any other combinations are invalid and cause ProcessOutput to return E_INVALIDARG.
Each call to ProcessOutput can produce zero or more events and up to one sample per output stream.
-
Contains information about an output stream on a Media Foundation transform (MFT). To get these values, call
Before the media types are set, the only values that should be considered valid are the
After you set a media type on all of the input and output streams (not including optional streams), all of the values returned by the GetOutputStreamInfo method are valid. They might change if you set different media types.
-Contains information about the audio and video streams for the transcode sink activation object.
To get the information stored in this structure, call
The
Contains media type information for registering a Media Foundation transform (MFT).
-The major media type. For a list of possible values, see Major Media Types.
The media subtype. For a list of possible values, see the following topics:
Contains parameters for the
Specifies a rectangular area within a video frame.
- An
An
A
Contains information about a video compression format. This structure is used in the
For uncompressed video formats, set the structure members to zero.
-
Describes a video format.
-Applications should avoid using this structure. Instead, it is recommended that applications use attributes to describe the video format. For a list of media type attributes, see Media Type Attributes. With attributes, you can set just the format information that you know, which is easier (and more likely to be accurate) than trying to fill in complete format information for the
To initialize a media type object from an
You can use the
Size of the structure, in bytes. This value includes the size of the palette entries that may appear after the surfaceInfo member.
Video subtype. See Video Subtype GUIDs.
Contains video format information that applies to both compressed and uncompressed formats.
This structure is used in the
Developers are encouraged to use media type attributes instead of using the
Structure Member | Media Type Attribute |
---|---|
dwWidth, dwHeight | |
PixelAspectRatio | |
SourceChromaSubsampling | |
InterlaceMode | |
TransferFunction | |
ColorPrimaries | |
TransferMatrix | |
SourceLighting | |
FramesPerSecond | |
NominalRange | |
GeometricAperture | |
MinimumDisplayAperture | |
PanScanAperture | |
VideoFlags | See |
-
Defines a normalized rectangle, which is used to specify sub-rectangles in a video rectangle. When a rectangle N is normalized relative to some other rectangle R, it means the following:
The coordinate (0.0, 0.0) on N is mapped to the upper-left corner of R.
The coordinate (1.0, 1.0) on N is mapped to the lower-right corner of R.
Any coordinates of N that fall outside the range [0...1] are mapped to positions outside the rectangle R. A normalized rectangle can be used to specify a region within a video rectangle without knowing the resolution or even the aspect ratio of the video. For example, the upper-left quadrant is defined as {0.0, 0.0, 0.5, 0.5}.
-X-coordinate of the upper-left corner of the rectangle.
Y-coordinate of the upper-left corner of the rectangle.
X-coordinate of the lower-right corner of the rectangle.
Y-coordinate of the lower-right corner of the rectangle.
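The mapping described above can be sketched as a small helper (tuple-based, names illustrative):

```python
def denormalize(nrect, rect):
    """Map a normalized rectangle (left, top, right, bottom in [0, 1]) onto a
    destination rectangle given in pixels, per the rules above: (0, 0) maps to
    the upper-left corner and (1, 1) to the lower-right corner."""
    l, t, r, b = rect
    w, h = r - l, b - t
    nl, nt, nr, nb = nrect
    return (l + nl * w, t + nt * h, l + nr * w, t + nb * h)

# The upper-left quadrant {0.0, 0.0, 0.5, 0.5} of a 1920x1080 frame:
quad = denormalize((0.0, 0.0, 0.5, 0.5), (0, 0, 1920, 1080))
```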
Contains information about an uncompressed video format. This structure is used in the
Applies to: desktop apps | Metro style apps
Initializes Microsoft Media Foundation.
- An application must call this function before using Media Foundation. Before your application quits, call
Do not call
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Applies to: desktop apps | Metro style apps
Shuts down the Microsoft Media Foundation platform. Call this function once for every call to
If this function succeeds, it returns
This function is available on the following platforms if the Windows Media Format 11 SDK redistributable components are installed:
Represents an audio data buffer, used with
XAudio2 audio data is interleaved; data from each channel is adjacent for a particular sample number. For example, if a 4-channel wave is playing into an XAudio2 source voice, the audio data would be a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
The AudioBytes and pAudioData members of
Memory allocated to hold a
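The interleaving rule above implies a simple index formula; a minimal sketch (the helper name is illustrative):

```python
def interleaved_index(frame: int, channel: int, n_channels: int) -> int:
    """Index of one sample in an interleaved buffer: frames are stored
    back-to-back, with one sample per channel inside each frame."""
    return frame * n_channels + channel

# In a 4-channel buffer, frame 2 / channel 3 sits at index 2*4 + 3 = 11:
idx = interleaved_index(2, 3, 4)
```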
Contains information about an XAPO for use in an effect chain.
-XAPO instances are passed to XAudio2 as
For additional information on using XAPOs with XAudio2 see How to: Create an Effect Chain and How to: Use an XAPO in XAudio2.
-The
This interface should be implemented by the XAudio2 client. XAudio2 calls these methods via an interface reference provided by the client, using the XAudio2Create method. Methods in this interface return void, rather than an
See XAudio2 Callbacks for restrictions on callback implementation.
Describes I3DL2 (Interactive 3D Audio Rendering Guidelines Level 2.0) parameters for use in the ReverbConvertI3DL2ToNative function.
-There are many preset values defined for the
Describes parameters for use in the reverb APO.
-All parameters related to sampling rate or time are relative to a 48kHz voice and must be scaled for use with other sampling rates. For example, setting ReflectionsDelay to 300ms gives a true 300ms delay when the reverb is hosted in a 48kHz voice, but becomes a 150ms delay when hosted in a 24kHz voice.
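One reading of the scaling rule above is that the effective delay shrinks in proportion to the voice's sample rate, so the set value can be pre-scaled to compensate. A sketch under that interpretation (the helper name is illustrative, and scaled values must still fall inside each parameter's permitted range):

```python
def scaled_parameter(desired_ms: float, sample_rate_hz: int) -> float:
    """Value to set so the effective delay equals desired_ms when the reverb
    is hosted in a voice running at sample_rate_hz (48 kHz is the reference)."""
    return desired_ms * 48_000 / sample_rate_hz

# To get a true 150 ms delay in a 24 kHz voice, set the parameter to 300 ms:
value = scaled_parameter(150.0, 24_000)
```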
-Percentage of the output that will be reverb. Allowable values are from 0 to 100.
The delay time of the first reflection relative to the direct path. Permitted range is from 0 to 300 milliseconds.
Note: All parameters related to sampling rate or time are relative to a 48kHz sampling rate and must be scaled for use with other sampling rates. See the remarks section below for additional information. Delay of reverb relative to the first reflection. Permitted range is from 0 to 85 milliseconds.
Note: All parameters related to sampling rate or time are relative to a 48kHz sampling rate and must be scaled for use with other sampling rates. See the remarks section below for additional information. Delay for the left rear output and right rear output. Permitted range is from 0 to 5 milliseconds.
Note: All parameters related to sampling rate or time are relative to a 48kHz sampling rate and must be scaled for use with other sampling rates. See the remarks section below for additional information. Delay for the left side output and right side output. Permitted range is from 0 to 5 milliseconds.
Note: This value is supported beginning with Windows 10. Note: All parameters related to sampling rate or time are relative to a 48kHz sampling rate and must be scaled for use with other sampling rates. See the remarks section below for additional information. Position of the left input within the simulated space relative to the listener. With PositionLeft set to the minimum value, the left input is placed close to the listener. In this position, early reflections are dominant, and the reverb decay is set back in the sound field and reduced in amplitude. With PositionLeft set to the maximum value, the left input is placed at a maximum distance from the listener within the simulated room. PositionLeft does not affect the reverb decay time (liveness of the room), only the apparent position of the source relative to the listener. Permitted range is from 0 to 30 (no units).
Same as PositionLeft, but affecting only the right input. Permitted range is from 0 to 30 (no units).
Note: PositionRight is ignored in mono-in/mono-out mode. Gives a greater or lesser impression of distance from the source to the listener. Permitted range is from 0 to 30 (no units).
Gives a greater or lesser impression of distance from the source to the listener. Permitted range is from 0 to 30 (no units).
Note: PositionMatrixRight is ignored in mono-in/mono-out mode. Controls the character of the individual wall reflections. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Permitted range is from 0 to 15 (no units).
Controls the character of the individual wall reverberations. Set to minimum value to simulate a hard flat surface and to maximum value to simulate a diffuse surface. Permitted range is from 0 to 15 (no units).
Adjusts the decay time of low frequencies relative to the decay time at 1 kHz. The values correspond to dB of gain as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Gain (dB) | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 | +1 | +2 | +3 | +4 |
Note: A LowEQGain value of 8 results in the decay time of low frequencies being equal to the decay time at 1 kHz. Permitted range is from 0 to 12 (no units).
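The table maps linearly from parameter value to gain, which can be sketched as follows (an illustrative helper, not part of the XAudio2 API):

```cpp
#include <cassert>

// Illustrative helper (not an XAudio2 API): maps a LowEQGain parameter
// value (0-12) to the corresponding gain in dB from the table above.
inline int LowEqGainToDb(int value) {
    return value - 8;  // value 8 corresponds to 0 dB (unity gain)
}
```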
Sets the corner frequency of the low pass filter that is controlled by the LowEQGain parameter. The values correspond to frequency in Hz as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
---|---|---|---|---|---|---|---|---|---|---|
Frequency (Hz) | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
Permitted range is from 0 to 9 (no units).
Adjusts the decay time of high frequencies relative to the decay time at 1 kHz. When set to zero, high frequencies decay at the same rate as 1 kHz. When set to maximum value, high frequencies decay at a much faster rate than 1 kHz.
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|---|
Gain (dB) | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 0 |
Permitted range is from 0 to 8 (no units).
Sets the corner frequency of the high pass filter that is controlled by the HighEQGain parameter. The values correspond to frequency in kHz as follows:
Value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Frequency (kHz) | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | 5.5 | 6 | 6.5 | 7 | 7.5 | 8 |
Permitted range is from 0 to 14 (no units).
Sets the corner frequency of the low pass filter for the room effect. Permitted range is from 20 to 20,000 Hz.
Note: All parameters related to sampling rate or time are relative to a 48 kHz sampling rate and must be scaled for use with other sampling rates; see the Remarks section below for additional information. Sets the pass band intensity level of the low-pass filter for both the early reflections and the late field reverberation. Permitted range is from -100 to 0 dB.
Sets the intensity of the low-pass filter for both the early reflections and the late field reverberation at the corner frequency (RoomFilterFreq). Permitted range is from -100 to 0 dB.
Adjusts the intensity of the early reflections. Permitted range is from -100 to 20 dB.
Adjusts the intensity of the reverberations. Permitted range is from -100 to 20 dB.
Reverberation decay time at 1 kHz. This is the time that a full scale input signal decays by 60 dB. Permitted range is from 0.1 to infinity seconds.
Controls the modal density in the late field reverberation. For colorless spaces, Density should be set to the maximum value (100). As Density is decreased, the sound becomes hollow (comb filtered). This is an effect that can be useful if you are trying to model a silo. Permitted range as a percentage is from 0 to 100.
The apparent size of the acoustic space. Permitted range is from 1 to 100 feet.
If set to TRUE, disables late field reflection calculations. Disabling late field reflection calculations results in a significant CPU time savings.
Note: The DirectX SDK versions of XAUDIO2 don't support this member. Describes parameters for use with the volume meter APO.
This structure is used with the XAudio2 volume meter APO.
pPeakLevels and pRMSLevels are not returned by GetEffectParameters.
ChannelCount must be set by the application to match the number of channels in the voice the effect is applied to.
Array that will be filled with the maximum absolute level for each channel during a processing pass. The array must be at least ChannelCount × sizeof(float) bytes. pPeakLevels may be NULL.
Array that will be filled with the root mean square level for each channel during a processing pass. The array must be at least ChannelCount × sizeof(float) bytes. pRMSLevels may be NULL.
Number of channels being processed.
Represents an audio data buffer, used with
XAudio2 audio data is interleaved: data from each channel is adjacent for a particular sample number. For example, if a 4-channel wave is playing into an XAudio2 source voice, the audio data is a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on.
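The interleaved layout above can be illustrated with a small index helper (a sketch; the name is illustrative and not part of the XAudio2 API):

```cpp
#include <cassert>
#include <cstddef>

// For interleaved audio, the value for channel `ch` of sample frame `frame`
// lives at this position in the flat sample array.
inline std::size_t InterleavedIndex(std::size_t frame, std::size_t ch,
                                    std::size_t channelCount) {
    return frame * channelCount + ch;
}
```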
The AudioBytes and pAudioData members of
Memory allocated to hold a
Indicates the filter type.
Attenuates (reduces) frequencies above the cutoff frequency.
Attenuates frequencies outside a given range.
Attenuates frequencies below the cutoff frequency.
Attenuates frequencies inside a given range.
Attenuates frequencies above the cutoff frequency. This is a one-pole filter, and
Attenuates frequencies below the cutoff frequency. This is a one-pole filter, and
Contains information about the creation flags, input channels, and sample rate of a voice.
Note the DirectX SDK versions of XAUDIO2 do not support the ActiveFlags member.
Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
XAudio2 constants that specify default parameters, maximum values, and flags.
XAudio2 boundary values
A mastering voice is used to represent the audio output device.
Data buffers cannot be submitted directly to mastering voices, but data submitted to other types of voices must be directed to a mastering voice to be heard.
Returns the channel mask for this voice.
Returns the channel mask for this voice. This corresponds to the dwChannelMask member of the WAVEFORMATEXTENSIBLE structure.
This method does not return a value.
The pChannelMask argument is a bit-mask of the various channels in the speaker geometry reported by the audio system. This information is needed for the X3DAudioInitialize SpeakerChannelMask parameter.
The X3DAUDIO.H header declares a number of SPEAKER_ positional defines to decode these channels masks.
Examples include SPEAKER_FRONT_LEFT (0x1), SPEAKER_FRONT_RIGHT (0x2), SPEAKER_FRONT_CENTER (0x4), SPEAKER_LOW_FREQUENCY (0x8), SPEAKER_BACK_LEFT (0x10), and SPEAKER_BACK_RIGHT (0x20).
Note: For the DirectX SDK versions of XAUDIO2, the channel mask for the output device was obtained via the IXAudio2::GetDeviceDetails method, which doesn't exist in Windows 8 and later.
Returns the channel mask for this voice. (Only valid for XAudio 2.8, returns 0 otherwise)
The pChannelMask argument is a bit-mask of the various channels in the speaker geometry reported by the audio system. This information is needed for the
The X3DAUDIO.H header declares a number of SPEAKER_ positional defines to decode these channels masks.
Examples include SPEAKER_FRONT_LEFT (0x1), SPEAKER_FRONT_RIGHT (0x2), SPEAKER_FRONT_CENTER (0x4), SPEAKER_LOW_FREQUENCY (0x8), SPEAKER_BACK_LEFT (0x10), and SPEAKER_BACK_RIGHT (0x20).
Note: For the DirectX SDK versions of XAUDIO2, the channel mask for the output device was obtained via the IXAudio2::GetDeviceDetails method, which doesn't exist in Windows 8 and later.
Use a source voice to submit audio data to the XAudio2 processing pipeline. You must send voice data to a mastering voice to be heard, either directly or through intermediate submix voices.
Returns the frequency adjustment ratio of the voice.
GetFrequencyRatio always returns the voice's actual current frequency ratio. However, this may not match the ratio set by the most recent
For information on frequency ratios, see
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was created.
The SetSourceSampleRate method supports reuse of XAudio2 voices by allowing a voice to play sounds with a variety of sample rates. To use SetSourceSampleRate, the voice must have been created without the XAUDIO2_VOICE_NOSRC flag.
The typical use of SetSourceSampleRate is to support voice pooling. For example, to support voice pooling, an application would precreate all the voices it expects to use. Whenever a new sound will be played, the application chooses an inactive voice or, if all voices are busy, picks the least important voice and calls SetSourceSampleRate on the voice with the new sound's sample rate. After SetSourceSampleRate has been called on the voice, the application can immediately start submitting and playing buffers with the new sample rate. This allows the application to avoid the overhead of creating and destroying voices frequently during gameplay.
Starts consumption and processing of audio by the voice. Delivers the result to any connected submix or mastering voices, or to the output device.
Flags that control how the voice is started. Must be 0.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
If the XAudio2 engine is stopped, the voice stops running. However, it remains in the started state, so that it starts running again as soon as the engine starts.
When first created, source voices are in the stopped state. Submix and mastering voices are in the started state.
After Start is called it has no further effect if called again before
Stops consumption of audio by the current voice.
Flags that control how the voice is stopped. Can be 0 or the following:
Value | Description |
---|---|
XAUDIO2_PLAY_TAILS | Continue emitting effect output after the voice is stopped. |
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
All source buffers that are queued on the voice and the current cursor position are preserved. This allows the voice to continue from where it left off, when it is restarted. The
By default, any pending output from voice effects?for example, reverb tails?is not played. Instead, the voice is immediately rendered silent. The
A voice stopped with the
Stop is always asynchronous, even if called within a callback.
Note: XAudio2 never calls any voice callbacks for a voice if the voice is stopped (even if it was stopped with XAUDIO2_PLAY_TAILS).
Adds a new audio buffer to the voice queue.
Pointer to an XAUDIO2_BUFFER structure.
Pointer to an additional XAUDIO2_BUFFER_WMA structure, used when submitting xWMA data.
Returns S_OK if successful; otherwise, an error code.
The voice processes and plays back the buffers in its queue in the order that they were submitted.
The
If the voice is started and has no buffers queued, the new buffer will start playing immediately. If the voice is stopped, the buffer is added to the voice's queue and will be played when the voice starts.
If only part of the given buffer should be played, the PlayBegin and PlayLength fields in the
If all or part of the buffer should be played in a continuous loop, the LoopBegin, LoopLength and LoopCount fields in
If an explicit play region is specified, it must begin and end within the given audio buffer (or, in the compressed case, within the set of samples that the buffer will decode to). In addition, the loop region cannot end past the end of the play region.
Xbox 360: For certain audio formats, there may be additional restrictions on the valid endpoints of any play or loop regions; e.g., for XMA buffers, the regions can only begin or end at 128-sample boundaries in the decoded audio.
The pBuffer reference can be reused or freed immediately after calling this method, but the actual audio data referenced by pBuffer must remain valid until the buffer has been fully consumed by XAudio2 (which is indicated by the
Up to
SubmitSourceBuffer takes effect immediately when called from an XAudio2 callback with an OperationSet of XAUDIO2_COMMIT_NOW.
Xbox 360: This method can be called from an Xbox system thread (most other XAudio2 methods cannot). However, a maximum of two source buffers can be submitted from a system thread at a time.
Removes all pending audio buffers from the voice queue.
Returns S_OK if successful; otherwise, an error code.
If the voice is started, the buffer that is currently playing is not removed from the queue.
FlushSourceBuffers can be called regardless of whether the voice is currently started or stopped.
For every buffer removed, an OnBufferEnd callback will be made, but none of the other per-buffer callbacks (OnBufferStart, OnStreamEnd or OnLoopEnd) will be made.
FlushSourceBuffers does not change the voice's running state, so if the voice was playing a buffer prior to the call, it will continue to do so, and will deliver all the callbacks for the buffer normally. This means that the OnBufferEnd callback for this buffer will take place after the OnBufferEnd callbacks for the buffers that were removed. Thus, an XAudio2 client that calls FlushSourceBuffers cannot expect to receive OnBufferEnd callbacks in the order in which the buffers were submitted.
No warnings for starvation of the buffer queue will be emitted when the currently playing buffer completes; it is assumed that the client has intentionally removed the buffers that followed it. However, there may be an audio pop if this buffer does not end at a zero crossing. If the application must ensure that the flush operation takes place while a specific buffer is playing (perhaps because the buffer ends with a zero crossing), it must call FlushSourceBuffers from a callback, so that it executes synchronously.
Calling FlushSourceBuffers after a voice is stopped and then submitting new data to the voice resets all of the voice's internal counters.
A voice's state is not considered reset after calling FlushSourceBuffers until the OnBufferEnd callback occurs (if a buffer was previously submitted) or
Notifies an XAudio2 voice that no more buffers are coming after the last one that is currently in its queue.
Returns S_OK if successful; otherwise, an error code.
Discontinuity suppresses the warnings that normally occur in the debug build of XAudio2 when a voice runs out of audio buffers to play. It is preferable to mark the final buffer of a stream by tagging it with the XAUDIO2_END_OF_STREAM flag.
Because calling Discontinuity is equivalent to applying the
Stops looping the voice when it reaches the end of the current loop region.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
If the cursor for the voice is not in a loop region, ExitLoop does nothing.
Returns the voice's current state and cursor position data.
Number of audio buffers currently queued on the voice, including the one that is currently being processed.
For all encoded formats, including constant bit rate (CBR) formats such as adaptive differential pulse code modulation (ADPCM), SamplesPlayed is expressed in terms of decoded samples. For pulse code modulation (PCM) formats, SamplesPlayed is expressed in terms of either input or output samples. There is a one-to-one mapping from input to output for PCM formats.
If a client needs to get the correlated positions of several voices (that is, to know exactly which sample of a particular voice is playing when a specified sample of another voice is playing), it must make the GetState calls in an XAudio2 engine callback.
Sets the frequency adjustment ratio of the voice.
Frequency adjustment ratio. This value must be between XAUDIO2_MIN_FREQ_RATIO and the MaxFrequencyRatio parameter specified when the voice was created.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
Frequency adjustment is expressed as source frequency / target frequency. Changing the frequency ratio changes the rate audio is played on the voice. A ratio greater than 1.0 will cause the audio to play faster and a ratio less than 1.0 will cause the audio to play slower. Additionally, the frequency ratio affects the pitch of audio on the voice. As an example, a value of 1.0 has no effect on the audio, whereas a value of 2.0 raises pitch by one octave and 0.5 lowers it by one octave.
If SetFrequencyRatio is called specifying a Ratio value outside the valid range, the method will set the frequency ratio to the nearest valid value. A warning also will be generated for debug builds.
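The octave relationship described above can be sketched with a pair of conversion helpers (illustrative only; xaudio2.h provides similar inline helpers, XAudio2SemitonesToFrequencyRatio and XAudio2FrequencyRatioToSemitones):

```cpp
#include <cassert>
#include <cmath>

// Illustrative conversions between a frequency ratio and a pitch offset in
// semitones (12 semitones = 1 octave, so a ratio of 2.0 = +12 semitones).
inline double RatioToSemitones(double ratio) {
    return 12.0 * std::log2(ratio);
}
inline double SemitonesToRatio(double semitones) {
    return std::pow(2.0, semitones / 12.0);
}
```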
Returns the frequency adjustment ratio of the voice.
Returns the current frequency adjustment ratio if successful.
GetFrequencyRatio always returns the voice's actual current frequency ratio. However, this may not match the ratio set by the most recent
For information on frequency ratios, see
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was created.
The new sample rate the voice should process submitted data at. Valid sample rates are 1 kHz to 200 kHz.
Returns S_OK if successful; otherwise, an error code.
The SetSourceSampleRate method supports reuse of XAudio2 voices by allowing a voice to play sounds with a variety of sample rates. To use SetSourceSampleRate, the voice must have been created without the XAUDIO2_VOICE_NOSRC flag.
The typical use of SetSourceSampleRate is to support voice pooling. For example, to support voice pooling, an application would precreate all the voices it expects to use. Whenever a new sound will be played, the application chooses an inactive voice or, if all voices are busy, picks the least important voice and calls SetSourceSampleRate on the voice with the new sound's sample rate. After SetSourceSampleRate has been called on the voice, the application can immediately start submitting and playing buffers with the new sample rate. This allows the application to avoid the overhead of creating and destroying voices frequently during gameplay.
A submix voice is used primarily for performance improvements and effects processing.
Data buffers cannot be submitted directly to submix voices and will not be audible unless submitted to a mastering voice. A submix voice can be used to ensure that a particular set of voice data is converted to the same format and/or to have a particular effect chain processed on the collective result.
Designates a new set of submix or mastering voices to receive the output of the voice.
This method is only valid for source and submix voices. Mastering voices cannot send audio to another voice.
After calling SetOutputVoices a voice's current send levels will be replaced by a default send matrix. The
It is invalid to call SetOutputVoices from within a callback (that is,
Gets the voice's filter parameters.
GetFilterParameters will fail if the voice was not created with the XAUDIO2_VOICE_USEFILTER flag.
GetFilterParameters always returns this voice's actual current filter parameters. However, these may not match the parameters set by the most recent
Sets the overall volume level for the voice.
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built-in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
Returns information about the creation flags, input channels, and sample rate of a voice.
Designates a new set of submix or mastering voices to receive the output of the voice.
-Array of
Returns S_OK if successful; otherwise, an error code.
This method is only valid for source and submix voices. Mastering voices cannot send audio to another voice.
After calling SetOutputVoices a voice's current send levels will be replaced by a default send matrix. The
It is invalid to call SetOutputVoices from within a callback (that is,
Replaces the effect chain of the voice.
Pointer to an XAUDIO2_EFFECT_CHAIN structure containing the new effect chain.
Returns S_OK if successful; otherwise, an error code.
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
The number of output channels allowed for a voice's effect chain is locked at creation of the voice. If you create the voice with an effect chain, any new effect chain passed to SetEffectChain must have the same number of input and output channels as the original effect chain. If you create the voice without an effect chain, the number of output channels allowed for the effect chain will default to the number of input channels for the voice. If any part of effect chain creation fails, none of it is applied.
After you attach an effect to an XAudio2 voice, XAudio2 takes control of the effect, and the client should not make any further calls to it. The simplest way to ensure this is to release all references to the effect.
It is invalid to call SetEffectChain from within a callback (that is,
The
Enables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
Be careful when you enable an effect while the voice that hosts it is running. Such an action can result in a problem if the effect significantly changes the audio's pitch or volume.
The effects in a given XAudio2 voice's effect chain must consume and produce audio at that voice's processing sample rate. The only aspect of the audio format they can change is the channel count. For example, a reverb effect can convert mono data to 5.1. The client can use the
EnableEffect takes effect immediately when you call it from an XAudio2 callback with an OperationSet of XAUDIO2_COMMIT_NOW.
Disables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
The effects in a given XAudio2 voice's effect chain must consume and produce audio at that voice's processing sample rate. The only aspect of the audio format they can change is the channel count. For example, a reverb effect can convert mono data to 5.1. The client can use the
Disabling an effect immediately removes it from the processing graph. Any pending audio in the effect, such as a reverb tail, is not played. Be careful when disabling an effect while the voice that hosts it is running. This can result in an audible artifact if the effect significantly changes the audio's pitch or volume.
DisableEffect takes effect immediately when called from an XAudio2 callback with an OperationSet of XAUDIO2_COMMIT_NOW.
Returns the running state of the effect at a specified position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
GetEffectState always returns the effect's actual current state. However, this may not be the state set by the most recent
Sets parameters for a given effect in the voice's effect chain.
Zero-based index of an effect within the voice's effect chain.
Pointer to the effect-specific parameters to set.
Size of the pParameters array in bytes.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
Fails with E_NOTIMPL if the effect does not support a generic parameter control interface.
The specific effect being used determines the valid size and format of the pParameters buffer. The call will fail if pParameters is invalid or if ParametersByteSize is not exactly the size that the effect expects. The client must take care to direct the SetEffectParameters call to the right effect. If this call is directed to a different effect that happens to accept the same parameter block size, the parameters will be interpreted differently. This may lead to unexpected results.
The memory pointed to by pParameters must not be freed immediately, because XAudio2 will need to refer to it later when the parameters actually are applied to the effect. This happens during the next audio processing pass if the OperationSet argument is XAUDIO2_COMMIT_NOW.
SetEffectParameters takes effect immediately when called from an XAudio2 callback with an OperationSet of XAUDIO2_COMMIT_NOW.
Returns the current effect-specific parameters of a given effect in the voice's effect chain.
Zero-based index of an effect within the voice's effect chain.
Returns the current values of the effect-specific parameters.
Size, in bytes, of the pParameters array.
Returns S_OK if successful; otherwise, an error code.
Fails with E_NOTIMPL if the effect does not support a generic parameter control interface.
GetEffectParameters always returns the effect's actual current parameters. However, these may not match the parameters set by the most recent call to
Sets the voice's filter parameters.
Pointer to an XAUDIO2_FILTER_PARAMETERS structure containing the filter information.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
SetFilterParameters will fail if the voice was not created with the XAUDIO2_VOICE_USEFILTER flag.
This method is usable only on source and submix voices and has no effect on mastering voices.
Gets the voice's filter parameters.
Pointer to an XAUDIO2_FILTER_PARAMETERS structure that will be filled with the voice's current filter parameters.
GetFilterParameters will fail if the voice was not created with the XAUDIO2_VOICE_USEFILTER flag.
GetFilterParameters always returns this voice's actual current filter parameters. However, these may not match the parameters set by the most recent
Sets the filter parameters on one of this voice's sends.
Pointer to an XAUDIO2_FILTER_PARAMETERS structure containing the filter information.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns
SetOutputFilterParameters will fail if the send was not created with the XAUDIO2_SEND_USEFILTER flag.
Returns the filter parameters from one of this voice's sends.
Pointer to an XAUDIO2_FILTER_PARAMETERS structure that will be filled with the send's current filter parameters.
GetOutputFilterParameters will fail if the send was not created with the XAUDIO2_SEND_USEFILTER flag.
Sets the overall volume level for the voice.
Overall volume level to use. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
Sets the overall volume level for the voice.
Overall volume level to use. See Remarks for more information on volume levels.
SetVolume controls a voice's master input volume level. The master volume level is applied at different times depending on the type of voice. For submix and mastering voices the volume level is applied just before the voice's built in filter and effect chain is applied. For source voices the master volume level is applied after the voice's filter and effect chain is applied.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
Sets the volume levels for the voice, per channel.
Number of channels in the voice.
Array containing the new volumes of each channel in the voice. The array must have Channels elements. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
SetChannelVolumes controls a voice's per-channel output levels and is applied just after the voice's final SRC and before its sends.
This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
Returns the volume levels for the voice, per channel.
Confirms the channel count of the voice.
Returns the current volume level of each channel in the voice. The array must have at least Channels elements. See Remarks for more information on volume levels.
These settings are applied after the effect chain is applied. This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB. A volume of 1 means there is no attenuation or gain, 0 means silence, and negative levels can be used to invert the audio's phase. See XAudio2 Volume and Pitch Control for additional information on volume control.
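The 144.5 dB figure follows from the 2^24 maximum multiplier, since gain in dB is 20 * log10(amplitude). A sketch (illustrative helper, not an XAudio2 API; xaudio2.h offers XAudio2AmplitudeRatioToDecibels for the same purpose):

```cpp
#include <cassert>
#include <cmath>

// Illustrative helper: convert a floating-point amplitude multiplier to dB.
// Phase inversion (negative levels) does not change the magnitude in dB.
inline double AmplitudeToDb(double amplitude) {
    return 20.0 * std::log10(std::fabs(amplitude));
}
```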
Note: GetChannelVolumes always returns the volume levels most recently set by SetChannelVolumes.
Sets the volume level of each channel of the final output for the voice. These channels are mapped to the input channels of a specified destination voice.
Pointer to a destination IXAudio2Voice for which to set volume levels.
Confirms the output channel count of the voice. This is the number of channels that are produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice. The level sent from source channel S to destination channel D is specified in the form pLevelMatrix[SourceChannels × D + S].
For example, when rendering two-channel stereo input into 5.1 output that is weighted toward the front channels (but is absent from the center and low-frequency channels), the matrix might have the values shown in the following table.
Output | Left Input [Array Index] | Right Input [Array Index] |
---|---|---|
Left | 1.0 [0] | 0.0 [1] |
Right | 0.0 [2] | 1.0 [3] |
Front Center | 0.0 [4] | 0.0 [5] |
LFE | 0.0 [6] | 0.0 [7] |
Rear Left | 0.8 [8] | 0.0 [9] |
Rear Right | 0.0 [10] | 0.8 [11] |
Note: The left and right input are fully mapped to the output left and right channels; 80 percent of the left and right input is mapped to the rear left and right channels. See Remarks for more information on volume levels.
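The table above can be packed into the flat array layout that SetOutputMatrix expects, pLevelMatrix[SourceChannels * D + S]. A sketch (the destination channel ordering, front left, front right, center, LFE, rear left, rear right, follows the table rows; nothing here is an XAudio2 API call):

```cpp
#include <cassert>
#include <vector>

// Builds the stereo -> 5.1 level matrix from the table, laid out as
// pLevelMatrix[SourceChannels * D + S], the layout SetOutputMatrix expects.
inline std::vector<float> MakeStereoTo51Matrix() {
    const int srcChannels = 2, dstChannels = 6;
    std::vector<float> m(srcChannels * dstChannels, 0.0f);
    auto at = [&](int d, int s) -> float& { return m[srcChannels * d + s]; };
    at(0, 0) = 1.0f;  // front left  <- 100% of left input
    at(1, 1) = 1.0f;  // front right <- 100% of right input
    // destinations 2 (front center) and 3 (LFE) stay at 0.0 (silent)
    at(4, 0) = 0.8f;  // rear left   <- 80% of left input
    at(5, 1) = 0.8f;  // rear right  <- 80% of right input
    return m;
}
```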
Identifies this call as part of a deferred batch. See the XAudio2 Operation Sets overview for more information.
Returns S_OK if successful; otherwise, an error code.
This method is valid only for source and submix voices, because mastering voices write directly to the device with no matrix mixing.
Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB.
The X3DAudio function X3DAudioCalculate can produce an output matrix for use with SetOutputMatrix based on a sound's position and a listener's position.
Gets the volume level of each channel of the final output for the voice. These channels are mapped to the input channels of a specified destination voice.
Pointer specifying the destination IXAudio2Voice.
Confirms the output channel count of the voice. This is the number of channels that are produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice. The level sent from source channel S to destination channel D is returned in the form pLevelMatrix[DestinationChannels × S + D]. See Remarks for more information on volume levels.
This method applies only to source and submix voices, because mastering voices write directly to the device with no matrix mixing. Volume levels are expressed as floating-point amplitude multipliers between -2^24 and 2^24, with a maximum gain of 144.5 dB. A volume level of 1 means there is no attenuation or gain and 0 means silence. Negative levels can be used to invert the audio's phase. See XAudio2 Volume and Pitch Control for additional information on volume control.
See
Destroys the voice. If necessary, stops the voice and removes it from the XAudio2 graph.
If any other voice is currently sending audio to this voice, the method fails.
DestroyVoice waits for the audio processing thread to be idle, so it can take a little while (typically no more than a couple of milliseconds). This is necessary to guarantee that the voice will no longer make any callbacks or read any audio data, so the application can safely free up these resources as soon as the call returns.
To avoid title thread interruptions from a blocking DestroyVoice call, the application can destroy voices on a separate non-critical thread, or the application can use voice pooling strategies to reuse voices rather than destroying them. Note that voices can only be reused with audio that has the same data format and the same number of channels the voice was created with. A voice can play audio data with different sample rates than that of the voice by calling SetSourceSampleRate.
It is invalid to call DestroyVoice from within a callback (that is,
Returns information about the creation flags, input channels, and sample rate of a voice.
-The
This interface should be implemented by the XAudio2 client. XAudio2 calls these methods through an interface reference provided by the client in the
See the XAudio2 Callbacks topic for restrictions on callback implementation.
This is the only XAudio2 interface that is derived from the COM
The DirectX SDK versions of XAUDIO2 included three member functions that are not present in the Windows 8 version: GetDeviceCount, GetDeviceDetails, and Initialize. These enumeration methods are no longer provided and standard Windows Audio APIs should be used for device enumeration instead.
-Returns current resource usage details, such as available memory or CPU usage.
-For specific information on the statistics returned by GetPerformanceData, see the
Adds an
Returns
This method can be called multiple times, allowing different components or layers of the same application to manage their own engine callback implementations separately.
It is invalid to call RegisterForCallbacks from within a callback (that is,
Removes an
It is invalid to call UnregisterForCallbacks from within a callback (that is,
Creates and configures a source voice.
-If successful, returns a reference to the new
Pointer to one of the structures in the table below. This structure contains the expected format for all audio buffers submitted to the source voice. XAudio2 supports PCM and ADPCM voice types.
Format tag | Wave format structure | Size (in bytes) |
---|---|---|
| PCMWAVEFORMAT | 16 |
| -or- | 18 |
| PCMWAVEFORMAT | 18 |
| ADPCMWAVEFORMAT | 50 |
| | 40 |
XAudio2 supports the following PCM formats.
The number of channels in a source voice must be less than or equal to
Flags that specify the behavior of the source voice. A flag can be 0 or a combination of one or more of the following:
Value | Description |
---|---|
No pitch control is available on the voice. | |
No sample rate conversion is available on the voice. The voice's outputs must have the same sample rate. Note: The | |
The filter effect should be available on this voice. |
Note: The XAUDIO2_VOICE_MUSIC flag is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this argument must be between
If MaxFrequencyRatio is less than 1.0, the voice will use that ratio immediately after being created (rather than the default of 1.0).
Xbox 360 |
---|
For XMA voices, there is one more restriction on the MaxFrequencyRatio argument and the voice's sample rate. The product of these two numbers cannot exceed XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MONO for one-channel voices or XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MULTICHANNEL for voices with any other number of channels. If the value specified for MaxFrequencyRatio is too high for the specified format, the call to CreateSourceVoice fails and produces a debug message. |
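The Xbox 360 restriction above is a simple product check. A hedged sketch follows; the platformLimit parameter stands in for XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MONO or XAUDIO2_MAX_RATIO_TIMES_RATE_XMA_MULTICHANNEL, whose numeric values are not given in this text:

```cpp
// Validates the XMA restriction described above: MaxFrequencyRatio times the
// voice's sample rate must not exceed a platform-defined limit. The limit is
// passed in rather than hard-coded, since its value depends on the channel count.
bool XmaFrequencyRatioIsValid(float maxFrequencyRatio,
                              unsigned int sampleRate,
                              double platformLimit) {
    return static_cast<double>(maxFrequencyRatio) * sampleRate <= platformLimit;
}
// Example: a ratio of 2.0 at 48000 Hz against a hypothetical limit of 600000
// gives 96000 <= 600000, so the voice creation would be allowed.
```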
Note: You can use the lowest possible MaxFrequencyRatio value to reduce XAudio2's memory usage.
Pointer to a client-provided callback interface,
Pointer to a list of
Pointer to a list of
Returns
See XAudio2 Error Codes for descriptions of XAudio2-specific error codes.
Source voices read audio data from the client. They process the data and send it to the XAudio2 processing graph.
A source voice includes a variable-rate sample rate conversion, to convert data from the source format sample rate to the output rate required for the voice send list. If you use a
You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.
Source voices are always processed before any submix or mastering voices. This means that you do not need a ProcessingStage parameter to control the processing order.
When first created, source voices are in the stopped state.
XAudio2 uses an internal memory pooler for voices with the same format. This means memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).
It is invalid to call CreateSourceVoice from within a callback (that is,
The
Creates and configures a submix voice.
-On success, returns a reference to the new
Number of channels in the input audio data of the submix voice. InputChannels must be less than or equal to
Sample rate of the input audio data of submix voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between
Flags that specify the behavior of the submix voice. It can be 0 or the following:
Value | Description |
---|---|
The filter effect should be available on this voice. |
An arbitrary number that specifies when this voice is processed with respect to other submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices that include a smaller ProcessingStage value and before all other voices that include a larger ProcessingStage value. Voices that include the same ProcessingStage value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal ProcessingStage value. This prevents audio being lost due to a submix cycle.
Pointer to a list of
Pointer to a list of
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
Submix voices receive the output of one or more source or submix voices. They process the output, and then send it to another submix voice or to a mastering voice.
A submix voice performs a sample rate conversion from the input sample rate to the input rate of its output voices in pSendList. If you specify multiple voice sends, they must all have the same input sample rate.
You cannot create any source or submix voices until a mastering voice exists, and you cannot destroy a mastering voice if any source or submix voices still exist.
When first created, submix voices are in the started state.
XAudio2 uses an internal memory pooler for voices with the same format. This means that memory allocation for voices will occur less frequently as more voices are created and then destroyed. To minimize just-in-time allocations, a title can create the anticipated maximum number of voices needed up front, and then delete them as necessary. Voices will then be reused from the XAudio2 pool. The memory pool is tied to an XAudio2 engine instance. You can reclaim all the memory used by an instance of the XAudio2 engine by destroying the XAudio2 object and recreating it as necessary (forcing the memory pool to grow via preallocation would have to be reapplied as needed).
It is invalid to call CreateSubmixVoice from within a callback (that is,
The
Creates and configures a mastering voice.
- If successful, returns a reference to the new
Number of channels the mastering voice expects in its input audio. InputChannels must be less than or equal to
You can set InputChannels to
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of XAUDIO2_QUANTUM_DENOMINATOR. InputSampleRate must be between
You can set InputSampleRate to
Windows XP defaults to 44100.
Windows Vista and Windows 7 default to the setting specified in the Sound Control Panel. The default for this setting is 44100 (or 48000 if required by the driver).
Flags
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio. Specifying the default value of
Pointer to an
The audio stream category to use for this mastering voice.
Returns
See XAudio2 Error Codes for descriptions of XAudio2 specific error codes.
Mastering voices receive the output of one or more source or submix voices. They process the data, and send it to the audio output device.
Typically, you should create a mastering voice with an input sample rate that will be used by the majority of the title's audio content. The mastering voice performs a sample rate conversion from this input sample rate to the actual device output rate.
You cannot create source or submix voices until a mastering voice exists. You cannot destroy a mastering voice if any source or submix voices still exist.
Mastering voices are always processed after all source and submix voices. This means that you need not specify a ProcessingStage parameter to control the processing order.
XAudio2 only allows one mastering voice to exist at once. If you attempt to create more than one mastering voice,
When first created, mastering voices are in the started state.
It is invalid to call CreateMasteringVoice from within a callback (that is,
The
Note that the DirectX SDK XAUDIO2 version of CreateMasteringVoice took a DeviceIndex argument instead of a szDeviceId and a StreamCategory argument. This reflects the changes needed for the standard Windows device enumeration model.
-Starts the audio processing thread.
-Returns
After StartEngine is called, all started voices begin to consume audio. All enabled effects start running, and the resulting audio is sent to any connected output devices. When XAudio2 is first initialized, the engine is already in the started state.
It is invalid to call StartEngine from within a callback (that is,
Stops the audio processing thread.
-When StopEngine is called, all output is stopped immediately. However, the audio graph is left untouched, preserving effect parameters, effect histories (for example, the data stored by a reverb effect in order to emit echoes of a previous sound), voice states, pending source buffers, cursor positions, and so forth. When the engine is restarted, the resulting audio output will be identical (apart from a period of silence) to the output that would have been produced if the engine had never been stopped.
It is invalid to call StopEngine from within a callback (that is,
Atomically applies a set of operations that are tagged with a given identifier.
-Identifier of the set of operations to be applied. To commit all pending operations, pass
Returns
CommitChanges does nothing if no operations are tagged with the given identifier.
See the XAudio2 Operation Sets overview about working with CommitChanges and XAudio2 interface methods that may be deferred. -
-Returns current resource usage details, such as available memory or CPU usage.
-On success, reference to an
For specific information on the statistics returned by GetPerformanceData, see the
Changes global debug logging options for XAudio2.
-Pointer to a
This parameter is reserved and must be
SetDebugConfiguration sets the debug configuration for the given instance of XAudio2 engine. See
Used with
When streaming an xWMA file a few packets at a time,
In addition, when streaming an xWMA file a few packets at a time, the application should subtract pDecodedPacketCumulativeBytes[PacketCount-1] of the previous packet from all the entries of the currently submitted packet.
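The subtraction described above can be sketched as follows (a hedged illustration; in the real API the table lives in the xWMA buffer's pDecodedPacketCumulativeBytes array):

```cpp
#include <vector>

// Rebase the cumulative decoded-bytes table of the currently submitted xWMA
// packet by subtracting the previous packet's last cumulative value, as the
// streaming guidance above requires.
void RebaseCumulativeBytes(std::vector<unsigned int>& currentTable,
                           unsigned int previousPacketLastCumulative) {
    for (unsigned int& entry : currentTable)
        entry -= previousPacketLastCumulative;
}
// Example: if the previous packet ended at 400 cumulative bytes, the current
// packet's table {500, 650, 800} becomes {100, 250, 400} after rebasing.
```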
The members of
Memory allocated to hold a
XAudio2 2.8 in Windows 8.x does not support xWMA decoding. Use Windows Media Foundation APIs to perform the decoding from WMA to PCM instead. This functionality is available in the DirectX SDK versions of XAudio2 and in XAudio2 2.9 in Windows 10.
-Contains the new global debug configuration for XAudio2. Used with the SetDebugConfiguration function.
-Debugging messages can be completely turned off by initializing
Defines an effect chain.
-Number of effects in the effect chain for the voice.
Array of
Defines filter parameters for a source voice.
-Setting
FilterParams;
FilterParams.Frequency = 1.0f;
FilterParams.OneOverQ = 1.0f;
FilterParams.Type = LowPassFilter;
The following formulas show the relationship between the members of
Yl(n) = F1 * Yb(n) + Yl(n - 1)
Yb(n) = F1 * Yh(n) + Yb(n - 1)
Yh(n) = x(n) - Yl(n) - OneOverQ * Yb(n - 1)
Yn(n) = Yl(n) + Yh(n)
Where:
Yl = lowpass output
Yb = bandpass output
Yh = highpass output
Yn = notch output
F1 = .Frequency
OneOverQ = .OneOverQ
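The equations above form a state-variable filter. One conventional evaluation order (the Chamberlin form, shown here as an illustrative sketch rather than XAudio2's actual implementation) updates the lowpass and bandpass states once per sample:

```cpp
// One update step of the state-variable filter described by the equations
// above. f1 and oneOverQ play the roles of F1 and OneOverQ; yl and yb carry
// Yl(n-1) and Yb(n-1) between calls.
struct FilterState { float yl = 0.0f; float yb = 0.0f; };

struct FilterOutputs { float lowpass; float bandpass; float highpass; float notch; };

FilterOutputs StepStateVariableFilter(FilterState& s, float x,
                                      float f1, float oneOverQ) {
    s.yl = f1 * s.yb + s.yl;               // Yl(n) = F1 * Yb + Yl(n-1)
    float yh = x - s.yl - oneOverQ * s.yb; // Yh(n) = x(n) - Yl(n) - OneOverQ * Yb(n-1)
    s.yb = f1 * yh + s.yb;                 // Yb(n) = F1 * Yh(n) + Yb(n-1)
    float yn = s.yl + yh;                  // Yn(n) = Yl(n) + Yh(n)
    return { s.yl, s.yb, yh, yn };
}
```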
The
Filter radian frequency calculated as (2 * sin(pi * (desired filter cutoff frequency) / sampleRate)). The frequency must be greater than or equal to 0 and less than or equal to
Reciprocal of Q factor. Controls how quickly frequencies beyond Frequency are dampened. Larger values result in quicker dampening while smaller values cause dampening to occur more gradually. Must be greater than 0 and less than or equal to
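The cutoff-to-Frequency conversion above is easy to get wrong because the field is not expressed in hertz. A small sketch of the formula:

```cpp
#include <cmath>

// Compute the value to store in the filter's Frequency field from a desired
// cutoff in hertz, using the formula quoted above:
//   Frequency = 2 * sin(pi * cutoffHz / sampleRate)
float CutoffToRadianFrequency(float cutoffHz, float sampleRate) {
    const float pi = 3.14159265f;
    return 2.0f * std::sin(pi * cutoffHz / sampleRate);
}
// Example: a 1 kHz cutoff at a 48 kHz sample rate yields roughly 0.1308.
```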
Contains performance information.
-CPU cycles are recorded using . Use to convert these values.
-CPU cycles spent on audio processing since the last call to the
Total CPU cycles elapsed since the last call.
Note: This only counts cycles on the CPU on which XAudio2 is running.
Fewest CPU cycles spent on processing any single audio quantum since the last call.
Most CPU cycles spent on processing any single audio quantum since the last call.
Total memory currently in use.
Minimum delay that occurs between the time a sample is read from a source buffer and the time it reaches the speakers.
Windows |
---|
The delay reported is a variable value equal to the rough distance between the last sample submitted to the driver by XAudio2 and the sample currently playing. The following factors can affect the delay: playing multichannel audio on a hardware-accelerated device; the type of audio device (WavePci, WaveCyclic, or WaveRT); and, to a lesser extent, audio hardware implementation. |
Xbox 360 |
---|
The delay reported is a fixed value, which is normally 1,024 samples (21.333 ms at 48 kHz). If XOverrideSpeakerConfig has been called using the XAUDIOSPEAKERCONFIG_LOW_LATENCY flag, the delay reported is 512 samples (10.667 ms at 48 kHz). |
Total audio dropouts since the engine started.
Number of source voices currently playing.
Total number of source voices currently in existence.
Number of submix voices currently playing.
Number of resampler xAPOs currently active.
Number of matrix mix xAPOs currently active.
Windows |
---|
Unsupported. |
Xbox 360 |
---|
Number of source voices decoding XMA data. |
Windows |
---|
Unsupported. |
Xbox 360 |
---|
A voice can use more than one XMA stream. |
Contains information about the creation flags, input channels, and sample rate of a voice.
-Note the DirectX SDK versions of XAUDIO2 do not support the ActiveFlags member.
-Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
Defines a destination voice that is the target of a send from another voice and specifies whether a filter should be used.
-Indicates whether a filter should be used on data sent to the voice pointed to by pOutputVoice. Flags can be 0 or
A reference to an
Defines a set of voices to receive data from a single output voice.
-If pSends is not
Setting SendCount to 0 is useful for certain effects such as volume meters or file writers that don't generate any audio output to pass on to another voice.
If needed, a voice will perform a single sample rate conversion, from the voice's input sample rate to the input sample rate of the voice's output voices. Because only one sample rate conversion will be performed, all the voice's output voices must have the same input sample rate. If the input sample rates of the voice and its output voices are the same, no sample rate conversion is performed. -
-Number of voices to receive the output of the voice. An OutputCount value of 0 indicates the voice should not send output to any voices.
Array of
Returns the voice's current state and cursor position data.
-For all encoded formats, including constant bit rate (CBR) formats such as adaptive differential pulse code modulation (ADPCM), SamplesPlayed is expressed in terms of decoded samples. For pulse code modulation (PCM) formats, SamplesPlayed is expressed in terms of either input or output samples. There is a one-to-one mapping from input to output for PCM formats.
If a client needs to get the correlated positions of several voices (that is, to know exactly which sample of a particular voice is playing when a specified sample of another voice is playing) it must make the
Pointer to a buffer context provided in the
Number of audio buffers currently queued on the voice, including the one that is processed currently.
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as marked with the
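Because SamplesPlayed is a running sample count, converting it to a playback time only requires the voice's sample rate. A minimal sketch (plain arithmetic, not part of the XAudio2 API):

```cpp
// Convert a voice's SamplesPlayed counter to elapsed playback time in seconds.
double SamplesPlayedToSeconds(unsigned long long samplesPlayed,
                              unsigned int sampleRate) {
    return static_cast<double>(samplesPlayed) / sampleRate;
}
// Example: 96000 samples played at 48000 Hz corresponds to 2.0 seconds.
```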
Creates a new XAudio2 object and returns a reference to its
Returns
The DirectX SDK versions of XAUDIO2 supported a flag
Note: No versions of the DirectX SDK contain the xaudio2.lib import library. DirectX SDK versions use COM to create a new XAudio2 object.
-Creates a new reverb audio processing object (APO), and returns a reference to it.
-Contains a reference to the reverb APO that is created.
If this function succeeds, it returns
XAudio2CreateReverb creates an effect performing Princeton Digital Reverb. The XAPO effect library (XAPOFX) includes an alternate reverb effect. Use CreateFX to create this alternate effect.
The reverb APO has the following restrictions:
For information about creating new effects for use with XAudio2, see the XAPO Overview.
Windows |
---|
Because XAudio2CreateReverb calls CoCreateInstance on Windows, the application must have called the CoInitializeEx method before calling XAudio2CreateReverb. A typical calling pattern on Windows guards the CoInitializeEx call with #ifndef _XBOX / #endif so the call is made only on Windows builds. |
The xaudio2fx.h header defines the AudioReverb class
class __declspec(uuid("C2633B16-471B-4498-B8C5-4F0959E2EC09")) AudioReverb;
XAudio2CreateReverb returns this object as a reference to a reference to
The reverb uses the
Note: XAudio2CreateReverb is an inline function in xaudio2fx.h that calls CreateAudioReverb:
XAUDIO2FX_STDAPI CreateAudioReverb(_Outptr_ ** ppApo);
__inline XAudio2CreateReverb(_Outptr_ ** ppApo, UINT32 /*Flags*/ DEFAULT(0))
{
    return CreateAudioReverb(ppApo);
}
-Creates a new volume meter audio processing object (APO) and returns a reference to it.
-Contains the created volume meter APO.
If this function succeeds, it returns
For information on creating new effects for use with XAudio2, see the XAPO Overview.
Windows |
---|
Because XAudio2CreateVolumeMeter calls CoCreateInstance on Windows, the application must have called the CoInitializeEx method before calling XAudio2CreateVolumeMeter. A typical calling pattern on Windows guards the CoInitializeEx call with #ifndef _XBOX / #endif so the call is made only on Windows builds. |
The xaudio2fx.h header defines the AudioVolumeMeter class
class __declspec(uuid("4FC3B166-972A-40CF-BC37-7DB03DB2FBA3")) AudioVolumeMeter;
XAudio2CreateVolumeMeter returns this object as a reference to a reference to
The volume meter uses the
Note: XAudio2CreateVolumeMeter is an inline function in xaudio2fx.h that calls CreateAudioVolumeMeter:
XAUDIO2FX_STDAPI CreateAudioVolumeMeter(_Outptr_ ** ppApo);
__inline XAudio2CreateVolumeMeter(_Outptr_ ** ppApo, UINT32 /*Flags*/ DEFAULT(0))
{
    return CreateAudioVolumeMeter(ppApo);
}
-Specifies directionality for a single-channel non-LFE emitter by scaling DSP behavior with respect to the emitter's orientation.
-For a detailed explanation of sound cones see Sound Cones.
-Inner cone angle in radians. This value must be within 0.0f to X3DAUDIO_2PI.
Outer cone angle in radians. This value must be within InnerAngle to X3DAUDIO_2PI.
Volume scaler on/within inner cone. This value must be within 0.0f to 2.0f.
Volume scaler on/beyond outer cone. This value must be within 0.0f to 2.0f.
LPF direct-path or reverb-path coefficient scaler on/within inner cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
LPF direct-path or reverb-path coefficient scaler on or beyond outer cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
Reverb send level scaler on or within inner cone. This must be within 0.0f to 2.0f.
Reverb send level scaler on/beyond outer cone. This must be within 0.0f to 2.0f. -
Defines a DSP setting at a given normalized distance.
-Normalized distance. This must be within 0.0f to 1.0f.
DSP control setting.
Defines an explicit piecewise curve made up of linear segments, directly defining DSP behavior with respect to normalized distance.
-
Number of distance curve points. There must be two or more points since all curves must have at least two endpoints defining values at 0.0f and 1.0f normalized distance, respectively.
Receives the results from a call to X3DAudioCalculate.
-The following members must be initialized before passing this structure to the X3DAudioCalculate function:
The following members are returned by passing this structure to the X3DAudioCalculate function:
Defines a single-point or multiple-point 3D audio source that is used with an arbitrary number of sound channels.
-The parameter type
X3DAudio uses a left-handed Cartesian coordinate system, with values on the x-axis increasing from left to right, on the y-axis from bottom to top, and on the z-axis from near to far. Azimuths are measured clockwise from a given reference direction. To use X3DAudio with right-handed coordinates, you must negate the .z element of OrientFront, OrientTop, Position, and Velocity.
For user-defined distance curves, the distance field of the first point must be 0.0f and the distance field of the last point must be 1.0f.
If an emitter moves beyond a distance of (CurveDistanceScaler * 1.0f), the last point on the curve is used to compute the volume output level. The last point is determined by the following:
.pPoints[PointCount-1].DSPSetting
Pointer to a sound cone. Used only with single-channel emitters for matrix, LPF (both direct and reverb paths), and reverb calculations.
Orientation of the front direction. This value must be orthonormal with OrientTop. OrientFront must be normalized when used. For single-channel emitters without cones, OrientFront is only used for emitter angle calculations. For multi-channel emitters or single-channel emitters with cones, OrientFront is used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Orientation of the top direction. This value must be orthonormal with OrientFront. OrientTop is only used with multi-channel emitters for matrix calculations.
Position in user-defined world units. This value does not affect Velocity.
Velocity vector in user-defined world units per second. This value is used only for Doppler calculations. It does not affect Position.
Value to be used for the inner radius calculations. If InnerRadius is 0, then no inner radius is used, but InnerRadiusAngle may still be used. This value must be between 0.0f and FLT_MAX.
Value to be used for the inner radius angle calculations. This value must be between 0.0f and X3DAUDIO_PI/4.0.
Number of emitters defined by the
Distance from Position that channels will be placed if ChannelCount is greater than 1. ChannelRadius is only used with multi-channel emitters for matrix calculations. Must be greater than or equal to 0.0f.
Table of channel positions, expressed as an azimuth in radians along the channel radius with respect to the front orientation vector in the plane orthogonal to the top orientation vector. An azimuth of X3DAUDIO_2PI specifies a channel is a low-frequency effects (LFE) channel. LFE channels are positioned at the emitter base and are calculated with respect to pLFECurve only, never pVolumeCurve. pChannelAzimuths must have at least ChannelCount elements, but can be
Volume-level distance curve, which is used only for matrix calculations.
LFE roll-off distance curve, or
Low-pass filter (LPF) direct-path coefficient distance curve, or
LPF reverb-path coefficient distance curve, or
Reverb send level distance curve, or
Curve distance scaler that is used to scale normalized distance curves to user-defined world units, and/or to exaggerate their effect. This does not affect any other calculations. The value must be within the range FLT_MIN to FLT_MAX. CurveDistanceScaler is only used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Doppler shift scaler that is used to exaggerate Doppler shift effect. DopplerScaler is only used for Doppler calculations and does not affect any other calculations. The value must be within the range 0.0f to FLT_MAX.
Defines a point of 3D audio reception.
-X3DAudio uses a left-handed Cartesian coordinate system, with values on the x-axis increasing from left to right, on the y-axis from bottom to top, and on the z-axis from near to far. Azimuths are measured clockwise from a given reference direction. To use X3DAudio with right-handed coordinates, you must negate the .z element of OrientFront, OrientTop, Position, and Velocity.
The parameter type
A listener's front and top vectors must be orthonormal. To be considered orthonormal, a pair of vectors must have a magnitude of 1 ± 1x10^-5 and a dot product of 0 ± 1x10^-5.
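The orthonormality requirement above can be checked numerically. A hedged sketch using a minimal vector type (not the X3DAudio types themselves):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns true if the pair satisfies the tolerance quoted above: each vector's
// magnitude within 1e-5 of 1, and their dot product within 1e-5 of 0.
bool IsOrthonormalPair(const Vec3& front, const Vec3& top) {
    const float eps = 1e-5f;
    const float magFront = std::sqrt(front.x * front.x + front.y * front.y + front.z * front.z);
    const float magTop   = std::sqrt(top.x * top.x + top.y * top.y + top.z * top.z);
    const float dot      = front.x * top.x + front.y * top.y + front.z * top.z;
    return std::fabs(magFront - 1.0f) <= eps &&
           std::fabs(magTop - 1.0f) <= eps &&
           std::fabs(dot) <= eps;
}
// Example: front = (0,0,1) and top = (0,1,0) pass; a top vector tilted toward
// front, such as (0, 0.7071, 0.7071), fails the dot-product test.
```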
-Orientation of front direction. When pCone is
Orientation of top direction, used only for matrix and delay calculations. This value must be orthonormal with OrientFront when used.
Position in user-defined world units. This value does not affect Velocity.
Velocity vector in user-defined world units per second, used only for Doppler calculations. This value does not affect Position.
Pointer to an
Calculates DSP settings with respect to 3D parameters.
-3D audio instance handle. Call
Pointer to an
Pointer to an
Value | Description |
---|---|
Enables matrix coefficient table calculation. | |
Enables delay time array calculation (stereo only). | |
Enables low pass filter (LPF) direct-path coefficient calculation. | |
Enables LPF reverb-path coefficient calculation. | |
Enables reverb send level calculation. | |
Enables Doppler shift factor calculation. | |
Enables emitter-to-listener interior angle calculation. | |
Fills the center channel with silence. This flag allows you to keep a 6-channel matrix so you do not have to remap the channels, but the center channel will be silent. This flag is only valid if you also set | |
Applies an equal mix of all source channels to a low frequency effect (LFE) destination channel. It only applies to matrix calculations with a source that does not have an LFE channel and a destination that does have an LFE channel. This flag is only valid if you also set |
Pointer to an
You typically call
Important: The listener and emitter values must be valid. Floating-point specials (NaN, QNaN, +INF, -INF) can cause the entire audio output to go silent if introduced into a running audio graph.
-Sets all global 3D audio constants.
-Assignment of channels to speaker positions. This value must not be zero. The only permissible value on Xbox 360 is SPEAKER_XBOX.
Speed of sound, in user-defined world units per second. Use this value only for Doppler calculations. It must be greater than or equal to FLT_MIN.
3D audio instance handle. Use this handle when you call
This function does not return a value.
X3DAUDIO_HANDLE is an opaque data structure. Because the operating system doesn't allocate any additional storage for the 3D audio instance handle, you don't need to free or close it.
-Describes the contents of a stream buffer.
-This metadata can be used to implement optimizations that require knowledge of a stream buffer's contents. For example, XAPOs that always produce silent output from silent input can check the flag on the input stream buffer to determine if any signal processing is necessary. If silent, the XAPO can simply set the flag on the output stream buffer to silent and return, thus averting the work of processing silent data.
Likewise, XAPOs that receive valid input data, but generate silence (for any reason), may set the output stream buffer's flag accordingly, rather than writing silent samples to the buffer.
These flags represent what should be assumed is in the respective buffer. The flags may not reflect what is actually stored in memory. For example, the
Stream buffer contains only silent samples.
Stream buffer contains audio data to be processed.
Initialization parameters for use with the FXECHO XAPOFX.
-Use of this structure is optional. The default MaxDelay is
Parameters for use with the FXECHO XAPOFX.
-Echo only supports FLOAT32 audio formats.
-Parameters for use with the FXEQ XAPO.
-Each band ranges from FrequencyCenterN - (BandwidthN / 2) to FrequencyCenterN + (BandwidthN / 2).
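The band-edge relationship above can be sketched directly (a standalone illustration; FXEQ itself takes the center and bandwidth values, not the edges):

```cpp
struct BandRange { float lowHz; float highHz; };

// Compute the frequency range covered by one EQ band, per the formula above:
//   [FrequencyCenter - Bandwidth/2, FrequencyCenter + Bandwidth/2]
BandRange EqBandRange(float frequencyCenterHz, float bandwidthHz) {
    return { frequencyCenterHz - bandwidthHz / 2.0f,
             frequencyCenterHz + bandwidthHz / 2.0f };
}
// Example: a band centered at 8000 Hz with a 2000 Hz bandwidth spans 7000-9000 Hz.
```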
-Center frequency in Hz for band 0. Must be between
The boost or decrease to frequencies in band 0. Must be between
Width of band 0. Must be between
Center frequency in Hz for band 1. Must be between
The boost or decrease to frequencies in band 1. Must be between
Width of band 1. Must be between
Center frequency in Hz for band 2. Must be between
The boost or decrease to frequencies in band 2. Must be between
Width of band 2. Must be between
Center frequency in Hz for band 3. Must be between
The boost or decrease to frequencies in band 3. Must be between
Width of band 3. Must be between
Parameters for use with the FXMasteringLimiter XAPO.
-Parameters for use with the FXReverb XAPO.
Controls the character of the individual wall reflections. Set to the minimum value to simulate a hard flat surface and to the maximum value to simulate a diffuse surface. Value must be between
Size of the room. Value must be between
The interface for an Audio Processing Object which can be used in an XAudio2 effect chain.
-The interface for an Audio Processing Object which can be used in an XAudio2 effect chain.
-Returns the registration properties of an XAPO.
- Receives a reference to a
Returns
Queries if a specific input format is supported for a given output format.
-Output format.
Input format to check for being supported.
If not
Returns
The
Queries if a specific output format is supported for a given input format.
-Input format.
Output format to check for being supported.
If not
Returns
The
Performs any effect-specific initialization.
- Effect-specific initialization parameters, may be
Size of pData in bytes, may be 0 if pData is
Returns
The contents of pData are defined by a given XAPO. Immutable parameters (constant for the lifetime of the XAPO) should be set in this method. Once initialized, an XAPO cannot be initialized again. An XAPO should be initialized before passing it to XAudio2 as part of an effect chain.
Note: XAudio2 does not call this method; it should be called by the client before passing the XAPO to XAudio2.
-Resets variables dependent on frame history.
-Constant and locked parameters such as the input and output formats remain unchanged. Variables set by
For example, an effect with delay should zero out its delay line during this method, but should not reallocate anything as the XAPO remains locked with a constant input and output configuration.
XAudio2 only calls this method if the XAPO is locked.
This method is called from the realtime thread and should not block. -
-Called by XAudio2 to lock the input and output configurations of an XAPO allowing it to do any final initialization before Process is called on the realtime thread.
-Returns
Once locked, the input and output configuration and any other locked parameters remain constant until UnLockForProcess is called. After an XAPO is locked, further calls to LockForProcess have no effect until the UnLockForProcess function is called.
An XAPO indicates what specific formats it supports through its implementation of the IsInputFormatSupported and IsOutputFormatSupported methods. An XAPO should assert the input and output configurations are supported and that any required effect-specific initialization is complete. The IsInputFormatSupported, IsOutputFormatSupported, and Initialize methods should be used as necessary before calling this method.
Because Process is a nonblocking method, all internal memory buffers required for Process should be allocated in LockForProcess.
Process is never called before LockForProcess returns successfully.
LockForProcess is called directly by XAudio2 and should not be called by the client code.
-Deallocates variables that were allocated with the LockForProcess method.
-Unlocking an XAPO instance allows it to be reused with different input and output formats.
-Runs the XAPO's digital signal processing (DSP) code on the given input and output buffers.
-Number of elements in pInputProcessParameters.
Note: XAudio2 currently supports only one input stream and one output stream. Input array of
Number of elements in pOutputProcessParameters.
Note: XAudio2 currently supports only one input stream and one output stream. Output array of
TRUE to process normally;
Implementations of this function should not block, as the function is called from the realtime audio processing thread.
All code that could cause a delay, such as format validation and memory allocation, should be put in the
For in-place processing, the pInputProcessParameters parameter will not necessarily be the same as pOutputProcessParameters. Rather, their pBuffer members will point to the same memory.
Multiple input and output buffers may be used with in-place XAPOs, though the input buffer count must equal the output buffer count. For in-place processing when multiple input and output buffers are used, the XAPO may assume the number of input buffers equals the number of output buffers.
In addition to writing to the output buffer, as appropriate, an XAPO is responsible for setting the output stream's buffer flags and valid frame count.
When IsEnabled is
When writing a Process method, it is important to note XAudio2 audio data is interleaved, which means data from each channel is adjacent for a particular sample number. For example, if there was a 4-channel wave playing into an XAudio2 source voice, the audio data would be a sample of channel 0, a sample of channel 1, a sample of channel 2, a sample of channel 3, and then the next sample of channels 0, 1, 2, 3, and so on. -
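The interleaved layout described above can be captured in a small helper. This is an illustrative sketch, not part of the XAPO interface:

```c
#include <stddef.h>

/* Index of channel `ch` within frame `frame` of an interleaved buffer
   holding `channels` channels: the samples for all channels of one frame
   are adjacent, so frame 1 of a 4-channel stream starts at index 4. */
size_t interleaved_index(size_t frame, size_t ch, size_t channels)
{
    return frame * channels + ch;
}
```

For the 4-channel example above, the second sample of channel 2 lives at `interleaved_index(1, 2, 4)`.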
-Returns the number of input frames required to generate the given number of output frames.
-The number of output frames desired.
Returns the number of input frames required.
XAudio2 calls this method to determine what size input buffer an XAPO requires to generate the given number of output frames. This method only needs to be called once while an XAPO is locked. CalcInputFrames is only called by XAudio2 if the XAPO is locked.
This function should not block, because it may be called from the realtime audio processing thread.
-Returns the number of output frames that will be generated from a given number of input frames.
-The number of input frames.
Returns the number of output frames that will be produced.
XAudio2 calls this method to determine how large of an output buffer an XAPO will require for a certain number of input frames. CalcOutputFrames is only called by XAudio2 if the XAPO is locked.
This function should not block, because it may be called from the realtime audio processing thread.
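As a hypothetical illustration, a fixed-ratio resampling XAPO could implement the two frame-count calculations as follows. The function names and rates are assumptions; rounding the input count up avoids under-allocating the input buffer:

```c
/* Input frames needed to produce `outputFrames` when converting from
   inRate Hz to outRate Hz, rounded up so the buffer is never too small. */
unsigned calc_input_frames(unsigned outputFrames, unsigned inRate, unsigned outRate)
{
    unsigned long long n = (unsigned long long)outputFrames * inRate;
    return (unsigned)((n + outRate - 1) / outRate);
}

/* Output frames produced from `inputFrames` at the same conversion ratio. */
unsigned calc_output_frames(unsigned inputFrames, unsigned inRate, unsigned outRate)
{
    return (unsigned)((unsigned long long)inputFrames * outRate / inRate);
}
```

Because these are called while the XAPO is locked, both are pure arithmetic and never block.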
-An optional interface that allows an XAPO to use effect-specific parameters.
-An optional interface that allows an XAPO to use effect-specific parameters.
-Sets effect-specific parameters.
-Effect-specific parameter block.
Size of pParameters, in bytes.
The data in pParameters is completely effect-specific and determined by the implementation of the
SetParameters can only be called on the real-time audio processing thread; no synchronization between SetParameters and the
Gets the current values for any effect-specific parameters.
-Receives an effect-specific parameter block.
Size of pParameters, in bytes.
The data in pParameters is completely effect-specific and determined by the implementation of the
Unlike SetParameters, XAudio2 does not call this method on the realtime audio processing thread. Thus, the XAPO must protect variables shared with
XAudio2 calls this method from the
This method may block and should never be called from the realtime audio processing thread; instead, get the current parameters from CXAPOParametersBase::BeginProcess.
-Defines stream buffer parameters that may change from one call to the next. Used with the Process method.
-Although the format and maximum size values of a particular stream buffer are constant, as defined by the
Defines stream buffer parameters that remain constant while an XAPO is locked. Used with the
The byte size of the respective stream buffer must be at least MaxFrameCount × (pFormat->nBlockAlign) bytes.
-Describes general characteristics of an XAPO. Used with
Describes the current state of the Xbox 360 Controller.
-This structure is used by the
The specific mapping of button to game function varies depending on the game type.
The constant XINPUT_GAMEPAD_TRIGGER_THRESHOLD may be used as the threshold that bLeftTrigger and bRightTrigger must exceed to register as pressed. This is optional, but often desirable. Xbox 360 Controller buttons do not manifest crosstalk. -
-Bitmask of the device digital buttons, as follows. A set bit indicates that the corresponding button is pressed.
Device button | Bitmask |
---|---|
XINPUT_GAMEPAD_DPAD_UP | 0x0001 |
XINPUT_GAMEPAD_DPAD_DOWN | 0x0002 |
XINPUT_GAMEPAD_DPAD_LEFT | 0x0004 |
XINPUT_GAMEPAD_DPAD_RIGHT | 0x0008 |
XINPUT_GAMEPAD_START | 0x0010 |
XINPUT_GAMEPAD_BACK | 0x0020 |
XINPUT_GAMEPAD_LEFT_THUMB | 0x0040 |
XINPUT_GAMEPAD_RIGHT_THUMB | 0x0080 |
XINPUT_GAMEPAD_LEFT_SHOULDER | 0x0100 |
XINPUT_GAMEPAD_RIGHT_SHOULDER | 0x0200 |
XINPUT_GAMEPAD_A | 0x1000 |
XINPUT_GAMEPAD_B | 0x2000 |
XINPUT_GAMEPAD_X | 0x4000 |
XINPUT_GAMEPAD_Y | 0x8000 |
Bits that are set but not defined above are reserved, and their state is undefined.
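Checking these bits is a bitwise AND against the wButtons member. The mask values below match the table and the standard XInput.h constant names:

```c
/* A few of the button masks from XInput.h. */
#define XINPUT_GAMEPAD_DPAD_UP 0x0001
#define XINPUT_GAMEPAD_START   0x0010
#define XINPUT_GAMEPAD_A       0x1000

/* Nonzero if every button in `mask` is pressed in `wButtons`. */
int buttons_down(unsigned short wButtons, unsigned short mask)
{
    return (wButtons & mask) == mask;
}
```

Passing a combined mask such as `XINPUT_GAMEPAD_A | XINPUT_GAMEPAD_DPAD_UP` tests a chord of buttons in one call.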
The current value of the left trigger analog control. The value is between 0 and 255.
The current value of the right trigger analog control. The value is between 0 and 255.
Left thumbstick x-axis value. Each of the thumbstick axis members is a signed value between -32768 and 32767 describing the position of the thumbstick. A value of 0 is centered. Negative values signify down or to the left. Positive values signify up or to the right. The constants
Left thumbstick y-axis value. The value is between -32768 and 32767.
Right thumbstick x-axis value. The value is between -32768 and 32767.
Right thumbstick y-axis value. The value is between -32768 and 32767.
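A common way to use the recommended deadzone constants is to treat small thumbstick values as centered. A minimal sketch, using the deadzone values defined in XInput.h:

```c
#include <stdlib.h>

/* Recommended deadzone constants from XInput.h. */
#define XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE  7849
#define XINPUT_GAMEPAD_RIGHT_THUMB_DEADZONE 8689

/* Treat an axis value inside the deadzone as centered (0). */
short apply_deadzone(short value, short deadzone)
{
    return (short)((abs(value) < deadzone) ? 0 : value);
}
```

Each thumbstick axis would be filtered independently, e.g. `apply_deadzone(gamepad.sThumbLX, XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE)`.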
Retrieves the battery type and charge status of a wireless controller.
-Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1.
Specifies which device associated with this user index should be queried. Must be
Contains information on battery type and charge state.
-The type of battery. BatteryType will be one of the following values.
Value | Description |
---|---|
BATTERY_TYPE_DISCONNECTED | The device is not connected. |
BATTERY_TYPE_WIRED | The device is a wired device and does not have a battery. |
BATTERY_TYPE_ALKALINE | The device has an alkaline battery. |
BATTERY_TYPE_NIMH | The device has a nickel metal hydride battery. |
BATTERY_TYPE_UNKNOWN | The device has an unknown battery type. |
The charge state of the battery. This value is only valid for wireless devices with a known battery type. BatteryLevel will be one of the following values.
Value |
---|
BATTERY_LEVEL_EMPTY |
BATTERY_LEVEL_LOW |
BATTERY_LEVEL_MEDIUM |
BATTERY_LEVEL_FULL |
A table of controller subtypes available in XInput.
-Describes the capabilities of a connected controller. The
The SubType member indicates the specific subtype of controller present. Games may detect the controller subtype and tune their handling of controller input or output based on subtypes that are well suited to their game genre. For example, a car racing game might check for the presence of a wheel controller to provide finer control of the car being driven. However, titles must not disable or ignore a device based on its subtype. Subtypes not recognized by the game or for which the game is not specifically tuned should be treated as a standard Xbox 360 Controller (
Older XUSB Windows drivers report incomplete capabilities information, particularly for wireless devices. The latest XUSB Windows driver provides full support for wired and wireless devices, and more complete and accurate capabilities flags.
-Retrieves a gamepad input event.
-Wireless controllers are not considered active upon system startup, and calls to any of the XInput functions before a wireless controller is made active return
[in] Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1, or
[in] Reserved
[out] Pointer to an
Retrieves the current state of the specified controller.
-Index of the user's controller. Can be a value from 0 to 3. For information about how this value is determined and how the value maps to indicators on the controller, see Multiple Controllers.
Pointer to an
If the function succeeds, the return value is
If the controller is not connected, the return value is
If the function fails, the return value is an error code defined in Winerror.h. The function does not use SetLastError to set the calling thread's last-error code.
When
Sends data to a connected controller. This function is used to activate the vibration function of a controller.
-Index of the user's controller. Can be a value from 0 to 3. For information about how this value is determined and how the value maps to indicators on the controller, see Multiple Controllers.
Pointer to an
If the function succeeds, the return value is
If the controller is not connected, the return value is
If the function fails, the return value is an error code defined in WinError.h. The function does not use SetLastError to set the calling thread's last-error code.
Retrieves the capabilities and features of a connected controller.
-Index of the user's controller. Can be a value in the range 0 to 3. For information about how this value is determined and how the value maps to indicators on the controller, see Multiple Controllers.
Input flags that identify the controller type. If this value is 0, then the capabilities of all controllers connected to the system are returned. Currently, only one value is supported:
Value | Description |
---|---|
XINPUT_FLAG_GAMEPAD | Limit query to devices of Xbox 360 Controller type. |
Any value of dwFlags other than the above or 0 is illegal and will result in an error break when debugging.
Pointer to an
If the function succeeds, the return value is
If the controller is not connected, the return value is
If the function fails, the return value is an error code defined in WinError.h. The function does not use SetLastError to set the calling thread's last-error code.
Sets the reporting state of XInput.
-If enable is
This function is meant to be called when an application gains or loses focus (such as via WM_ACTIVATEAPP). Using this function, you will not have to change the XInput query loop in your application as neutral data will always be reported if XInput is disabled. -
In a controller that supports vibration effects:
Retrieves the sound rendering and sound capture audio device IDs that are associated with the headset connected to the specified controller.
-Index of the gamer associated with the device.
Windows Core Audio device ID string for render (speakers).
Size, in wide-chars, of the render device ID string buffer.
Windows Core Audio device ID string for capture (microphone).
Size, in wide-chars, of the capture device ID string buffer.
If the function successfully retrieves the device IDs for render and capture, the return code is
If there is no headset connected to the controller, the function will also retrieve
If the controller port device is not physically connected, the function will return
If the function fails, it will return a valid Win32 error code.
Callers must allocate the memory for the buffers passed to
Retrieves the battery type and charge status of a wireless controller.
-Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1.
Specifies which device associated with this user index should be queried. Must be
Pointer to an
If the function succeeds, the return value is
Retrieves a gamepad input event.
-[in] Index of the signed-in gamer associated with the device. Can be a value in the range 0 to XUSER_MAX_COUNT - 1, or
[in] Reserved
[out] Pointer to an
If the function succeeds, the return value is
If no new keys have been pressed, the return value is
If the controller is not connected or the user has not activated it, the return value is
If the function fails, the return value is an error code defined in Winerror.h. The function does not use SetLastError to set the calling thread's last-error code.
Wireless controllers are not considered active upon system startup, and calls to any of the XInput functions before a wireless controller is made active return
Contains information on battery type and charge state.
-The type of battery. BatteryType will be one of the following values.
Value | Description |
---|---|
BATTERY_TYPE_DISCONNECTED | The device is not connected. |
BATTERY_TYPE_WIRED | The device is a wired device and does not have a battery. |
BATTERY_TYPE_ALKALINE | The device has an alkaline battery. |
BATTERY_TYPE_NIMH | The device has a nickel metal hydride battery. |
BATTERY_TYPE_UNKNOWN | The device has an unknown battery type. |
The charge state of the battery. This value is only valid for wireless devices with a known battery type. BatteryLevel will be one of the following values.
Value |
---|
BATTERY_LEVEL_EMPTY |
BATTERY_LEVEL_LOW |
BATTERY_LEVEL_MEDIUM |
BATTERY_LEVEL_FULL |
Describes the capabilities of a connected controller. The
The SubType member indicates the specific subtype of controller present. Games may detect the controller subtype and tune their handling of controller input or output based on subtypes that are well suited to their game genre. For example, a car racing game might check for the presence of a wheel controller to provide finer control of the car being driven. However, titles must not disable or ignore a device based on its subtype. Subtypes not recognized by the game or for which the game is not specifically tuned should be treated as a standard Xbox 360 Controller (
Older XUSB Windows drivers report incomplete capabilities information, particularly for wireless devices. The latest XUSB Windows driver provides full support for wired and wireless devices, and more complete and accurate capabilities flags.
-Specifies keystroke data returned by
Future devices may return HID codes and virtual key values that are not supported on current devices, and are currently undefined. Applications should ignore these unexpected values.
A virtual-key code is a byte value that represents a particular physical key on the keyboard, not the character or characters (possibly none) that the key can be mapped to based on keyboard state. The keyboard state at the time a virtual key is pressed modifies the character reported. For example, VK_4 might represent a "4" or a "$", depending on the state of the SHIFT key.
A reported keyboard event includes the virtual key that caused the event, whether the key was pressed or released (or is repeating), and the state of the keyboard at the time of the event. The keyboard state includes information about whether any CTRL, ALT, or SHIFT keys are down.
If the keyboard event represents a Unicode character (for example, pressing the "A" key), the Unicode member will contain that character. Otherwise, Unicode will contain the value zero.
The valid virtual-key (VK_xxx) codes are defined in XInput.h. In addition to codes that indicate key presses, the following codes indicate controller input.
Value | Description |
---|---|
VK_PAD_A | A button |
VK_PAD_B | B button |
VK_PAD_X | X button |
VK_PAD_Y | Y button |
VK_PAD_RSHOULDER | Right shoulder button |
VK_PAD_LSHOULDER | Left shoulder button |
VK_PAD_LTRIGGER | Left trigger |
VK_PAD_RTRIGGER | Right trigger |
VK_PAD_DPAD_UP | Directional pad up |
VK_PAD_DPAD_DOWN | Directional pad down |
VK_PAD_DPAD_LEFT | Directional pad left |
VK_PAD_DPAD_RIGHT | Directional pad right |
VK_PAD_START | START button |
VK_PAD_BACK | BACK button |
VK_PAD_LTHUMB_PRESS | Left thumbstick click |
VK_PAD_RTHUMB_PRESS | Right thumbstick click |
VK_PAD_LTHUMB_UP | Left thumbstick up |
VK_PAD_LTHUMB_DOWN | Left thumbstick down |
VK_PAD_LTHUMB_RIGHT | Left thumbstick right |
VK_PAD_LTHUMB_LEFT | Left thumbstick left |
VK_PAD_LTHUMB_UPLEFT | Left thumbstick up and left |
VK_PAD_LTHUMB_UPRIGHT | Left thumbstick up and right |
VK_PAD_LTHUMB_DOWNRIGHT | Left thumbstick down and right |
VK_PAD_LTHUMB_DOWNLEFT | Left thumbstick down and left |
VK_PAD_RTHUMB_UP | Right thumbstick up |
VK_PAD_RTHUMB_DOWN | Right thumbstick down |
VK_PAD_RTHUMB_RIGHT | Right thumbstick right |
VK_PAD_RTHUMB_LEFT | Right thumbstick left |
VK_PAD_RTHUMB_UPLEFT | Right thumbstick up and left |
VK_PAD_RTHUMB_UPRIGHT | Right thumbstick up and right |
VK_PAD_RTHUMB_DOWNRIGHT | Right thumbstick down and right |
VK_PAD_RTHUMB_DOWNLEFT | Right thumbstick down and left |
-Represents the state of a controller.
-The dwPacketNumber member is incremented only if the status of the controller has changed since the controller was last polled.
-State packet number. The packet number indicates whether there have been any changes in the state of the controller. If the dwPacketNumber member is the same in sequentially returned
Specifies motor speed levels for the vibration function of a controller.
-The left motor is the low-frequency rumble motor. The right motor is the high-frequency rumble motor. The two motors are not the same, and they create different vibration effects.
-Speed of the left motor. Valid values are in the range 0 to 65,535. Zero signifies no motor use; 65,535 signifies 100 percent motor use.
Speed of the right motor. Valid values are in the range 0 to 65,535. Zero signifies no motor use; 65,535 signifies 100 percent motor use.
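A game that tracks rumble intensity as a normalized float can map it onto this 0 to 65,535 range. A small sketch (the clamping and rounding are assumptions, not part of XInput):

```c
/* Map a normalized intensity in [0.0, 1.0] to a motor speed word,
   clamping out-of-range input and rounding to nearest. */
unsigned short motor_speed(double intensity)
{
    if (intensity < 0.0) intensity = 0.0;
    if (intensity > 1.0) intensity = 1.0;
    return (unsigned short)(intensity * 65535.0 + 0.5);
}
```

The resulting words would be assigned to the left and right motor speed members before submitting the vibration state.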
The
-
You can specify
Typically, use
The
-
The
-
The
-
The
-
The methods in this interface present your object's data as a contiguous sequence of bytes that you can read or write. There are also methods for committing and reverting changes on streams that are open in transacted mode and methods for restricting access to a range of bytes in the stream.
Streams can remain open for long periods of time without consuming file-system resources. The IUnknown::Release method is similar to a close function on a file. Once released, the stream object is no longer valid and cannot be used.
Clients of asynchronous monikers can choose between a data-pull or data-push model for driving an asynchronous
- IMoniker::BindToStorage operation and for receiving asynchronous notifications. See
- URL Monikers for more information. The following table compares the behavior of asynchronous
-
The Seek method changes the seek reference to a new location. The new location is relative to either the beginning of the stream, the end of the stream, or the current seek reference.
-The displacement to be added to the location indicated by the dwOrigin parameter. If dwOrigin is STREAM_SEEK_SET, this is interpreted as an unsigned value rather than a signed value.
The origin for the displacement specified in dlibMove. The origin can be the beginning of the file (STREAM_SEEK_SET), the current seek reference (STREAM_SEEK_CUR), or the end of the file (STREAM_SEEK_END). For more information about values, see the STREAM_SEEK enumeration.
A reference to the location where this method writes the value of the new seek reference from the beginning of the stream.
You can set this reference to
You can also use this method to obtain the current value of the seek reference by calling this method with the dwOrigin parameter set to STREAM_SEEK_CUR and the dlibMove parameter set to 0 so that the seek reference is not changed. The current seek reference is returned in the plibNewPosition parameter.
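The arithmetic Seek performs can be sketched as follows. This is a simplified illustration (the real method also validates the result and reports it through plibNewPosition), using the standard STREAM_SEEK origin values:

```c
/* STREAM_SEEK origin values from objidl.h. */
enum { STREAM_SEEK_SET = 0, STREAM_SEEK_CUR = 1, STREAM_SEEK_END = 2 };

/* New seek position for a stream of `size` bytes, or -1 for a bad origin. */
long long new_seek_pos(long long current, long long size,
                       long long dlibMove, int dwOrigin)
{
    switch (dwOrigin) {
    case STREAM_SEEK_SET: return dlibMove;           /* from the beginning */
    case STREAM_SEEK_CUR: return current + dlibMove; /* from the current position */
    case STREAM_SEEK_END: return size + dlibMove;    /* from the end */
    default:              return -1;                 /* invalid origin */
    }
}
```

Note how STREAM_SEEK_CUR with a displacement of 0 simply returns the current position, matching the query idiom described above.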
-The SetSize method changes the size of the stream object.
-Specifies the new size, in bytes, of the stream.
This method can return one of these values.
The size of the stream object was successfully changed.
Asynchronous Storage only: Part or all of the stream's data is currently unavailable. For more information, see IFillLockBytes and Asynchronous Storage.
The stream size is not changed because there is no space left on the storage device.
The value of the libNewSize parameter is not supported by the implementation. Not all streams support greater than 2³² bytes. If a stream does not support more than 2³² bytes, the high DWORD of libNewSize must be zero. If it is nonzero, the implementation may return STG_E_INVALIDFUNCTION. In general, COM-based implementations of the
The object has been invalidated by a revert operation above it in the transaction tree.
If the libNewSize parameter is smaller than the current stream, the stream is truncated to the indicated size.
The seek reference is not affected by the change in stream size.
Calling
The CopyTo method copies a specified number of bytes from the current seek reference in the stream to the current seek reference in another stream.
-A reference to the destination stream. The stream pointed to by pstm can be a new stream or a clone of the source stream.
The number of bytes to copy from the source stream.
A reference to the location where this method writes the actual number of bytes written to the destination. You can set this reference to
A reference to the location where this method writes the actual number of bytes read from the source. You can set this reference to
The CopyTo method copies the specified bytes from one stream to another. It can also be used to copy a stream to itself. The seek reference in each stream instance is adjusted for the number of bytes read or written. This method is equivalent to reading cb bytes into memory using
-
The destination stream can be a clone of the source stream created by calling the
-
If
If
To copy the remainder of the source from the current seek reference, specify the maximum large integer value for the cb parameter. If the seek reference is the beginning of the stream, this operation copies the entire stream.
-The Commit method ensures that any changes made to a stream object open in transacted mode are reflected in the parent storage. If the stream object is open in direct mode,
Controls how the changes for the stream object are committed. See the
This method can return one of these values.
Changes to the stream object were successfully committed to the parent level.
Asynchronous Storage only: Part or all of the stream's data is currently unavailable. For more information see IFillLockBytes and Asynchronous Storage.
The commit operation failed due to lack of space on the storage device.
The object has been invalidated by a revert operation above it in the transaction tree.
The Commit method ensures that changes to a stream object opened in transacted mode are reflected in the parent storage. Changes that have been made to the stream since it was opened or last committed are reflected to the parent storage object. If the parent is opened in transacted mode, the parent may revert at a later time, rolling back the changes to this stream object. The compound file implementation does not support the opening of streams in transacted mode, so this method has very little effect other than to flush memory buffers. For more information, see
-
If the stream is open in direct mode, this method ensures that any memory buffers have been flushed out to the underlying storage object. This is much like a flush in traditional file systems.
The
The Revert method discards all changes that have been made to a transacted stream since the last
-
This method can return one of these values.
The stream was successfully reverted to its previous version.
Asynchronous Storage only: Part or all of the stream's data is currently unavailable. For more information see IFillLockBytes and Asynchronous Storage.
The Revert method discards changes made to a transacted stream since the last commit operation.
- The Stat method retrieves the
-
The Clone method creates a new stream object with its own seek reference that references the same bytes as the original stream.
-When successful, reference to the location of an
The Clone method creates a new stream object for accessing the same bytes but using a separate seek reference. The new stream object sees the same data as the source-stream object. Changes written to one object are immediately visible in the other. Range locking is shared between the stream objects.
The initial setting of the seek reference in the cloned stream instance is the same as the current setting of the seek reference in the original stream at the time of the clone operation.
- The
-
Reads a specified number of bytes from the stream object into memory starting at the current read/write location within the stream.
-[in] Points to the buffer into which the stream is read. If an error occurs, this value is
[in] Specifies the number of bytes of data to attempt to read from the stream object.
[out] Pointer to a location where this method writes the actual number of bytes read from the stream object. You can set this reference to
Writes a specified number of bytes into the stream object starting at the current read/write location within the stream.
-[in] Points to the buffer into which the stream should be written.
[in] The number of bytes of data to attempt to write into the stream.
[out] Pointer to a location where this method writes the actual number of bytes written to the stream object. The caller can set this reference to
The
-
The
-
The methods in this interface present your object's data as a contiguous sequence of bytes that you can read or write. There are also methods for committing and reverting changes on streams that are open in transacted mode and methods for restricting access to a range of bytes in the stream.
Streams can remain open for long periods of time without consuming file-system resources. The IUnknown::Release method is similar to a close function on a file. Once released, the stream object is no longer valid and cannot be used.
Clients of asynchronous monikers can choose between a data-pull or data-push model for driving an asynchronous
- IMoniker::BindToStorage operation and for receiving asynchronous notifications. See
- URL Monikers for more information. The following table compares the behavior of asynchronous
-
The
-
The
-
This interface is used to return arbitrary length data.
-An
The ID3DBlob interface is type defined in the D3DCommon.h header file as a
Blobs can be used as a data buffer, storing vertex, adjacency, and material information during mesh optimization and loading operations. Also, these objects are used to return object code and error messages in APIs that compile vertex, geometry and pixel shaders.
-Get a reference to the data.
-Get the size.
-Get a reference to the data.
-Returns a reference.
Get the size.
-The size of the data, in bytes.
Defines a shader macro.
-You can use shader macros in your shaders. The
Shader_Macros[] = { "zero", "0", null, null };
The following shader or effect creation functions take an array of shader macros as an input parameter:
The macro name.
The macro definition.
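The macro array is conventionally terminated by a name/definition pair of nulls, as in the snippet above. A sketch of walking such an array (the struct name here is illustrative, mirroring D3D_SHADER_MACRO's layout):

```c
#include <stddef.h>

/* Mirrors the Name/Definition layout of D3D_SHADER_MACRO. */
typedef struct {
    const char *Name;
    const char *Definition;
} ShaderMacro;

/* Count entries in a macro array terminated by a NULL Name. */
size_t count_macros(const ShaderMacro *macros)
{
    size_t n = 0;
    while (macros[n].Name != NULL)
        ++n;
    return n;
}
```

The null terminator is what lets the compiler APIs accept an arbitrary-length macro list without a separate count parameter.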
Driver type options.
-The driver type is required when calling
The driver type is unknown.
A hardware driver, which implements Direct3D features in hardware. This is the primary driver that you should use in your Direct3D applications because it provides the best performance. A hardware driver uses hardware acceleration (on supported hardware) but can also use software for parts of the pipeline that are not supported in hardware. This driver type is often referred to as a hardware abstraction layer or HAL.
A reference driver, which is a software implementation that supports every Direct3D feature. A reference driver is designed for accuracy rather than speed and as a result is slow but accurate. The rasterizer portion of the driver does make use of special CPU instructions whenever it can, but it is not intended for retail applications; use it only for feature testing, demonstration of functionality, debugging, or verifying bugs in other drivers. The reference device for this driver is installed by the Windows SDK 8.0 or later and is intended only as a debug aid for development purposes. This driver may be referred to as a REF driver, a reference driver, or a reference rasterizer.
Note: When you use the REF driver in Windows Store apps, the REF driver renders correctly but doesn't display any output on the screen. To verify bugs in hardware drivers for Windows Store apps, use the WARP driver.
A software driver, which is a driver implemented completely in software. The software implementation is not intended for a high-performance application due to its very slow performance.
A WARP driver, which is a high-performance software rasterizer. The rasterizer supports feature levels 9_1 through level 10_1 with a high performance software implementation. For information about limitations creating a WARP device on certain feature levels, see Limitations Creating WARP and Reference Devices. For more information about using a WARP driver, see Windows Advanced Rasterization Platform (WARP) In-Depth Guide.
Note: The WARP driver that Windows 8 includes supports feature levels 9_1 through 11_1. The WARP driver that Windows 8.1 includes fully supports feature level 11_1, including tiled resources.
Describes the set of features targeted by a Direct3D device.
-For an overview of the capabilities of each feature level, see Overview For Each Feature Level.
For information about limitations creating non-hardware-type devices on certain feature levels, see Limitations Creating WARP and Reference Devices.
-Targets features supported by feature level 9.1 including shader model 2.
Targets features supported by feature level 9.2 including shader model 2.
Targets features supported by feature level 9.3 including shader model 2.0b.
Targets features supported by Direct3D 10.0 including shader model 4.
Targets features supported by Direct3D 10.1 including shader model 4.
Targets features supported by Direct3D 11.0 including shader model 5.
Targets features supported by Direct3D 11.1 including shader model 5 and logical blend operations. This feature level requires a display driver that is at least implemented to WDDM for Windows 8 (WDDM 1.2).
Targets features supported by Direct3D 12.0 including shader model 5.
Targets features supported by Direct3D 12.1 including shader model 5.
Specifies interpolation mode, which affects how values are calculated during rasterization.
-The interpolation mode is undefined.
Don't interpolate between register values.
Interpolate linearly between register values.
Interpolate linearly between register values but centroid clamped when multisampling.
Interpolate linearly between register values but with no perspective correction.
Interpolate linearly between register values but with no perspective correction and centroid clamped when multisampling.
Interpolate linearly between register values but sample clamped when multisampling.
Interpolate linearly between register values but with no perspective correction and sample clamped when multisampling.
Values that indicate the minimum desired interpolation precision.
-For more info, see Scalar Types and Using HLSL minimum precision.
-Default minimum precision, which is 32-bit precision.
Minimum precision is min16float, which is 16-bit floating point.
Minimum precision is min10float, which is 10-bit floating point.
Reserved
Minimum precision is min16int, which is 16-bit signed integer.
Minimum precision is min16uint, which is 16-bit unsigned integer.
Minimum precision is any 16-bit value.
Minimum precision is any 10-bit value.
Values that indicate how the pipeline interprets vertex data that is bound to the input-assembler stage. These primitive topology values determine how the vertex data is rendered on screen.
Use the
The following diagram shows the various primitive types for a geometry shader object.
Values that identify the type of resource to be viewed as a shader resource.
A
The type is unknown.
The resource is a buffer.
The resource is a 1D texture.
The resource is an array of 1D textures.
The resource is a 2D texture.
The resource is an array of 2D textures.
The resource is a multisampling 2D texture.
The resource is an array of multisampling 2D textures.
The resource is a 3D texture.
The resource is a cube texture.
The resource is an array of cube textures.
The resource is a raw buffer. For more info about raw viewing of buffers, see Raw Views of Buffers.
A multithread interface accesses multithread settings and can only be used if the thread-safe layer is turned on.
This interface is obtained by querying it from the ID3D10Device Interface using IUnknown::QueryInterface.
Enter a device's critical section.
Entering a device's critical section prevents other threads from simultaneously calling that device's methods (if multithread protection is set to true), calling DXGI methods, and calling the methods of all resource, view, shader, state, and asynchronous interfaces.
This function should be used in multithreaded applications when there is a series of graphics commands that must happen in order. This function is typically called at the beginning of the series of graphics commands, and Leave is typically called at the end of that series.
Leave a device's critical section.
This function is typically used in multithreaded applications when there is a series of graphics commands that must happen in order.
Turn multithreading on or off.
True to turn multithreading on, false to turn it off.
True if multithreading was turned on prior to calling this method, false otherwise.
Find out if multithreading is turned on or not.
Whether or not multithreading is turned on. True means on, false means off.
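The Enter/Leave and SetMultithreadProtected pattern described above behaves much like an optionally-taken lock. The following is a minimal Python analogy, not the real COM interface; the class and method names are hypothetical stand-ins for ID3D10Multithread's members, and it simplifies by assuming protection is not toggled between an enter and its matching leave.

```python
import threading

class Multithread:
    """Illustrative analogy for the ID3D10Multithread pattern:
    enter/leave guard a critical section, and the guard is only
    taken when multithread protection is enabled."""

    def __init__(self):
        self._lock = threading.RLock()
        self._protected = False

    def set_multithread_protected(self, enabled: bool) -> bool:
        # Returns the previous setting, mirroring the documented
        # "true if multithreading was turned on prior to calling" behavior.
        previous, self._protected = self._protected, enabled
        return previous

    def get_multithread_protected(self) -> bool:
        return self._protected

    def enter(self):
        # Prevents other threads from entering until leave() is called.
        if self._protected:
            self._lock.acquire()

    def leave(self):
        if self._protected:
            self._lock.release()
```

A caller would bracket an ordered series of commands with `enter()` and `leave()`, just as the documentation describes for a series of graphics commands that must happen in order.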
The AUDCLNT_SHAREMODE enumeration defines constants that indicate whether an audio stream runs in shared mode or in exclusive mode.
The IAudioClient::Initialize and IAudioClient::IsFormatSupported methods use the constants defined in the AUDCLNT_SHAREMODE enumeration.
In shared mode, the client can share the audio endpoint device with clients that run in other user-mode processes. The audio engine always supports formats for client streams that match the engine's mix format. In addition, the audio engine might support another format if the Windows audio service can insert system effects into the client stream to convert the client format to the mix format.
In exclusive mode, the Windows audio service attempts to establish a connection in which the client has exclusive access to the audio endpoint device. In this mode, the audio engine inserts no system effects into the local stream to aid in the creation of the connection point. Either the audio device can handle the specified format directly or the method fails.
For more information about shared-mode and exclusive-mode streams, see User-Mode Audio Components.
The audio stream will run in shared mode. For more information, see Remarks.
The audio stream will run in exclusive mode. For more information, see Remarks.
The AudioSessionState enumeration defines constants that indicate the current state of an audio session.
When a client opens a session by assigning the first stream to the session (by calling the IAudioClient::Initialize method), the initial session state is inactive. The session state changes from inactive to active when a stream in the session begins running (because the client has called the IAudioClient::Start method). The session changes from active to inactive when the client stops the last running stream in the session (by calling the IAudioClient::Stop method). The session state changes to expired when the client destroys the last stream in the session by releasing all references to the stream object.
The system volume-control program, Sndvol, displays volume controls for both active and inactive sessions. Sndvol stops displaying the volume control for a session when the session state changes to expired. For more information about Sndvol, see Audio Sessions.
The IAudioSessionControl::GetState and IAudioSessionEvents::OnStateChanged methods use the constants defined in the AudioSessionState enumeration.
For more information about session states, see Audio Sessions.
The audio session is inactive. (It contains at least one stream, but none of the streams in the session is currently running.)
The audio session is active. (At least one of the streams in the session is running.)
The audio session has expired. (It contains no streams.)
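The session lifecycle described in the remarks above can be sketched as a small state machine. This is an illustrative Python model, not the WASAPI API; the state names mirror the AudioSessionState values and the method names are hypothetical stand-ins for the client calls that trigger each transition.

```python
# Illustrative state machine for the audio session lifecycle.
INACTIVE, ACTIVE, EXPIRED = "inactive", "active", "expired"

class AudioSession:
    def __init__(self):
        self.running = 0        # streams currently running
        self.state = INACTIVE   # initial state once the first stream is assigned

    def start_stream(self):
        # Corresponds to a stream starting via IAudioClient::Start.
        self.running += 1
        self.state = ACTIVE

    def stop_stream(self):
        # Corresponds to IAudioClient::Stop; the session becomes
        # inactive when the last running stream stops.
        self.running = max(0, self.running - 1)
        if self.running == 0:
            self.state = INACTIVE

    def release_all_streams(self):
        # Corresponds to releasing all references to the last stream object.
        self.running = 0
        self.state = EXPIRED
```

This matches the documented transitions: inactive to active on start, active to inactive when the last running stream stops, and expired once the session contains no streams.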
Specifies the category of an audio stream.
Note that only a subset of the audio stream categories are valid for certain stream types.
| Stream type | Valid categories |
|---|---|
| Render stream | All categories are valid. |
| Capture stream | AudioCategory_Communications, AudioCategory_Speech, AudioCategory_Other |
| Loopback stream | AudioCategory_Other |
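The validity table above can be encoded as a simple lookup. This is a hedged sketch only: the short category names abbreviate the AudioCategory_* values, and the function is an illustration, not part of any Windows API.

```python
# Encodes the stream-type / category validity table above.
# None means every category is valid for that stream type.
VALID_CATEGORIES = {
    "render": None,
    "capture": {"Communications", "Speech", "Other"},
    "loopback": {"Other"},
}

def category_valid(stream_type: str, category: str) -> bool:
    """Return True if the category may be used with the stream type."""
    allowed = VALID_CATEGORIES[stream_type]
    return allowed is None or category in allowed
```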
Games should categorize their music streams as AudioCategory_GameMedia so that game music mutes automatically if another application plays music in the background. Music or video applications should categorize their streams as AudioCategory_Media or AudioCategory_Movie so they will take priority over AudioCategory_GameMedia streams.
The values AudioCategory_ForegroundOnlyMedia and AudioCategory_BackgroundCapableMedia are deprecated. For Windows Store apps, these values will continue to function the same when running on Windows 10 as they did on Windows 8.1. Attempting to use these values in a Universal Windows Platform (UWP) app will result in compilation errors and an exception at runtime. Using these values in a Windows desktop application built with the Windows 10 SDK will result in a compilation error.
Other audio stream.
Media that will only stream when the app is in the foreground. This enumeration value has been deprecated. For more information, see the Remarks section.
Real-time communications, such as VOIP or chat.
Alert sounds.
Sound effects.
Game sound effects.
Background audio for games.
Game chat audio. Similar to AudioCategory_Communications except that AudioCategory_GameChat will not attenuate other streams.
Speech.
Stream that includes audio with dialog.
Stream that includes audio without dialog.