The QVideoFrame class represents a frame of video data.
Functions

def __eq__ (other)
def __ne__ (other)
def availableMetaData ()
def bits ()
def buffer ()
def bytesPerLine ()
def bytesPerLine (plane)
def endTime ()
def fieldType ()
def handle ()
def handleType ()
def height ()
def image ()
def isMapped ()
def isReadable ()
def isValid ()
def isWritable ()
def map (mode)
def mapMode ()
def mappedBytes ()
def metaData (key)
def pixelFormat ()
def planeCount ()
def setEndTime (time)
def setFieldType (arg__1)
def setMetaData (key, value)
def setStartTime (time)
def size ()
def startTime ()
def unmap ()
def width ()

Static functions

def imageFormatFromPixelFormat (format)
def pixelFormatFromImageFormat (format)
A QVideoFrame encapsulates the pixel data of a video frame, together with information about the frame.

Video frames can come from several places - decoded media, a camera, or generated programmatically. The way pixels are described in these frames can vary greatly, and some pixel formats offer greater compression opportunities at the expense of ease of use.

The pixel contents of a video frame can be mapped to memory using the map() function. While mapped, the video data can be accessed using the bits() function, which returns a pointer to a buffer. The total size of this buffer is given by the mappedBytes() function, and the size of each line is given by bytesPerLine(). The return value of the handle() function may also be used to access frame data using the internal buffer's native APIs (for example - an OpenGL texture handle).

A video frame can also have timestamp information associated with it. These timestamps can be used by an implementation of QAbstractVideoSurface to determine when to start and stop displaying the frame, but not all surfaces might respect this setting.

The video pixel data in a QVideoFrame is encapsulated in a QAbstractVideoBuffer. A QVideoFrame may be constructed from any buffer type by subclassing the QAbstractVideoBuffer class.

Note

Since video frames can be expensive to copy, QVideoFrame is explicitly shared, so any change made to a video frame will also apply to any copies.
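The mapping workflow described above can be illustrated with a minimal sketch. The helper name and the printed fields are illustrative only; it assumes a valid frame obtained, for example, from a QAbstractVideoSurface implementation.

from PySide2.QtMultimedia import QAbstractVideoBuffer

def dump_frame_info(frame):
    """Map a QVideoFrame and print basic information about its pixel data."""
    if not frame.isValid():
        return
    if frame.map(QAbstractVideoBuffer.ReadOnly):
        try:
            print("size:", frame.width(), "x", frame.height())
            print("pixel format:", frame.pixelFormat())
            print("bytes per line:", frame.bytesPerLine())
            print("mapped bytes:", frame.mappedBytes())
            data = frame.bits()  # buffer over the mapped frame data
        finally:
            frame.unmap()  # always release the mapping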
QVideoFrame¶

QVideoFrame()
QVideoFrame(buffer, size, format)
QVideoFrame(image)
QVideoFrame(other)
QVideoFrame(bytes, size, bytesPerLine, format)

- param bytes – int
- param format – PixelFormat
- param bytesPerLine – int
- param image – QImage
- param size – QSize
- param buffer – QAbstractVideoBuffer
- param other – QVideoFrame
Constructs a null video frame.

Constructs a video frame from a buffer with the given pixel format and size in pixels.

Note

This doesn't increment the reference count of the video buffer.

Constructs a video frame of the given pixel format and size in pixels. bytesPerLine (stride) is the length of each scan line in bytes, and bytes is the total number of bytes that must be allocated for the frame.
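As a rough sketch of the constructors above (the sizes and formats below are arbitrary values chosen for illustration):

from PySide2.QtCore import QSize
from PySide2.QtGui import QImage
from PySide2.QtMultimedia import QVideoFrame

# From a QImage; the frame's pixel format is derived from the image format.
image = QImage(320, 240, QImage.Format_RGB32)
image.fill(0xFF2060A0)  # fill with an arbitrary opaque colour
frame_from_image = QVideoFrame(image)

# From pre-allocated storage: total byte count, size in pixels,
# bytes per scan line (stride), and the pixel format.
width, height = 320, 240
stride = width * 4  # 4 bytes per pixel for Format_RGB32
frame_from_storage = QVideoFrame(stride * height, QSize(width, height),
                                 stride, QVideoFrame.Format_RGB32)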
PySide2.QtMultimedia.QVideoFrame.FieldType¶

Specifies the field an interlaced video frame belongs to.

| Constant | Description |
|---|---|
| QVideoFrame.ProgressiveFrame | The frame is not interlaced. |
| QVideoFrame.TopField | The frame contains a top field. |
| QVideoFrame.BottomField | The frame contains a bottom field. |
| QVideoFrame.InterlacedFrame | The frame contains a merged top and bottom field. |
PySide2.QtMultimedia.QVideoFrame.PixelFormat¶

Enumerates video data types.

| Constant | Description |
|---|---|
| QVideoFrame.Format_Invalid | The frame is invalid. |
| QVideoFrame.Format_ARGB32 | The frame is stored using a 32-bit ARGB format (0xAARRGGBB). This is equivalent to QImage::Format_ARGB32. |
| QVideoFrame.Format_ARGB32_Premultiplied | The frame is stored using a premultiplied 32-bit ARGB format (0xAARRGGBB). This is equivalent to QImage::Format_ARGB32_Premultiplied. |
| QVideoFrame.Format_RGB32 | The frame is stored using a 32-bit RGB format (0xffRRGGBB). This is equivalent to QImage::Format_RGB32. |
| QVideoFrame.Format_RGB24 | The frame is stored using a 24-bit RGB format (8-8-8). This is equivalent to QImage::Format_RGB888. |
| QVideoFrame.Format_RGB565 | The frame is stored using a 16-bit RGB format (5-6-5). This is equivalent to QImage::Format_RGB16. |
| QVideoFrame.Format_RGB555 | The frame is stored using a 16-bit RGB format (5-5-5). This is equivalent to QImage::Format_RGB555. |
| QVideoFrame.Format_ARGB8565_Premultiplied | The frame is stored using a 24-bit premultiplied ARGB format (8-5-6-5). |
| QVideoFrame.Format_BGRA32 | The frame is stored using a 32-bit BGRA format (0xBBGGRRAA). |
| QVideoFrame.Format_BGRA32_Premultiplied | The frame is stored using a premultiplied 32-bit BGRA format. |
| QVideoFrame.Format_ABGR32 | The frame is stored using a 32-bit ABGR format (0xAABBGGRR). |
| QVideoFrame.Format_BGR32 | The frame is stored using a 32-bit BGR format (0xBBGGRRff). |
| QVideoFrame.Format_BGR24 | The frame is stored using a 24-bit BGR format (0xBBGGRR). |
| QVideoFrame.Format_BGR565 | The frame is stored using a 16-bit BGR format (5-6-5). |
| QVideoFrame.Format_BGR555 | The frame is stored using a 16-bit BGR format (5-5-5). |
| QVideoFrame.Format_BGRA5658_Premultiplied | The frame is stored using a 24-bit premultiplied BGRA format (5-6-5-8). |
| QVideoFrame.Format_AYUV444 | The frame is stored using a packed 32-bit AYUV format (0xAAYYUUVV). |
| QVideoFrame.Format_AYUV444_Premultiplied | The frame is stored using a packed premultiplied 32-bit AYUV format (0xAAYYUUVV). |
| QVideoFrame.Format_YUV444 | The frame is stored using a 24-bit packed YUV format (8-8-8). |
| QVideoFrame.Format_YUV420P | The frame is stored using an 8-bit per component planar YUV format with the U and V planes horizontally and vertically sub-sampled, i.e. the height and width of the U and V planes are half that of the Y plane. |
| QVideoFrame.Format_YUV422P | The frame is stored using an 8-bit per component planar YUV format with the U and V planes horizontally sub-sampled, i.e. the width of the U and V planes is half that of the Y plane, and the height of the U and V planes is the same as that of the Y plane. |
| QVideoFrame.Format_YV12 | The frame is stored using an 8-bit per component planar YVU format with the V and U planes horizontally and vertically sub-sampled, i.e. the height and width of the V and U planes are half that of the Y plane. |
| QVideoFrame.Format_UYVY | The frame is stored using an 8-bit per component packed YUV format with the U and V planes horizontally sub-sampled (U-Y-V-Y), i.e. two horizontally adjacent pixels are stored as a 32-bit macropixel which has a Y value for each pixel and common U and V values. |
| QVideoFrame.Format_YUYV | The frame is stored using an 8-bit per component packed YUV format with the U and V planes horizontally sub-sampled (Y-U-Y-V), i.e. two horizontally adjacent pixels are stored as a 32-bit macropixel which has a Y value for each pixel and common U and V values. |
| QVideoFrame.Format_NV12 | The frame is stored using an 8-bit per component semi-planar YUV format with a Y plane (Y) followed by a horizontally and vertically sub-sampled, packed UV plane (U-V). |
| QVideoFrame.Format_NV21 | The frame is stored using an 8-bit per component semi-planar YUV format with a Y plane (Y) followed by a horizontally and vertically sub-sampled, packed VU plane (V-U). |
| QVideoFrame.Format_IMC1 | The frame is stored using an 8-bit per component planar YUV format with the U and V planes horizontally and vertically sub-sampled. This is similar to the Format_YUV420P type, except that the bytes per line of the U and V planes are padded out to the same stride as the Y plane. |
| QVideoFrame.Format_IMC2 | The frame is stored using an 8-bit per component planar YUV format with the U and V planes horizontally and vertically sub-sampled. This is similar to the Format_YUV420P type, except that the lines of the U and V planes are interleaved, i.e. each line of U data is followed by a line of V data creating a single line of the same stride as the Y data. |
| QVideoFrame.Format_IMC3 | The frame is stored using an 8-bit per component planar YVU format with the V and U planes horizontally and vertically sub-sampled. This is similar to the Format_YV12 type, except that the bytes per line of the V and U planes are padded out to the same stride as the Y plane. |
| QVideoFrame.Format_IMC4 | The frame is stored using an 8-bit per component planar YVU format with the V and U planes horizontally and vertically sub-sampled. This is similar to the Format_YV12 type, except that the lines of the V and U planes are interleaved, i.e. each line of V data is followed by a line of U data creating a single line of the same stride as the Y data. |
| QVideoFrame.Format_Y8 | The frame is stored using an 8-bit greyscale format. |
| QVideoFrame.Format_Y16 | The frame is stored using a 16-bit linear greyscale format. Little endian. |
| QVideoFrame.Format_Jpeg | The frame is stored in compressed Jpeg format. |
| QVideoFrame.Format_CameraRaw | The frame is stored using a device specific camera raw format. |
| QVideoFrame.Format_AdobeDng | The frame is stored using raw Adobe Digital Negative (DNG) format. |
| QVideoFrame.Format_User | Start value for user defined pixel formats. |
PySide2.QtMultimedia.QVideoFrame.availableMetaData()¶

Returns any extra metadata associated with this frame.
PySide2.QtMultimedia.QVideoFrame.bits()¶

Return type: uchar

Returns a pointer to the start of the frame data buffer.

This value is only valid while the frame data is mapped.

If the buffer was not mapped with read access, the contents of this buffer will initially be uninitialized.
PySide2.QtMultimedia.QVideoFrame.buffer()¶

Returns the underlying video buffer, or null if there is none.
PySide2.QtMultimedia.QVideoFrame.bytesPerLine(plane)¶

Parameters: plane – int
Return type: int

Returns the number of bytes in a scan line of a plane.

This value is only valid while the frame data is mapped.
PySide2.QtMultimedia.QVideoFrame.bytesPerLine()¶

Return type: int

Returns the number of bytes in a scan line.

Note

For planar formats this is the bytes per line of the first plane only. The bytes per line of subsequent planes should be calculated as per the frame pixel format.

This value is only valid while the frame data is mapped.
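For planar formats, the stride of each plane can be read with the per-plane overload while the frame is mapped. A minimal sketch (the helper name is illustrative and assumes the frame exposes planeCount()):

from PySide2.QtMultimedia import QAbstractVideoBuffer

def plane_strides(frame):
    """Return the bytes-per-line of every plane of a mapped frame."""
    strides = []
    if frame.map(QAbstractVideoBuffer.ReadOnly):
        try:
            for plane in range(frame.planeCount()):
                strides.append(frame.bytesPerLine(plane))
        finally:
            frame.unmap()
    return strides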
PySide2.QtMultimedia.QVideoFrame.endTime()¶

Return type: qint64

Returns the presentation time (in microseconds) when a frame should stop being displayed.

An invalid time is represented as -1.
PySide2.QtMultimedia.QVideoFrame.fieldType()¶

Returns the field an interlaced video frame belongs to.

If the video is not interlaced this will return ProgressiveFrame.
PySide2.QtMultimedia.QVideoFrame.handle()¶

Return type: object

Returns a type specific handle to a video frame's buffer.

For an OpenGL texture this would be the texture ID.
PySide2.QtMultimedia.QVideoFrame.handleType()¶

Return type: HandleType

Returns the type of a video frame's handle.
PySide2.QtMultimedia.QVideoFrame.height()¶

Return type: int

Returns the height of a video frame.
PySide2.QtMultimedia.QVideoFrame.image()¶

Return type: QImage

Based on the pixel format, converts the current video frame to an image.
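A short sketch of converting a frame to a QImage and saving it to disk (the file name is arbitrary, and the conversion may produce a null image for pixel formats without a QImage equivalent):

def save_snapshot(frame, path="snapshot.png"):
    """Convert a QVideoFrame to a QImage and save it, if the conversion succeeds."""
    img = frame.image()
    if not img.isNull():
        img.save(path)
        return True
    return False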
PySide2.QtMultimedia.QVideoFrame.imageFormatFromPixelFormat(format)¶

Parameters: format – PixelFormat
Return type: QImage.Format

Returns an image format equivalent to a video frame pixel format. If there is no equivalent format, Format_Invalid is returned instead.

Note

In general, QImage does not handle YUV formats.
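For instance, a surface implementation might use this static helper to decide whether a pixel format can be handled by painting through QImage. A sketch:

from PySide2.QtGui import QImage
from PySide2.QtMultimedia import QVideoFrame

def has_qimage_equivalent(pixel_format):
    """True if the given QVideoFrame.PixelFormat maps onto a QImage.Format."""
    return (QVideoFrame.imageFormatFromPixelFormat(pixel_format)
            != QImage.Format_Invalid)

# Example: RGB32 has a QImage equivalent, YUV420P does not.
print(has_qimage_equivalent(QVideoFrame.Format_RGB32))    # True
print(has_qimage_equivalent(QVideoFrame.Format_YUV420P))  # False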
PySide2.QtMultimedia.QVideoFrame.isMapped()¶

Return type: bool

Identifies if a video frame's contents are currently mapped to system memory.

This is a convenience function which checks that the MapMode of the frame is not equal to NotMapped.

Returns true if the contents of the video frame are mapped to system memory, and false otherwise.

See also

mapMode() MapMode
PySide2.QtMultimedia.QVideoFrame.isReadable()¶

Return type: bool

Identifies if the mapped contents of a video frame were read from the frame when it was mapped.

This is a convenience function which checks if the MapMode contains the ReadOnly flag.

Returns true if the contents of the mapped memory were read from the video frame, and false otherwise.

See also

mapMode() MapMode
PySide2.QtMultimedia.QVideoFrame.isValid()¶

Return type: bool

Identifies whether a video frame is valid.

An invalid frame has no video buffer associated with it.

Returns true if the frame is valid, and false if it is not.
PySide2.QtMultimedia.QVideoFrame.isWritable()¶

Return type: bool

Identifies if the mapped contents of a video frame will be persisted when the frame is unmapped.

This is a convenience function which checks if the MapMode contains the WriteOnly flag.

Returns true if the video frame will be updated when unmapped, and false otherwise.

Note

The result of altering the data of a frame that is mapped in read-only mode is undefined. Depending on the buffer implementation the changes may be persisted, or worse, alter a shared buffer.

See also

mapMode() MapMode
PySide2.QtMultimedia.QVideoFrame.map(mode)¶

Parameters: mode – MapMode
Return type: bool

Maps the contents of a video frame to system (CPU addressable) memory.

In some cases the video frame data might be stored in video memory or otherwise inaccessible memory, so it is necessary to map a frame before accessing the pixel data. This may involve copying the contents around, so mapping and unmapping should be avoided unless required.

The map mode indicates whether the contents of the mapped memory should be read from and/or written to the frame. If the map mode includes the QAbstractVideoBuffer::ReadOnly flag, the mapped memory will be populated with the content of the video frame when initially mapped. If the map mode includes the QAbstractVideoBuffer::WriteOnly flag, the content of the possibly modified mapped memory will be written back to the frame when unmapped.

While mapped, the contents of the video frame can be accessed directly through the pointer returned by the bits() function.

When access to the data is no longer needed, be sure to call the unmap() function to release the mapped memory and possibly update the video frame contents.

If the video frame has been mapped in read-only mode, it is permissible to map it multiple times in read-only mode (and unmap it a corresponding number of times). In all other cases it is necessary to unmap the frame first before mapping a second time.

Note

Writing to memory that is mapped as read-only is undefined, and may result in changes to shared data or crashes.

Returns true if the frame was mapped to memory in the given mode, and false otherwise.
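The map/access/unmap cycle can also be used to build a QImage manually from a mapped frame. A sketch, assuming the frame's pixel format has a QImage equivalent and that bits() exposes a buffer QImage can wrap:

from PySide2.QtGui import QImage
from PySide2.QtMultimedia import QAbstractVideoBuffer, QVideoFrame

def frame_to_image(frame):
    """Map a frame, wrap its data in a QImage, and return a deep copy."""
    image = QImage()
    if frame.map(QAbstractVideoBuffer.ReadOnly):
        try:
            fmt = QVideoFrame.imageFormatFromPixelFormat(frame.pixelFormat())
            if fmt != QImage.Format_Invalid:
                # Copy so the image stays valid after the frame is unmapped.
                image = QImage(frame.bits(), frame.width(), frame.height(),
                               frame.bytesPerLine(), fmt).copy()
        finally:
            frame.unmap()
    return image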
PySide2.QtMultimedia.QVideoFrame.mappedBytes()¶

Return type: int

Returns the number of bytes occupied by the mapped frame data.

This value is only valid while the frame data is mapped.
PySide2.QtMultimedia.QVideoFrame.metaData(key)¶

Parameters: key – unicode
Return type: object

Returns any metadata for this frame for the given key.

This might include frame specific information from a camera, or subtitles from a decoded video stream.

See the documentation for the relevant video frame producer for further information about available metadata.
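A sketch of inspecting frame metadata; the key name shown is purely hypothetical and depends entirely on the producer of the frame:

def log_metadata(frame):
    """Print all metadata attached to a frame, then look up one specific key."""
    for key, value in frame.availableMetaData().items():
        print(key, "=", value)
    subtitle = frame.metaData("Subtitle")  # hypothetical key; producer-specific
    if subtitle is not None:
        print("subtitle:", subtitle)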
PySide2.QtMultimedia.QVideoFrame.__ne__(other)¶

Parameters: other – QVideoFrame
Return type: bool

Returns true if this QVideoFrame and other do not reflect the same frame.
PySide2.QtMultimedia.QVideoFrame.__eq__(other)¶

Parameters: other – QVideoFrame
Return type: bool

Returns true if this QVideoFrame and other reflect the same frame.
PySide2.QtMultimedia.QVideoFrame.pixelFormat()¶

Returns the color format of a video frame.
PySide2.QtMultimedia.QVideoFrame.pixelFormatFromImageFormat(format)¶

Parameters: format – QImage.Format

Returns a video pixel format equivalent to an image format. If there is no equivalent format, Format_Invalid is returned instead.

Note

In general, QImage does not handle YUV formats.
PySide2.QtMultimedia.QVideoFrame.setEndTime(time)¶

Parameters: time – qint64

Sets the presentation time (in microseconds) when a frame should stop being displayed.

An invalid time is represented as -1.
PySide2.QtMultimedia.QVideoFrame.setFieldType(arg__1)¶

Parameters: arg__1 – FieldType

Sets the field an interlaced video frame belongs to.
PySide2.QtMultimedia.QVideoFrame.setMetaData(key, value)¶

Parameters:
- key – unicode
- value – object

Sets the metadata for the given key to value.

If value is a null variant, any metadata for this key will be removed.

The producer of the video frame might use this to associate certain data with this frame, or an intermediate processor might use it to add information for a consumer of this frame.
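For example, an intermediate processing stage might attach its own data to a frame before passing it on. The key name below is hypothetical:

def tag_frame(frame, sequence_number):
    """Attach a processing tag to the frame for a downstream consumer."""
    frame.setMetaData("SequenceNumber", sequence_number)  # hypothetical key
    assert frame.metaData("SequenceNumber") == sequence_number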
PySide2.QtMultimedia.QVideoFrame.setStartTime(time)¶

Parameters: time – qint64

Sets the presentation time (in microseconds) when the frame should initially be displayed.

An invalid time is represented as -1.
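A small sketch of stamping a frame with presentation times, assuming a fixed frame rate (frame_index and the 25 fps default are illustrative values):

def stamp_presentation_times(frame, frame_index, fps=25):
    """Set start/end times (in microseconds) for a frame at a fixed frame rate."""
    duration_us = 1_000_000 // fps
    start_us = frame_index * duration_us
    frame.setStartTime(start_us)
    frame.setEndTime(start_us + duration_us)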
PySide2.QtMultimedia.QVideoFrame.size()¶

Return type: QSize

Returns the dimensions of a video frame.
PySide2.QtMultimedia.QVideoFrame.startTime()¶

Return type: qint64

Returns the presentation time (in microseconds) when the frame should be displayed.

An invalid time is represented as -1.
PySide2.QtMultimedia.QVideoFrame.unmap()¶

Releases the memory mapped by the map() function.

If the MapMode included the WriteOnly flag, this will persist the current content of the mapped memory to the video frame.

unmap() should not be called if the map() function failed.
PySide2.QtMultimedia.QVideoFrame.width()¶

Return type: int

Returns the width of a video frame.