Libswscale
sws_getCachedContext vs. sws_getContext
When using libswscale in FFmpeg, there are the following two ways to obtain a context.
- sws_getCachedContext calls sws_freeContext internally: it reuses the context you pass in when the parameters still match, and frees and reallocates it when they do not.
- sws_getContext requires an explicit sws_freeContext call when you are done.
If sws_freeContext is never called, the result is a memory leak.
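For reference, here is a minimal sketch of the cached-context pattern in a conversion loop. The convert_frame wrapper and the frame variables are illustrative, not part of the libswscale API; only the sws_* calls are real:

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

static struct SwsContext *sws_ctx = NULL;

static void convert_frame(const AVFrame *src, AVFrame *dst)
{
    // Reuses sws_ctx when the parameters still match; otherwise frees the
    // old context and allocates a new one, so no sws_freeContext is needed
    // between calls.
    sws_ctx = sws_getCachedContext(sws_ctx,
                                   src->width, src->height,
                                   (enum AVPixelFormat)src->format,
                                   dst->width, dst->height,
                                   (enum AVPixelFormat)dst->format,
                                   SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws_ctx, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
}

// A single sws_freeContext(sws_ctx) at shutdown releases the last context.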
sws_scale
convert RGB(A) to YUV
It turns out you can convert RGB or RGBA data into YUV using FFmpeg itself (SwScale), which is then compatible with output to a file. The basics are just a few lines: first, create an SwsContext that specifies the image size and the source and destination data formats:
AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
AVCodecContext *c = avcodec_alloc_context3(codec);
// ...set up c's params
AVFrame *frame = av_frame_alloc();
// ...set up frame's params and allocate image buffer
struct SwsContext *ctx = sws_getContext(c->width, c->height,
                                        AV_PIX_FMT_RGBA,
                                        c->width, c->height,
                                        AV_PIX_FMT_YUV420P,
                                        0, NULL, NULL, NULL);
And then apply the conversion to each RGBA frame (the rgba32Data pointer) as it’s generated:
const uint8_t *inData[4] = { rgba32Data, NULL, NULL, NULL }; // sws_scale reads these as 4-entry per-plane arrays; zero the unused slots
int inLinesize[4] = { 4 * c->width, 0, 0, 0 }; // one packed plane, 4 bytes per RGBA pixel; unused strides stay 0
sws_scale(ctx, inData, inLinesize, 0, c->height,
          frame->data, frame->linesize);
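When all frames have been converted, this path needs the explicit cleanup described above:

sws_freeContext(ctx); // mandatory with sws_getContext(), or the context leaks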
Other input formats
The third argument to sws_getContext describes the format/packing of your data. There are a huge number of formats defined in FFmpeg (see pixfmt.h), so if your raw data is not RGBA you shouldn’t have to change how your image is generated. Be sure to compute the correct line width (inLinesize in the code snippets) when you change the input format specification; a helper for deriving the strides from the pixel format is sketched after the RGB24 example below. I don’t know which input formats are supported by sws_scale (all, most, just a few?), so it would be wise to do a little experimentation.
For example, if your data is packed 24-bit RGB, and not 32-bit RGBA, then the code would look like this:
struct SwsContext *ctx = sws_getContext(c->width, c->height,
                                        AV_PIX_FMT_RGB24,
                                        c->width, c->height,
                                        AV_PIX_FMT_YUV420P,
                                        0, NULL, NULL, NULL);
const uint8_t *inData[4] = { rgb24Data, NULL, NULL, NULL };
int inLinesize[4] = { 3 * c->width, 0, 0, 0 }; // 3 bytes per RGB24 pixel
sws_scale(ctx, inData, inLinesize, 0, c->height,
          frame->data, frame->linesize);
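Rather than hand-computing the stride, libavutil can derive the per-plane line sizes from the pixel format. A small sketch, using AV_PIX_FMT_RGB24 only as an example:

#include <libavutil/imgutils.h>

int inLinesize[4];
// Fills inLinesize[0..3] with the strides for the given format and width,
// so switching from RGBA to RGB24 needs no manual 4*width vs. 3*width math.
if (av_image_fill_linesizes(inLinesize, AV_PIX_FMT_RGB24, c->width) < 0) {
    // a negative return means the format/width combination is invalid
}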
Input frame
The sws_scale function uses aligned memory for hardware acceleration (SIMD and the like). It is therefore best to allocate the frame buffers with a function such as av_frame_get_buffer:
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
    AVFrame *picture;
    int ret;

    picture = av_frame_alloc();
    if (!picture)
        return NULL;

    picture->format = pix_fmt;
    picture->width  = width;
    picture->height = height;

    /* allocate the buffers for the frame data */
    ret = av_frame_get_buffer(picture, 32);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate frame data.\n");
        exit(1);
    }

    return picture;
}
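For instance, an aligned RGBA input frame for the conversion below could be obtained like this (the variable names are illustrative):

AVFrame *input_frame = alloc_picture(AV_PIX_FMT_RGBA, frame_width, frame_height);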
// input_frame: the aligned AVFrame allocated above; frame: the raw source data.
// input_frame->linesize[0] may be larger than frame_width * frame_channels
// because of alignment padding, so copy row by row instead of one big memcpy.
for (int y = 0; y < frame_height; y++)
    memcpy(input_frame->data[0] + y * input_frame->linesize[0],
           (const uint8_t *)frame->data + y * frame_width * frame_channels,
           frame_width * frame_channels);
sws_scale(context, input_frame->data, input_frame->linesize, 0, frame_height,
          scaled_frame.getDatas(), scaled_frame.getLineSizes());
YUV420P to RGB24
When FFmpeg decodes a video frame, it is typically produced in the YUV420P format; to display it in a Win32 environment it has to be converted to RGB24. The YUV420P-to-RGB24 conversion traditionally uses img_convert(), but if FFmpeg was compiled with the --enable-swscale option, the sws_scale() function must be used instead.
// img_convert()
img_convert((AVPicture *)frameRGB, PIX_FMT_RGB24, (AVPicture *)frame,
            is->video_st->codec->pix_fmt, is->video_st->codec->width,
            is->video_st->codec->height);

// sws_getContext() + sws_scale()
static struct SwsContext *img_convert_ctx;
img_convert_ctx = sws_getContext(is->video_st->codec->width,
                                 is->video_st->codec->height,
                                 is->video_st->codec->pix_fmt,
                                 is->video_st->codec->width,
                                 is->video_st->codec->height,
                                 PIX_FMT_RGB24, SWS_BICUBIC,
                                 NULL, NULL, NULL);
sws_scale(img_convert_ctx, frame->data, frame->linesize, 0,
          is->video_st->codec->height, frameRGB->data, frameRGB->linesize);
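Note that frameRGB must have a data buffer attached before sws_scale() can write into it; a sketch using the same old-style API as the snippet above:

// Attach an RGB24 buffer to frameRGB before calling sws_scale().
int numBytes = avpicture_get_size(PIX_FMT_RGB24,
                                  is->video_st->codec->width,
                                  is->video_st->codec->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes);
avpicture_fill((AVPicture *)frameRGB, buffer, PIX_FMT_RGB24,
               is->video_st->codec->width, is->video_st->codec->height);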
Troubleshooting
deprecated pixel format used, make sure you did set range correctly
This warning is emitted from the sws_init_context function in libswscale/utils.c, where it is defined as follows:
av_cold int sws_init_context(SwsContext *c, SwsFilter *srcFilter,
                             SwsFilter *dstFilter)
{
    // ...
    enum AVPixelFormat srcFormat = c->srcFormat;
    enum AVPixelFormat dstFormat = c->dstFormat;
    // ...
    // handle_jpeg() remaps the deprecated YUVJ* formats to their YUV*
    // equivalents and sets the full-range flag, so the comparison below
    // fires exactly when such a remap has happened.
    c->srcRange |= handle_jpeg(&c->srcFormat);
    c->dstRange |= handle_jpeg(&c->dstFormat);

    if (srcFormat != c->srcFormat || dstFormat != c->dstFormat)
        av_log(c, AV_LOG_WARNING,
               "deprecated pixel format used, make sure you did set range correctly\n");
    // ...
}
It seems you're trying to read AV_PIX_FMT_YUVJXXXP frames, which are deprecated (see the libav doc). You can use this workaround to manage it:
AVPixelFormat pixFormat;
switch (_videoStream->codec->pix_fmt) {
case AV_PIX_FMT_YUVJ420P:
    pixFormat = AV_PIX_FMT_YUV420P;
    break;
case AV_PIX_FMT_YUVJ422P:
    pixFormat = AV_PIX_FMT_YUV422P;
    break;
case AV_PIX_FMT_YUVJ444P:
    pixFormat = AV_PIX_FMT_YUV444P;
    break;
case AV_PIX_FMT_YUVJ440P:
    pixFormat = AV_PIX_FMT_YUV440P;
    break;
default:
    pixFormat = _videoStream->codec->pix_fmt; // not a YUVJ format, keep as is
    break;
}
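The remap above drops the full-range information that the J formats carried, which is exactly what the warning is about. One way to preserve it, sketched here with the remapped pixFormat, is to tell swscale explicitly that the source uses the full 0-255 range:

struct SwsContext *ctx = sws_getContext(
    _videoStream->codec->width, _videoStream->codec->height, pixFormat,
    _videoStream->codec->width, _videoStream->codec->height, AV_PIX_FMT_RGB24,
    SWS_BICUBIC, NULL, NULL, NULL);
// srcRange = 1: the source is full range (what YUVJ* meant);
// dstRange = 1: RGB is always full range.
// 1 << 16 is 1.0 in swscale's 16.16 fixed point.
sws_setColorspaceDetails(ctx, sws_getCoefficients(SWS_CS_ITU601), 1,
                         sws_getCoefficients(SWS_CS_ITU601), 1,
                         0, 1 << 16, 1 << 16);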