NVIDIA FFmpeg Transcoding Guide

来源:

1. https://developer.nvidia.com/blog/nvidia-ffmpeg-transcoding-guide/

2. https://docs.nvidia.com/video-technologies/video-codec-sdk/ffmpeg-with-nvidia-gpu/

All NVIDIA GPUs starting with the Kepler generation support fully-accelerated hardware video encoding, and all GPUs starting with Fermi generation support fully-accelerated hardware video decoding.  As of July 2019 Kepler, Maxwell, Pascal, Volta and Turing generation GPUs support hardware encoding, and Fermi, Kepler, Maxwell, Pascal, Volta and Turing generation GPUs support hardware decoding.

The processing demands from high quality video applications have pushed limits for broadcast and telecommunication networks. Consumer behavior has evolved, evident in the trends of OTT video subscription and the rapid uptake of live streaming. Nearly all major social media applications now offer live streaming on their platforms. Live streaming will drive overall video data traffic growth for both cellular and Wi-Fi as consumers move beyond watching on-demand video to viewing live streams.

Video content distributed to viewers is often transcoded into several adaptive bit rate (ABR) profiles for delivery. Content in production may arrive in any of a large number of codec formats and need to be transcoded into another for distribution or archiving.

This makes video transcoding a critical piece of an efficient video pipeline, whether it is 1:N or M:N profiles. The ideal transcoding solution needs to be efficient in terms of cost (dollars/stream) and power (watts/stream), while delivering high-quality content at maximum throughput for the datacenter. Video providers want to reduce the cost of delivering more content with great quality to more screens.

The massive video content generated on all fronts requires robust hardware acceleration of video encoding, decoding, and transcoding. Let’s take a look at how NVIDIA GPUs incorporate dedicated video processing hardware and how you can take advantage of it.

NVIDIA Encoding and Decoding Hardware

NVIDIA GPUs ship with an on-chip hardware encoder and decoder unit often referred to as NVENC and NVDEC. Separate from the CUDA cores, NVENC/NVDEC run encoding or decoding workloads without slowing the execution of graphics or CUDA workloads running at the same time.

NVENC and NVDEC support many important codecs for encoding and decoding. Figure 1 lists the codecs, formats, and features supported by current NVIDIA hardware. Actual support depends on the GPU being used. An up-to-date support matrix can be found on the Video Encode and Decode Support Matrix page.

GPU encode and decode capabilities image
Figure 1: GPU hardware capabilities
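
A quick way to check which NVENC encoders and NVDEC/CUVID decoders your local FFmpeg build exposes (assuming a build configured with the NVIDIA components enabled, as described below):

ffmpeg -hide_banner -encoders | grep nvenc
ffmpeg -hide_banner -decoders | grep cuvid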

Hardware accelerated transcoding with FFmpeg

Using the FFmpeg library is common practice when transcoding video data. Hardware acceleration dramatically improves the performance of the workflow. Figure 2 shows the different elements of the transcoding process with FFmpeg.

Transcoding pipeline with ffmpeg flow diagram
Figure 2: Transcoding pipeline with FFmpeg using NVIDIA hardware acceleration

FFmpeg supports hardware-accelerated decoding and encoding via the -hwaccel cuda method, the h264_cuvid/hevc_cuvid decoders, and the h264_nvenc/hevc_nvenc encoders. Activating support for hardware acceleration when building from source requires some extra steps:

  • Clone the FFmpeg git repository https://git.ffmpeg.org/ffmpeg.git
  • Download and install a compatible driver from the NVIDIA web site
  • Download and install the CUDA toolkit
  • Clone the nv-codec-headers repository and install it (a header-only package): make install
  • Configure FFmpeg using the following command (use correct CUDA library path):
./configure --enable-cuda --enable-cuvid --enable-nvdec --enable-nvenc --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include  --extra-ldflags=-L/usr/local/cuda/lib64
  • Build with multiple processes to increase build speed and suppress excessive output: make -j -s
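
For reference, the steps above as a single shell sketch (the nv-codec-headers location is the commonly used videolan host, and the sudo installs are assumptions; adjust paths to your system):

git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers && sudo make install && cd ..
cd ffmpeg
./configure --enable-cuda --enable-cuvid --enable-nvdec --enable-nvenc --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64
make -j -s
sudo make install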

Using FFmpeg to do software 1:1 transcode is simple:

ffmpeg -i input.mp4 -c:a copy -c:v h264 -b:v 5M output.mp4
-c:a copy copies the audio stream without any re-encoding
-c:v h264 selects the software H.264 encoder for the output stream
-b:v 5M sets the output bitrate to 5Mb/s

But this is going to be slow since it only uses the CPU-based software encoder and decoder. Using the NVENC hardware encoder and the NVDEC decoder requires adding some more parameters to tell ffmpeg which encoder and decoders to use. Maximizing the transcoding speed also means making sure that the decoded image is kept in GPU memory so the encoder can efficiently access it.

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -c:v h264_nvenc -b:v 5M output.mp4
-hwaccel cuda chooses appropriate hw accelerator
-hwaccel_output_format cuda keeps the decoded frames in GPU memory
-c:v h264_nvenc selects the NVIDIA hardware accelerated H.264 encoder

Without the -hwaccel cuda -hwaccel_output_format cuda option, the decoded raw frames would be copied back to system memory via the PCIe bus, shown in figure 3. Later, the same image would be copied back to GPU memory via PCIe to encode on the GPU. These two additional transfers create latency due to the transfer time and will increase PCIe bandwidth occupancy.

Memory flow without hwaccel diagram
Figure 3: Memory flow without hwaccel

Adding the -hwaccel cuda -hwaccel_output_format cuda options means the raw decoded frames will not be copied, so the transcode runs faster and uses fewer system resources, as shown in figure 4.

Memory flow with hwaccel diagram
Figure 4: Memory flow with hwaccel

Given PCIe bandwidth limits, copying uncompressed image data would quickly saturate the PCIe bus. Preventing unnecessary copies between system and GPU memory by using -hwaccel cuda -hwaccel_output_format cuda results in up to 2x the throughput compared to the unoptimized call.

Processing filters

Transcoding often involves not only changing the format or bitrate of the input stream, but also resizing it. Two options exist for resizing on the GPU: the scale_npp filter and the nvcuvid resize option. The nvcuvid resize option can be used when transcoding from one input to one output stream with a different resolution (1:1 transcode). See the next line for an example.

ffmpeg -vsync 0 -hwaccel cuvid -c:v h264_cuvid -resize 1280x720 -i input.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4

If multiple output resolutions are needed (1:N transcode), the scale_npp filter can resize decoded frames on the GPU. This way we can generate multiple output streams with multiple different resolutions but only using one resize step for all streams. See the next line for an example of 1:2 transcode.

ffmpeg -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \ -c:a copy -vf scale_npp=1280:720 -c:v h264_nvenc -b:v 5M output_720.mp4 \ -c:a copy -vf scale_npp=640:320 -c:v h264_nvenc -b:v 3M output_360.mp4

Using -vf "scale_npp=1280:720" will set scale_npp as filter for the decoded images

The interpolation algorithm can be defined for scale_npp as an additional argument. Cubic interpolation is used by default but other algorithms might give better results depending on scale factor and images. Using the super-sampling algorithm is recommended for best quality when downscaling. See below for an example:

ffmpeg -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -c:a copy -vf scale_npp=1280:720:interp_algo=super -c:v h264_nvenc -b:v 5M output_720.mp4

Mixing CPU and GPU processing

Sometimes it might be necessary to mix CPU and GPU processing. For example you may need to decode on the CPU, because the format is unsupported on the GPU decoder, or because a filter is not available on the GPU. In those cases, you can’t use the -hwaccel cuvid or -hwaccel cuda flag. Instead, you need to manage uploading the data from system to GPU memory using the hwupload_cuda filter. In the example below, an H.264 stream is decoded on the GPU and downloaded to system memory since -hwaccel cuvid is not set. The fade filter is applied in system memory and the processed image uploaded to GPU memory using the hwupload_cuda filter. Finally, the image is scaled using scale_npp and encoded on the GPU.

ffmpeg -vsync 0 -c:v h264_cuvid -i input.264 -vf "fade,hwupload_cuda,scale_npp=1280:720" -c:v h264_nvenc output.264

Multi-GPU

Encoding and decoding work must be explicitly assigned to a GPU when using multiple GPUs in one system. GPUs are identified by their index number; by default all work is performed on the GPU with index 0. Use the following command to obtain a list of all NVIDIA GPUs in the system and their corresponding ID numbers:

ffmpeg -vsync 0 -i input.mp4 -c:v h264_nvenc -gpu list -f null -

Once you know the index, the -hwaccel_device index flag can be used to set the active GPU for decoding and encoding. In the example below the work will be executed on the gpu with index 1.

ffmpeg -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -hwaccel_device 1 -i input.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4

Optimizations

All encoder and decoder units should be utilized as much as possible for best throughput. nvidia-smi can be used to generate real-time information about NVENC, NVDEC and general GPU utilization.

nvidia-smi dmon
nvidia-smi -q -d UTILIZATION

Multiple actions may be taken depending on those results. If encoder utilization is low, it might be feasible to introduce another transcoding pipeline that uses CPU decode and NVENC encode to generate additional workload for the hardware encoder.
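
For example, a minimal sketch of such an extra pipeline, decoding with the default software decoder on the CPU while encoding on NVENC (file names are illustrative):

ffmpeg -i input2.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output2.mp4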

PCIe bandwidth might become the bottleneck when using CPU filters. If possible, filters should run on the GPU. Several CUDA filters exist in FFmpeg that can be used as templates to implement your own high-performance CUDA filter.
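
As a concrete example, recent FFmpeg builds include a scale_cuda filter that keeps scaling on the GPU; whether it is available depends on how your FFmpeg was configured:

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -vf scale_cuda=1280:720 -c:v h264_nvenc -b:v 5M output_720.mp4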

You can always track GPU utilization and memory transfers between host and device by profiling the ffmpeg application with the NVIDIA Visual Profiler, part of the CUDA toolkit. Simply launch "Run application to generate CPU and GPU timeline" and select the ffmpeg application with the relevant CLI options. The application timeline is illustrated in Figure 5.

ffmpeg in nvprof image
Figure 5: ffmpeg application timeline

Turquoise blocks show colorspace conversion done by CUDA kernels, while yellow blocks show memory transfers between host and device. Using the Visual Profiler lets you easily track operations done on the GPU, such as CUDA-accelerated filters and data transfers, and ensure that no excessive memory copies occur.
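
If you prefer the command line over the GUI, nvprof (the command-line profiler shipped with the same CUDA toolkits) can produce a similar summary of kernels and memory transfers; a minimal sketch:

nvprof ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -c:v h264_nvenc -b:v 5M output.mp4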

Collect CPU-side performance statistics by building ffmpeg with the --disable-stripping configure option to enable performance profiling.

This prevents the build from stripping function names, so that commodity profilers such as gprof or the Visual Studio profiler can collect performance statistics. With hardware acceleration on, the CPU load should be low, and most time should be spent inside the Video Codec SDK API calls, which show up as "External Code" because function names are stripped from the driver libraries. Figure 6 shows an example screenshot.

ffmpeg profiling in Visual Studio image
Figure 6: ffmpeg performance profiling with Visual Studio
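
To collect CPU-side statistics with gprof specifically, a minimal sketch of an instrumented build (the -pg flags are an extra assumption beyond --disable-stripping; gprof needs them to produce gmon.out, while --disable-stripping alone only preserves symbol names):

./configure --disable-stripping --extra-cflags=-pg --extra-ldflags=-pg --enable-cuda --enable-cuvid --enable-nvdec --enable-nvenc --enable-nonfree --enable-libnpp
make -j
./ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -c:v h264_nvenc -b:v 5M output.mp4
gprof ./ffmpeg gmon.out > report.txt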

Further Reading

For more examples on how to use FFmpeg and a look at the advanced quality settings that are available please take a look at the “Using FFmpeg With NVIDIA GPU Hardware Acceleration” guide.

Conclusion

FFmpeg is a powerful and flexible open source video processing library with hardware accelerated decoding and encoding backends. It allows rapid video processing with full NVIDIA GPU hardware support in minutes. Commodity developer tools such as Gprof, Visual Profiler, and Microsoft Visual Studio may be used for fine performance analysis and tuning. Check out the ffmpeg sources and give it a go!

[Original] Grabbing m3u8 videos with wget and ffmpeg

1. Download first, then transcode

Using reuters.tv as an example

1. Manually extract the m3u8 playlist and download it locally

2. Download the video segments with wget

wget -c -r -np -i ignored.m3u8

3. In the m3u8 file, replace https:// with file ./ (for example with sed, shown below)
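
For example, with sed (a sketch; the exact replacement depends on where wget saved the segments relative to the playlist):

sed -i 's|https://|file ./|g' ignored.m3u8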

4. Concatenate the files with ffmpeg and transcode to HEVC, using VAAPI hardware acceleration

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -f concat -safe 0 -i ignored.m3u8 -c:v hevc_vaapi output.mp4

 

2. Direct download

Using CNTV as an example

Video page:

https://tv.cctv.com/2020/05/21/VIDEtOJBS1JDMtj8hQAKachO200521.shtml

1. Extract the m3u8 URL

https://newcntv.qcloudcdn.com/asp/hls/8000/0303000a/3/default/656069b44bb94a6c9128c14b4f100c1f/8000.m3u8

 

Download directly with ffmpeg (use "-c copy" to keep the original encoding, no transcoding)

ffmpeg -i "https://hls.cntv.myalicdn.com/asp/hls/8000/0303000a/3/default/656069b44bb94a6c9128c14b4f100c1f/8000.m3u8" -c copy out.mp4

 

Building FFmpeg on Ubuntu

FFmpeg is the most popular open-source video transcoding toolkit. On Ubuntu it can be installed directly via apt-get, but the default build does not include non-free encoders such as x264, so you have to compile it yourself.

The FFmpeg project provides a detailed official compilation guide:

https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

Compared with the official guide, this article mainly changes the following:

  1. The compiled binaries go into /opt/bin instead of a user's home directory;
  2. Stable release tarballs are downloaded directly instead of snapshot sources via git/hg.

The following was tested on Ubuntu Server 14.04.

First, bring the system up to date with apt-get update and apt-get upgrade, then install the following packages:

apt-get install autoconf automake build-essential libass-dev libfreetype6-dev  libtheora-dev libtool libvorbis-dev pkg-config texinfo zlib1g-dev unzip cmake yasm libx264-dev libmp3lame-dev libopus-dev openssl libssl-dev

Several of FFmpeg's dependencies have minimum version requirements:

yasm >= 1.2.0 libx264-dev >= 0.118 libmp3lame-dev >= 3.98.3 libopus-dev >= 1.1

On Ubuntu 14.04 these packages all meet FFmpeg's requirements, so they can be installed directly with apt-get. On other Linux distributions or versions, you need to check the versions yourself.
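
One way to check the installed versions (a sketch; the pkg-config module names are assumptions, and lame typically ships no .pc file, so it is queried via dpkg here):

yasm --version
pkg-config --modversion x264 opus
dpkg -s libmp3lame-dev | grep Version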

Note that since this FFmpeg build is meant for server use, the ffplay dependencies are not included. To use it on a desktop, also install the following packages:

libsdl1.2-dev libva-dev libvdpau-dev libxcb1-dev libxcb-shm0-dev libxcb-xfixes0-dev

Next, download and build x265, fdk-aac, vpx, and ffmpeg itself.

First create the following directory structure:

/opt
  +- ffmpeg_sources/
  +- ffmpeg_build/
  +- bin/

Then download the source tarballs into the /opt/ffmpeg_sources/ directory.

Since we are not using snapshot versions, we can directly download a specific stable release of each source. Download URLs:

x265: https://bitbucket.org/multicoreware/x265/downloads/x265_1.9.tar.gz

fdk-aac: https://github.com/mstorsjo/fdk-aac/archive/v0.1.4.zip

vpx: http://storage.googleapis.com/downloads.webmproject.org/releases/webm/libvpx-1.5.0.tar.bz2

ffmpeg: http://ffmpeg.org/releases/ffmpeg-3.0.tar.bz2

Building x265

Run the following commands in order:

cd /opt/ffmpeg_sources
tar zxvf x265_1.9.tar.gz
cd x265_1.9/build/linux
PATH="/opt/bin:$PATH" cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="/opt/ffmpeg_build" -DENABLE_SHARED:bool=off ../../source
make
make install
make distclean

Building fdk-aac

Run the following commands in order:

cd /opt/ffmpeg_sources
mv v0.1.4.zip fdk-aac-v0.1.4.zip
unzip fdk-aac-v0.1.4.zip
cd fdk-aac-0.1.4
autoreconf -fiv
./configure --prefix="/opt/ffmpeg_build" --disable-shared
make
make install
make distclean

Building vp8/vp9

Run the following commands in order:

cd /opt/ffmpeg_sources
tar xjvf libvpx-1.5.0.tar.bz2
cd libvpx-1.5.0
PATH="/opt/bin:$PATH" ./configure --prefix="/opt/ffmpeg_build" --disable-examples --disable-unit-tests
PATH="/opt/bin:$PATH" make
make install
make clean

Building ffmpeg

The last step is building FFmpeg itself:

cd /opt/ffmpeg_sources
tar xjvf ffmpeg-3.0.tar.bz2
cd ffmpeg-3.0
PATH="/opt/bin:$PATH" PKG_CONFIG_PATH="/opt/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="/opt/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I/opt/ffmpeg_build/include" \
  --extra-ldflags="-L/opt/ffmpeg_build/lib" \
  --bindir="/opt/bin" \
  --enable-gpl \
  --enable-libass \
  --enable-libfdk-aac \
  --enable-libfreetype \
  --enable-libmp3lame \
  --enable-libopus \
  --enable-libtheora \
  --enable-libvorbis \
  --enable-libvpx \
  --enable-libx264 \
  --enable-libx265 \
  --enable-nonfree \
  --enable-openssl
PATH="/opt/bin:$PATH" make
make install
make distclean
hash -r

The build takes a long time. If all goes well, three programs are produced in /opt/bin: ffmpeg, ffprobe, and ffserver. Finally, create a few symlinks so that any user can invoke ffmpeg directly from any directory:

ln -s /opt/bin/ffmpeg /usr/bin/ffmpeg
ln -s /opt/bin/ffprobe /usr/bin/ffprobe
ln -s /opt/bin/ffserver /usr/bin/ffserver

ffmpeg is the transcoder, ffprobe analyzes video files, and ffserver implements a streaming media server.

 

 

Building on Ubuntu 20.04

Before building, install the necessary build dependencies:

sudo apt-get build-dep ffmpeg

Install the supporting libraries: libaom, libx265, libvpx, fdk-aac
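
On Ubuntu 20.04 the corresponding development packages can likely be installed with apt (package names are assumptions; libfdk-aac-dev comes from multiverse):

sudo apt-get install libaom-dev libx265-dev libvpx-dev libfdk-aac-dev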

configure

 

./configure --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-openssl --enable-vaapi --enable-cuda --enable-cuvid --enable-nvenc
make -j24

 

Practical FFmpeg Commands

Source: http://davidaq.com/tutorial/2014/11/20/ffmpeg-commands.html

 

FFmpeg is a free, open-source, cross-platform audio and video solution. The whole thing is only a few dozen MB, yet it is a command-line tool that can convert formats, cut clips, play media, and do almost anything else. Format Factory, for example, uses FFmpeg at its core. Professionally it is often deployed server-side to power cloud video services; Qiniu cloud storage, for instance, uses FFmpeg for its format conversions. The official website is FFmpeg.org, where FFmpeg builds for all major desktop platforms can be downloaded.

FFmpeg consists mainly of four programs:

  • ffmpeg operates on the content of media files, e.g. format conversion; it is the main component
  • ffplay is a simple player; it has essentially no UI but can play videos in all kinds of formats
  • ffprobe inspects the properties of media files, such as meta tags; output can be requested as JSON or XML
  • ffserver is a streaming media server, a rare free one, usable for hosting live video streams

Besides the executables, FFmpeg also provides the libav* set of multimedia C libraries, which can be integrated into other software to provide media decoding, encoding, and related features.

More specialized commands are not covered here, since understanding them requires more background knowledge of multimedia files. Also note that this article is about the genuine FFmpeg, not the LibAV fork created by Debian and the ffmpeg binary it ships (the one bundled with Ubuntu at the time).

Video

The following commands mainly operate on video files.

Format conversion

The most commonly used feature of ffmpeg is format conversion. One thing worth stressing here: audio and video file formats involve two separate notions, the container format (e.g. mov, flv) and the codec format (e.g. H.264). Many people know the former but not the latter; their relationship and differences can be looked up elsewhere and are not covered here. A simple format conversion looks like this:

ffmpeg -i input.flv output.mp4

The command above converts an flv file into an mp4 file. -i xxx.xxx specifies the input file, and the bare filename specifies the output file path.

FFmpeg usually picks the most suitable container and codec formats based on the file names, but they can also be specified manually. A common use case is needing a video that keeps its alpha channel, which typically uses the mov container with the png codec, whereas FFmpeg would default to the H.264 codec (which does not support transparency). The command then looks like this:

ffmpeg -i input.flv -c:v png output.mov

To see which container formats your FFmpeg supports, run ffmpeg -formats. To see which codecs it supports, run ffmpeg -codecs.

One special note: unless the file being processed is an image, never make the input and output the same file.

Transcoding with VAAPI hardware encoding/decoding

1. Default approach

ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4

2. Both input and output use VAAPI hardware codecs

ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -c:v h264_vaapi output.mp4

3. The input cannot be decoded with VAAPI; use VAAPI for encoding only

ffmpeg -init_hw_device vaapi=foo:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device foo -i input.mp4 -filter_hw_device foo -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi output.mp4

Resizing

Turning a large, high-definition video into one with smaller dimensions and a smaller file size is another common task. The command below resizes the picture and also works on image files.

ffmpeg -i input.mp4 -s 640x360 output.mp4

In the command above, -s 640x360 sets the output picture size to 640x360.

Hardware resize with VAAPI

-vf 'format=nv12,hwupload,scale_vaapi=w=1920:h=1080'

Cutting

There is also a very convenient command for extracting just a portion of a video:

ffmpeg -i input.mp4 -ss 5 -t 10 output.mp4

In the command above, -ss 5 starts the cut at second 5 of the input video, and -t 10 takes at most 10 seconds. This command can be rather slow, though; a better version is:

ffmpeg -ss 5 -i input.mp4 -t 10 -c:v copy -c:a copy output.mp4

This command moves -ss 5 in front of -i. The difference is that ffmpeg now seeks to second 5 before it starts decoding the input, whereas the previous form decodes from the beginning and simply discards the first 5 seconds of the result. -c:v copy -c:a copy means the video and audio streams are not re-encoded but copied directly, which greatly speeds things up because the video does not have to be fully decoded (and simple cutting does not require full decoding either).

 

Another example

-ss: start time of the cut, -t: length of the clip to extract (15 seconds)

ffmpeg -i yourvideoname.mp4 -ss 00:00:00 -codec copy -t 15 outputclip.mp4

Converting between image sequences and video

ffmpeg can turn a set of images into a video (a GIF animation can also be treated as a kind of video), and vice versa. The commands are:

ffmpeg -i %04d.jpg output.mp4
ffmpeg -i input.mp4 %04d.jpg

The first command encodes 0001.jpg, 0002.jpg, 0003.jpg, and so on into output.mp4; the second does the opposite and turns input.mp4 into 0001.jpg, 0002.jpg, and so on. %04d.jpg means a sequence of jpg files named with 4-digit zero-padded integers starting from 1. If you want the sequence to be named hello_00001.png and so on, use hello_%05d.png.

If you also want to add sound while encoding the video, add another input file like this:

ffmpeg -i input.mp3 -i %04d.jpg output.mp4

Changing video FPS

FFmpeg can lower or raise a video's frame rate. Because lost information cannot be recovered, raising the frame rate only makes some frames repeat once or more; it does not improve picture quality.

ffmpeg -i input.mp4 -r 30 output.mp4

With the command above, the output video will be 30 frames per second regardless of the original frame rate. The timeline is not changed in this case, so there is no slow-motion or fast-forward effect.

ffmpeg -r 30 -i input.mp4 output.mp4

This variant with the order swapped is more interesting: placing -r 30 before the input file means it affects the input rather than the output. It tells ffmpeg to treat the input file as 30 frames per second and ignore its actual frame rate. If the original video is 25 FPS and is treated as 30, the output plays with a fast-forward effect. Since this command does not specify an output FPS, the output defaults to matching the input; you can combine it with the first command of this section and write two -r parameters, the first specifying the input FPS and the second the output FPS, to control both the playback speed and the output frame rate.
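
For example, a sketch combining the two -r options: treat the input as 60 fps (so a 30 fps source plays back at double speed) while still writing a 30 fps output:

ffmpeg -r 60 -i input.mp4 -r 30 output.mp4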

Concatenating H.264 videos end to end

If the input files are definitely all H.264-encoded with the same dimensions, frame rate, and so on, first convert the source videos to the TS format used for live streaming. Then concatenate the TS files at the file level and convert back to the target format. No format conversion happens during the concatenation itself, so it is very fast.

ffmpeg -i q.mp4 -c copy -bsf h264_mp4toannexb q.ts
ffmpeg -i r.mp4 -c copy -bsf h264_mp4toannexb r.ts
ffmpeg -i "concat:q.ts|r.ts" -c copy -bsf aac_adtstoasc qr.mp4

Audio

The following commands are mainly for audio. Many of the video operations shown above, such as format conversion and cutting, also apply to audio, and most videos contain audio as well, so the commands below can often be mixed with the video commands.

Extracting the cover image from a music file

Some music files embed the album cover image. It can be extracted with a command as simple as the following, which taken literally just converts MP3 into JPEG.

ffmpeg -i input.mp3 cover.jpg

Lossy audio compression

In some situations, such as background music for a website or preview tracks on a music site, audio quality is sacrificed to reduce file size and make network playback smoother. A typical compression command:

ffmpeg -i input.mp3 -ac 1 -ar 32k -b:a 128k output.mp3

The command above does three things:

  • -ac 1 keeps only one audio channel, merging all channels into one (there is an FFmpeg bug here: the output volume becomes lower).
  • -ar 32k changes the sample rate to 32000. High-fidelity audio is usually around 48K; lowering this value cuts off high frequencies. 32K removes a fair bit of the highs, but an ordinary listener will not notice without a direct comparison. If the audio is speech rather than music (e.g. radio), you can comfortably go down to 22k or 16k (telephone quality is 16k).
  • -b:a 128k sets the audio bitrate (-b:v would set the video bitrate). 128K means the output holds roughly 16KB of content per second and needs at least a 128kbps connection to play smoothly. 128K is a fairly good trade-off: the file is small and the quality loss is not particularly noticeable (to ordinary listeners).

Issues on Apple systems

Audio files processed by FFmpeg often show the wrong duration on Apple systems (macOS, iOS) and Apple players (iTunes, QuickTime). This is a latent FFmpeg bug, but it can be worked around by adding a parameter:

ffmpeg -i input.mp3 -write_xing 0 .... output.mp3

Files produced this way behave normally on Apple products. Files that already misbehave can also be fixed by running them through ffmpeg once more with this parameter to correct the duration.

ffplay

ffplay is the media file player of the FFmpeg family and can play many formats, but rather than a complete player it is really a demo program showing how to build a player on FFmpeg's decoding APIs. Its usage is simple; playing a normal media file usually only takes:

ffplay target.mp4

ffplay accepts parameters equivalent to the options placed before -i in the ffmpeg program, which change how the input file is read, for example producing a fast-forward or slow-motion effect by changing the FPS.

ffprobe

ffprobe is very handy for inspecting the hidden details of media files; usually only more specialized users need it. A typical command:

ffprobe target.mp4 -show_format -show_streams -print_format json -loglevel fatal

 

Use -safe 0 and -protocol_whitelist file,http,https,tcp,tls arguments. Full example at very bottom.


I was trying to use FFmpeg’s concat demuxer like so:

# inputs.txt
file 'http://www.example1.com/video1.mp4'  
file 'https://www.example2.com/video2.mp4'  
ffmpeg -f "concat" -i "./inputs.txt" -codec "copy" "./concated.mp4"  

First I got the error:

[concat @ 0x00] Unsafe file name 'http://www.example1.com/video1.mp4'
./inputs.txt: Operation not permitted

This was solved by adding the -safe 0 argument. Then I got the error:

[http @ 0x00] Protocol not on whitelist 'file,crypto'!
[concat @ 0x00] Impossible to open 'http://www.example1.com/video1.mp4'
./inputs.txt: Invalid argument

I thought I would be able to solve this by simply adding -protocol_whitelist file,http,https but then the error became:

[tcp @ 0x00] Protocol not on whitelist 'file,http,https'!
[concat @ 0x00] Impossible to open 'http://www.example1.com/video1.mp4'
./inputs.txt: Invalid argument

I did not understand why my HTTP protocol input was still being rejected; http was clearly in the protocol whitelist. Then I noticed the small difference between the previous two errors. Look at the very first word in those errors: http vs tcp. I realized that the first word in brackets before the error was the protocol that was being rejected.

The solution is to also add tcp to the protocol whitelist (and tls if you want to support HTTPS). Here was my final command:

ffmpeg -f "concat" -safe "0" -protocol_whitelist "file,http,https,tcp,tls" -i "./inputs.txt" -codec "copy" "./concated.mp4"  

 

Using VAAPI’s hardware accelerated video encoding on Linux with Intel’s hardware on FFmpeg and libav

https://gist.github.com/Brainiarc7/95c9338a737aa36d9bb2931bed379219

Hello, brethren 🙂

As it turns out, the current version of FFmpeg (version 3.1, released earlier today) and libav (master branch) support full H.264 and HEVC encoding via VAAPI on supported hardware, reliably enough to be termed "production-ready".

Assumptions:

Before taking on this manual, the author assumes that:

  1. The end-user can comfortably install and configure their Linux distribution of choice.
  2. The end user can install, upgrade, downgrade and resolve both conflicts and dependency resolution of packages on his/her distribution’s package manager.
  3. That the user is comfortable with the Linux terminal, and can navigate through it.
  4. Basic competence on the shell, such as reading man files, using a text editor of choice, manipulating file operations on the same, etc is assumed.

And as an indemnity clause, I, the author, will not be liable for any damage, implied or otherwise, to your files, hardware or the stability of your machine as a consequence to using these instructions to achieve a similar feat as described in this gist.

Implications:

It means that when you're encoding content for use with your blogs or some fancy YouTube download, you can do it much, much faster in hardware, with lower processor utilization (so you can multi-task), lower heat output and, as a plus, significantly higher speed (as tested on my end, ~8.7x for 1080p and ~4.2x for 4K encodes with reference media) compared to a pure software-based approach as offered by libx264 and similar implementations, albeit at an acceptable quality compromise.

Here goes:

First, you will need to build ffmpeg (and libav, as per your preferences) with the appropriate arguments. The --enable-vaapi switch should be enough, though.

Here are my build options (Note that I load ffmpeg and libav via the module system):

FFmpeg’s module files are here, and as more versions are compiled, more modules will be added. Libav’s module files are here, and as more versions are compiled, more modules will be added.

FFmpeg‘s configuration switches used:

./configure --enable-nonfree --enable-gpl --enable-version3
--enable-libass --enable-libbluray --enable-libmp3lame
--enable-libopencv --enable-libopenjpeg --enable-libopus
--enable-libfaac --enable-libfdk-aac --enable-libtheora
--enable-libvpx --enable-libwebp --enable-opencl --enable-x11grab
--enable-opengl --cpu=native --enable-nvenc --enable-vaapi
--enable-vdpau  --enable-ladspa --enable-libass  --enable-libgsm
--enable-libschroedinger --enable-libsmbclient --enable-libsoxr
--enable-libspeex --enable-libssh --enable-libwavpack --enable-libxvid
--enable-libx264 --enable-libx265 --enable-netcdf  --enable-openal
--enable-openssl --enable-cuda --prefix=/apps/ffmpeg/git --enable-omx

Libav‘s configuration switches used:

./configure --prefix=/apps/libav/11.7 --enable-gpl --enable-version3
--enable-nonfree --enable-runtime-cpudetect --enable-gray
--enable-vaapi --enable-vdpau --enable-vda --enable-libmp3lame
--enable-libopenjpeg --enable-libopus --enable-libfaac
--enable-libfdk-aac --enable-libtheora --enable-libvpx
--enable-libwebp  --enable-x11grab  --cpu=native  --enable-vaapi
--enable-vdpau  --enable-libgsm --enable-libschroedinger
--enable-libspeex --enable-libwavpack --enable-libxvid
--enable-libx264 --enable-libx265 --enable-openssl --enable-nvenc
--enable-cuda --enable-omx

Then run make and make install to build and install the toolkits respectively.

Warning: These options are for reference only, a useful FFmpeg build will require you to install appropriate dependencies for some build options as suited to your environment and platform. Modify as needed. Also see the indemnity clause at the top of this document.

Here are the dependencies I had to install on my end (without accounting for the OpenMAX IL bellagio back-end):

sudo apt-get install yasm ladspa-sdk ladspa-foo-plugins ladspalist libass5 libass-dev libbluray-bdj libbluray-bin libbluray-dev libbluray-doc libbluray1 libmp3lame-dev \ libmp3lame-ocaml libmp3lame-ocaml-dev libmp3lame0 libsox-fmt-mp3 libopencv-* opencv-* python-cv-bridge python-image-geometry python-opencv python-opencv-apps gstreamer1.0-vaapi gstreamer1.0-vaapi-doc libopenjp2-* libopenjp2-7-dev libopenjp2-7-dbg libopenjp3d7 libopenjpeg-dev libopenjpeg-java libopenjpeg5 libopenjpeg5-dbg libopenjpip7 openjpeg-tools libopus-dbg libopus-dev libopus-doc libopus0 libtag1-dev libtag1-doc libtag1v5 libtagc0 libtagc0-dev libopus-ocaml libopus-ocaml-dev libopusfile-dev libopusfile-doc libopusfile0 libvorbis-java opus-tools opus-tools-dbg libfaac-dev libfaac0 fdkaac  libfdk-aac0 libfdk-aac0-dbg libfdk-aac-dev libtheora-dbg libtheora-dev libtheora-doc libtheora0 libtheora-bin libtheora-ocaml libtheora-ocaml-dev libvpx-dev libvpx-doc libvpx3 libvpx3-dbg libwebp-dev libwebp5 libwebpdemux1 libwebpmux1 opencl-headers mesa-vdpau-drivers libvdpau-va-gl1 vdpauinfo vdpau-va-driver libvdpau-doc libvdpau-dev libvdpau1 libvdpau1-dbg libgsm-tools libgsm0710-0 libgsm0710-dev libgsm0710mux3 libgsm1 libgsm1-dbg libgsm1-dev sox libsox-dev libsox-fmt-all libsox-fmt-alsa libsox-fmt-ao libsox-fmt-base libsox-fmt-mp3 libsox-fmt-oss libsox-fmt-pulse libsox2 libsoxr-dev libsoxr-lsr0 libschroedinger-dev libschroedinger-doc libschroedinger-ocaml libschroedinger-ocaml-dev libschroedinger-1.0-0 libsmbclient libsmbclient-dev  smbclient  libspeex-dev libspeex1 libspeexdsp-dev libspeexdsp1 libspeex-ocaml libspeex-ocaml-dev libspeex-dbg libssh-4 libssh-dev libssh-dbg libssh-doc  libssh-gcrypt-4 libssh2-1 libssh2-1-dev libwavpack-dev libwavpack1 libxvidcore-dev libxvidcore4  libx265-dev libx265-79 libx265-doc libx264-148 libx264-dev libnetcdf-* netcdf-* libopenal-* openal-info  openssl 

When done, you may then create and load the appropriate environment modules for both ffmpeg and libav as your choices go. Don’t load both at the same time, though 🙂 (Mark them as module conflicts to ensure that if this is set up on a cluster, library conflicts do not occur when users inadvertently load both of them by accident in the same session).

Now, we get to the interesting bits:

Encoding with VAAPI

You’ll notice that we pass several arguments to ffmpeg as indicated below:

ffmpeg -loglevel debug -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i "input file" \
-vf 'format=nv12,hwupload' -map 0:0 -map 0:1 -threads 8 -aspect 16:9 \
-y -f matroska -acodec copy -b:v 12500k -vcodec h264_vaapi "output file"

Let’s break down these arguments to their meaning:

(a). -loglevel tells ffmpeg to log ffmpeg events as debug output. This will be very verbose, and is completely optional. You can disregard this.

(b). -vaapi_device: This is important. You must select a valid VAAPI H/W context device to which you will upload textures via hwupload, formatted in the NV12 colorspace. This points to a /dev/dri/render* file on your Linux system.

(c). -vf: This is an inbuilt ffmpeg option that allows you to specify filter options/arguments to be passed toward our encoder, in this case h264_vaapi (remember, we built this when we passed --enable-vaapi at the configuration stage). Here, we tell ffmpeg to convert all textures to one colorspace, NV12 (as it's the one accepted by Intel's QuickSync hardware encoder), and to also use hwupload, an ffmpeg intrinsic, which tells the program to asynchronously copy the converted pixel data to VAAPI's surfaces.

(d). -threads: Specifies the number of threads that FFmpeg should use. By default, use the number of logical processors available on your processor here. On Intel processors that support Hyper-Threading, multiply the number of cores your processor has by 2.

(e). -f : Specifies the container format specification you can use. This can be Matroska, webm, mp4, etc. Take your pick (as per your container constraints).

(f). -acodec: Specifies the audio codec to use when transcoding the video’s audio stream. In the example given above, we use ffmpeg’s muxers to copy the audio stream as is, untouched.

(g). -vcodec: Selects the video encoder to use. In this case, we selected h264_vaapi, our key point of interest here.

(h). -hwaccel vaapi: This instructs ffmpeg to use VAAPI-based hardware-accelerated decode (for supported codecs, see platform limits), which can drastically lower the processor load during the process. Note that you should only use this option if your hardware supports hardware-accelerated decoding via VAAPI for the source format being encoded.

(i). Using the scale_vaapi scaler in the video filters: It is possible to use Intel's QuickSync hardware via VAAPI for resizing and scaling (when up- or downscaling the input source to a higher or lower resolution), using a filter snippet such as the one shown below:

-vf 'format=nv12,hwupload,scale_vaapi=w=1920:h=1080'

You may specify a different resolution by changing the dimensions in w= and h= to suit your needs.

See an example of this filter snippet used above in the two-pass example in FFmpeg below.

(j). -hwaccel_output_format : This option should be used every time you declare the -hwaccel method as vaapi , so that the decode stage takes place entirely in hardware. This option generates decode output directly on VAAPI hardware surfaces, speeding up decode performance significantly.

You may confirm supported decode formats on your setup by running vainfo:

vainfo

Sample output on a Haswell testbed:

libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.39 (libva 1.7.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Haswell Mobile - 1.7.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :	VAEntrypointVLD
      VAProfileMPEG2Simple            :	VAEntrypointEncSlice
      VAProfileMPEG2Main              :	VAEntrypointVLD
      VAProfileMPEG2Main              :	VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline:	VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:	VAEntrypointEncSlice
      VAProfileH264Main               :	VAEntrypointVLD
      VAProfileH264Main               :	VAEntrypointEncSlice
      VAProfileH264High               :	VAEntrypointVLD
      VAProfileH264High               :	VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      :	VAEntrypointVLD
      VAProfileH264MultiviewHigh      :	VAEntrypointEncSlice
      VAProfileH264StereoHigh         :	VAEntrypointVLD
      VAProfileH264StereoHigh         :	VAEntrypointEncSlice
      VAProfileVC1Simple              :	VAEntrypointVLD
      VAProfileVC1Main                :	VAEntrypointVLD
      VAProfileVC1Advanced            :	VAEntrypointVLD
      VAProfileNone                   :	VAEntrypointVideoProc
      VAProfileJPEGBaseline           :	VAEntrypointVLD

Supported encode formats are the ones listed with the VAEntrypointEncSlice entrypoint, and the decode formats for your SKU are listed under the VAEntrypointVLD and VAEntrypointVideoProc entrypoints.

To interpret the output above, we can see that this Haswell SKU supports VAAPI-based hardware-accelerated decode for the H.264 Constrained Baseline, Main, High, Multiview High, and Stereo High profiles (I'd assume that the Stereo High profile refers to H.264's Multi-view coding mode, useful for encoding 3D Blu-rays and similar media, implying feature parity with Windows-based implementations where MVC encodes and decodes are supported by Intel QuickSync. Need to test that sometime).

The other arguments are pretty standard to FFmpeg and need no introduction 🙂

You may also use extra options such as QP mode (for constant-rate quality encoding) with this codec in ffmpeg as shown:

ffmpeg -loglevel debug -vaapi_device /dev/dri/renderD128 -i "input file" -vf 'format=nv12,hwupload' -map 0:0 -map 0:1 -threads 8 -aspect 16:9 -y -f matroska -acodec copy -vcodec h264_vaapi -qp 19 -bf 2 "output file"

Here, you’ll notice that we’ve added a few extra options to the arguments passed to the selected video encoder, h264_vaapi, and they are as follows:

(a). -qp: This option selects a fixed QP for P frames, and is ignored if a bit rate is set instead. It is particularly useful for constant-quality encodes where no bit-rate constraint is required. For a standard reference, a QP value of ~18 gives approximate visual quality similar to lossless compression, and going higher (toward ~51) gives progressively worse visual quality.

(b). -bf: This option sets the maximum number of B-frames (bi-directional) between P (predicted) frames. You may push this higher than the default (2) if your selected encoder profile is High or better. Recommended: leave this at the default (2).

In my tests, it’s also possible to do two-pass encoding with this encoder (h264_vaapi) in ffmpeg, as illustrated in the example below:

ffmpeg -loglevel debug -hwaccel vaapi -hwaccel_output_format vaapi -i "input-file" -vaapi_device /dev/dri/renderD129  -vf 'format=nv12,hwupload,scale_vaapi=w=1920:h=1080' -pass 1 -qp:v 19 -b:v 10.5M -c:v h264_vaapi -bf 4 -threads 4 -aspect 16:9 -an -y -f mp4 "/dev/null" && ffmpeg -loglevel debug -hwaccel vaapi -hwaccel_output_format vaapi -i "phfx4k.mkv" -vaapi_device /dev/dri/renderD129 -vf 'format=nv12,hwupload,scale_vaapi=w=1920:h=1080' -pass 2 -acodec copy -c:v h264_vaapi -bf 4 -qp:v 19 -b:v 10.5M -threads 4 -aspect 16:9 -y -f mp4 "output.mp4"

Let’s break that down:

With ffmpeg (and libav also), you must specify both passes sequentially (-pass 1 and -pass 2) because ffmpeg does not iterate over the input file by itself to run multiple passes. Secondly, this allows the user to tune the two-pass encode as they see fit, for example by skipping audio processing in the first pass (-an) and only copying/muxing the audio stream from the input container into the output container (-acodec copy) in the second, as illustrated in the examples above.

And now we move on to libav’s options for a similar encode:

avconv -v 55 -y -vaapi_device /dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -i input.mkv \
-c:a copy -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi -bf 2 -b 12500k output.mkv

Let’s break down these arguments to their meaning:

(a). -v: This defines avconv's verbosity level. It is completely optional, though it's regarded as good practice to leave it enabled and set to a reasonable verbosity level for troubleshooting and diagnostic purposes.

(b). -vaapi_device: This is important. You must select a valid VAAPI H/W context device to which you will upload textures via hwupload, formatted in the NV12 colorspace. This points to a /dev/dri/render* file on your Linux system.

(c). -hwaccel: This option allows you to select the hardware – based accelerated decoding to use for the encode session. In our case above, we are picking vaapi as this has a positive impact on encoder performance. A nice freebie.

(d). -hwaccel_output_format : This option should be used every time you declare the -hwaccel method as vaapi , so that the decode stage takes place entirely in hardware. This option generates decode output directly on VAAPI hardware surfaces, speeding up decode performance significantly.

(e). -vf: This is an inbuilt libav option that allows you to specify video filter options to be passed to our encoder, in this case h264_vaapi (remember, we built this when we passed --enable-vaapi at the configuration stage). Here, we tell libav to convert all textures to one colorspace, NV12 (as it's the one accepted by Intel's QuickSync hardware encoder), and to also use hwupload, a libav intrinsic, which tells the program to asynchronously copy the converted pixel data to VAAPI's surfaces. This argument also includes the hardware-accelerated decode output format we requested earlier, raw VAAPI hardware surfaces.

(f). -bf: Specifies the B-frame setting to use. Sane values for Intel's Quick Sync encode hardware should be between 2 and 4. Test and report back.

(g). -c:a: Specifies the audio codec to use when transcoding the video’s audio stream. In the example given above, we use libav’s muxers to copy the audio stream as is, untouched.

(h). -c:v: Selects the video encoder to use. In this case, we selected h264_vaapi, our key point of interest here.

(i). -b: Selects the video stream's bitrate passed to the encoder, h264_vaapi.

You may see the original documentation on Libav’s website here on build instructions, using the alternate hevc_vaapi on supported hardware, encoder limitations, caveats, etc.

If all goes according to plan, your video file will be encoded to H.264, muxed into the selected container, and done.

See the screen-shot library here.

Extra information:

You can always view the build configuration of your FFmpeg pipeline at any time by running:

For FFmpeg:

lin@mjanja:~$ ffmpeg -buildconf
ffmpeg version N-80785-g0fd76d7 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 5.3.1 (Ubuntu 5.3.1-14ubuntu2.1) 20160413
  configuration: --enable-nonfree --enable-gpl --enable-version3 --enable-libass --enable-libbluray --enable-libmp3lame --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-libfaac --enable-libfdk-aac --enable-libtheora --enable-libvpx --enable-libwebp --enable-opencl --enable-x11grab --enable-opengl --cpu=native --enable-nvenc --enable-vaapi --enable-vdpau --enable-ladspa --enable-libass --enable-libgsm --enable-libschroedinger --enable-libsmbclient --enable-libsoxr --enable-libspeex --enable-libssh --enable-libwavpack --enable-libxvid --enable-libx264 --enable-libx265 --enable-netcdf --enable-openal --enable-openssl --prefix=/apps/ffmpeg/git --enable-omx
  libavutil      55. 27.100 / 55. 27.100
  libavcodec     57. 48.101 / 57. 48.101
  libavformat    57. 40.101 / 57. 40.101
  libavdevice    57.  0.102 / 57.  0.102
  libavfilter     6. 46.102 /  6. 46.102
  libswscale      4.  1.100 /  4.  1.100
  libswresample   2.  1.100 /  2.  1.100
  libpostproc    54.  0.100 / 54.  0.100

  configuration:
    --enable-nonfree
    --enable-gpl
    --enable-version3
    --enable-libass
    --enable-libbluray
    --enable-libmp3lame
    --enable-libopencv
    --enable-libopenjpeg
    --enable-libopus
    --enable-libfaac
    --enable-libfdk-aac
    --enable-libtheora
    --enable-libvpx
    --enable-libwebp
    --enable-opencl
    --enable-x11grab
    --enable-opengl
    --cpu=native
    --enable-nvenc
    --enable-vaapi
    --enable-vdpau
    --enable-ladspa
    --enable-libass
    --enable-libgsm
    --enable-libschroedinger
    --enable-libsmbclient
    --enable-libsoxr
    --enable-libspeex
    --enable-libssh
    --enable-libwavpack
    --enable-libxvid
    --enable-libx264
    --enable-libx265
    --enable-netcdf
    --enable-openal
    --enable-openssl
    --prefix=/apps/ffmpeg/git
    --enable-omx

On help and documentation:

List all formats:

ffmpeg -formats

Display options specific to, and information about, a particular muxer:

ffmpeg -h muxer=matroska

Display options specific to, and information about, a particular demuxer:

ffmpeg -h demuxer=gif

Codecs (encoders and decoders):

List all codecs:

ffmpeg -codecs

List all encoders:

ffmpeg -encoders

List all decoders:

ffmpeg -decoders

Display options specific to, and information about, a particular encoder:

ffmpeg -h encoder=mpeg4

Display options specific to, and information about, a particular decoder:

ffmpeg -h decoder=aac

Reading the results

There is a key near the top of the output that describes each letter that precedes the name of the format, encoder, decoder, or codec:

$ ffmpeg -encoders
[…]
Encoders:
 V..... = Video
 A..... = Audio
 S..... = Subtitle
 .F.... = Frame-level multithreading
 ..S... = Slice-level multithreading
 ...X.. = Codec is experimental
 ....B. = Supports draw_horiz_band
 .....D = Supports direct rendering method 1
 ------
[…]
 V.S... mpeg4                MPEG-4 part 2

In this example, V.S... indicates that the encoder mpeg4 is a Video encoder and supports Slice-level multithreading.

Extra notes for AMD hardware supporting VCE:

If you have a supported GCN+ AMD GPU running on Linux with the mesa driver stack, you may be able to use the AMD VCE Block via VAAPI with an example such as the one shown below:

DRI_PRIME=1 LIBVA_DRIVER_NAME=radeonsi ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
-framerate 30 -video_size 1920x1200 -f x11grab -i :0.0 -f pulse -ac 2 -i 1 \
-vf 'format=nv12,hwupload' -threads 8 \
-vcodec h264_vaapi -bf 0 -acodec pcm_s16le output.mkv

Where we capture from the screen via x11grab and the audio from a pulseaudio device.

You must set the LIBVA_DRIVER_NAME environment variable to radeonsi and set DRI_PRIME=1 prior to using VAAPI on VCE, and ensure that -vaapi_device points to the correct render node.

Note that with AMD hardware, we generally disable B-Frame support as newer SKUs such as the RX 460/470/480 and their rebrands (Polaris-based) do not support B-Frames in H.264 encoding. See this issue on Github for more details.

 

Create a thumbnail image every X seconds of the video

https://trac.ffmpeg.org/wiki/Create%20a%20thumbnail%20image%20every%20X%20seconds%20of%20the%20video

-frames option

Output a single frame from the video into an image file:

ffmpeg -ss 00:00:14.435 -to 00:00:16 -i input.flv  -frames:v *** out03%d.png

This example will seek to position 0h:0m:14sec:435msec, stop at 00:00:16, and output each frame from that range into PNG files (-frames:v ***, where *** is the number of frames to extract, computed from the duration and frame rate).

 

fps video filter

Output one image every second, named out1.png, out2.png, out3.png, etc.

ffmpeg -i input.flv -vf fps=1 out%d.png

Extract frames from a specified segment at a specified frame rate (20/s):

ffmpeg -ss 01:12:03.25 -to 01:15:12.36 -i input.flv -vf fps=20 out%d.png

 

Output one image every minute, named img001.jpg, img002.jpg, img003.jpg, etc. The %03d dictates that the ordinal number of each output image will be formatted using 3 digits.

ffmpeg -i myvideo.avi -vf fps=1/60 img%03d.jpg

Output one image every ten minutes:

ffmpeg -i test.flv -vf fps=1/600 thumb%04d.bmp

select video filter

Output one image for every I-frame:

ffmpeg -i input.flv -vf "select='eq(pict_type,PICT_TYPE_I)'" -vsync vfr thumb%04d.png

 

Concatenating (joining) files with ffmpeg

Pipe friendly formats

Sources:

https://www.nmm-hd.org/newbbs/viewtopic.php?t=1533

https://superuser.com/questions/277642/how-to-merge-audio-and-video-file-in-ffmpeg

https://trac.ffmpeg.org/wiki/Concatenate

Keywords: concatenate; append; merge; join
Concatenating files with ffmpeg falls into two cases: joining files with identical encoding, and joining files with different encodings.

Joining files with identical encoding
There are two approaches: the concat "demuxer" and the concat "protocol". The demuxer is more flexible: as long as the encoding is the same it can join files even when their container formats differ, so it handles all kinds of containers, whereas the protocol only handles a select few. Older versions of ffmpeg only have the protocol; the demuxer appeared in ffmpeg more recently.

The concat demuxer was added in FFmpeg 1.1; see its documentation.
Usage
Create a mylist.txt file with the path of each file you want to join, one per line, in the following form:

# this line is a comment
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'

Paths can be either absolute or relative. Then stream copy or re-encode:

ffmpeg -f concat -safe 0 -i mylist.txt -c copy output

 

The demuxer works at the stream level; the protocol, by contrast, works at the file level, so only files in certain formats can be joined this way (such as mpg or MPEG transport stream files, possibly a few others), similar to cat on UNIX-like systems or copy on Windows.

ffmpeg -i "concat:input1.mpg|input2.mpg|input3.mpg" -c copy output.mpg

What if the files are not mpg? You can first convert them to MPEG transport streams with ffmpeg and then join those. For example, an mp4 with H.264 video and AAC audio:

ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i input2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output.mp4

This approach produces quite a few temporary files; if you know how to use named pipes, the original post shows how to do it in one line.
All MPEG-family codecs (H.264, MPEG4/divx/xvid, MPEG2; MP2, MP3, AAC) can be put into an MPEG transport stream, though sometimes extra options are needed (specific -bsf settings).

Joining media files with different encodings
concat filter
Recent versions of ffmpeg include the concat filter. See the concat filter documentation for more information.
Usage is best shown with an example:

ffmpeg -i input1.mp4 -i input2.webm \
-filter_complex '[0:0] [0:1] [1:0] [1:1] concat=n=2:v=1:a=1 [v] [a]' \
-map '[v]' -map '[a]' <encoding options> output.mkv

The '[0:0] [0:1] [1:0] [1:1]' on the -filter_complex line tells ffmpeg which streams to feed into the concat filter. Here they are stream 0 and stream 1 of input file 0 (input1.mp4 in the example), and stream 0 and stream 1 of input file 1 (input2.webm in the example).

concat=n=2:v=1:a=1 [v] [a] invokes the concat filter. n=2 tells the filter there are two input files; v=1 tells it there is one video stream, and a=1 that there is one audio stream. [v] and [a] name the output streams so that the rest of the ffmpeg command line knows what the concat output is.
Note that the whole filter expression needs to be wrapped in single quotes.

-map '[v]' -map '[a]'

tells ffmpeg to use the output streams of the concat filter rather than the streams straight from the input files.
Note: this filter is incompatible with remuxing (stream copying), so -c copy cannot be used. Also, it is unclear whether soft subtitles are supported this way.
This example also shows that concat works even when the input files have different formats. Any format ffmpeg can decode will work, but the resolution and some other attributes need to match.

Merging video and audio, with audio re-encoding

See this example, taken from this blog entry but updated for newer syntax. It should be something to the effect of:

ffmpeg -i video.mp4 -i audio.wav \
-c:v copy -c:a aac -strict experimental output.mp4

Here, we assume that the video file does not contain any audio stream yet, and that you want to have the same output format (here, MP4) as the input format.

The above command transcodes the audio, since MP4s cannot carry PCM audio streams. You can use any other desired audio codec if you want. See the AAC Encoding Guide for more info.

If your audio or video stream is longer, you can add the -shortest option so that ffmpeg will stop encoding once one file ends.

Copying the audio without re-encoding

If your output container can handle (almost) any codec – like MKV – then you can simply copy both audio and video streams:

ffmpeg -i video.mp4 -i audio.wav -c copy output.mkv

Replacing audio stream

If your input video already contains audio, and you want to replace it, you need to tell ffmpeg which audio stream to take:

ffmpeg -i video.mp4 -i audio.wav \
-c:v copy -c:a aac -strict experimental \
-map 0:v:0 -map 1:a:0 output.mp4

The map option makes ffmpeg only use the first video stream from the first input and the first audio stream from the second input for the output file.

 

Concatenating media files

If you have media files with exactly the same codec and codec parameters you can concatenate them as described in “Concatenation of files with same codecs“. If you have media with different codecs you can concatenate them as described in “Concatenation of files with different codecs” below.

Concatenation of files with same codecs

There are two methods within ffmpeg that can be used to concatenate files of the same type: the concat ”demuxer” and the concat ”protocol”. The demuxer is more flexible – it requires the same codecs, but different container formats can be used; and it can be used with any container formats, while the protocol only works with a select few containers. However, the concat protocol is available in older versions of ffmpeg, where the demuxer isn’t. The demuxer also requires that the inputs have a consistent bitrate setting, the concat protocol is more flexible in this regard.

Concat demuxer

The concat demuxer was added to FFmpeg 1.1. You can read about it in the documentation.

Instructions

Create a file mylist.txt with all the files you want to have concatenated in the following form (lines starting with a # are ignored):

# this is a comment
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'

Note that these can be either relative or absolute paths. Then you can stream copy or re-encode your files:

ffmpeg -f concat -safe 0 -i mylist.txt -c copy output

The -safe 0 above is not required if the paths are relative.

It is possible to generate this list file with a bash for loop, or using printf. Either of the following would generate a list file containing every *.wav in the working directory:

# with a bash for loop
for f in ./*.wav; do echo "file '$f'" >> mylist.txt; done
# or with printf
printf "file '%s'\n" ./*.wav > mylist.txt

On Windows Command-line:

(for %i in (*.wav) do @echo file '%i') > mylist.txt

If your shell supports process substitution (like Bash and Zsh), you can avoid explicitly creating a list file and do the whole thing in a single line. This would be impossible with the concat protocol (see below). Make sure to generate absolute paths here, since ffmpeg will resolve paths relative to the list file your shell may create in a directory such as “/proc/self/fd/”.

ffmpeg -f concat -safe 0 -i <(for f in ./*.wav; do echo "file '$PWD/$f'"; done) -c copy output.wav
ffmpeg -f concat -safe 0 -i <(printf "file '$PWD/%s'\n" ./*.wav) -c copy output.wav
ffmpeg -f concat -safe 0 -i <(find . -name '*.wav' -printf "file '$PWD/%p'\n") -c copy output.wav

You can also loop a video. This example will loop input.mkv 10 times:

for i in {1..10}; do printf "file '%s'\n" input.mkv >> mylist.txt; done
ffmpeg -f concat -i mylist.txt -c copy output.mkv

Concatenation becomes troublesome if the next clip to concatenate does not exist yet, because decoding won't start until the whole list is read. However, it is possible to refer to another list at the end of the current list:

#!/bin/bash

fn_concat_init() {
    echo "fn_concat_init"
    concat_pls=`mktemp -u -p . concat.XXXXXXXXXX.txt`
    concat_pls="${concat_pls#./}"
    echo "concat_pls=${concat_pls:?}"
    mkfifo "${concat_pls:?}"
    echo
}

fn_concat_feed() {
    echo "fn_concat_feed ${1:?}"
    {
        >&2 echo "removing ${concat_pls:?}"
        rm "${concat_pls:?}"
        concat_pls=
        >&2 fn_concat_init
        echo 'ffconcat version 1.0'
        echo "file '${1:?}'"
        echo "file '${concat_pls:?}'"
    } >"${concat_pls:?}"
    echo
}

fn_concat_end() {
    echo "fn_concat_end"
    {
        >&2 echo "removing ${concat_pls:?}"
        rm "${concat_pls:?}"
        # not writing header.
    } >"${concat_pls:?}"
    echo
}

fn_concat_init

echo "launching ffmpeg ... all.mkv"
timeout 60s ffmpeg -y -re -loglevel warning -i "${concat_pls:?}" -pix_fmt yuv422p all.mkv &

ffplaypid=$!


echo "generating some test data..."
i=0; for c in red yellow green blue; do
    ffmpeg -loglevel warning -y -f lavfi -i testsrc=s=720x576:r=12:d=4 -pix_fmt yuv422p -vf "drawbox=w=50:h=w:t=w:c=${c:?}" test$i.mkv
    fn_concat_feed test$i.mkv
    ((i++));
    echo
done
echo "done"

fn_concat_end

wait "${ffplaypid:?}"

echo "done encoding all.mkv"

Concat protocol

While the demuxer works at the stream level, the concat protocol works at the file level. Certain files (mpg and mpeg transport streams, possibly others) can be concatenated. This is analogous to using cat on UNIX-like systems or copy on Windows.
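
To make the analogy concrete: for MPEG transport streams, a plain byte-level concatenation followed by a remux gives a comparable result (a sketch, assuming the inputs share codecs and parameters, as above):

cat input1.ts input2.ts > joined.ts
ffmpeg -i joined.ts -c copy -bsf:a aac_adtstoasc output.mp4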

Instructions

ffmpeg -i "concat:input1.mpg|input2.mpg|input3.mpg" -c copy output.mpg

If you have MP4 files, these could be losslessly concatenated by first transcoding them to mpeg transport streams. With h.264 video and AAC audio, the following can be used:

ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i input2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output.mp4

If you’re using a system that supports named pipes, you can use those to avoid creating intermediate files – this sends stderr (which ffmpeg sends all the written data to) to /dev/null, to avoid cluttering up the command-line:

mkfifo temp1 temp2
ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts temp1 2> /dev/null & \
ffmpeg -i input2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts temp2 2> /dev/null & \
ffmpeg -f mpegts -i "concat:temp1|temp2" -c copy -bsf:a aac_adtstoasc output.mp4

All MPEG codecs (H.264, MPEG4/divx/xvid, MPEG2; MP2, MP3, AAC) are supported in the mpegts container format, though the commands above would require some alteration (the -bsf bitstream filters will have to be changed).

Concatenation of files with different codecs

Concat filter

The concat filter is available in recent versions of ffmpeg. See the concat filter documentation for more info.

Instructions

This is easiest to explain using an example:

ffmpeg -i input1.mp4 -i input2.webm \
-filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a]" \
-map "[v]" -map "[a]" <encoding options> output.mkv

On the -filter_complex line, the following:

[0:v:0] [0:a:0] [1:v:0] [1:a:0]

tells ffmpeg what streams to send to the concat filter; in this case, video stream 0 [0:v:0] and audio stream 0 [0:a:0] from input 0 (input1.mp4 in this example), and video stream 0 [1:v:0] and audio stream 0 [1:a:0] from input 1 (input2.webm).

concat=n=2:v=1:a=1 [v] [a]

This is the concat filter itself. n=2 is telling the filter that there are two input files; v=1 is telling it that there will be one video stream; a=1 is telling it that there will be one audio stream. [v] and [a] are names for the output streams to allow the rest of the ffmpeg line to use the output of the concat filter.

Note that the single quotes around the whole filter section are required.

-map '[v]' -map '[a]'

This tells ffmpeg to use the results of the concat filter rather than the streams directly from the input files.

Note that filters are incompatible with stream copying; you can’t use -c copy with this method. Also, I’m not sure whether softsubs are supported.

As you can infer from this example, multiple types of input are supported, and anything readable by ffmpeg should work. The inputs have to be of the same frame size, and a handful of other attributes have to match.

Using an external script

With any vaguely-modern version of ffmpeg, the following script is made redundant by the advent of the concat filter, which achieves the same result in a way that works across platforms. It is a clever workaround of ffmpeg's then-limitations, but most people (i.e. anyone not stuck using an ancient version of ffmpeg for whatever reason) should probably use one of the methods listed above.

The following script can be used to concatenate multiple input media files (containing audio/video streams) into one output file (just like as if all the inputs were played in a playlist, one after another). It is based on this FAQ item: How can I join video files, which also contains other useful information.

If you find any bugs, feel free to correct the script, add yourself to the list of contributors and change the version string to reflect your change(s) or email the author with your patch, whatever you find more convenient.

Instructions

Save the script in a file named mmcat (or some other name), make it executable (chmod +x mmcat) and run it, using the syntax:

./mmcat <input1> <input2> <input3> ... <output>

If you get an error like this:

#/tmp/mcs_v_all: Operation not permitted

that could mean that you don’t have the correct permissions set on the /tmp directory (or whatever you set in the TMP variable), or that decoding of your input media has failed for some reason. In this case it is best to turn on logging (as described in the script’s comments).

Script

#!/bin/bash

################################################################################
#
# Script name: MultiMedia Concat Script (mmcat)
# Author: burek ([email protected])
# License: GNU/GPL, see http://www.gnu.org/copyleft/gpl.html
# Date: 2012-07-14
#
# This script concatenates (joins, merges) several audio/video inputs into one
# final output (just like as if all the inputs were played in a playlist, one
# after another).
#
# All input files must have at least one audio and at least one video stream.
# If not, you can easily add audio silence, using FFmpeg. Just search the
# internet for "ffmpeg add silence".
#
# The script makes use of FFmpeg tool (www.ffmpeg.org) and is free for use under
# the GPL license. The inspiration for this script came from this FAQ item:
# http://ffmpeg.org/faq.html#How-can-I-join-video-files_003f
#
# If you find any bugs, please send me an e-mail so I can fix it.
#
################################################################################
#
# General syntax: mmcat <input1> <input2> <input3> ... <output>
#
# For example: mmcat file1.flv file2.flv output.flv
# would create "output.flv" out of "file1.flv" and "file2.flv".
#
################################################################################

# change this to what you need !!!
EXTRA_OPTIONS='-vcodec libx264 -crf 23 -preset medium -acodec aac -strict experimental -ac 2 -ar 44100 -ab 128k'

################################################################################
#
# NO NEED TO TOUCH ANYTHING AFTER THIS LINE!
#
################################################################################

# the version of the script
VERSION=1.3

# location of temp folder
TMP=/tmp

################################################################################

echo "MultiMedia Concat Script v$VERSION (mmcat) - A script to concatenate multiple multimedia files."
echo "Based on FFmpeg - www.ffmpeg.org"
echo "Don't forget to edit this script and change EXTRA_OPTIONS"
echo ""

################################################################################
# syntax check (has to have at least 3 params: infile1, infile2, outfile
################################################################################
if [ -z "$3" ]; then
	echo "Syntax: $0 <input1> <input2> <input3> ... <output>"
	exit 1
fi

################################################################################
# get all the command line parameters, except for the last one, which is output
################################################################################
# $first  - first parameter
# $last   - last parameter (output file)
# $inputs - all the inputs, except the first input, because 1st input is
#           handled separately
################################################################################
first=${@:1:1}
last=${@:$#:1}
len=$(($#-2))
inputs=${@:2:$len}

# remove all previous tmp fifos (if exist)
rm -f $TMP/mcs_*

################################################################################
# decode first input differently, because the video header does not have to be
# kept for each video input, only the header from the first video is needed
################################################################################
mkfifo $TMP/mcs_a1 $TMP/mcs_v1

ffmpeg -y -i $first -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 $TMP/mcs_a1 2>/dev/null </dev/null &
ffmpeg -y -i $first -an -f yuv4mpegpipe -vcodec rawvideo $TMP/mcs_v1 2>/dev/null </dev/null &

# if you need to log the output of decoding processes (usually not necessary)
# then replace the "2>/dev/null" in 2 lines above with your log file names, like this:
#ffmpeg -y -i $first -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 $TMP/mcs_a1 2>$TMP/log.a.1 </dev/null &
#ffmpeg -y -i $first -an -f yuv4mpegpipe -vcodec rawvideo $TMP/mcs_v1 2>$TMP/log.v.1 </dev/null &

################################################################################
# decode all the other inputs, remove first line of video (header) with tail
# $all_a and $all_v are lists of all a/v fifos, to be used by "cat" later on
################################################################################
all_a=$TMP/mcs_a1
all_v=$TMP/mcs_v1
i=2
for f in $inputs
do
	mkfifo $TMP/mcs_a$i $TMP/mcs_v$i

	ffmpeg -y -i $f -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 $TMP/mcs_a$i 2>/dev/null </dev/null &
	{ ffmpeg -y -i $f -an -f yuv4mpegpipe -vcodec rawvideo - 2>/dev/null </dev/null | tail -n +2 > $TMP/mcs_v$i ; } &

	# if you need to log the output of decoding processes (usually not necessary)
	# then replace the "2>/dev/null" in 2 lines above with your log file names, like this:
	#ffmpeg -y -i $f -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 $TMP/mcs_a$i 2>$TMP/log.a.$i </dev/null &
	#{ ffmpeg -y -i $f -an -f yuv4mpegpipe -vcodec rawvideo - 2>$TMP/log.v.$i </dev/null | tail -n +2 > $TMP/mcs_v$i ; } &

	all_a="$all_a $TMP/mcs_a$i"
	all_v="$all_v $TMP/mcs_v$i"
	let i++
done

################################################################################
# concatenate all raw audio/video inputs into one audio/video
################################################################################
mkfifo $TMP/mcs_a_all
mkfifo $TMP/mcs_v_all
cat $all_a > $TMP/mcs_a_all &
cat $all_v > $TMP/mcs_v_all &

################################################################################
# finally, encode the raw concatenated audio/video into something useful
################################################################################
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i $TMP/mcs_a_all \
       -f yuv4mpegpipe -vcodec rawvideo -i $TMP/mcs_v_all \
	$EXTRA_OPTIONS \
	$last

################################################################################
# remove all fifos
################################################################################
rm -f $TMP/mcs_*

The script above can be modified to use ‘-f avi’ instead of ‘-f yuv4mpegpipe’ (see the sketch after this list). Benefits:

  • unlike yuv4mpegpipe, only one pipe is needed for both video and audio
  • unlike yuv4mpegpipe, matroska and flv, there is no need to skip the header of the second file using tail, because the avi demuxer skips it automatically
  • unlike mpegts, avi supports rawvideo and pcm
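
A minimal, untested sketch of that avi variant for just two inputs, based only on the claims in the list above (TMP and EXTRA_OPTIONS as defined in the script; the inputs are $1 and $2 and the output is $3):

# one fifo per input, each carrying rawvideo plus pcm audio in a single avi stream
mkfifo $TMP/mcs_1 $TMP/mcs_2 $TMP/mcs_all
ffmpeg -y -i "$1" -f avi -vcodec rawvideo -acodec pcm_s16le -ac 2 -ar 44100 $TMP/mcs_1 2>/dev/null </dev/null &
ffmpeg -y -i "$2" -f avi -vcodec rawvideo -acodec pcm_s16le -ac 2 -ar 44100 $TMP/mcs_2 2>/dev/null </dev/null &
# concatenate both avi streams; per the note above, the avi demuxer should skip the second header
cat $TMP/mcs_1 $TMP/mcs_2 > $TMP/mcs_all &
ffmpeg -f avi -i $TMP/mcs_all $EXTRA_OPTIONS "$3"
rm -f $TMP/mcs_*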

Extracting video, audio, and subtitles with ffmpeg

1. Extract video

ffmpeg -i Life.of.Pi.has.subtitles.mkv -vcodec copy -an videoNoAudioSubtitle.mp4

Reference: http://www.cnblogs.com/wainiwann/p/4128154.html

2. Extract audio

ffmpeg -i Life.of.Pi.has.subtitles.mkv -vn -acodec copy audio.ac3

Reference: http://stackoverflow.com/questions/9913032/ffmpeg-to-extract-audio-from-video

3. Extract subtitles

ffmpeg -i Life.of.Pi.has.subtitles.mkv -map 0:s:0 sub1.srt

Reference: http://superuser.com/questions/583393/extract-subtitle-from-video
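
Before extracting, it can help to list all streams together with their language tags so you know which audio or subtitle index to pick. A small sketch using standard ffprobe options (the file name is the one from the examples above):

ffprobe -v error -show_entries stream=index,codec_name,codec_type:stream_tags=language -of csv=p=0 Life.of.Pi.has.subtitles.mkv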

How to extract the individual audio tracks from a multi-track video file with ffmpeg

1. First, use ffmpeg to inspect the video file:

# ffmpeg -i a.MPG
Input #0, mpeg, from 'a.MPG':
  Duration: 00:00:32.32, start: 245.117611, bitrate: 8581 kb/s
    Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 9800 kb/s, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #0.1[0x31]: Subtitle: dvdsub
    Stream #0.2[0x81]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
    Stream #0.3[0x82]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
    Stream #0.4[0x80]: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
    Stream #0.5[0x83]: Audio: ac3, 48000 Hz, stereo, s16, 160 kb/s
    Stream #0.6[0x84]: Audio: ac3, 48000 Hz, stereo, s16, 160 kb/s
    Stream #0.7[0x85]: Audio: ac3, 48000 Hz, stereo, s16, 192 kb/s
    Stream #0.8[0x2d]: Subtitle: dvdsub
    Stream #0.9[0x2e]: Subtitle: dvdsub
    Stream #0.10[0x2f]: Subtitle: dvdsub
    Stream #0.11[0x24]: Subtitle: dvdsub
    Stream #0.12[0x30]: Subtitle: dvdsub
    Stream #0.13[0x2a]: Subtitle: dvdsub
    Stream #0.14[0x2b]: Subtitle: dvdsub
    Stream #0.15[0x2c]: Subtitle: dvdsub
    Stream #0.16[0x23]: Subtitle: dvdsub

2. Extract the audio tracks
As the listing shows, streams #0.2 through #0.7 are audio; the commands to extract each of these audio tracks are:

ffmpeg -i a.MPG -map 0:2 a.2.wav
ffmpeg -i a.MPG -map 0:3 a.3.wav
ffmpeg -i a.MPG -map 0:4 a.4.wav
ffmpeg -i a.MPG -map 0:5 a.5.wav
ffmpeg -i a.MPG -map 0:6 a.6.wav
ffmpeg -i a.MPG -map 0:7 a.7.wav

The outputs a.%d.wav (for d = 2-7) are the audio files for the individual tracks.
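
Instead of typing one command per track, the audio streams can also be addressed relatively as 0:a:N (the N-th audio stream), which makes it easy to loop over all of them. A small shell sketch (a.MPG as above; the output names are illustrative):

# count the audio streams, then extract each one in turn
n=$(ffprobe -v error -select_streams a -show_entries stream=index -of csv=p=0 a.MPG | wc -l)
i=0
while [ "$i" -lt "$n" ]; do
    ffmpeg -y -i a.MPG -map 0:a:"$i" "a.audio$i.wav"
    i=$((i+1))
done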

There is no option yet in ffmpeg to automatically extract all streams into an appropriate container, but it is certainly possible to do so manually. Default stream selection only chooses one stream per stream type, so you have to map each stream manually.

1. Get input info

Using ffmpeg or ffprobe you can get the info on each individual stream, and there is a wide variety of output formats (xml, json, csv, etc.) available to fit your needs.

ffmpeg example

ffmpeg -i input.mkv

The resulting output (I cut out some extra stuff, the stream numbers and format info are what is important):

Input #0, matroska,webm, from 'input.mkv':
  Metadata:
  Duration: 00:00:05.00, start: 0.000000, bitrate: 106 kb/s
    Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv444p, 320x240 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
    Stream #0:1: Audio: vorbis, 44100 Hz, mono, fltp (default)
    Stream #0:2: Audio: vorbis, 44100 Hz, mono, fltp (default)
    Stream #0:3: Audio: vorbis, 44100 Hz, mono, fltp (default)
    Stream #0:4: Subtitle: ass (default)

ffprobe example

ffprobe -v error -show_entries stream=index,codec_name,codec_type input.mkv

The resulting output:

[STREAM]
index=0
codec_name=h264
codec_type=video
[/STREAM]
[STREAM]
index=1
codec_name=vorbis
codec_type=audio
[/STREAM]
[STREAM]
index=2
codec_name=vorbis
codec_type=audio
[/STREAM]
[STREAM]
index=3
codec_name=vorbis
codec_type=audio
[/STREAM]
[STREAM]
index=4
codec_name=ass
codec_type=subtitle
[/STREAM]
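
The same query can be emitted in any of the machine-readable formats mentioned above, for example JSON (-of is the short form of -print_format):

ffprobe -v error -show_entries stream=index,codec_name,codec_type -of json input.mkv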

2. Extract the streams

Using the info from one of the commands above:

ffmpeg -i input.mkv \
-map 0:v -c copy video.mkv \
-map 0:a:0 -c copy audio0.oga \
-map 0:a:1 -c copy audio1.oga \
-map 0:a:2 -c copy audio2.oga \
-map 0:s -c copy subtitles.ass

In this case, the example above is the same as:

ffmpeg -i input.mkv \
-map 0:0 -c copy video.mkv \
-map 0:1 -c copy audio0.oga \
-map 0:2 -c copy audio1.oga \
-map 0:3 -c copy audio2.oga \
-map 0:4 -c copy subtitles.ass
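
If the streams had different codecs, the output extensions would need to match them; an AAC audio track, for instance, does not belong in an .oga (Ogg) file. A hedged variant for a hypothetical input with one H.264 video stream, one AAC audio stream and one SubRip subtitle stream:

ffmpeg -i other_input.mkv \
-map 0:v:0 -c copy video.mp4 \
-map 0:a:0 -c copy audio.m4a \
-map 0:s:0 -c copy subtitles.srt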