\input texinfo @c -*- texinfo -*-
    
    @documentencoding UTF-8
    
    @settitle ffmpeg Documentation
    
    @center @titlefont{ffmpeg Documentation}
    
@chapter Synopsis

ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_file}@} ... @{[@var{output_file_options}] @file{output_file}@} ...
    
    @chapter Description
    @c man begin DESCRIPTION
    
@command{ffmpeg} is a very fast video and audio converter that can also grab from
a live audio/video source. It can also convert between arbitrary sample
rates and resize video on the fly with a high quality polyphase filter.
    
@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
files, pipes, network streams, grabbing devices, etc.), specified by the
@code{-i} option, and writes to an arbitrary number of output "files", which are
specified by a plain output filename. Anything found on the command line which
cannot be interpreted as an option is considered to be an output filename.
    
    
    Each input or output file can, in principle, contain any number of streams of
    different types (video/audio/subtitle/attachment/data). The allowed number and/or
    types of streams may be limited by the container format. Selecting which
    streams from which inputs will go into which output is either done automatically
    or with the @code{-map} option (see the Stream selection chapter).
    
To refer to input files in options, you must use their indices (0-based). E.g.
the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
within a file are referred to by their indices. E.g. @code{2:3} refers to the
fourth stream in the third input file. Also see the Stream specifiers chapter.
    
    As a general rule, options are applied to the next specified
    file. Therefore, order is important, and you can have the same
    option on the command line multiple times. Each occurrence is
    then applied to the next input or output file.
    
    Exceptions from this rule are the global options (e.g. verbosity level),
    which should be specified first.
    
    Do not mix input and output files -- first specify all input files, then all
    output files. Also do not mix options which belong to different files. All
    options apply ONLY to the next input or output file and are reset between files.
    
    
@itemize
@item
To set the video bitrate of the output file to 64 kbit/s:

@example
ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
@end example
    
    @item
    To force the frame rate of the output file to 24 fps:
    
    @example
    ffmpeg -i input.avi -r 24 output.avi
    @end example
    
    
    @item
To force the frame rate of the input file (valid for raw formats only)
to 1 fps and the frame rate of the output file to 24 fps:
    @example
    ffmpeg -r 1 -i input.m2v -r 24 output.avi
    @end example
    
    
@end itemize

The format option may be needed for raw input files.
    
    @c man end DESCRIPTION
    
    
    @chapter Detailed description
    @c man begin DETAILED DESCRIPTION
    
    
The transcoding process in @command{ffmpeg} for each output can be described by
the following diagram:
    
 _______              ______________
|       |            |              |
| input |  demuxer   | encoded data |   decoder
| file  | ---------> | packets      | -----+
|_______|            |______________|      |
                                            v
                                        _________
                                       |         |
                                       | decoded |
                                       | frames  |
                                       |_________|
 ________             ______________       |
|        |           |              |      |
| output | <-------- | encoded data | <----+
| file   |   muxer   | packets      |   encoder
|________|           |______________|
    
    
@command{ffmpeg} calls the libavformat library (containing demuxers) to read
input files and get packets containing encoded data from them. When there are
multiple input files, @command{ffmpeg} tries to keep them synchronized by
tracking the lowest timestamp on any active input stream.
    
Encoded packets are then passed to the decoder (unless streamcopy is selected
for the stream, see further for a description). The decoder produces
uncompressed frames (raw video/PCM audio/...) which can be processed further by
filtering (see next section). After filtering, the frames are passed to the
encoder, which encodes them and outputs encoded packets. Finally those are
passed to the muxer, which writes the encoded packets to the output file.
    
    @section Filtering
    
Before encoding, @command{ffmpeg} can process raw audio and video frames using
filters from the libavfilter library. Several chained filters form a filter
graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
simple and complex.
    
    @subsection Simple filtergraphs
    Simple filtergraphs are those that have exactly one input and output, both of
    the same type. In the above diagram they can be represented by simply inserting
    an additional step between decoding and encoding:
    
    
 _________                        ______________
|         |                      |              |
| decoded |                      | encoded data |
| frames  |\                   _ | packets      |
|_________| \                  /||______________|
             \   __________   /
  simple     _\||          | /  encoder
  filtergraph   | filtered |/
                | frames   |
                |__________|
    
    
    Simple filtergraphs are configured with the per-stream @option{-filter} option
    (with @option{-vf} and @option{-af} aliases for video and audio respectively).
    A simple filtergraph for video can look for example like this:
    
    
     _______        _____________        _______        ________
    |       |      |             |      |       |      |        |
    | input | ---> | deinterlace | ---> | scale | ---> | output |
    |_______|      |_____________|      |_______|      |________|
    
    
Note that some filters change frame properties but not frame contents. E.g. the
@code{fps} filter changes the number of frames, but does not
touch the frame contents. Another example is the @code{setpts} filter, which
only sets timestamps and otherwise passes the frames unchanged.
    
    @subsection Complex filtergraphs
Complex filtergraphs are those which cannot be described as simply a linear
processing chain applied to one stream. This is the case, for example, when the graph has
more than one input and/or output, or when output stream type is different from
input. They can be represented with the following diagram:
    
    
     _________
    |         |
    | input 0 |\                    __________
    |_________| \                  |          |
                 \   _________    /| output 0 |
                  \ |         |  / |__________|
     _________     \| complex | /
    |         |     |         |/
    | input 1 |---->| filter  |\
    |_________|     |         | \   __________
                   /| graph   |  \ |          |
                  / |         |   \| output 1 |
     _________   /  |_________|    |__________|
    |         | /
    | input 2 |/
    |_________|
    
    
    
Complex filtergraphs are configured with the @option{-filter_complex} option.
Note that this option is global, since a complex filtergraph, by its nature,
cannot be unambiguously associated with a single stream or file.

The @option{-lavfi} option is equivalent to @option{-filter_complex}.
    
    
    A trivial example of a complex filtergraph is the @code{overlay} filter, which
    has two video inputs and one video output, containing one video overlaid on top
    of the other. Its audio counterpart is the @code{amix} filter.
    
    @section Stream copy
Stream copy is a mode selected by supplying the @code{copy} parameter to the
@option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
step for the specified stream, so it does only demuxing and muxing. It is useful
for changing the container format or modifying container-level metadata. The
diagram above will, in this case, simplify to this:
    
     _______              ______________            ________
    |       |            |              |          |        |
    | input |  demuxer   | encoded data |  muxer   | output |
    | file  | ---------> | packets      | -------> | file   |
    |_______|            |______________|          |________|
    
    
    
Since there is no decoding or encoding, it is very fast and there is no quality
loss. However, it might not work in some cases because of many factors. Applying
filters is obviously also impossible, since filters work on uncompressed data.
    
    @c man end DETAILED DESCRIPTION
    
    
    @chapter Stream selection
    @c man begin STREAM SELECTION
    
    
By default, @command{ffmpeg} includes only one stream of each type (video, audio, subtitle)
present in the input files and adds them to each output file. It picks the
"best" of each based upon the following criteria: for video, it is the stream
with the highest resolution, for audio, it is the stream with the most channels, for
subtitles, it is the first subtitle stream. In the case where several streams of
the same type rate equally, the stream with the lowest index is chosen.
    
You can disable some of those defaults by using the @code{-vn/-an/-sn} options. For
full manual control, use the @code{-map} option, which disables the defaults just
described.
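
For example, to override the default selection and keep only the first video
stream and the second audio stream of a single input (a minimal illustration;
the file names are placeholders):
@example
ffmpeg -i input.mkv -map 0:v:0 -map 0:a:1 output.mkv
@end example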
    
    @c man end STREAM SELECTION
    
    
    @include fftools-common-opts.texi
    
@section Main options

@table @option

@item -f @var{fmt} (@emph{input/output})
Force input or output file format. The format is normally auto detected for input
files and guessed from the file extension for output files, so this option is not
needed in most cases.

@item -i @var{filename} (@emph{input})
input file name
    
    @item -y (@emph{global})
    
    Overwrite output files without asking.
    
    @item -n (@emph{global})
    
    Do not overwrite output files, and exit immediately if a specified
    output file already exists.
    
    @item -stream_loop @var{number} (@emph{input})
    
    Set number of times input stream shall be looped. Loop 0 means no loop,
    loop -1 means infinite loop.
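
For example, to loop the input three additional times while stream copying
(an illustrative use; the file names are placeholders):
@example
ffmpeg -stream_loop 3 -i input.mp4 -c copy output.mp4
@end example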
    
    @item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
    @itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
    Select an encoder (when used before an output file) or a decoder (when used
    before an input file) for one or more streams. @var{codec} is the name of a
decoder/encoder or a special value @code{copy} (output only) to indicate that
the stream is not to be re-encoded.
    
    
    For example
    @example
    ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
    @end example
    encodes all video streams with libx264 and copies all audio streams.
    
    For each stream, the last matching @code{c} option is applied, so
    @example
    ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
    @end example
    will copy all the streams except the second video, which will be encoded with
    libx264, and the 138th audio, which will be encoded with libvorbis.
    
    
    @item -t @var{duration} (@emph{input/output})
    When used as an input option (before @code{-i}), limit the @var{duration} of
    data read from the input file.
    
    When used as an output option (before an output filename), stop writing the
    output after its duration reaches @var{duration}.
    
    
    @var{duration} must be a time duration specification,
    see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
    
    -to and -t are mutually exclusive and -t has priority.
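
For example, to stop writing the output after 30 seconds (a minimal
illustration; the file names are placeholders):
@example
ffmpeg -i input.mp4 -t 30 output.mp4
@end example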
    
    @item -to @var{position} (@emph{output})
    Stop writing the output at @var{position}.
    
    @var{position} must be a time duration specification,
    see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
    
    
    -to and -t are mutually exclusive and -t has priority.
    
    
    @item -fs @var{limit_size} (@emph{output})
    
    Set the file size limit, expressed in bytes.
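
For example, to stop writing the output once it reaches roughly 10 MB (the
limit is given in bytes; the file names are placeholders):
@example
ffmpeg -i input.mp4 -fs 10000000 output.mp4
@end example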
    
    @item -ss @var{position} (@emph{input/output})
When used as an input option (before @code{-i}), seeks in this input file to
@var{position}. Note that in most formats it is not possible to seek exactly,
so @command{ffmpeg} will seek to the closest seek point before @var{position}.
    
    When transcoding and @option{-accurate_seek} is enabled (the default), this
    extra segment between the seek point and @var{position} will be decoded and
    discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
    will be preserved.
    
    When used as an output option (before an output filename), decodes but discards
    input until the timestamps reach @var{position}.
    
    @var{position} must be a time duration specification,
    see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
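
For example, to seek to one minute into the input and transcode 30 seconds
from that point (an illustrative use of input seeking; file names are
placeholders):
@example
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 output.mp4
@end example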
    
    @item -sseof @var{position} (@emph{input/output})
    
    
Like the @code{-ss} option but relative to the "end of file". That is, negative
values are earlier in the file, 0 is at EOF.
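
For example, to keep roughly the last 30 seconds of a file (an illustrative
use; with stream copy the exact cut point depends on the nearest seek point):
@example
ffmpeg -sseof -30 -i input.mp4 -c copy output.mp4
@end example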
    
    
    @item -itsoffset @var{offset} (@emph{input})
    
    @var{offset} must be a time duration specification,
    see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
    
    The offset is added to the timestamps of the input files. Specifying
    a positive offset means that the corresponding streams are delayed by
    the time duration specified in @var{offset}.
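
For example, to delay the second input (here an audio file) by half a second
relative to the video (the file names and the 0.5 second value are
placeholders for illustration):
@example
ffmpeg -i video.mp4 -itsoffset 0.5 -i audio.wav -map 0:v -map 1:a output.mkv
@end example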
    
    @item -timestamp @var{date} (@emph{output})
    
    Set the recording timestamp in the container.
    
@var{date} must be a date specification,
see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
    
    @item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
    
    Set a metadata key/value pair.
    
    An optional @var{metadata_specifier} may be given to set metadata
    on streams or chapters. See @code{-map_metadata} documentation for
    details.
    
    This option overrides metadata set with @code{-map_metadata}. It is
    also possible to delete metadata by using an empty value.
    
    
For example, for setting the title in the output file:
@example
ffmpeg -i in.avi -metadata title="my title" out.flv
@end example

To set the language of the first audio stream:
@example
ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
@end example
    
    @item -target @var{type} (@emph{output})
    
    Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
    @code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
    @code{film-} to use the corresponding standard. All the format options
    (bitrate, codecs, buffer sizes) are then set automatically. You can just type:
    
    
    @example
    ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
    @end example
    
    
    Nevertheless you can specify additional options as long as you know
    they do not conflict with the standard, as in:
    
    
    @example
    ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
    @end example
    
    
    @item -dframes @var{number} (@emph{output})
    
    Set the number of data frames to output. This is an alias for @code{-frames:d}.
    
    @item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
    Stop writing to the stream after @var{framecount} frames.
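
For example, to stop after writing 100 video frames (a minimal illustration;
the file names are placeholders):
@example
ffmpeg -i input.mp4 -frames:v 100 output.mp4
@end example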
    
    @item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
    @itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
    
Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
codec-dependent.

If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream. This is to maintain compatibility with previous behavior,
and because specifying the same codec-specific value to two different codecs,
that is audio and video, is generally not what is intended when no
stream_specifier is used.
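
For example, with the MPEG-family encoders the scale typically runs from
roughly 2 (best) to 31 (worst), so the following requests fairly high quality
(an illustrative sketch; the file names are placeholders):
@example
ffmpeg -i input.avi -c:v mpeg4 -q:v 3 output.mp4
@end example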
    
    @item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
Create the filtergraph specified by @var{filtergraph} and use it to
filter the stream.

@var{filtergraph} is a description of the filtergraph to apply to
the stream, and must have a single input and a single output of the
same type of the stream. In the filtergraph, the input is associated
to the label @code{in}, and the output to the label @code{out}. See
the ffmpeg-filters manual for more information about the filtergraph
syntax.

See the @ref{filter_complex_option,,-filter_complex option} if you
want to create filtergraphs with multiple inputs and/or outputs.
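
For example, to scale the video stream to 640 pixels wide while keeping the
aspect ratio (a minimal illustration; the file names are placeholders):
@example
ffmpeg -i input.mp4 -filter:v "scale=640:-1" output.mp4
@end example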
    
    @item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
    This option is similar to @option{-filter}, the only difference is that its
    argument is the name of the file from which a filtergraph description is to be
    read.
    
    
    @item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
    Specify the preset for matching stream(s).
    
    @item -stats (@emph{global})
    
    Print encoding progress/statistics. It is on by default, to explicitly
    disable it you need to specify @code{-nostats}.
    
    @item -progress @var{url} (@emph{global})
    Send program-friendly progress information to @var{url}.
    
    Progress information is written approximately every second and at the end of
    the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
    consists of only alphanumeric characters. The last key of a sequence of
    progress information is always "progress".
    
    
    @item -stdin
Enable interaction on standard input. On by default unless standard input is
used as an input. To explicitly disable interaction you need to specify
@code{-nostdin}.
    
    Disabling interaction on standard input is useful, for example, if
    ffmpeg is in the background process group. Roughly the same result can
    be achieved with @code{ffmpeg ... < /dev/null} but it requires a
    shell.
    
    @item -debug_ts (@emph{global})
    Print timestamp information. It is off by default. This option is
    mostly useful for testing and debugging purposes, and the output
    format may change from one version to another, so it should not be
    employed by portable scripts.
    
    See also the option @code{-fdebug ts}.
    
    
    @item -attach @var{filename} (@emph{output})
    Add an attachment to the output file. This is supported by a few formats
    like Matroska for e.g. fonts used in rendering subtitles. Attachments
    are implemented as a specific type of stream, so this option will add
    a new stream to the file. It is then possible to use per-stream options
    on this stream in the usual way. Attachment streams created with this
    option will be created after all the other streams (i.e. those created
    with @code{-map} or automatic mappings).
    
    Note that for Matroska you also have to set the mimetype metadata tag:
    @example
    ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
    @end example
    (assuming that the attachment stream will be third in the output file).
    
    @item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
    Extract the matching attachment stream into a file named @var{filename}. If
    @var{filename} is empty, then the value of the @code{filename} metadata tag
    will be used.
    
    E.g. to extract the first attachment to a file named 'out.ttf':
@example
ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
@end example
To extract all attachments to files determined by the @code{filename} tag:
@example
ffmpeg -dump_attachment:t "" -i INPUT
@end example
    
    Technical note -- attachments are implemented as codec extradata, so this
    option can actually be used to extract extradata from any stream, not just
    attachments.
    
    @item -noautorotate
    Disable automatically rotating video based on file metadata.
    
    
@end table

@section Video Options

@table @option

@item -vframes @var{number} (@emph{output})
    
    Set the number of video frames to output. This is an alias for @code{-frames:v}.
    
    @item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
    
    Set frame rate (Hz value, fraction or abbreviation).
    
    As an input option, ignore any timestamps stored in the file and instead
    generate timestamps assuming constant frame rate @var{fps}.
    
    This is not the same as the @option{-framerate} option used for some input formats
    like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
    If in doubt use @option{-framerate} instead of the input option @option{-r}.
    
    
As an output option, duplicate or drop input frames to achieve constant output
frame rate @var{fps}.
    
    @item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
    
    Set frame size.
    
    As an input option, this is a shortcut for the @option{video_size} private
    option, recognized by some demuxers for which the frame size is either not
    stored in the file or is configurable -- e.g. raw video or video grabbers.
    
    As an output option, this inserts the @code{scale} video filter to the
    @emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
    directly to insert it at the beginning or some other place.
    
    
    The format is @samp{wxh} (default - same as source).
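
For example, to request a 1280x720 output frame size (a minimal illustration;
the file names are placeholders):
@example
ffmpeg -i input.mp4 -s 1280x720 output.mp4
@end example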
    
    
    
    @item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
    
    Set the video display aspect ratio specified by @var{aspect}.
    
    @var{aspect} can be a floating point number string, or a string of the
    form @var{num}:@var{den}, where @var{num} and @var{den} are the
    numerator and denominator of the aspect ratio. For example "4:3",
    "16:9", "1.3333", and "1.7777" are valid argument values.
    
    
    If used together with @option{-vcodec copy}, it will affect the aspect ratio
    stored at container level, but not the aspect ratio stored in encoded
    frames, if it exists.
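
For example, to change the display aspect ratio stored at container level
while stream copying, for containers that support it (an illustrative sketch;
the file names are placeholders):
@example
ffmpeg -i input.mp4 -aspect 16:9 -c copy output.mp4
@end example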
    
    
    @item -vn (@emph{output})
    
    Disable video recording.
    
    @item -vcodec @var{codec} (@emph{output})
    Set the video codec. This is an alias for @code{-codec:v}.
    
    
    @item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
    
    Select the pass number (1 or 2). It is used to do two-pass
    video encoding. The statistics of the video are recorded in the first
    pass into a log file (see also the option -passlogfile),
    and in the second pass that log file is used to generate the video
    at the exact requested bitrate.
    
    On pass 1, you may just deactivate audio and set output to null,
    examples for Windows and Unix:
@example
ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
@end example

    @item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
    
    Set two-pass log file name prefix to @var{prefix}, the default file name
    prefix is ``ffmpeg2pass''. The complete file name will be
@file{PREFIX-N.log}, where N is a number specific to the output
stream.
    
    @item -vf @var{filtergraph} (@emph{output})
Create the filtergraph specified by @var{filtergraph} and use it to
filter the stream.

This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
    
@end table

@section Advanced Video options

@table @option
    
    @item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
Set pixel format. Use @code{-pix_fmts} to show all the supported
pixel formats.
If the requested pixel format cannot be used, ffmpeg will print a
warning and select the best pixel format supported by the encoder.
If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
if the requested pixel format cannot be used, and automatic conversions
inside filtergraphs are disabled.
    
    If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
    as the input (or graph output) and automatic conversions are disabled.
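
For example, to request the widely supported yuv420p pixel format on the
output (a minimal illustration; the file names are placeholders):
@example
ffmpeg -i input.mov -pix_fmt yuv420p output.mp4
@end example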
    
    
@item -sws_flags @var{flags} (@emph{input/output})
Set SwScaler flags.
    
    @item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
    
Rate control override for specific intervals, formatted as "int,int,int"
list separated with slashes. Two first values are the beginning and
end frame numbers, last one is quantizer to use if positive, or quality
factor if negative.
    
@item -ilme
Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
    Use this option if your input file is interlaced and you want
    to keep the interlaced format for minimum losses.
    The alternative is to deinterlace the input stream with
    @option{-deinterlace}, but deinterlacing introduces losses.
    
    @item -psnr
    
    Calculate PSNR of compressed frames.
    
    @item -vstats
    
    Dump video coding statistics to @file{vstats_HHMMSS.log}.
    
    @item -vstats_file @var{file}
    
    Dump video coding statistics to @var{file}.
    
    @item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
    
    top=1/bottom=0/auto=-1 field first
    
@item -dc @var{precision}
Intra_dc_precision.
    
    @item -vtag @var{fourcc/tag} (@emph{output})
    Force video tag/fourcc. This is an alias for @code{-tag:v}.
    @item -qphist (@emph{global})
    Show QP histogram
    
    @item -vbsf @var{bitstream_filter}
    
Deprecated, see -bsf
    
    @item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
    
    @item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
    
    Force key frames at the specified timestamps, more precisely at the first
    frames after each specified time.
    
    
    If the argument is prefixed with @code{expr:}, the string @var{expr}
    is interpreted like an expression and is evaluated for each frame. A
    key frame is forced in case the evaluation is non-zero.
    
    
    If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
    the time of the beginning of all chapters in the file, shifted by
    @var{delta}, expressed as a time in seconds.
    
    This option can be useful to ensure that a seek point is present at a
    chapter mark or any other designated place in the output file.
    
    
    For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
    before the beginning of every chapter:
    @example
    -force_key_frames 0:05:00,chapters-0.1
    @end example
    
    The expression in @var{expr} can contain the following constants:
    @table @option
    @item n
    the number of current processed frame, starting from 0
    @item n_forced
    the number of forced frames
    @item prev_forced_n
    the number of the previous forced frame, it is @code{NAN} when no
    keyframe was forced yet
    @item prev_forced_t
    the time of the previous forced frame, it is @code{NAN} when no
    keyframe was forced yet
    @item t
    the time of the current processed frame
    @end table
    
    For example to force a key frame every 5 seconds, you can specify:
    @example
    -force_key_frames expr:gte(t,n_forced*5)
    @end example
    
    To force a key frame 5 seconds after the time of the last forced one,
    starting from second 13:
    @example
    -force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
    @end example
    
    Note that forcing too many keyframes is very harmful for the lookahead
    algorithms of certain encoders: using fixed-GOP options or similar
    would be more efficient.
    
    
    @item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
    When doing stream copy, copy also non-key frames found at the
    beginning.
    
    
    @item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
    Use hardware acceleration to decode the matching stream(s). The allowed values
    of @var{hwaccel} are:
    @table @option
    @item none
    Do not use any hardware acceleration (the default).
    
    @item auto
    Automatically select the hardware acceleration method.
    
    @item vda
    Use Apple VDA hardware acceleration.
    
    
    @item vdpau
    Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
    
    
    @item dxva2
    Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
    
    
    @item qsv
    Use the Intel QuickSync Video acceleration for video transcoding.
    
    Unlike most other values, this option does not enable accelerated decoding (that
    is used automatically whenever a qsv decoder is selected), but accelerated
    transcoding, without copying the frames into the system memory.
    
    For it to work, both the decoder and the encoder must support QSV acceleration
    and no filters must be used.
    
    @end table
    
    This option has no effect if the selected hwaccel is not available or not
    supported by the chosen decoder.
    
Note that most acceleration methods are intended for playback and will not be
faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
will usually need to copy the decoded frames from the GPU memory into the system
memory, resulting in further performance loss. This option is thus mainly
useful for testing.
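
For example, to try VDPAU-accelerated decoding on a suitable Linux system
while encoding in software (an illustrative sketch; as noted above, the option
has no effect if the hwaccel is unavailable):
@example
ffmpeg -hwaccel vdpau -i input.mp4 -c:v libx264 output.mkv
@end example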
    
    @item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
    Select a device to use for hardware acceleration.
    
    This option only makes sense when the @option{-hwaccel} option is also
    specified. Its exact meaning depends on the specific hardware acceleration
    method chosen.
    
    
    @table @option
    @item vdpau
For VDPAU, this option specifies the X11 display/screen to use. If this option
is not specified, the value of the @var{DISPLAY} environment variable is used.
    
    
    @item dxva2
    For DXVA2, this option should contain the number of the display adapter to use.
    If this option is not specified, the default adapter is used.
    
    
    @item qsv
For QSV, this option corresponds to the values of MFX_IMPL_*. Allowed values
    are:
    @table @option
    @item auto
    @item sw
    @item hw
    @item auto_any
    @item hw_any
    @item hw2
    @item hw3
    @item hw4
    @end table
    
    
@end table

@item -hwaccels
    List all hardware acceleration methods supported in this build of ffmpeg.
    
    
    @end table
    
    @section Audio Options
    
    @table @option
    
    @item -aframes @var{number} (@emph{output})
    
    Set the number of audio frames to output. This is an alias for @code{-frames:a}.
    
    @item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
    
    Set the audio sampling frequency. For output streams it is set by
    default to the frequency of the corresponding input stream. For input
    streams this option only makes sense for audio grabbing devices and raw
    demuxers and is mapped to the corresponding demuxer options.
    
    @item -aq @var{q} (@emph{output})
    Set the audio quality (codec-specific, VBR). This is an alias for -q:a.
    @item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
    
    Set the number of audio channels. For output streams it is set by
    default to the number of input audio channels. For input streams
    this option only makes sense for audio grabbing devices and raw demuxers
    and is mapped to the corresponding demuxer options.
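
For example, to resample to 44.1 kHz and downmix to stereo (a minimal
illustration; the file names are placeholders):
@example
ffmpeg -i input.wav -ar 44100 -ac 2 output.wav
@end example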
    
    @item -an (@emph{output})
    
    Disable audio recording.
    
    @item -acodec @var{codec} (@emph{input/output})
    Set the audio codec. This is an alias for @code{-codec:a}.
    @item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
    
Set the audio sample format. Use @code{-sample_fmts} to get a list
of supported sample formats.
    
    @item -af @var{filtergraph} (@emph{output})
Create the filtergraph specified by @var{filtergraph} and use it to
filter the stream.
    
    This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
    
@end table

@section Advanced Audio options

@table @option
    
    @item -atag @var{fourcc/tag} (@emph{output})
    Force audio tag/fourcc. This is an alias for @code{-tag:a}.
    
    @item -absf @var{bitstream_filter}
    
    Deprecated, see -bsf
    
    @item -guess_layout_max @var{channels} (@emph{input,per-stream})
    If some input channel layout is not known, try to guess only if it
    corresponds to at most the specified number of channels. For example, 2
tells @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
    stereo but not 6 channels as 5.1. The default is to always try to guess. Use
    0 to disable all guessing.
    
@end table

@section Subtitle options

@table @option
    
    @item -scodec @var{codec} (@emph{input/output})
    Set the subtitle codec. This is an alias for @code{-codec:s}.
    @item -sn (@emph{output})
    
    Disable subtitle recording.
    
    @item -sbsf @var{bitstream_filter}
    
    Deprecated, see -bsf
    
    @end table
    
    
    @section Advanced Subtitle options
    
    
    @table @option
    
    @item -fix_sub_duration
Fix subtitle durations. For each subtitle, wait for the next packet in the
same stream and adjust the duration of the first to avoid overlap. This is
necessary with some subtitle codecs, especially DVB subtitles, because the
    duration in the original packet is only a rough estimate and the end is
    actually marked by an empty subtitle frame. Failing to use this option when
    necessary can result in exaggerated durations or muxing failures due to
    non-monotonic timestamps.
    
    Note that this option will delay the output of all data until the next
    subtitle packet is decoded: it may increase memory consumption and latency a
    lot.
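
A sketch of how this option might be used when transcoding DVB subtitles (the
input name and the choice of subtitle encoder are assumptions for
illustration):
@example
ffmpeg -fix_sub_duration -i input.ts -c:v copy -c:a copy -c:s dvdsub output.mkv
@end example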
    
    
    @item -canvas_size @var{size}
    Set the size of the canvas used to render subtitles.
    
    
@end table

@section Advanced options
    
    @table @option
    
    @item -map [-]@var{input_file_id}[:@var{stream_specifier}][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
    
Designate one or more input streams as a source for the output file. Each input
stream is identified by the input file index @var{input_file_id} and
the input stream index @var{input_stream_id} within the input
file. Both indices start at 0. If specified,
@var{sync_file_id}:@var{stream_specifier} sets which input stream
is used as a presentation sync reference.
    
    
The first @code{-map} option on the command line specifies the
source for output stream 0, the second @code{-map} option specifies
the source for output stream 1, etc.
    
    
    A @code{-} character before the stream identifier creates a "negative" mapping.
    It disables matching streams from already created mappings.
    
    
    An alternative @var{[linklabel]} form will map outputs from complex filter
    graphs (see the @option{-filter_complex} option) to the output file.
    @var{linklabel} must correspond to a defined output link label in the graph.
    
    
    For example, to map ALL streams from the first input file to output
    @example
    ffmpeg -i INPUT -map 0 output
    @end example
    
    
For example, if you have two audio streams in the first input file,
these streams are identified by "0:0" and "0:1". You can use
@code{-map} to select which streams to place in an output file. For
example:
@example
ffmpeg -i INPUT -map 0:1 out.wav
@end example
will map the input stream in @file{INPUT} identified by "0:1" to
the (single) output stream in @file{out.wav}.
    
For example, to select the stream with index 2 from input file
@file{a.mov} (specified by the identifier "0:2"), and stream with
index 6 from input @file{b.mov} (specified by the identifier "1:6"),
and copy them to the output file @file{out.mov}:
@example
ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
@end example

    To select all video and the third audio stream from an input file:
    @example
    ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
    @end example
    
    To map all the streams except the second audio, use negative mappings
    @example
    ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
    @end example
    
    To pick the English audio stream:
@example
ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
@end example

Note that using this option disables the default mappings for this output file.
    
    
    @item -ignore_unknown
    Ignore input streams with unknown type instead of failing if copying
    such streams is attempted.
    
    @item -copy_unknown
    Allow input streams with unknown type to be copied instead of failing if copying
    such streams is attempted.
    
    
    @item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
Map an audio channel from a given input to an output. If
@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
be mapped on all the audio streams.

Using "-1" instead of
@var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
channel.
    
    For example, assuming @var{INPUT} is a stereo audio file, you can switch the
    two audio channels with the following command:
    @example
    ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
    @end example
    
    If you want to mute the first channel and keep the second:
    @example
    ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
    @end example
    
    The order of the "-map_channel" option specifies the order of the channels in
    the output stream. The output channel layout is guessed from the number of
    channels mapped (mono if one "-map_channel", stereo if two, etc.). Using "-ac"
in combination with "-map_channel" makes the channel gain levels be updated if
input and output channel layouts don't match (for instance two "-map_channel"
options and "-ac 6").
    
    You can also extract each channel of an input to specific outputs; the following
    command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
    to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
    
    @example
    ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
    @end example
    
    
    The following example splits the channels of a stereo input into two separate
    streams, which are put into the same output file:
    
    @example
    ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
    @end example
    
    
    Note that currently each output stream can only contain channels from a single
    input stream; you can't for example use "-map_channel" to pick multiple input
    audio channels contained in different streams (from the same or different files)
    and merge them into a single output stream. It is therefore not currently
possible, for example, to turn two separate mono streams into a single stereo
stream. However splitting a stereo stream into two single channel mono streams
is possible.

If you need this feature, a possible workaround is to use the @emph{amerge}
filter. For example, if you need to merge a media (here @file{input.mkv}) with 2
mono audio streams into one single stereo channel audio stream (and keep the
video stream), you can use the following command:

@example
ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
@end example
    
    @item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
    
    Set metadata information of the next output file from @var{infile}. Note that
    those are file indices (zero-based), not filenames.
    
Optional @var{metadata_spec_in/out} parameters specify which metadata to copy.
    A metadata specifier can have the following forms:
    @table @option
    @item @var{g}
    global metadata, i.e. metadata that applies to the whole file
    
    @item @var{s}[:@var{stream_spec}]
    per-stream metadata. @var{stream_spec} is a stream specifier as described
    in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
    matching stream is copied from. In an output metadata specifier, all matching
    streams are copied to.
    
    @item @var{c}:@var{chapter_index}
    per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
    
    @item @var{p}:@var{program_index}
    per-program metadata. @var{program_index} is the zero-based program index.
    @end table
    If metadata specifier is omitted, it defaults to global.
    
By default, global metadata is copied from the first input file,
per-stream and per-chapter metadata is copied along with streams/chapters. These
    default mappings are disabled by creating any mapping of the relevant type. A negative
    file index can be used to create a dummy mapping that just disables automatic copying.
    
    
    For example to copy metadata from the first stream of the input file to global metadata
    of the output file:
@example
ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
@end example

    To do the reverse, i.e. copy global metadata to all audio streams:
    @example
    ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
    @end example
    Note that simple @code{0} would work as well in this example, since global
    metadata is assumed by default.
    
    
    @item -map_chapters @var{input_file_index} (@emph{output})
    Copy chapters from input file with index @var{input_file_index} to the next
    output file. If no chapter mapping is specified, then chapters are copied from
    the first input file with at least one chapter. Use a negative file index to
    disable any chapter copying.
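
For example, to take all streams from the second input but chapters from the
first (an illustrative sketch; the file names are placeholders):
@example
ffmpeg -i chapters.mkv -i video.mkv -map 1 -map_chapters 0 -c copy out.mkv
@end example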
    
    @item -benchmark (@emph{global})
    
    Show benchmarking information at the end of an encode.
    Shows CPU time used and maximum memory consumption.
    Maximum memory consumption is not supported on all systems,
    it will usually display as 0 if not supported.
    
    @item -benchmark_all (@emph{global})
    Show benchmarking information during the encode.
    Shows CPU time used in various steps (audio/video encode/decode).
    
    @item -timelimit @var{duration} (@emph{global})
    Exit after ffmpeg has been running for @var{duration} seconds.
    
    @item -dump (@emph{global})