@chapter Filtergraph description
    @c man begin FILTERGRAPH DESCRIPTION
    
    A filtergraph is a directed graph of connected filters. It can contain
    cycles, and there can be multiple links between a pair of
filters. Each link has one input pad on one side connecting it to one
filter from which it takes its input, and one output pad on the other
side connecting it to one filter accepting its output.
    
    Each filter in a filtergraph is an instance of a filter class
    registered in the application, which defines the features and the
    number of input and output pads of the filter.
    
    A filter with no input pads is called a "source", a filter with no
    output pads is called a "sink".
    
    
    @anchor{Filtergraph syntax}
    
    @section Filtergraph syntax
    
    
A filtergraph can be represented using a textual representation, which is
recognized by the @option{-filter}/@option{-vf} and @option{-filter_complex}
options in @command{ffmpeg} and @option{-vf} in @command{ffplay}, and by the
@code{avfilter_graph_parse()}/@code{avfilter_graph_parse2()} functions defined in
@file{libavfilter/avfiltergraph.h}.
    
    
    A filterchain consists of a sequence of connected filters, each one
    connected to the previous one in the sequence. A filterchain is
    represented by a list of ","-separated filter descriptions.
    
    A filtergraph consists of a sequence of filterchains. A sequence of
    filterchains is represented by a list of ";"-separated filterchain
    descriptions.
    
    A filter is represented by a string of the form:
    [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
    
@var{filter_name} is the name of the filter class of which the
described filter is an instance, and has to be the name of one of
the filter classes registered in the program.
    The name of the filter class is optionally followed by a string
    "=@var{arguments}".
    
@var{arguments} is a string which contains the parameters used to
initialize the filter instance. They are described in the filter
descriptions below.
    
    The list of arguments can be quoted using the character "'" as initial
    and ending mark, and the character '\' for escaping the characters
    within the quoted text; otherwise the argument string is considered
    terminated when the next special character (belonging to the set
    "[]=;,") is encountered.
    
    The name and arguments of the filter are optionally preceded and
    followed by a list of link labels.
A link label allows one to name a link and associate it to a filter output
or input pad. The preceding labels @var{in_link_1}
    ... @var{in_link_N}, are associated to the filter input pads,
    the following labels @var{out_link_1} ... @var{out_link_M}, are
    associated to the output pads.
    
    When two link labels with the same name are found in the
    filtergraph, a link between the corresponding input and output pad is
    created.
    
    If an output pad is not labelled, it is linked by default to the first
    unlabelled input pad of the next filter in the filterchain.
    For example in the filterchain:
    @example
    nullsrc, split[L1], [L2]overlay, nullsink
    @end example
    the split filter instance has two output pads, and the overlay filter
    instance two input pads. The first output pad of split is labelled
    "L1", the first input pad of overlay is labelled "L2", and the second
    output pad of split is linked to the second input pad of overlay,
    which are both unlabelled.
    
    In a complete filterchain all the unlabelled filter input and output
    pads must be connected. A filtergraph is considered valid if all the
    filter input and output pads of all the filterchains are connected.
    
    
    Libavfilter will automatically insert scale filters where format
    conversion is required. It is possible to specify swscale flags
    for those automatically inserted scalers by prepending
    @code{sws_flags=@var{flags};}
    to the filtergraph description.
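
For example, the following graph description (an illustrative sketch) asks
that any scaler inserted automatically to satisfy the @code{format} filter
use bicubic scaling:
@example
sws_flags=bicubic; [in] format=yuv420p [out]
@end example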
    
    
A BNF description of the filtergraph syntax follows:
    @example
@var{NAME}             ::= sequence of alphanumeric characters and '_'
@var{LINKLABEL}        ::= "[" @var{NAME} "]"
@var{LINKLABELS}       ::= @var{LINKLABEL} [@var{LINKLABELS}]
@var{FILTER_ARGUMENTS} ::= sequence of chars (possibly quoted)
@var{FILTER}           ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
@var{FILTERCHAIN}      ::= @var{FILTER} [,@var{FILTERCHAIN}]
@var{FILTERGRAPH}      ::= [sws_flags=@var{flags};] @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
    @end example
    
    @c man end FILTERGRAPH DESCRIPTION
    
    
    @chapter Audio Filters
    @c man begin AUDIO FILTERS
    
    
When you configure your FFmpeg build, you can disable any of the
existing filters using @code{--disable-filters}.
    
    The configure output will show the audio filters included in your
    build.
    
    Below is a description of the currently available audio filters.
    
    
    @section aconvert
    
Convert the input audio format to the specified format.

The filter accepts a string of the form:
"@var{sample_format}:@var{channel_layout}".
    
    @var{sample_format} specifies the sample format, and can be a string or the
    corresponding numeric value defined in @file{libavutil/samplefmt.h}. Use 'p'
    suffix for a planar sample format.
    
    
@var{channel_layout} specifies the channel layout, and can be a string
or the corresponding numeric value defined in @file{libavutil/audioconvert.h}.
    
    
The special parameter "auto" signifies that the filter will
automatically select the output format depending on the output filter.
    
    Some examples follow.
    
    @itemize
@item
Convert input to float, planar, stereo:
@example
aconvert=fltp:stereo
@end example

@item
Convert input to unsigned 8-bit, automatically select output channel layout:
@example
aconvert=u8:auto
@end example
@end itemize
    
    
    @section aformat
    
    Convert the input audio to one of the specified formats. The framework will
    negotiate the most appropriate format to minimize conversions.
    
    The filter accepts the following named parameters:
    @table @option
    
    @item sample_fmts
    A comma-separated list of requested sample formats.
    
    @item sample_rates
    A comma-separated list of requested sample rates.
    
    @item channel_layouts
    A comma-separated list of requested channel layouts.
    
    @end table
    
    If a parameter is omitted, all values are allowed.
    
For example, to force the output to either unsigned 8-bit or signed 16-bit stereo:
    @example
    aformat=sample_fmts\=u8\,s16:channel_layouts\=stereo
    @end example
    
    
    @section amerge
    
    
    Merge two or more audio streams into a single multi-channel stream.
    
    The filter accepts the following named options:
    
    @table @option
    
    @item inputs
    Set the number of inputs. Default is 2.
    
    @end table
    
    
    If the channel layouts of the inputs are disjoint, and therefore compatible,
    the channel layout of the output will be set accordingly and the channels
    will be reordered as necessary. If the channel layouts of the inputs are not
    disjoint, the output will have all the channels of the first input then all
    the channels of the second input, in that order, and the channel layout of
    the output will be the default value corresponding to the total number of
    channels.
    
    For example, if the first input is in 2.1 (FL+FR+LF) and the second input
    is FC+BL+BR, then the output will be in 5.1, with the channels in the
    following order: a1, a2, b1, a3, b2, b3 (a1 is the first channel of the
    first input, b1 is the first channel of the second input).
    
On the other hand, if both inputs are in stereo, the output channels will be
    in the default order: a1, a2, b1, b2, and the channel layout will be
    arbitrarily set to 4.0, which may or may not be the expected value.
    
    
All inputs must have the same sample rate and format.
    
    
    If inputs do not have the same duration, the output will stop with the
    shortest.
    
    Example: merge two mono files into a stereo stream:
    @example
    amovie=left.wav [l] ; amovie=right.mp3 [r] ; [l] [r] amerge
    @end example
    
    
    Example: multiple merges:
    
    @example
    ffmpeg -f lavfi -i "
    amovie=input.mkv:si=0 [a0];
    amovie=input.mkv:si=1 [a1];
    amovie=input.mkv:si=2 [a2];
    amovie=input.mkv:si=3 [a3];
    amovie=input.mkv:si=4 [a4];
    amovie=input.mkv:si=5 [a5];
    
[a0][a1][a2][a3][a4][a5] amerge=inputs=6" -c:a pcm_s16le output.mkv
@end example

    @section amix
    
    Mixes multiple audio inputs into a single output.
    
For example
@example
ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
@end example
    will mix 3 input audio streams to a single output with the same duration as the
    first input and a dropout transition time of 3 seconds.
    
    The filter accepts the following named parameters:
    @table @option
    
    @item inputs
    Number of inputs. If unspecified, it defaults to 2.
    
    @item duration
    How to determine the end-of-stream.
    @table @option
    
    @item longest
    Duration of longest input. (default)
    
    @item shortest
    Duration of shortest input.
    
    @item first
    Duration of first input.
    
    @end table
    
    @item dropout_transition
    Transition time, in seconds, for volume renormalization when an input
    stream ends. The default value is 2 seconds.
    
    @end table
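
As a further illustrative sketch using only the options documented above,
mixing two inputs and stopping at the end of the shortest one could look like:
@example
ffmpeg -i INPUT1 -i INPUT2 -filter_complex amix=inputs=2:duration=shortest OUTPUT
@end example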
    
    
    @section anull
    
    Pass the audio source unchanged to the output.
    
    
    @section aresample
    
    Resample the input audio to the specified sample rate.
    
    The filter accepts exactly one parameter, the output sample rate. If not
    specified then the filter will automatically convert between its input
    and output sample rates.
    
    For example, to resample the input audio to 44100Hz:
    @example
    aresample=44100
    @end example
    
    
    @section ashowinfo
    
    Show a line containing various information for each input audio frame.
    The input audio is not modified.
    
    The shown line contains a sequence of key/value pairs of the form
    @var{key}:@var{value}.
    
    A description of each shown parameter follows:
    
    @table @option
    @item n
    sequential number of the input frame, starting from 0
    
    @item pts
    presentation TimeStamp of the input frame, expressed as a number of
    time base units. The time base unit depends on the filter input pad, and
    is usually 1/@var{sample_rate}.
    
    @item pts_time
    presentation TimeStamp of the input frame, expressed as a number of
    seconds
    
@item pos
position of the frame in the input stream, -1 if this information is
unavailable and/or meaningless (for example in case of synthetic audio)
    
    
    @item fmt
    sample format name
    
    @item chlayout
    channel layout description
    
    @item nb_samples
    number of samples (per each channel) contained in the filtered frame
    
    @item rate
    sample rate for the audio frame
    
@item checksum
Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame

@item plane_checksum
Adler-32 checksum (printed in hexadecimal) for each input frame plane,
expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3} @var{c4} @var{c5}
@var{c6} @var{c7}]"
@end table
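
One possible way to inspect the frames of an audio stream with this filter
(a usage sketch that discards the actual output) is:
@example
ffmpeg -i INPUT -af ashowinfo -f null -
@end example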
    
    @section asplit
    
    
    Split input audio into several identical outputs.
    
    The filter accepts a single parameter which specifies the number of outputs. If
    unspecified, it defaults to 2.
    
    
For example:
@example
[in] asplit [out0][out1]
@end example
will create two separate outputs from the same input.
    
    To create 3 or more outputs, you need to specify the number of
    outputs, like in:
    @example
    [in] asplit=3 [out0][out1][out2]
    @end example
    
    @example
ffmpeg -i INPUT -filter_complex asplit=5 OUTPUT
@end example
    will create 5 copies of the input audio.
    
    
    @section astreamsync
    
    Forward two audio streams and control the order the buffers are forwarded.
    
    The argument to the filter is an expression deciding which stream should be
    forwarded next: if the result is negative, the first stream is forwarded; if
    the result is positive or zero, the second stream is forwarded. It can use
    the following variables:
    
    @table @var
    @item b1 b2
    number of buffers forwarded so far on each stream
    @item s1 s2
    number of samples forwarded so far on each stream
    @item t1 t2
    current timestamp of each stream
    @end table
    
    The default value is @code{t1-t2}, which means to always forward the stream
    that has a smaller timestamp.
    
    Example: stress-test @code{amerge} by randomly sending buffers on the wrong
    input, while avoiding too much of a desynchronization:
    @example
    amovie=file.ogg [a] ; amovie=file.mp3 [b] ;
    [a] [b] astreamsync=(2*random(1))-1+tanh(5*(t1-t2)) [a2] [b2] ;
    [a2] [b2] amerge
    @end example
    
    
    @section earwax
    
    Make audio easier to listen to on headphones.
    
    This filter adds `cues' to 44.1kHz stereo (i.e. audio CD format) audio
    so that when listened to on headphones the stereo image is moved from
    inside your head (standard for headphones) to outside and in front of
    the listener (standard for speakers).
    
    Ported from SoX.
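
A usage sketch (assuming the input is already 44.1kHz stereo, as required by
the filter) could be:
@example
ffmpeg -i INPUT -af earwax OUTPUT
@end example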
    
    
    @section pan
    
Mix channels with specific gain levels. The filter accepts the output
channel layout followed by a set of channel definitions.

This filter is also designed to efficiently remap the channels of an audio
stream.

The filter accepts parameters of the form:
"@var{l}:@var{outdef}:@var{outdef}:..."
    
    @table @option
    @item l
    output channel layout or number of channels
    
    @item outdef
    output channel specification, of the form:
    "@var{out_name}=[@var{gain}*]@var{in_name}[+[@var{gain}*]@var{in_name}...]"
    
    @item out_name
    output channel to define, either a channel name (FL, FR, etc.) or a channel
    number (c0, c1, etc.)
    
    @item gain
    multiplicative coefficient for the channel, 1 leaving the volume unchanged
    
    @item in_name
    input channel to use, see out_name for details; it is not possible to mix
    named and numbered input channels
    @end table
    
    If the `=' in a channel specification is replaced by `<', then the gains for
    that specification will be renormalized so that the total is 1, thus
    avoiding clipping noise.
    
    
    @subsection Mixing examples
    
    
    For example, if you want to down-mix from stereo to mono, but with a bigger
    factor for the left channel:
    @example
    pan=1:c0=0.9*c0+0.1*c1
    @end example
    
A customized down-mix to stereo that works automatically for 3-, 4-, 5- and
7-channel surround:
    @example
    pan=stereo: FL < FL + 0.5*FC + 0.6*BL + 0.6*SL : FR < FR + 0.5*FC + 0.6*BR + 0.6*SR
    @end example
    
    
Note that @command{ffmpeg} integrates a default down-mix (and up-mix) system
that should be preferred (see "-ac" option) unless you have very specific
needs.
    
    
    @subsection Remapping examples
    
    The channel remapping will be effective if, and only if:
    
@itemize
@item gain coefficients are zeroes or ones,
@item there is only one input per output channel.
@end itemize
    
    If all these conditions are satisfied, the filter will notify the user ("Pure
    channel mapping detected"), and use an optimized and lossless method to do the
    remapping.
    
    For example, if you have a 5.1 source and want a stereo audio stream by
    dropping the extra channels:
    @example
    pan="stereo: c0=FL : c1=FR"
    @end example
    
    Given the same source, you can also switch front left and front right channels
    and keep the input channel layout:
    @example
    pan="5.1: c0=c1 : c1=c0 : c2=c2 : c3=c3 : c4=c4 : c5=c5"
    @end example
    
    If the input is a stereo audio stream, you can mute the front left channel (and
    still keep the stereo channel layout) with:
    @example
    pan="stereo:c1=c1"
    @end example
    
    Still with a stereo audio stream input, you can copy the right channel in both
    front left and right:
    @example
    pan="stereo: c0=FR : c1=FR"
    @end example
    
    
    @section silencedetect
    
    Detect silence in an audio stream.
    
This filter logs a message when it detects that the input audio volume is
less than or equal to a noise tolerance value for a duration greater than or
equal to the minimum detected noise duration.
    
    The printed times and duration are expressed in seconds.
    
    @table @option
    @item duration, d
    Set silence duration until notification (default is 2 seconds).
    
    @item noise, n
    Set noise tolerance. Can be specified in dB (in case "dB" is appended to the
    specified value) or amplitude ratio. Default is -60dB, or 0.001.
    @end table
    
    Detect 5 seconds of silence with -50dB noise tolerance:
    @example
    silencedetect=n=-50dB:d=5
    @end example
    
    Complete example with @command{ffmpeg} to detect silence with 0.0001 noise
    tolerance in @file{silence.mp3}:
    @example
    ffmpeg -f lavfi -i amovie=silence.mp3,silencedetect=noise=0.0001 -f null -
    @end example
    
    
    @section volume
    
    Adjust the input audio volume.
    
The filter accepts exactly one parameter @var{vol}, which expresses
how the audio volume will be increased or decreased.

Output values are clipped to the maximum value.
    
    
If @var{vol} is expressed as a decimal number, the output audio
volume is given by the relation:
    @example
    @var{output_volume} = @var{vol} * @var{input_volume}
    @end example
    
    If @var{vol} is expressed as a decimal number followed by the string
    "dB", the value represents the requested change in decibels of the
    input audio power, and the output audio volume is given by the
    relation:
    @example
    @var{output_volume} = 10^(@var{vol}/20) * @var{input_volume}
    @end example
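
For instance, under this relation a value of "-6dB" corresponds to
multiplying the input volume by 10^(-6/20), which is roughly 0.5.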
    
    Otherwise @var{vol} is considered an expression and its evaluated
    value is used for computing the output audio volume according to the
    first relation.
    
    Default value for @var{vol} is 1.0.
    
    @subsection Examples
    
    @itemize
    @item
Halve the input audio volume:
    @example
    volume=0.5
    @end example
    
    The above example is equivalent to:
    @example
    volume=1/2
    @end example
    
    @item
    Decrease input audio power by 12 decibels:
    @example
    volume=-12dB
    @end example
    @end itemize
    
    
    @section asyncts
    Synchronize audio data with timestamps by squeezing/stretching it and/or
    dropping samples/adding silence when needed.
    
    The filter accepts the following named parameters:
    @table @option
    
    @item compensate
    Enable stretching/squeezing the data to make it match the timestamps.
    
    @item min_delta
    Minimum difference between timestamps and audio data (in seconds) to trigger
    adding/dropping samples.
    
    @item max_comp
    Maximum compensation in samples per second.
    
    @end table
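
A usage sketch enabling compensation (the parameter value is chosen only for
illustration) could be:
@example
ffmpeg -i INPUT -af asyncts=compensate=1 OUTPUT
@end example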
    
    
    @section channelsplit
Split each channel from the input audio stream into a separate output stream.
    
    This filter accepts the following named parameters:
    @table @option
    @item channel_layout
    Channel layout of the input stream. Default is "stereo".
    @end table
    
For example, assuming a stereo input MP3 file
@example
ffmpeg -i in.mp3 -filter_complex channelsplit out.mkv
@end example
    will create an output Matroska file with two audio streams, one containing only
    the left channel and the other the right channel.
    
    To split a 5.1 WAV file into per-channel files
@example
ffmpeg -i in.wav -filter_complex
'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'
    -map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]'
    front_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]'
    side_right.wav
    @end example
    
    
    @section resample
Convert the audio sample format, sample rate and channel layout. This filter is
not meant to be used directly.
    
    @c man end AUDIO FILTERS
    
    
    @chapter Audio Sources
    @c man begin AUDIO SOURCES
    
    Below is a description of the currently available audio sources.
    
    
    @section abuffer
    
    Buffer audio frames, and make them available to the filter chain.
    
This source is mainly intended for programmatic use, in particular
    through the interface defined in @file{libavfilter/asrc_abuffer.h}.
    
    It accepts the following mandatory parameters:
    
    @var{sample_rate}:@var{sample_fmt}:@var{channel_layout}
    
    
    @table @option
    
    @item sample_rate
    The sample rate of the incoming audio buffers.
    
    @item sample_fmt
    The sample format of the incoming audio buffers.
Either a sample format name or its corresponding integer representation from
the enum AVSampleFormat in @file{libavutil/samplefmt.h}
    
    @item channel_layout
    The channel layout of the incoming audio buffers.
    Either a channel layout name from channel_layout_map in
    @file{libavutil/audioconvert.c} or its corresponding integer representation
    from the AV_CH_LAYOUT_* macros in @file{libavutil/audioconvert.h}
    
    @end table
    
For example:
@example
abuffer=44100:s16p:stereo
@end example

will instruct the source to accept planar signed 16-bit stereo audio at
44100 Hz. Since the sample format with name "s16p" corresponds to the
number 6 and the "stereo" channel layout corresponds to the value 0x3,
this is equivalent to:
@example
abuffer=44100:6:0x3
@end example

    @section aevalsrc
    
    Generate an audio signal specified by an expression.
    
This source accepts one or more expressions (one for each channel),
which are evaluated and used to generate a corresponding audio signal.

It accepts the syntax: @var{exprs}[::@var{options}].
@var{exprs} is a list of expressions separated by ":", one for each
separate channel. In case the @var{channel_layout} option is not
specified, the selected channel layout depends on the number of
provided expressions.
    
    
    @var{options} is an optional sequence of @var{key}=@var{value} pairs,
    separated by ":".
    
    The description of the accepted options follows.
    
    @table @option
    
    
    @item channel_layout, c
    Set the channel layout. The number of channels in the specified layout
    must be equal to the number of specified expressions.
    
    
    @item duration, d
    Set the minimum duration of the sourced audio. See the function
    @code{av_parse_time()} for the accepted format.
    Note that the resulting duration may be greater than the specified
    duration, as the generated audio is always cut at the end of a
    complete frame.
    
    If not specified, or the expressed duration is negative, the audio is
    supposed to be generated forever.
    
    
@item nb_samples, n
Set the number of samples per channel in each output frame. Defaults
to 1024.

@item sample_rate, s
Specify the sample rate. Defaults to 44100.
    @end table
    
    Each expression in @var{exprs} can contain the following constants:
    
    @table @option
    @item n
    number of the evaluated sample, starting from 0
    
    @item t
    time of the evaluated sample expressed in seconds, starting from 0
    
    @item s
    sample rate
    
    @end table
    
    @subsection Examples
    
    @itemize
    
    @item
    Generate silence:
    @example
    aevalsrc=0
    @end example
    
@item
Generate a sine signal with a frequency of 440 Hz and a sample rate of
8000 Hz:
@example
aevalsrc="sin(440*2*PI*t)::s=8000"
@end example
    
    
    @item
Generate a two-channel signal, specifying the channel layout (Front
Center + Back Center) explicitly:
    @example
    aevalsrc="sin(420*2*PI*t):cos(430*2*PI*t)::c=FC|BC"
    @end example
    
    
    @item
    Generate white noise:
    @example
    aevalsrc="-2+random(0)"
    @end example
    
    @item
    Generate an amplitude modulated signal:
    @example
    aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
    @end example
    
    @item
    Generate 2.5 Hz binaural beats on a 360 Hz carrier:
@example
aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) : 0.1*sin(2*PI*(360+2.5/2)*t)"
@end example
    
    @end itemize
    
    
    @section amovie
    
    Read an audio stream from a movie container.
    
    It accepts the syntax: @var{movie_name}[:@var{options}] where
    @var{movie_name} is the name of the resource to read (not necessarily
    a file but also a device or a stream accessed through some protocol),
    and @var{options} is an optional sequence of @var{key}=@var{value}
    pairs, separated by ":".
    
    The description of the accepted options follows.
    
    @table @option
    
    @item format_name, f
    Specify the format assumed for the movie to read, and can be either
    the name of a container or an input device. If not specified the
    format is guessed from @var{movie_name} or by probing.
    
@item seek_point, sp
Specify the seek point in seconds. The frames will be output starting
from this seek point. The parameter is evaluated with @code{av_strtod},
so the numerical value may be suffixed by an IS postfix. Default value
is "0".
    
    @item stream_index, si
    Specify the index of the audio stream to read. If the value is -1,
    the best suited audio stream will be automatically selected. Default
    value is "-1".
    
    @end table
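
For example, to read the audio stream with index 1 from @file{video.avi}
(an illustrative sketch):
@example
amovie=video.avi:si=1
@end example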
    
    
@section anullsrc

Null audio source, return unprocessed audio frames. It is mainly useful
    as a template and to be employed in analysis / debugging tools, or as
    the source for filters which ignore the input data (for example the sox
    synth filter).
    
    It accepts an optional sequence of @var{key}=@var{value} pairs,
    separated by ":".
    
    The description of the accepted options follows.
    
    @table @option
    
    @item sample_rate, s
    Specify the sample rate, and defaults to 44100.
    
@item channel_layout, cl
Specify the channel layout, and can be either an integer or a string
representing a channel layout. The default value of @var{channel_layout}
is "stereo".

Check the channel_layout_map definition in
@file{libavutil/audioconvert.c} for the mapping between strings and
channel layout values.
    
    
@item nb_samples, n
Set the number of samples per requested frame.
@end table

Some examples follow:
@example
# set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
anullsrc=r=48000:cl=4
@end example
    
    @section abuffer
    Buffer audio frames, and make them available to the filter chain.
    
    This source is not intended to be part of user-supplied graph descriptions but
    for insertion by calling programs through the interface defined in
    @file{libavfilter/buffersrc.h}.
    
    It accepts the following named parameters:
    @table @option
    
    @item time_base
    Timebase which will be used for timestamps of submitted frames. It must be
    either a floating-point number or in @var{numerator}/@var{denominator} form.
    
    @item sample_rate
    Audio sample rate.
    
    @item sample_fmt
    Name of the sample format, as returned by @code{av_get_sample_fmt_name()}.
    
    @item channel_layout
    Channel layout of the audio data, in the form that can be accepted by
    @code{av_get_channel_layout()}.
    @end table
    
    All the parameters need to be explicitly defined.
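
As an illustrative sketch of the textual form (normally constructed by the
calling program rather than written by hand), all four parameters could be
given as:
@example
abuffer=time_base=1/44100:sample_rate=44100:sample_fmt=s16p:channel_layout=stereo
@end example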
    
    
    @c man end AUDIO SOURCES
    
    
    @chapter Audio Sinks
    @c man begin AUDIO SINKS
    
    Below is a description of the currently available audio sinks.
    
    
    @section abuffersink
    
Buffer audio frames, and make them available to the end of the filter chain.
    
This sink is mainly intended for programmatic use, in particular
through the interface defined in @file{libavfilter/buffersink.h}.
    
    It requires a pointer to an AVABufferSinkContext structure, which
    defines the incoming buffers' formats, to be passed as the opaque
    parameter to @code{avfilter_init_filter} for initialization.
    
    @section anullsink
    
    Null audio sink, do absolutely nothing with the input audio. It is
    mainly useful as a template and to be employed in analysis / debugging
    tools.
    
    
    @section abuffersink
    This sink is intended for programmatic use. Frames that arrive on this sink can
    be retrieved by the calling program using the interface defined in
    @file{libavfilter/buffersink.h}.
    
    This filter accepts no parameters.
    
    
    @chapter Video Filters
    @c man begin VIDEO FILTERS
    
    
When you configure your FFmpeg build, you can disable any of the
existing filters using @code{--disable-filters}.
    
    The configure output will show the video filters included in your
    build.
    
    Below is a description of the currently available video filters.
    
    
    @section ass
    
    Draw ASS (Advanced Substation Alpha) subtitles on top of input video
    using the libass library.
    
    To enable compilation of this filter you need to configure FFmpeg with
    @code{--enable-libass}.
    
    
    This filter accepts the syntax: @var{ass_filename}[:@var{options}],
    where @var{ass_filename} is the filename of the ASS file to read, and
    @var{options} is an optional sequence of @var{key}=@var{value} pairs,
    separated by ":".
    
    A description of the accepted options follows.
    
    @table @option
    
    @item original_size
    Specifies the size of the original video, the video for which the ASS file
    was composed. Due to a misdesign in ASS aspect ratio arithmetic, this is
    necessary to correctly scale the fonts if the aspect ratio has been changed.
    
    @end table
    
    
    For example, to render the file @file{sub.ass} on top of the input
    video, use the command:
    @example
    ass=sub.ass
    @end example
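
If the subtitles were composed for a video of a different size, the
@option{original_size} option can be added; a hedged sketch, assuming the
ASS script was authored for a 1280x720 video and that the size is given in
the usual @var{width}x@var{height} form:
@example
ass=sub.ass:original_size=1280x720
@end example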
    
    @section bbox
    
    Compute the bounding box for the non-black pixels in the input frame
    luminance plane.
    
    This filter computes the bounding box containing all the pixels with a
    luminance value greater than the minimum allowed value.
    The parameters describing the bounding box are printed on the filter
    log.
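
A usage sketch that only computes and logs the bounding boxes, discarding
the processed video:
@example
ffmpeg -i INPUT -vf bbox -an -f null -
@end example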
    
    
    @section blackdetect
    
    Detect video intervals that are (almost) completely black. Can be
    useful to detect chapter transitions, commercials, or invalid
recordings. Output lines contain the time for the start, end and
    duration of the detected black interval expressed in seconds.
    
    In order to display the output lines, you need to set the loglevel at
    least to the AV_LOG_INFO value.
    
    This filter accepts a list of options in the form of
    @var{key}=@var{value} pairs separated by ":". A description of the
    accepted options follows.
    
    @table @option
    @item black_min_duration, d
    Set the minimum detected black duration expressed in seconds. It must
    be a non-negative floating point number.
    
    Default value is 2.0.
    
    @item picture_black_ratio_th, pic_th
    Set the threshold for considering a picture "black".
    Express the minimum value for the ratio:
    @example
    @var{nb_black_pixels} / @var{nb_pixels}
    @end example
    
    for which a picture is considered black.
    Default value is 0.98.
    
    @item pixel_black_th, pix_th
    Set the threshold for considering a pixel "black".
    
    The threshold expresses the maximum pixel luminance value for which a
    pixel is considered "black". The provided value is scaled according to
    the following equation:
    @example
    @var{absolute_threshold} = @var{luminance_minimum_value} + @var{pixel_black_th} * @var{luminance_range_size}
    @end example
    
    @var{luminance_range_size} and @var{luminance_minimum_value} depend on
    the input video format, the range is [0-255] for YUV full-range
    formats and [16-235] for YUV non full-range formats.
    
    Default value is 0.10.
    @end table
    
    The following example sets the maximum pixel threshold to the minimum
    value, and detects only black intervals of 2 or more seconds:
    @example
    blackdetect=d=2:pix_th=0.00
    @end example
    
    @section blackframe
    
    Detect frames that are (almost) completely black. Can be useful to
    detect chapter transitions or commercials. Output lines consist of
    the frame number of the detected frame, the percentage of blackness,
    the position in the file if known or -1 and the timestamp in seconds.
    
    In order to display the output lines, you need to set the loglevel at