@chapter Filtergraph description
    @c man begin FILTERGRAPH DESCRIPTION
    
    A filtergraph is a directed graph of connected filters. It can contain
    cycles, and there can be multiple links between a pair of
    filters. Each link has one input pad on one side connecting it to one
filter from which it takes its input, and one output pad on the other
side connecting it to one filter accepting its output.
    
    
    Each filter in a filtergraph is an instance of a filter class
    registered in the application, which defines the features and the
    number of input and output pads of the filter.
    
    
A filter with no input pads is called a "source", and a filter with no
output pads is called a "sink".
    
    
    @anchor{Filtergraph syntax}
    
    @section Filtergraph syntax
    
    
A filtergraph has a textual representation, which is
recognized by the @option{-filter}/@option{-vf} and @option{-filter_complex}
options in @command{avconv} and @option{-vf} in @command{avplay}, and by the
@code{avfilter_graph_parse()}/@code{avfilter_graph_parse2()} functions defined in
@file{libavfilter/avfilter.h}.
    
    
    A filterchain consists of a sequence of connected filters, each one
    connected to the previous one in the sequence. A filterchain is
    represented by a list of ","-separated filter descriptions.
    
    A filtergraph consists of a sequence of filterchains. A sequence of
    filterchains is represented by a list of ";"-separated filterchain
    descriptions.
    
    A filter is represented by a string of the form:
    [@var{in_link_1}]...[@var{in_link_N}]@var{filter_name}=@var{arguments}[@var{out_link_1}]...[@var{out_link_M}]
    
@var{filter_name} is the name of the filter class of which the
described filter is an instance, and has to be the name of one of
the filter classes registered in the program.
    The name of the filter class is optionally followed by a string
    "=@var{arguments}".
    
@var{arguments} is a string which contains the parameters used to
initialize the filter instance. It may have one of two forms:
    
    @itemize
    
    @item
    A ':'-separated list of @var{key=value} pairs.
    
    @item
    A ':'-separated list of @var{value}. In this case, the keys are assumed to be
    the option names in the order they are declared. E.g. the @code{fade} filter
    declares three options in this order -- @option{type}, @option{start_frame} and
    @option{nb_frames}. Then the parameter list @var{in:0:30} means that the value
    @var{in} is assigned to the option @option{type}, @var{0} to
    @option{start_frame} and @var{30} to @option{nb_frames}.
    
    @end itemize
    
    If the option value itself is a list of items (e.g. the @code{format} filter
    takes a list of pixel formats), the items in the list are usually separated by
    '|'.
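
For example, given the @code{fade} options listed above, the following two
descriptions denote the same filter instance:

@example
fade=in:0:30
fade=type=in:start_frame=0:nb_frames=30
@end example

Similarly, a list-valued option can be given positionally; an illustrative
@code{format} instance could be (the pixel format names are only examples):

@example
format=yuv420p|yuv422p
@end example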
    
    
    The list of arguments can be quoted using the character "'" as initial
    and ending mark, and the character '\' for escaping the characters
    within the quoted text; otherwise the argument string is considered
    terminated when the next special character (belonging to the set
    "[]=;,") is encountered.
    
    The name and arguments of the filter are optionally preceded and
    followed by a list of link labels.
A link label allows one to name a link and associate it to a filter output
or input pad. The preceding labels @var{in_link_1}
... @var{in_link_N} are associated to the filter input pads,
the following labels @var{out_link_1} ... @var{out_link_M} are
associated to the output pads.
    
    When two link labels with the same name are found in the
    filtergraph, a link between the corresponding input and output pad is
    created.
    
    If an output pad is not labelled, it is linked by default to the first
    unlabelled input pad of the next filter in the filterchain.
    
    For example in the filterchain
    
    @example
    nullsrc, split[L1], [L2]overlay, nullsink
    @end example
    the split filter instance has two output pads, and the overlay filter
    instance two input pads. The first output pad of split is labelled
    "L1", the first input pad of overlay is labelled "L2", and the second
    output pad of split is linked to the second input pad of overlay,
    which are both unlabelled.
    
    In a complete filterchain all the unlabelled filter input and output
    pads must be connected. A filtergraph is considered valid if all the
    filter input and output pads of all the filterchains are connected.
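
As an illustrative sketch, two ";"-separated filterchains can be joined
through matching link labels (the filters chosen here are arbitrary):

@example
asplit[a][b]; [a][b]amix
@end example

The two outputs of @code{asplit} are labelled "a" and "b", and the second
chain picks them up as the two inputs of @code{amix}.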
    
    
Libavfilter will automatically insert @ref{scale} filters where format
conversion is required. It is possible to specify swscale flags
    for those automatically inserted scalers by prepending
    @code{sws_flags=@var{flags};}
    to the filtergraph description.
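
For example, a description of the following form requests bicubic scaling for
any automatically inserted scalers (the rest of the graph is only a
placeholder):

@example
sws_flags=bicubic; format=yuv420p
@end example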
    
    
    Here is a BNF description of the filtergraph syntax:
    
    @example
    @var{NAME}             ::= sequence of alphanumeric characters and '_'
    @var{LINKLABEL}        ::= "[" @var{NAME} "]"
    @var{LINKLABELS}       ::= @var{LINKLABEL} [@var{LINKLABELS}]
    
    @var{FILTER_ARGUMENTS} ::= sequence of chars (possibly quoted)
    
    @var{FILTER}           ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
    
    @var{FILTERCHAIN}      ::= @var{FILTER} [,@var{FILTERCHAIN}]
    
    @var{FILTERGRAPH}      ::= [sws_flags=@var{flags};] @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
    
    @end example
    
    @c man end FILTERGRAPH DESCRIPTION
    
    
    @chapter Audio Filters
    @c man begin AUDIO FILTERS
    
    
When you configure your Libav build, you can disable any of the
existing filters using --disable-filters.
    The configure output will show the audio filters included in your
    build.
    
    Below is a description of the currently available audio filters.
    
    
    @section aformat
    
    Convert the input audio to one of the specified formats. The framework will
    negotiate the most appropriate format to minimize conversions.
    
    
    It accepts the following parameters:
    
    @table @option
    
    @item sample_fmts
    
    A '|'-separated list of requested sample formats.
    
    @item sample_rates
    
    A '|'-separated list of requested sample rates.
    
    @item channel_layouts
    
    A '|'-separated list of requested channel layouts.
    
    @end table
    
    If a parameter is omitted, all values are allowed.
    
    
For example, to force the output to either unsigned 8-bit or signed 16-bit stereo:
    
@example
aformat=sample_fmts=u8|s16:channel_layouts=stereo
@end example
    
    
    @section amix
    
    Mixes multiple audio inputs into a single output.
    
    For example
    @example
    avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
    @end example
    will mix 3 input audio streams to a single output with the same duration as the
    first input and a dropout transition time of 3 seconds.
    
    
    It accepts the following parameters:
    
    @table @option
    
    @item inputs
    
    The number of inputs. If unspecified, it defaults to 2.
    
    
    @item duration
    How to determine the end-of-stream.
    @table @option
    
    @item longest
    
    The duration of the longest input. (default)
    
    
    @item shortest
    
    The duration of the shortest input.
    
    
    @item first
    
    The duration of the first input.
    
    
    @end table
    
    @item dropout_transition
    
The transition time, in seconds, for volume renormalization when an input
stream ends. The default value is 2 seconds.
    
    @end table
    
    
    @section anull
    
    Pass the audio source unchanged to the output.
    
    
    @section asetpts
    
    Change the PTS (presentation timestamp) of the input audio frames.
    
    
    It accepts the following parameters:
    
    
    @table @option
    
    @item expr
    The expression which is evaluated for each frame to construct its timestamp.
    
    @end table
    
    The expression is evaluated through the eval API and can contain the following
    constants:
    
    @table @option
    
@item FRAME_RATE
The frame rate, only defined for constant frame-rate video.

@item PTS
The presentation timestamp in input.
    
    
    @item E, PI, PHI
    These are approximated values for the mathematical constants e
    (Euler's number), pi (Greek pi), and phi (the golden ratio).
    
@item N
The number of audio samples passed through the filter so far, starting at 0.

@item S
The number of audio samples in the current frame.

@item SR
The audio sample rate.
    
    
    @item STARTPTS
    
    The PTS of the first frame.
    
    
    @item PREV_INPTS
    
    The previous input PTS.
    
    
    @item PREV_OUTPTS
    
    The previous output PTS.
    
    
    @item RTCTIME
    
    The wallclock (RTC) time in microseconds.
    
    
    @item RTCSTART
    
    The wallclock (RTC) time at the start of the movie in microseconds.
    
@end table

Some examples:

@example
# Start counting PTS from zero
asetpts=expr=PTS-STARTPTS

# Generate timestamps by counting samples
asetpts=expr=N/SR/TB

# Generate timestamps from a "live source" and rebase onto the current timebase
asetpts='(RTCTIME - RTCSTART) / (TB * 1000000)'
@end example
    
    
    @section asettb
    
Set the timebase to use for the timestamps of the output frames.
    It is mainly useful for testing timebase configuration.
    
    This filter accepts the following parameters:
    
    @table @option
    
    @item expr
    The expression which is evaluated into the output timebase.
    
    @end table
    
    The expression can contain the constants @var{PI}, @var{E}, @var{PHI}, @var{AVTB} (the
    default timebase), @var{intb} (the input timebase), and @var{sr} (the sample rate,
    audio only).
    
    The default value for the input is @var{intb}.
    
    Some examples:
    
    @example
    # Set the timebase to 1/25:
    settb=1/25
    
    # Set the timebase to 1/10:
    settb=0.1
    
    # Set the timebase to 1001/1000:
    settb=1+0.001
    
    # Set the timebase to 2*intb:
    settb=2*intb
    
    # Set the default timebase value:
    settb=AVTB
    
    # Set the timebase to twice the sample rate:
    asettb=sr*2
    @end example
    
    @section ashowinfo
    
    Show a line containing various information for each input audio frame.
    The input audio is not modified.
    
    The shown line contains a sequence of key/value pairs of the form
    @var{key}:@var{value}.
    
    
    It accepts the following parameters:
    
    
    @table @option
    @item n
    
    The (sequential) number of the input frame, starting from 0.
    
    
    @item pts
    
The presentation timestamp of the input frame, in time base units; the time base
depends on the filter input pad, and is usually 1/@var{sample_rate}.
    
    @item pts_time
    
    The presentation timestamp of the input frame in seconds.
    
    
    @item fmt
    
    The sample format.
    
    
    @item chlayout
    
    The channel layout.
    
    
    @item rate
    
    The sample rate for the audio frame.
    
    
    @item nb_samples
    
    The number of samples (per channel) in the frame.
    
    
    @item checksum
    
    The Adler-32 checksum (printed in hexadecimal) of the audio data. For planar
    audio, the data is treated as if all the planes were concatenated.
    
    
    @item plane_checksums
    A list of Adler-32 checksums for each data plane.
    @end table
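
As an illustrative sketch, the per-frame information can be dumped without
writing an output file (the null muxer here is just one convenient choice):

@example
avconv -i INPUT -af ashowinfo -f null -
@end example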
    
    
    @section asplit
    
    Split input audio into several identical outputs.
    
    
It accepts a single parameter, which specifies the number of outputs. If
unspecified, it defaults to 2.
    
    
    @example
    avconv -i INPUT -filter_complex asplit=5 OUTPUT
    @end example
    will create 5 copies of the input audio.
    
    
    @section asyncts
    Synchronize audio data with timestamps by squeezing/stretching it and/or
    dropping samples/adding silence when needed.
    
    
    It accepts the following parameters:
    
    @table @option
    
    @item compensate
    
    Enable stretching/squeezing the data to make it match the timestamps. Disabled
    by default. When disabled, time gaps are covered with silence.
    
    
    @item min_delta
    
    The minimum difference between timestamps and audio data (in seconds) to trigger
    adding/dropping samples. The default value is 0.1. If you get an imperfect
    sync with this filter, try setting this parameter to 0.
    
    
    @item max_comp
    
    The maximum compensation in samples per second. Only relevant with compensate=1.
    The default value is 500.
    
    @item first_pts
    
    Assume that the first PTS should be this value. The time base is 1 / sample
    rate. This allows for padding/trimming at the start of the stream. By default,
no assumption is made about the first frame's expected PTS, so no padding or
trimming is done. For example, this could be set to 0 to pad the beginning with
silence if an audio stream starts after the video stream or to trim any samples
with a negative PTS due to encoder delay.
    
    @end table
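
For example, an illustrative invocation that enables compensation with the
default thresholds (the input and output names are placeholders):

@example
avconv -i INPUT -af asyncts=compensate=1 OUTPUT
@end example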
    
    
    @section atrim
    Trim the input so that the output contains one continuous subpart of the input.
    
    
    It accepts the following parameters:
    
    @table @option
    @item start
    
    Timestamp (in seconds) of the start of the section to keep. I.e. the audio
    sample with the timestamp @var{start} will be the first sample in the output.
    
    
    @item end
    Timestamp (in seconds) of the first audio sample that will be dropped. I.e. the
    audio sample immediately preceding the one with the timestamp @var{end} will be
    the last sample in the output.
    
    @item start_pts
    Same as @var{start}, except this option sets the start timestamp in samples
    instead of seconds.
    
    @item end_pts
    Same as @var{end}, except this option sets the end timestamp in samples instead
    of seconds.
    
    @item duration
    
    The maximum duration of the output in seconds.
    
    
    @item start_sample
    
    The number of the first sample that should be output.
    
    
    @item end_sample
    
    The number of the first sample that should be dropped.
    
    @end table
    
    Note that the first two sets of the start/end options and the @option{duration}
    option look at the frame timestamp, while the _sample options simply count the
    samples that pass through the filter. So start/end_pts and start/end_sample will
    give different results when the timestamps are wrong, inexact or do not start at
zero. Also note that this filter does not modify the timestamps. If you wish
to have the output timestamps start at zero, insert the asetpts filter after the
atrim filter.
    
    If multiple start or end options are set, this filter tries to be greedy and
    keep all samples that match at least one of the specified constraints. To keep
    only the part that matches all the constraints at once, chain multiple atrim
    filters.
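
For instance, the following illustrative chain keeps at most the first 1000
samples of the audio after the one-minute mark:

@example
atrim=start=60,atrim=end_sample=1000
@end example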
    
    The defaults are such that all the input is kept. So it is possible to set e.g.
    just the end values to keep everything before the specified time.
    
    Examples:
    @itemize
    @item
    
    Drop everything except the second minute of input:
    
    @example
    avconv -i INPUT -af atrim=60:120
    @end example
    
    @item
    
    Keep only the first 1000 samples:
    
    @example
    avconv -i INPUT -af atrim=end_sample=1000
    @end example
    
    @end itemize
    
    
    @section bs2b
    Bauer stereo to binaural transformation, which improves headphone listening of
    stereo audio records.
    
    It accepts the following parameters:
    @table @option
    
    @item profile
    Pre-defined crossfeed level.
    @table @option
    
    @item default
    Default level (fcut=700, feed=50).
    
    @item cmoy
    Chu Moy circuit (fcut=700, feed=60).
    
    @item jmeier
    Jan Meier circuit (fcut=650, feed=95).
    
    @end table
    
    @item fcut
    Cut frequency (in Hz).
    
    @item feed
    Feed level (in Hz).
    
    @end table
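
An illustrative use of the predefined and manual settings (the manual values
here simply repeat the @code{jmeier} preset):

@example
bs2b=profile=cmoy
bs2b=fcut=650:feed=95
@end example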
    
    
    @section channelsplit
    
    Split each channel from an input audio stream into a separate output stream.
    
    It accepts the following parameters:
    
    @table @option
    @item channel_layout
    
The channel layout of the input stream. The default is "stereo".
@end table

For example, assuming a stereo input MP3 file,
    
    @example
    avconv -i in.mp3 -filter_complex channelsplit out.mkv
    @end example
    will create an output Matroska file with two audio streams, one containing only
    the left channel and the other the right channel.
    
    
    Split a 5.1 WAV file into per-channel files:
    
    @example
    avconv -i in.wav -filter_complex
    'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'
    -map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]'
    front_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]'
    side_right.wav
    @end example
    
    
    @section channelmap
    Remap input channels to new locations.
    
    
    It accepts the following parameters:
    
    @table @option
    @item channel_layout
    
The channel layout of the output stream.

@item map
Map channels from input to output. The argument is a '|'-separated list of
mappings, each in the @code{@var{in_channel}-@var{out_channel}} or
    @var{in_channel} form. @var{in_channel} can be either the name of the input
    channel (e.g. FL for front left) or its index in the input channel layout.
    @var{out_channel} is the name of the output channel or its index in the output
    channel layout. If @var{out_channel} is not given then it is implicitly an
    index, starting with zero and increasing by one for each mapping.
    @end table
    
If no mapping is present, the filter will implicitly map input channels to
output channels, preserving indices.
    
For example, assuming a 5.1+downmix input MOV file,
@example
avconv -i in.mov -filter 'channelmap=map=DL-FL|DR-FR' out.wav
@end example
    will create an output WAV file tagged as stereo from the downmix channels of
    the input.
    
To fix a 5.1 WAV improperly encoded in AAC's native channel order
@example
avconv -i in.wav -filter 'channelmap=1|2|0|5|3|4:5.1' out.wav
@end example

    @section compand
    
    Compress or expand the audio's dynamic range.
    
    It accepts the following parameters:
    
    
    @table @option
    
    @item attacks
    @item decays
    
A list of times in seconds for each channel over which the instantaneous level
of the input signal is averaged to determine its volume. @var{attacks} refers to
increase of volume and @var{decays} refers to decrease of volume. For most
situations, the attack time (response to the audio getting louder) should be
shorter than the decay time, because the human ear is more sensitive to sudden
loud audio than sudden soft audio. A typical value for attack is 0.3 seconds and
a typical value for decay is 0.8 seconds.
    
    @item points
    
A list of points for the transfer function, specified in dB relative to the
maximum possible signal amplitude. Each key points list must be defined using
the following syntax: @code{x0/y0|x1/y1|x2/y2|....}
    
    The input values must be in strictly increasing order but the transfer function
    does not have to be monotonically rising. The point @code{0/0} is assumed but
    may be overridden (by @code{0/out-dBn}). Typical values for the transfer
    function are @code{-70/-70|-60/-20}.
    
@item soft-knee
Set the curve radius in dB for all joints. It defaults to 0.01.

@item gain
Set the additional gain in dB to be applied at all points on the transfer
function. This allows for easy adjustment of the overall gain.
It defaults to 0.
    
    
@item volume
Set an initial volume, in dB, to be assumed for each channel when filtering
starts. This permits the user to supply a nominal level initially, so that, for
example, a very large gain is not applied to initial signal levels before the
companding has begun to operate. A typical value for audio which is initially
quiet is -90 dB. It defaults to 0.

@item delay
Set a delay, in seconds. The input audio is analyzed immediately, but audio is
delayed before being fed to the volume adjuster. Specifying a delay
approximately equal to the attack/decay times allows the filter to effectively
operate in predictive rather than reactive mode. It defaults to 0.
    
    
    @end table
    
    @subsection Examples
    
    @itemize
    @item
    
    Make music with both quiet and loud passages suitable for listening to in a
    noisy environment:
    
    @example
    compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2
    @end example
    
    @item
    
    A noise gate for when the noise is at a lower level than the signal:
    
    @example
    compand=.1|.1:.2|.2:-900/-900|-50.1/-900|-50/-50:.01:0:-90:.1
    @end example
    
    @item
    Here is another noise gate, this time for when the noise is at a higher level
    than the signal (making it, in some ways, similar to squelch):
    @example
    compand=.1|.1:.1|.1:-45.1/-45.1|-45/-900|0/-900:.01:45:-90:.1
    @end example
    @end itemize
    
    
    @section join
    Join multiple input streams into one multi-channel stream.
    
    
    It accepts the following parameters:
    
    @table @option
    
    @item inputs
    
    The number of input streams. It defaults to 2.
    
    
    @item channel_layout
    
The desired output channel layout. It defaults to stereo.

@item map
Map channels from inputs to output. The argument is a '|'-separated list of
mappings, each in the @code{@var{input_idx}.@var{in_channel}-@var{out_channel}}
form. @var{input_idx} is the 0-based index of the input stream. @var{in_channel}
can be either the name of the input channel (e.g. FL for front left) or its
index in the specified input stream. @var{out_channel} is the name of the output
channel.
    @end table
    
    
The filter will attempt to guess the mappings when they are not specified
explicitly. It does so by first trying to find an unused matching input channel
    and if that fails it picks the first unused input channel.
    
    
    Join 3 inputs (with properly set channel layouts):
    
    @example
    avconv -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT
    @end example
    
    
    Build a 5.1 output from 6 single-channel streams:
    
    @example
avconv -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex
'join=inputs=6:channel_layout=5.1:map=0.0-FL|1.0-FR|2.0-FC|3.0-SL|4.0-SR|5.0-LFE'
out
    @end example
    
    
    @section resample
    
Convert the audio sample format, sample rate and channel layout. It is
not meant to be used directly; it is inserted automatically by libavfilter
whenever conversion is needed. Use the @var{aformat} filter to force a specific
conversion.
    
    
    @section volume
    
    Adjust the input audio volume.
    
    
    It accepts the following parameters:
    
    @table @option
    
    @item volume
    
    This expresses how the audio volume will be increased or decreased.
    
    Output values are clipped to the maximum value.
    
    The output audio volume is given by the relation:
    @example
    @var{output_volume} = @var{volume} * @var{input_volume}
    @end example
    
    
    The default value for @var{volume} is 1.0.
    
    Justin Ruggles's avatar
    Justin Ruggles committed
    
    @item precision
    
This parameter represents the mathematical precision.

It determines which input sample formats will be allowed, which affects the
precision of the volume scaling.
    
    @table @option
    @item fixed
    
    8-bit fixed-point; this limits input sample format to U8, S16, and S32.
    
    @item float
    
    32-bit floating-point; this limits input sample format to FLT. (default)
    
    @item double
    
    64-bit floating-point; this limits input sample format to DBL.
    
    @end table
    
@item replaygain
Choose the behaviour on encountering ReplayGain side data in input frames.
    
    
    @table @option
    @item drop
    Remove ReplayGain side data, ignoring its contents (the default).
    
    @item ignore
    Ignore ReplayGain side data, but leave it in the frame.
    
@item track
Prefer the track gain, if present.

@item album
Prefer the album gain, if present.
@end table

@item replaygain_preamp
Pre-amplification gain in dB to apply to the selected replaygain gain.
    
    Default value for @var{replaygain_preamp} is 0.0.
    
    
    @item replaygain_noclip
    Prevent clipping by limiting the gain applied.
    
    Default value for @var{replaygain_noclip} is 1.
    
    
    @end table
    
    @subsection Examples
    
    @itemize
    @item
    Halve the input audio volume:
    @example
    volume=volume=0.5
    volume=volume=1/2
    volume=volume=-6.0206dB
    @end example
    
    @item
    Increase input audio power by 6 decibels using fixed-point precision:
    @example
    volume=volume=6dB:precision=fixed
    @end example
    @end itemize
    
    
    @c man end AUDIO FILTERS
    
    
    @chapter Audio Sources
    @c man begin AUDIO SOURCES
    
    Below is a description of the currently available audio sources.
    
    @section anullsrc
    
    
    The null audio source; it never returns audio frames. It is mainly useful as a
    template and for use in analysis / debugging tools.
    
It accepts, as an optional parameter, a string of the form
@var{sample_rate}:@var{channel_layout}.
    
    
    @var{sample_rate} specifies the sample rate, and defaults to 44100.
    
@var{channel_layout} specifies the channel layout, and can be either an
integer or a string representing a channel layout. The default value
of @var{channel_layout} is 3, which corresponds to CH_LAYOUT_STEREO.
    
Check the channel_layout_map definition in
@file{libavutil/channel_layout.c} for the mapping between strings and
channel layout values.

Some examples:

@example
# Set the sample rate to 48000 Hz and the channel layout to CH_LAYOUT_MONO
anullsrc=48000:4

# The same as above
anullsrc=48000:mono
@end example
    
    
    @section abuffer
    Buffer audio frames, and make them available to the filter chain.
    
    
This source is not intended to be part of user-supplied graph descriptions; it
is for insertion by calling programs, through the interface defined in
@file{libavfilter/buffersrc.h}.
    
    
    It accepts the following parameters:
    
    @table @option
    
    @item time_base
    
The timebase which will be used for timestamps of submitted frames. It must be
either a floating-point number or in @var{numerator}/@var{denominator} form.
    
    @item sample_rate
    
    The audio sample rate.
    
    
    @item sample_fmt
    
    The name of the sample format, as returned by @code{av_get_sample_fmt_name()}.
    
    
    @item channel_layout
    
The channel layout of the audio data, in the form that can be accepted by
@code{av_get_channel_layout()}.
    @end table
    
    All the parameters need to be explicitly defined.
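
As an illustrative sketch, a calling program might build a parameter string of
the following form (all values here are placeholders):

@example
time_base=1/44100:sample_rate=44100:sample_fmt=s16p:channel_layout=stereo
@end example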
    
    
    @c man end AUDIO SOURCES
    
    
    @chapter Audio Sinks
    @c man begin AUDIO SINKS
    
    Below is a description of the currently available audio sinks.
    
    @section anullsink
    
    
Null audio sink; do absolutely nothing with the input audio. It is
mainly useful as a template and for use in analysis / debugging tools.

    @section abuffersink
This sink is intended for programmatic use. Frames that arrive on this sink can
be retrieved by the calling program, using the interface defined in
@file{libavfilter/buffersink.h}.
    
    
It does not accept any parameters.

@c man end AUDIO SINKS

@chapter Video Filters
    @c man begin VIDEO FILTERS
    
    
When you configure your Libav build, you can disable any of the
existing filters using --disable-filters.
    The configure output will show the video filters included in your
    build.
    
    Below is a description of the currently available video filters.
    
    
    @section blackframe
    
    Detect frames that are (almost) completely black. Can be useful to
    detect chapter transitions or commercials. Output lines consist of
    the frame number of the detected frame, the percentage of blackness,
    the position in the file if known or -1 and the timestamp in seconds.
    
    In order to display the output lines, you need to set the loglevel at
    least to the AV_LOG_INFO value.
    
    
It accepts the following parameters:

@table @option

@item amount
The percentage of the pixels that have to be below the threshold; it defaults to
98.

@item threshold
The threshold below which a pixel value is considered black; it defaults to 32.

@end table
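
An illustrative invocation (the option values shown simply repeat the defaults,
and the null muxer is just a convenient sink):

@example
avconv -i INPUT -vf blackframe=amount=98:threshold=32 -f null -
@end example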
    
@section boxblur

Apply a boxblur algorithm to the input video.
    
    It accepts the following parameters:
    
    
    @table @option
    
    @item luma_radius
    @item luma_power
    @item chroma_radius
    @item chroma_power
    @item alpha_radius
    @item alpha_power
    
    @end table
    
The chroma and alpha parameters are optional. If not specified, they default
to the corresponding values set for @var{luma_radius} and
@var{luma_power}.
    
    @var{luma_radius}, @var{chroma_radius}, and @var{alpha_radius} represent
    the radius in pixels of the box used for blurring the corresponding
    input plane. They are expressions, and can contain the following
    constants:
    @table @option
    @item w, h
    
The input width and height in pixels.

@item cw, ch
The input chroma image width and height in pixels.

@item hsub, vsub
The horizontal and vertical chroma subsample values. For example, for the
pixel format "yuv422p", @var{hsub} is 2 and @var{vsub} is 1.
    
    @end table
    
    The radius must be a non-negative number, and must not be greater than
    the value of the expression @code{min(w,h)/2} for the luma and alpha planes,
    and of @code{min(cw,ch)/2} for the chroma planes.
    
    @var{luma_power}, @var{chroma_power}, and @var{alpha_power} represent
    how many times the boxblur filter is applied to the corresponding
    plane.
    
    
Some examples:

@itemize
@item
Apply a boxblur filter with the luma, chroma, and alpha radii set to 2:
@example
boxblur=luma_radius=2:luma_power=1
@end example

@item
Set the luma radius to 2, and alpha and chroma radius to 0:
@example
boxblur=2:1:0:0:0:0
@end example

@item
Set the luma and chroma radii to a fraction of the video dimension:
@example
boxblur=luma_radius=min(h\,w)/10:luma_power=1:chroma_radius=min(cw\,ch)/10:chroma_power=1
@end example
@end itemize
    
@section copy

Copy the input source unchanged to the output. This is mainly useful for
testing purposes.

@section crop

Crop the input video to given dimensions.
    
    
It accepts the following parameters:

@table @option

@item out_w
The width of the output video.

@item out_h
The height of the output video.

@item x
The horizontal position, in the input video, of the left edge of the output
video.

@item y
The vertical position, in the input video, of the top edge of the output video.

@end table
    
    The parameters are expressions containing the following constants:
    
    @table @option
    @item E, PI, PHI
    
    These are approximated values for the mathematical constants e
    (Euler's number), pi (Greek pi), and phi (the golden ratio).
    
@item x, y
The computed values for @var{x} and @var{y}. They are evaluated for
each new frame.

@item in_w, in_h
The input width and height.

@item iw, ih
These are the same as @var{in_w} and @var{in_h}.

@item out_w, out_h
The output (cropped) width and height.

@item ow, oh
These are the same as @var{out_w} and @var{out_h}.

@item n
The number of the input frame, starting from 0.

@item t
The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.

@end table
    
The @var{out_w} and @var{out_h} parameters specify the expressions for
the width and height of the output (cropped) video. They are only
evaluated during the configuration of the filter.
    
    The default value of @var{out_w} is "in_w", and the default value of
    @var{out_h} is "in_h".
    
    The expression for @var{out_w} may depend on the value of @var{out_h},
    and the expression for @var{out_h} may depend on @var{out_w}, but they
    cannot depend on @var{x} and @var{y}, as @var{x} and @var{y} are
    evaluated after @var{out_w} and @var{out_h}.
    
    The @var{x} and @var{y} parameters specify the expressions for the
    position of the top-left corner of the output (non-cropped) area. They
    are evaluated for each frame. If the evaluated value is not valid, it
    is approximated to the nearest valid value.
    
    The default value of @var{x} is "(in_w-out_w)/2", and the default
    value for @var{y} is "(in_h-out_h)/2", which set the cropped area at