\input texinfo @c -*- texinfo -*-
    
    @documentencoding UTF-8
    
    @settitle FFmpeg FAQ
    
@titlepage
@center @titlefont{FFmpeg FAQ}
@end titlepage
    
    
@chapter General Questions

@section Why doesn't FFmpeg support feature [xyz]?
    
Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.
    
    @section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?
    
No. Windows DLLs are not portable; they are bloated and often slow.
Moreover, FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.
    
@section I cannot read this file although this format seems to be supported by ffmpeg.

Even if ffmpeg can read the container format, it may not support all its
codecs. Please consult the supported codec list in the ffmpeg
documentation.
    
    
    @section Which codecs are supported by Windows?
    
Windows does not support standard formats like MPEG very well, unless you
install some additional codecs.
    
    
    The following list of video codecs should work on most Windows systems:
    @table @option
    @item msmpeg4v2
    .avi/.asf
    @item msmpeg4
    .asf only
    @item wmv1
    .asf only
    @item wmv2
    .asf only
    @item mpeg4
    
    Only if you have some MPEG-4 codec like ffdshow or Xvid installed.
    
    @item mpeg1video
    
    .mpg only
    @end table
    Note, ASF files often have .wmv or .wma extensions in Windows. It should also
    be mentioned that Microsoft claims a patent on the ASF format, and may sue
    or threaten users who create ASF files with non-Microsoft software. It is
    strongly advised to avoid ASF where possible.
    
    The following list of audio codecs should work on most Windows systems:
    @table @option
    @item adpcm_ima_wav
    @item adpcm_ms
    
    @item pcm_s16le
    
@item libmp3lame
If some MP3 codec like LAME is installed.
@end table
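
For example, a file that most stock Windows systems should be able to play
could be produced like this (a sketch; the codec choices follow the tables
above, and the file names are illustrative):

@example
ffmpeg -i input.mov -c:v msmpeg4v2 -c:a pcm_s16le output.avi
@end example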
    
    @chapter Compilation
    
    @section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'}
    
    This is a bug in gcc. Do not report it to us. Instead, please report it to
    the gcc developers. Note that we will not add workarounds for gcc bugs.
    
    
    Also note that (some of) the gcc developers believe this is not a bug or
    not a bug they should fix:
    @url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}.
    Then again, some of them do not know the difference between an undecidable
    problem and an NP-hard problem...
    
    @section I have installed this library with my distro's package manager. Why does @command{configure} not see it?
    
    Distributions usually split libraries in several packages. The main package
    contains the files necessary to run programs using the library. The
    development package contains the files necessary to build programs using the
    library. Sometimes, docs and/or data are in a separate package too.
    
    To build FFmpeg, you need to install the development package. It is usually
    called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
    build is finished, but be sure to keep the main package.
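
As an illustration, installing the x264 development package might look like
this (the package names are illustrative and vary between distributions):

@example
apt-get install libx264-dev   # Debian/Ubuntu
dnf install x264-devel        # Fedora
@end example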
    
    
    @section How do I make @command{pkg-config} find my libraries?
    
    Somewhere along with your libraries, there is a @file{.pc} file (or several)
    in a @file{pkgconfig} directory. You need to set environment variables to
    point @command{pkg-config} to these files.
    
    If you need to @emph{add} directories to @command{pkg-config}'s search list
    (typical use case: library installed separately), add it to
    @code{$PKG_CONFIG_PATH}:
    
    @example
    export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
    @end example
    
    If you need to @emph{replace} @command{pkg-config}'s search list
    (typical use case: cross-compiling), set it in
    @code{$PKG_CONFIG_LIBDIR}:
    
    @example
    export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
    @end example
    
    If you need to know the library's internal dependencies (typical use: static
    linking), add the @code{--static} option to @command{pkg-config}:
    
    @example
    ./configure --pkg-config-flags=--static
    @end example
    
    @section How do I use @command{pkg-config} when cross-compiling?
    
    The best way is to install @command{pkg-config} in your cross-compilation
    environment. It will automatically use the cross-compilation libraries.
    
    You can also use @command{pkg-config} from the host environment by
    specifying explicitly @code{--pkg-config=pkg-config} to @command{configure}.
In that case, you must point @command{pkg-config} to the correct directories
using the @code{PKG_CONFIG_LIBDIR} environment variable, as explained in the
previous entry.
    
    As an intermediate solution, you can place in your cross-compilation
    environment a script that calls the host @command{pkg-config} with
@code{PKG_CONFIG_LIBDIR} set. That script can look like this:
    
    @example
    #!/bin/sh
    PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
    export PKG_CONFIG_LIBDIR
    exec /usr/bin/pkg-config "$@@"
    @end example
    
    
@chapter Usage

@section ffmpeg does not work; what is wrong?

Try a @code{make distclean} in the ffmpeg source directory before the build.
If this does not help, see the bug reporting instructions at
@url{https://ffmpeg.org/bugreports.html}.
    
    @section How do I encode single pictures into movies?
    
First, rename your pictures to follow a numerical sequence.
For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

@example
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example

Notice that @samp{%d} is replaced by the image number.
    
    @file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc.
    
    Use the @option{-start_number} option to declare a starting number for
    the sequence. This is useful if your sequence does not start with
    @file{img001.jpg} but is still in a numerical order. The following
    example will start with @file{img100.jpg}:
    
@example
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
@end example
    
If you have a large number of pictures to rename, you can use the
following command to ease the burden. The command, using the Bourne
shell syntax, symbolically links all files in the current directory
that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.
    
@example
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example
    
    If you want to sequence them by oldest modified first, substitute
    @code{$(ls -r -t *jpg)} in place of @code{*jpg}.
    
    Then run:
    
@example
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example

The same logic is used for any image format that ffmpeg reads.
    
    You can also use @command{cat} to pipe images to ffmpeg:
    
@example
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
@end example
    
    
    @section How do I encode movie to single pictures?
    
@example
ffmpeg -i movie.mpg movie%d.jpg
@end example
    
    The @file{movie.mpg} used as input will be converted to
    @file{movie1.jpg}, @file{movie2.jpg}, etc...
    
    
    Instead of relying on file format self-recognition, you may also use
    @table @option
    
    @item -c:v ppm
    @item -c:v png
    @item -c:v mjpeg
    
    @end table
    to force the encoding.
    
    Applying that to the previous example:
@example
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example
    
    Beware that there is no "jpeg" codec. Use "mjpeg" instead.
    
    
    @section Why do I see a slight quality degradation with multithreaded MPEG* encoding?
    
    
For multithreaded MPEG* encoding, the encoded slices must be independent;
otherwise, thread n would practically have to wait for thread n-1 to finish,
so it is quite logical that there is a small reduction of quality. This is
not a bug.
    
    @section How can I read from the standard input or write to the standard output?
    
Use @file{-} as file name.
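
For example (a sketch; file names and formats are illustrative):

@example
# Read from the standard input:
cat input.avi | ffmpeg -i - output.mpg
# Write to the standard output; an explicit -f is required, since there
# is no file extension to guess the output format from:
ffmpeg -i input.avi -f mpegts - > output.ts
@end example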
    
    
    @section -f jpeg doesn't work.
    
    Try '-f image2 test%d.jpg'.
    
@section Why can I not change the frame rate?

    Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates.
    
Choose a different codec with the @option{-c:v} command line option.
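
For example, to force one of the frame rates that MPEG-1 supports (a
sketch; file names are illustrative):

@example
ffmpeg -i input.avi -r 25 -c:v mpeg1video output.mpg
@end example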
    
    @section How do I encode Xvid or DivX video with ffmpeg?
    
Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4
standard (note that there are many other coding formats that use this
same standard). Thus, use '-c:v mpeg4' to encode in these formats. The
default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want
a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will
force the fourcc 'xvid' to be stored as the video fourcc rather than the
default.
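
For example (a sketch; file names are illustrative):

@example
ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi
@end example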
    
    @section Which are good parameters for encoding high quality MPEG-4?
    
'-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2',
things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
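
A complete two-pass run might look like this (a sketch; the bitrate and
file names are illustrative, and on Windows @file{NUL} replaces
@file{/dev/null}):

@example
ffmpeg -i input.avi -c:v mpeg4 -b:v 1000k -mbd rd -flags +mv4+aic \
       -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1 -an -f null /dev/null
ffmpeg -i input.avi -c:v mpeg4 -b:v 1000k -mbd rd -flags +mv4+aic \
       -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 2 output.avi
@end example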
    
    @section Which are good parameters for encoding high quality MPEG-1/MPEG-2?
    
'-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2'
but beware the '-g 100' might cause problems with some decoders.
Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'.
    
@section Interlaced video looks very bad when encoded with ffmpeg, what is wrong?

You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced
material, and try '-top 0/1' if the result looks really messed up.
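
For example (a sketch; codec and file names are illustrative):

@example
ffmpeg -i interlaced.vob -flags +ilme+ildct -top 1 -c:v mpeg2video output.mpg
@end example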
    
@section How can I read DirectShow files?
    
If you have built FFmpeg with @code{./configure --enable-avisynth}
(only possible on MinGW/Cygwin platforms),
then you may use any file that DirectShow can read as input.
    
    
    Just create an "input.avs" text file with this single line ...
@example
DirectShowSource("C:\path to your file\yourfile.asf")
@end example
    
... and then feed that text file to ffmpeg:

@example
ffmpeg -i input.avs
@end example
    
    For ANY other help on AviSynth, please visit the
    @uref{http://www.avisynth.org/, AviSynth homepage}.
    
    @section How can I join video files?
    
    
    To "join" video files is quite ambiguous. The following list explains the
    different kinds of "joining" and points out how those are addressed in
    FFmpeg. To join video files may mean:
    
    @itemize
    
    @item
    To put them one after the other: this is called to @emph{concatenate} them
    (in short: concat) and is addressed
    @ref{How can I concatenate video files, in this very faq}.
    
    @item
To put them together in the same file, to let the user choose between the
different versions (example: different audio languages): this is called to
@emph{multiplex} them together (in short: mux), and is done by simply
invoking ffmpeg with several @option{-i} options, as in the example after
this list.
    
    @item
    For audio, to put all channels together in a single stream (example: two
    mono streams into one stereo stream): this is sometimes called to
    @emph{merge} them, and can be done using the
    
    @url{https://ffmpeg.org/ffmpeg-filters.html#amerge, @code{amerge}} filter.
    
    
    @item
    For audio, to play one on top of the other: this is called to @emph{mix}
    them, and can be done by first merging them into a single stream and then
    
    using the @url{https://ffmpeg.org/ffmpeg-filters.html#pan, @code{pan}} filter to mix
    
    the channels at will.
    
    @item
    For video, to display both together, side by side or one on top of a part of
    the other; it can be done using the
    
    @url{https://ffmpeg.org/ffmpeg-filters.html#overlay, @code{overlay}} video filter.
    
    
    @end itemize
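
As an illustration of the multiplexing case, the following sketch (file
names are hypothetical) puts one video stream and two audio languages into
a single file without re-encoding:

@example
ffmpeg -i video.mkv -i audio_en.mka -i audio_fr.mka \
       -map 0:v -map 1:a -map 2:a -c copy output.mkv
@end example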
    
    @anchor{How can I concatenate video files}
    @section How can I concatenate video files?
    
    
    There are several solutions, depending on the exact circumstances.
    
    
    @subsection Concatenating using the concat @emph{filter}
    
    FFmpeg has a @url{https://ffmpeg.org/ffmpeg-filters.html#concat,
    
    @code{concat}} filter designed specifically for that, with examples in the
    documentation. This operation is recommended if you need to re-encode.
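
For instance, to concatenate two files while re-encoding (a sketch; the
stream layout and file names are illustrative):

@example
ffmpeg -i in1.mp4 -i in2.mp4 -filter_complex \
  "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[outv][outa]" \
  -map "[outv]" -map "[outa]" output.mp4
@end example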
    
    @subsection Concatenating using the concat @emph{demuxer}
    
    
    FFmpeg has a @url{https://www.ffmpeg.org/ffmpeg-formats.html#concat,
    
    @code{concat}} demuxer which you can use when you want to avoid a re-encode and
    your format doesn't support file level concatenation.
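
A minimal sketch (file names are illustrative; see the demuxer
documentation for the list file syntax):

@example
echo "file 'in1.mp4'"  > mylist.txt
echo "file 'in2.mp4'" >> mylist.txt
ffmpeg -f concat -i mylist.txt -c copy output.mp4
@end example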
    
    @subsection Concatenating using the concat @emph{protocol} (file level)
    
    FFmpeg has a @url{https://ffmpeg.org/ffmpeg-protocols.html#concat,
    
    @code{concat}} protocol designed specifically for that, with examples in the
    documentation.
    
    
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
video by merely concatenating the files containing them.
    
    
Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble @code{cat} command (or the
equally humble @code{copy} under Windows), and finally transcoding back to your
format of choice.
    
@example
ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg
ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
@end example
    
    
    Additionally, you can use the @code{concat} protocol instead of @code{cat} or
    @code{copy} which will avoid creation of a potentially huge intermediate file.
    
    @example
    ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg
    ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg
    ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg
    ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi
    @end example
    
    Note that you may need to escape the character "|" which is special for many
    shells.
    
Another option is to use named pipes, should your platform support them:
    
    
@example
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
@end example
    
    
    @subsection Concatenating using raw audio and video
    
    
Similarly, the yuv4mpegpipe format and the raw video and raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
    
    When using multiple yuv4mpegpipe(s), the first line needs to be discarded
    from all but the first stream. This can be accomplished by piping through
    @code{tail} as seen below. Note that when piping through @code{tail} you
    must use command grouping, @code{@{  ;@}}, to background properly.
    
    
    For example, let's say we want to concatenate two FLV files into an
    output.flv file:
    
    
@example
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -y output.flv
rm temp[12].[av] all.[av]
@end example
    
    
    @section Using @option{-f lavfi}, audio becomes mono for no apparent reason.
    
    Use @option{-dumpgraph -} to find out exactly where the channel layout is
    lost.
    
    
Most likely, it is through an auto-inserted @code{aresample} filter. Try to
understand why the converting filter was needed at that place.

Just before the output is a likely place, as @option{-f lavfi} currently
only supports packed S16.
    
    
Then insert the correct @code{aformat} explicitly in the filtergraph,
specifying the exact format:

@example
aformat=sample_fmts=s16:channel_layouts=stereo
@end example
    
    @section Why does FFmpeg not see the subtitles in my VOB file?
    
    VOB and a few other formats do not have a global header that describes
    everything present in the file. Instead, applications are supposed to scan
    the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles only appear later in the file,
they will not be initially detected.
    
    
    Some applications, including the @code{ffmpeg} command-line tool, can only
    work with streams that were detected during the initial scan; streams that
    are detected later are ignored.
    
The size of the initial scan is controlled by two options: @code{probesize}
(default ~5 MB) and @code{analyzeduration} (default 5,000,000 µs = 5 s). For
the subtitle stream to be detected, both values must be large enough.
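
For example, to scan roughly the first 100 MB / 100 seconds (the values are
illustrative; both options must precede the input they apply to):

@example
ffmpeg -probesize 100M -analyzeduration 100M -i input.vob output.mkv
@end example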
    
    
    @section Why was the @command{ffmpeg} @option{-sameq} option removed? What to use instead?
    
    
    The @option{-sameq} option meant "same quantizer", and made sense only in a
    very limited set of cases. Unfortunately, a lot of people mistook it for
    "same quality" and used it in places where it did not make sense: it had
    roughly the expected visible effect, but achieved it in a very inefficient
    way.
    
Each encoder has its own set of options to set the quality-vs-size balance;
use the options for the encoder you are using to set the quality level to a
point acceptable for your tastes. The most common options to do that are
@option{-qscale} and @option{-qmax}, but you should peruse the documentation
of the encoder you chose.
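
For example, with the MPEG-4 encoder, where lower @option{-qscale} values
mean higher quality (a sketch; the file names and value are illustrative):

@example
ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 output.avi
@end example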
    
    
@section I have a stretched video, why does scaling not fix it?
    
    A lot of video codecs and formats can store the @emph{aspect ratio} of the
    video: this is the ratio between the width and the height of either the full
    image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
    ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
    SAR.
    
Most still image processing works with square pixels, i.e. 1:1 SAR, but a lot
of video standards, especially from the analog-to-digital transition era, use
non-square pixels.
    
    Most processing filters in FFmpeg handle the aspect ratio to avoid
    stretching the image: cropping adjusts the DAR to keep the SAR constant,
    scaling adjusts the SAR to keep the DAR constant.
    
    If you want to stretch, or “unstretch”, the image, you need to override the
    information with the
    
    @url{https://ffmpeg.org/ffmpeg-filters.html#setdar_002c-setsar, @code{setdar or setsar filters}}.
    
    
    Do not forget to examine carefully the original video to check whether the
    stretching comes from the image or from the aspect ratio information.
    
    For example, to fix a badly encoded EGA capture, use the following commands,
    either the first one to upscale to square pixels or the second one to set
    the correct aspect ratio or the third one to avoid transcoding (may not work
    depending on the format / codec / player / phase of the moon):
    
    @example
    ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
    ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
    ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
    @end example
    
    
    @chapter Development
    
    
    @section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
    
    Yes. Check the @file{doc/examples} directory in the source
    repository, also available online at:
    @url{https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples}.
    
    Examples are also installed by default, usually in
    @code{$PREFIX/share/ffmpeg/examples}.
    
You may also read the Developers Guide in the FFmpeg documentation, or
examine the source code of one of the many open source projects that
already incorporate FFmpeg (@url{projects.html}).
    
    @section Can you support my C compiler XXX?
    
    It depends. If your compiler is C99-compliant, then patches to support
    it are likely to be welcome if they do not pollute the source code
    with @code{#ifdef}s related to the compiler.
    
    @section Is Microsoft Visual C++ supported?
    
    Yes. Please see the @uref{platform.html, Microsoft Visual C++}
    section in the FFmpeg documentation.
    
    @section Can you add automake, libtool or autoconf support?
    
    No. These tools are too bloated and they complicate the build.
    
    @section Why not rewrite FFmpeg in object-oriented C++?
    
FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.
    
    @section Why are the ffmpeg programs devoid of debugging symbols?
    
    The build process creates @command{ffmpeg_g}, @command{ffplay_g}, etc. which
    contain full debug information. Those binaries are stripped to create
    @command{ffmpeg}, @command{ffplay}, etc. If you need the debug information, use
    the *_g versions.
    
    @section I do not like the LGPL, can I contribute code under the GPL instead?
    
Yes, as long as the code is optional and can easily and cleanly be placed
under @code{#if CONFIG_GPL} without breaking anything. So, for example, a new
codec or filter would be OK under GPL while a bug fix to LGPL code would not.
    
    @section I'm using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.
    
    FFmpeg builds static libraries by default. In static libraries, dependencies
    are not handled. That has two consequences. First, you must specify the
    libraries in dependency order: @code{-lavdevice} must come before
    @code{-lavformat}, @code{-lavutil} must come after everything else, etc.
    Second, external libraries that are used in FFmpeg have to be specified too.
    
    An easy way to get the full list of required libraries in dependency order
    is to use @code{pkg-config}.
    
@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example
    
See @file{doc/examples/Makefile} and @file{doc/examples/pc-uninstalled} for
more details.
    
    
    @section I'm using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.
    
FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes using @code{extern "C"}.

See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3}.
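
For instance (a minimal sketch):

@example
extern "C" @{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
@}
@end example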
    
    
    @section I'm using libavutil from within my C++ application but the compiler complains about 'UINT64_C' was not declared in this scope
    
    
FFmpeg is a pure C project using C99 math features; in order to enable C++
to use them you have to append @code{-D__STDC_CONSTANT_MACROS} to your
CXXFLAGS.
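
For example (a sketch; the compiler invocation is illustrative):

@example
g++ -D__STDC_CONSTANT_MACROS -c myapp.cpp $(pkg-config --cflags libavutil)
@end example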
    
    
@section I have a file in memory / an API different from *open/*read/libc, how do I use it with libavformat?
    
You have to create a custom @code{AVIOContext} using @code{avio_alloc_context};
see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c}
in the MPlayer or MPlayer2 sources.
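
A minimal sketch of the read-only case follows; the @code{buffer_data}
struct, the callback and the function name are illustrative, not FFmpeg API,
and error handling is omitted. The bundled @file{doc/examples/avio_reading.c}
shows a complete program.

@example
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

struct buffer_data @{
    const uint8_t *ptr;  /* current read position */
    size_t         left; /* bytes remaining */
@};

static int read_packet(void *opaque, uint8_t *buf, int buf_size)
@{
    struct buffer_data *bd = opaque;
    int n = FFMIN(buf_size, (int)bd->left);
    if (n <= 0)
        return AVERROR_EOF;
    memcpy(buf, bd->ptr, n);
    bd->ptr  += n;
    bd->left -= n;
    return n;
@}

/* Open a demuxer on a memory buffer; bd must outlive the context. */
int open_from_memory(struct buffer_data *bd, AVFormatContext **out)
@{
    unsigned char *iobuf = av_malloc(4096);
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = avio_alloc_context(iobuf, 4096, 0 /* read-only */, bd,
                                 read_packet, NULL, NULL);
    *out = fmt;
    return avformat_open_input(out, NULL, NULL, NULL);
@}
@end example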
    
@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?

See @url{https://www.ffmpeg.org/~michael/}.
    
    @section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?
    
Even if peculiar since it is network oriented, RTP is a container like any
other. You have to @emph{demux} RTP before feeding the payload to libavcodec.
In this specific case please look at RFC 4629 to see how it should be done.
    
    
@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.

@code{r_frame_rate} is NOT the average frame rate, it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
    wrong if it is larger than the average!
    
    For example, if you have mixed 25 and 30 fps content, then @code{r_frame_rate}
    will be 150 (it is the least common multiple).
    If you are looking for the average frame rate, see @code{AVStream.avg_frame_rate}.
    
    
    @section Why is @code{make fate} not running all tests?
    
Make sure you have the fate-suite samples, and that the @code{SAMPLES} Make
variable, the @code{FATE_SAMPLES} environment variable, or the
@code{--samples} @command{configure} option is set to the right path.
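
For example (the path is illustrative):

@example
make fate SAMPLES=/path/to/fate-suite
@end example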
    
    @section Why is @code{make fate} not finding the samples?
    
    Do you happen to have a @code{~} character in the samples path to indicate a
    home directory? The value is used in ways where the shell cannot expand it,
    causing FATE to not find files. Just replace @code{~} by the full path.