xref: /third_party/ffmpeg/doc/ffmpeg.texi (revision cabdff1a)
1\input texinfo @c -*- texinfo -*-
2@documentencoding UTF-8
3
4@settitle ffmpeg Documentation
5@titlepage
6@center @titlefont{ffmpeg Documentation}
7@end titlepage
8
9@top
10
11@contents
12
13@chapter Synopsis
14
15ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_url}@} ... @{[@var{output_file_options}] @file{output_url}@} ...
16
17@chapter Description
18@c man begin DESCRIPTION
19
20@command{ffmpeg} is a very fast video and audio converter that can also grab from
21a live audio/video source. It can also convert between arbitrary sample
22rates and resize video on the fly with a high quality polyphase filter.
23
24@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
25files, pipes, network streams, grabbing devices, etc.), specified by the
26@code{-i} option, and writes to an arbitrary number of output "files", which are
27specified by a plain output url. Anything found on the command line which
28cannot be interpreted as an option is considered to be an output url.
29
30Each input or output url can, in principle, contain any number of streams of
31different types (video/audio/subtitle/attachment/data). The allowed number and/or
32types of streams may be limited by the container format. Selecting which
33streams from which inputs will go into which output is either done automatically
34or with the @code{-map} option (see the Stream selection chapter).
35
36To refer to input files in options, you must use their indices (0-based). E.g.
37the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
38within a file are referred to by their indices. E.g. @code{2:3} refers to the
39fourth stream in the third input file. Also see the Stream specifiers chapter.
40
41As a general rule, options are applied to the next specified
42file. Therefore, order is important, and you can have the same
43option on the command line multiple times. Each occurrence is
44then applied to the next input or output file.
45Exceptions from this rule are the global options (e.g. verbosity level),
46which should be specified first.
47
48Do not mix input and output files -- first specify all input files, then all
49output files. Also do not mix options which belong to different files. All
50options apply ONLY to the next input or output file and are reset between files.
51
52@itemize
53@item
54To set the video bitrate of the output file to 64 kbit/s:
55@example
56ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
57@end example
58
59@item
60To force the frame rate of the output file to 24 fps:
61@example
62ffmpeg -i input.avi -r 24 output.avi
63@end example
64
65@item
66To force the frame rate of the input file (valid for raw formats only)
67to 1 fps and the frame rate of the output file to 24 fps:
68@example
69ffmpeg -r 1 -i input.m2v -r 24 output.avi
70@end example
71@end itemize
72
73The format option may be needed for raw input files.
74
75@c man end DESCRIPTION
76
77@chapter Detailed description
78@c man begin DETAILED DESCRIPTION
79
80The transcoding process in @command{ffmpeg} for each output can be described by
81the following diagram:
82
83@verbatim
84 _______              ______________
85|       |            |              |
86| input |  demuxer   | encoded data |   decoder
87| file  | ---------> | packets      | -----+
88|_______|            |______________|      |
89                                           v
90                                       _________
91                                      |         |
92                                      | decoded |
93                                      | frames  |
94                                      |_________|
95 ________             ______________       |
96|        |           |              |      |
97| output | <-------- | encoded data | <----+
98| file   |   muxer   | packets      |   encoder
99|________|           |______________|
100
101
102@end verbatim
103
104@command{ffmpeg} calls the libavformat library (containing demuxers) to read
105input files and get packets containing encoded data from them. When there are
106multiple input files, @command{ffmpeg} tries to keep them synchronized by
107tracking lowest timestamp on any active input stream.
108
109Encoded packets are then passed to the decoder (unless streamcopy is selected
110for the stream, see further for a description). The decoder produces
111uncompressed frames (raw video/PCM audio/...) which can be processed further by
112filtering (see next section). After filtering, the frames are passed to the
113encoder, which encodes them and outputs encoded packets. Finally those are
114passed to the muxer, which writes the encoded packets to the output file.
115
116@section Filtering
117Before encoding, @command{ffmpeg} can process raw audio and video frames using
118filters from the libavfilter library. Several chained filters form a filter
119graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
120simple and complex.
121
122@subsection Simple filtergraphs
123Simple filtergraphs are those that have exactly one input and output, both of
124the same type. In the above diagram they can be represented by simply inserting
125an additional step between decoding and encoding:
126
127@verbatim
128 _________                        ______________
129|         |                      |              |
130| decoded |                      | encoded data |
131| frames  |\                   _ | packets      |
132|_________| \                  /||______________|
133             \   __________   /
134  simple     _\||          | /  encoder
135  filtergraph   | filtered |/
136                | frames   |
137                |__________|
138
139@end verbatim
140
141Simple filtergraphs are configured with the per-stream @option{-filter} option
142(with @option{-vf} and @option{-af} aliases for video and audio respectively).
143A simple filtergraph for video can look for example like this:
144
145@verbatim
146 _______        _____________        _______        ________
147|       |      |             |      |       |      |        |
148| input | ---> | deinterlace | ---> | scale | ---> | output |
149|_______|      |_____________|      |_______|      |________|
150
151@end verbatim
152
153Note that some filters change frame properties but not frame contents. E.g. the
154@code{fps} filter in the example above changes number of frames, but does not
155touch the frame contents. Another example is the @code{setpts} filter, which
156only sets timestamps and otherwise passes the frames unchanged.
157
158@subsection Complex filtergraphs
159Complex filtergraphs are those which cannot be described as simply a linear
160processing chain applied to one stream. This is the case, for example, when the graph has
161more than one input and/or output, or when output stream type is different from
162input. They can be represented with the following diagram:
163
164@verbatim
165 _________
166|         |
167| input 0 |\                    __________
168|_________| \                  |          |
169             \   _________    /| output 0 |
170              \ |         |  / |__________|
171 _________     \| complex | /
172|         |     |         |/
173| input 1 |---->| filter  |\
174|_________|     |         | \   __________
175               /| graph   |  \ |          |
176              / |         |   \| output 1 |
177 _________   /  |_________|    |__________|
178|         | /
179| input 2 |/
180|_________|
181
182@end verbatim
183
184Complex filtergraphs are configured with the @option{-filter_complex} option.
185Note that this option is global, since a complex filtergraph, by its nature,
186cannot be unambiguously associated with a single stream or file.
187
188The @option{-lavfi} option is equivalent to @option{-filter_complex}.
189
190A trivial example of a complex filtergraph is the @code{overlay} filter, which
191has two video inputs and one video output, containing one video overlaid on top
192of the other. Its audio counterpart is the @code{amix} filter.
193
194@section Stream copy
195Stream copy is a mode selected by supplying the @code{copy} parameter to the
196@option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
197step for the specified stream, so it does only demuxing and muxing. It is useful
198for changing the container format or modifying container-level metadata. The
199diagram above will, in this case, simplify to this:
200
201@verbatim
202 _______              ______________            ________
203|       |            |              |          |        |
204| input |  demuxer   | encoded data |  muxer   | output |
205| file  | ---------> | packets      | -------> | file   |
206|_______|            |______________|          |________|
207
208@end verbatim
209
210Since there is no decoding or encoding, it is very fast and there is no quality
211loss. However, it might not work in some cases because of many factors. Applying
212filters is obviously also impossible, since filters work on uncompressed data.
213
214@c man end DETAILED DESCRIPTION
215
216@chapter Stream selection
217@c man begin STREAM SELECTION
218
219@command{ffmpeg} provides the @code{-map} option for manual control of stream selection in each
220output file. Users can skip @code{-map} and let ffmpeg perform automatic stream selection as
221described below. The @code{-vn / -an / -sn / -dn} options can be used to skip inclusion of
222video, audio, subtitle and data streams respectively, whether manually mapped or automatically
223selected, except for those streams which are outputs of complex filtergraphs.
224
225@section Description
226The sub-sections that follow describe the various rules that are involved in stream selection.
227The examples that follow next show how these rules are applied in practice.
228
229While every effort is made to accurately reflect the behavior of the program, FFmpeg is under
230continuous development and the code may have changed since the time of this writing.
231
232@subsection Automatic stream selection
233
234In the absence of any map options for a particular output file, ffmpeg inspects the output
235format to check which type of streams can be included in it, viz. video, audio and/or
236subtitles. For each acceptable stream type, ffmpeg will pick one stream, when available,
237from among all the inputs.
238
239It will select that stream based upon the following criteria:
240@itemize
241@item
242for video, it is the stream with the highest resolution,
243@item
244for audio, it is the stream with the most channels,
245@item
246for subtitles, it is the first subtitle stream found but there's a caveat.
247The output format's default subtitle encoder can be either text-based or image-based,
248and only a subtitle stream of the same type will be chosen.
249@end itemize
250
251In the case where several streams of the same type rate equally, the stream with the lowest
252index is chosen.
253
254Data or attachment streams are not automatically selected and can only be included
255using @code{-map}.
256@subsection Manual stream selection
257
258When @code{-map} is used, only user-mapped streams are included in that output file,
259with one possible exception for filtergraph outputs described below.
260
261@subsection Complex filtergraphs
262
263If there are any complex filtergraph output streams with unlabeled pads, they will be added
264to the first output file. This will lead to a fatal error if the stream type is not supported
265by the output format. In the absence of the map option, the inclusion of these streams leads
266to the automatic stream selection of their types being skipped. If map options are present,
267these filtergraph streams are included in addition to the mapped streams.
268
269Complex filtergraph output streams with labeled pads must be mapped once and exactly once.
270
271@subsection Stream handling
272
273Stream handling is independent of stream selection, with an exception for subtitles described
274below. Stream handling is set via the @code{-codec} option addressed to streams within a
275specific @emph{output} file. In particular, codec options are applied by ffmpeg after the
276stream selection process and thus do not influence the latter. If no @code{-codec} option is
277specified for a stream type, ffmpeg will select the default encoder registered by the output
278file muxer.
279
280An exception exists for subtitles. If a subtitle encoder is specified for an output file, the
281first subtitle stream found of any type, text or image, will be included. ffmpeg does not validate
282if the specified encoder can convert the selected stream or if the converted stream is acceptable
283within the output format. This applies generally as well: when the user sets an encoder manually,
284the stream selection process cannot check if the encoded stream can be muxed into the output file.
285If it cannot, ffmpeg will abort and @emph{all} output files will fail to be processed.
286
287@section Examples
288
289The following examples illustrate the behavior, quirks and limitations of ffmpeg's stream
290selection methods.
291
292They assume the following three input files.
293
294@verbatim
295
296input file 'A.avi'
297      stream 0: video 640x360
298      stream 1: audio 2 channels
299
300input file 'B.mp4'
301      stream 0: video 1920x1080
302      stream 1: audio 2 channels
303      stream 2: subtitles (text)
304      stream 3: audio 5.1 channels
305      stream 4: subtitles (text)
306
307input file 'C.mkv'
308      stream 0: video 1280x720
309      stream 1: audio 2 channels
310      stream 2: subtitles (image)
311@end verbatim
312
313@subsubheading Example: automatic stream selection
314@example
315ffmpeg -i A.avi -i B.mp4 out1.mkv out2.wav -map 1:a -c:a copy out3.mov
316@end example
317There are three output files specified, and for the first two, no @code{-map} options
318are set, so ffmpeg will select streams for these two files automatically.
319
320@file{out1.mkv} is a Matroska container file and accepts video, audio and subtitle streams,
321so ffmpeg will try to select one of each type.@*
322For video, it will select @code{stream 0} from @file{B.mp4}, which has the highest
323resolution among all the input video streams.@*
324For audio, it will select @code{stream 3} from @file{B.mp4}, since it has the greatest
325number of channels.@*
326For subtitles, it will select @code{stream 2} from @file{B.mp4}, which is the first subtitle
327stream from among @file{A.avi} and @file{B.mp4}.
328
329@file{out2.wav} accepts only audio streams, so only @code{stream 3} from @file{B.mp4} is
330selected.
331
332For @file{out3.mov}, since a @code{-map} option is set, no automatic stream selection will
333occur. The @code{-map 1:a} option will select all audio streams from the second input
334@file{B.mp4}. No other streams will be included in this output file.
335
336For the first two outputs, all included streams will be transcoded. The encoders chosen will
337be the default ones registered by each output format, which may not match the codec of the
338selected input streams.
339
340For the third output, codec option for audio streams has been set
341to @code{copy}, so no decoding-filtering-encoding operations will occur, or @emph{can} occur.
342Packets of selected streams shall be conveyed from the input file and muxed within the output
343file.
344
345@subsubheading Example: automatic subtitles selection
346@example
347ffmpeg -i C.mkv out1.mkv -c:s dvdsub -an out2.mkv
348@end example
349Although @file{out1.mkv} is a Matroska container file which accepts subtitle streams, only a
350video and audio stream shall be selected. The subtitle stream of @file{C.mkv} is image-based
351and the default subtitle encoder of the Matroska muxer is text-based, so a transcode operation
352for the subtitles is expected to fail and hence the stream isn't selected. However, in
353@file{out2.mkv}, a subtitle encoder is specified in the command and so, the subtitle stream is
354selected, in addition to the video stream. The presence of @code{-an} disables audio stream
355selection for @file{out2.mkv}.
356
357@subsubheading Example: unlabeled filtergraph outputs
358@example
359ffmpeg -i A.avi -i C.mkv -i B.mp4 -filter_complex "overlay" out1.mp4 out2.srt
360@end example
361A filtergraph is setup here using the @code{-filter_complex} option and consists of a single
362video filter. The @code{overlay} filter requires exactly two video inputs, but none are
363specified, so the first two available video streams are used, those of @file{A.avi} and
364@file{C.mkv}. The output pad of the filter has no label and so is sent to the first output file
365@file{out1.mp4}. Due to this, automatic selection of the video stream is skipped, which would
366have selected the stream in @file{B.mp4}. The audio stream with most channels viz. @code{stream 3}
367in @file{B.mp4}, is chosen automatically. No subtitle stream is chosen however, since the MP4
368format has no default subtitle encoder registered, and the user hasn't specified a subtitle encoder.
369
370The 2nd output file, @file{out2.srt}, only accepts text-based subtitle streams. So, even though
371the first subtitle stream available belongs to @file{C.mkv}, it is image-based and hence skipped.
372The selected stream, @code{stream 2} in @file{B.mp4}, is the first text-based subtitle stream.
373
374@subsubheading Example: labeled filtergraph outputs
375@example
376ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
377       -map '[outv]' -an        out1.mp4 \
378                                out2.mkv \
379       -map '[outv]' -map 1:a:0 out3.mkv
380@end example
381
382The above command will fail, as the output pad labelled @code{[outv]} has been mapped twice.
383None of the output files shall be processed.
384
385@example
386ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
387       -an        out1.mp4 \
388                  out2.mkv \
389       -map 1:a:0 out3.mkv
390@end example
391
392This command above will also fail as the hue filter output has a label, @code{[outv]},
393and hasn't been mapped anywhere.
394
395The command should be modified as follows,
396@example
397ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0,split=2[outv1][outv2];overlay;aresample" \
398        -map '[outv1]' -an        out1.mp4 \
399                                  out2.mkv \
400        -map '[outv2]' -map 1:a:0 out3.mkv
401@end example
402The video stream from @file{B.mp4} is sent to the hue filter, whose output is cloned once using
403the split filter, and both outputs labelled. Then a copy each is mapped to the first and third
404output files.
405
406The overlay filter, requiring two video inputs, uses the first two unused video streams. Those
407are the streams from @file{A.avi} and @file{C.mkv}. The overlay output isn't labelled, so it is
408sent to the first output file @file{out1.mp4}, regardless of the presence of the @code{-map} option.
409
410The aresample filter is sent the first unused audio stream, that of @file{A.avi}. Since this filter
411output is also unlabelled, it too is mapped to the first output file. The presence of @code{-an}
412only suppresses automatic or manual stream selection of audio streams, not outputs sent from
413filtergraphs. Both these mapped streams shall be ordered before the mapped stream in @file{out1.mp4}.
414
415The video, audio and subtitle streams mapped to @code{out2.mkv} are entirely determined by
416automatic stream selection.
417
418@file{out3.mkv} consists of the cloned video output from the hue filter and the first audio
419stream from @file{B.mp4}.
420@*
421
422@c man end STREAM SELECTION
423
424@chapter Options
425@c man begin OPTIONS
426
427@include fftools-common-opts.texi
428
429@section Main options
430
431@table @option
432
433@item -f @var{fmt} (@emph{input/output})
434Force input or output file format. The format is normally auto detected for input
435files and guessed from the file extension for output files, so this option is not
436needed in most cases.
437
438@item -i @var{url} (@emph{input})
439input file url
440
441@item -y (@emph{global})
442Overwrite output files without asking.
443
444@item -n (@emph{global})
445Do not overwrite output files, and exit immediately if a specified
446output file already exists.
447
448@item -stream_loop @var{number} (@emph{input})
449Set number of times input stream shall be looped. Loop 0 means no loop,
450loop -1 means infinite loop.
451
452@item -recast_media (@emph{global})
453Allow forcing a decoder of a different media type than the one
454detected or designated by the demuxer. Useful for decoding media
455data muxed as data streams.
456
457@item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
458@itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
459Select an encoder (when used before an output file) or a decoder (when used
460before an input file) for one or more streams. @var{codec} is the name of a
461decoder/encoder or a special value @code{copy} (output only) to indicate that
462the stream is not to be re-encoded.
463
464For example
465@example
466ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
467@end example
468encodes all video streams with libx264 and copies all audio streams.
469
470For each stream, the last matching @code{c} option is applied, so
471@example
472ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
473@end example
474will copy all the streams except the second video, which will be encoded with
475libx264, and the 138th audio, which will be encoded with libvorbis.
476
477@item -t @var{duration} (@emph{input/output})
478When used as an input option (before @code{-i}), limit the @var{duration} of
479data read from the input file.
480
481When used as an output option (before an output url), stop writing the
482output after its duration reaches @var{duration}.
483
484@var{duration} must be a time duration specification,
485see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
486
487-to and -t are mutually exclusive and -t has priority.
488
489@item -to @var{position} (@emph{input/output})
490Stop writing the output or reading the input at @var{position}.
491@var{position} must be a time duration specification,
492see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
493
494-to and -t are mutually exclusive and -t has priority.
495
496@item -fs @var{limit_size} (@emph{output})
497Set the file size limit, expressed in bytes. No further chunk of bytes is written
498after the limit is exceeded. The size of the output file is slightly more than the
499requested file size.
500
501@item -ss @var{position} (@emph{input/output})
502When used as an input option (before @code{-i}), seeks in this input file to
503@var{position}. Note that in most formats it is not possible to seek exactly,
504so @command{ffmpeg} will seek to the closest seek point before @var{position}.
505When transcoding and @option{-accurate_seek} is enabled (the default), this
506extra segment between the seek point and @var{position} will be decoded and
507discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
508will be preserved.
509
510When used as an output option (before an output url), decodes but discards
511input until the timestamps reach @var{position}.
512
513@var{position} must be a time duration specification,
514see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
515
516@item -sseof @var{position} (@emph{input})
517
518Like the @code{-ss} option but relative to the "end of file". That is negative
519values are earlier in the file, 0 is at EOF.
520
521@item -isync @var{input_index} (@emph{input})
522Assign an input as a sync source.
523
524This will take the difference between the start times of the target and reference inputs and
525offset the timestamps of the target file by that difference. The source timestamps of the two
526inputs should derive from the same clock source for expected results. If @code{copyts} is set
527then @code{start_at_zero} must also be set. If either of the inputs has no starting timestamp
528then no sync adjustment is made.
529
530Acceptable values are those that refer to a valid ffmpeg input index. If the sync reference is
531the target index itself or @var{-1}, then no adjustment is made to target timestamps. A sync
532reference may not itself be synced to any other input.
533
534Default value is @var{-1}.
535
536@item -itsoffset @var{offset} (@emph{input})
537Set the input time offset.
538
539@var{offset} must be a time duration specification,
540see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
541
542The offset is added to the timestamps of the input files. Specifying
543a positive offset means that the corresponding streams are delayed by
544the time duration specified in @var{offset}.
545
546@item -itsscale @var{scale} (@emph{input,per-stream})
547Rescale input timestamps. @var{scale} should be a floating point number.
548
549@item -timestamp @var{date} (@emph{output})
550Set the recording timestamp in the container.
551
552@var{date} must be a date specification,
553see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
554
555@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
556Set a metadata key/value pair.
557
558An optional @var{metadata_specifier} may be given to set metadata
559on streams, chapters or programs. See @code{-map_metadata}
560documentation for details.
561
562This option overrides metadata set with @code{-map_metadata}. It is
563also possible to delete metadata by using an empty value.
564
565For example, for setting the title in the output file:
566@example
567ffmpeg -i in.avi -metadata title="my title" out.flv
568@end example
569
570To set the language of the first audio stream:
571@example
572ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
573@end example
574
575@item -disposition[:stream_specifier] @var{value} (@emph{output,per-stream})
576Sets the disposition for a stream.
577
578By default, the disposition is copied from the input stream, unless the output
579stream this option applies to is fed by a complex filtergraph - in that case the
580disposition is unset by default.
581
582@var{value} is a sequence of items separated by '+' or '-'. The first item may
583also be prefixed with '+' or '-', in which case this option modifies the default
584value. Otherwise (the first item is not prefixed) this options overrides the
585default value. A '+' prefix adds the given disposition, '-' removes it. It is
586also possible to clear the disposition by setting it to 0.
587
588If no @code{-disposition} options were specified for an output file, ffmpeg will
589automatically set the 'default' disposition on the first stream of each type,
590when there are multiple streams of this type in the output file and no stream of
591that type is already marked as default.
592
593The @code{-dispositions} option lists the known dispositions.
594
595For example, to make the second audio stream the default stream:
596@example
597ffmpeg -i in.mkv -c copy -disposition:a:1 default out.mkv
598@end example
599
600To make the second subtitle stream the default stream and remove the default
601disposition from the first subtitle stream:
602@example
603ffmpeg -i in.mkv -c copy -disposition:s:0 0 -disposition:s:1 default out.mkv
604@end example
605
606To add an embedded cover/thumbnail:
607@example
608ffmpeg -i in.mp4 -i IMAGE -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic out.mp4
609@end example
610
611Not all muxers support embedded thumbnails, and those who do, only support a few formats, like JPEG or PNG.
612
613@item -program [title=@var{title}:][program_num=@var{program_num}:]st=@var{stream}[:st=@var{stream}...] (@emph{output})
614
615Creates a program with the specified @var{title}, @var{program_num} and adds the specified
616@var{stream}(s) to it.
617
618@item -target @var{type} (@emph{output})
619Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
620@code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
621@code{film-} to use the corresponding standard. All the format options
622(bitrate, codecs, buffer sizes) are then set automatically. You can just type:
623
624@example
625ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
626@end example
627
628Nevertheless you can specify additional options as long as you know
629they do not conflict with the standard, as in:
630
631@example
632ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
633@end example
634
635The parameters set for each target are as follows.
636
637@strong{VCD}
638@example
639@var{pal}:
640-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
641-s 352x288 -r 25
642-codec:v mpeg1video -g 15 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
643-ar 44100 -ac 2
644-codec:a mp2 -b:a 224k
645
646@var{ntsc}:
647-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
648-s 352x240 -r 30000/1001
649-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
650-ar 44100 -ac 2
651-codec:a mp2 -b:a 224k
652
653@var{film}:
654-f vcd -muxrate 1411200 -muxpreload 0.44 -packetsize 2324
655-s 352x240 -r 24000/1001
656-codec:v mpeg1video -g 18 -b:v 1150k -maxrate:v 1150k -minrate:v 1150k -bufsize:v 327680
657-ar 44100 -ac 2
658-codec:a mp2 -b:a 224k
659@end example
660
661@strong{SVCD}
662@example
663@var{pal}:
664-f svcd -packetsize 2324
665-s 480x576 -pix_fmt yuv420p -r 25
666-codec:v mpeg2video -g 15 -b:v 2040k -maxrate:v 2516k -minrate:v 0 -bufsize:v 1835008 -scan_offset 1
667-ar 44100
668-codec:a mp2 -b:a 224k
669
670@var{ntsc}:
671-f svcd -packetsize 2324
672-s 480x480 -pix_fmt yuv420p -r 30000/1001
673-codec:v mpeg2video -g 18 -b:v 2040k -maxrate:v 2516k -minrate:v 0 -bufsize:v 1835008 -scan_offset 1
674-ar 44100
675-codec:a mp2 -b:a 224k
676
677@var{film}:
678-f svcd -packetsize 2324
679-s 480x480 -pix_fmt yuv420p -r 24000/1001
680-codec:v mpeg2video -g 18 -b:v 2040k -maxrate:v 2516k -minrate:v 0 -bufsize:v 1835008 -scan_offset 1
681-ar 44100
682-codec:a mp2 -b:a 224k
683@end example
684
685@strong{DVD}
686@example
687@var{pal}:
688-f dvd -muxrate 10080k -packetsize 2048
689-s 720x576 -pix_fmt yuv420p -r 25
690-codec:v mpeg2video -g 15 -b:v 6000k -maxrate:v 9000k -minrate:v 0 -bufsize:v 1835008
691-ar 48000
692-codec:a ac3 -b:a 448k
693
694@var{ntsc}:
695-f dvd -muxrate 10080k -packetsize 2048
696-s 720x480 -pix_fmt yuv420p -r 30000/1001
697-codec:v mpeg2video -g 18 -b:v 6000k -maxrate:v 9000k -minrate:v 0 -bufsize:v 1835008
698-ar 48000
699-codec:a ac3 -b:a 448k
700
701@var{film}:
702-f dvd -muxrate 10080k -packetsize 2048
703-s 720x480 -pix_fmt yuv420p -r 24000/1001
704-codec:v mpeg2video -g 18 -b:v 6000k -maxrate:v 9000k -minrate:v 0 -bufsize:v 1835008
705-ar 48000
706-codec:a ac3 -b:a 448k
707@end example
708
709@strong{DV}
710@example
711@var{pal}:
712-f dv
713-s 720x576 -pix_fmt yuv420p -r 25
714-ar 48000 -ac 2
715
716@var{ntsc}:
717-f dv
718-s 720x480 -pix_fmt yuv411p -r 30000/1001
719-ar 48000 -ac 2
720
721@var{film}:
722-f dv
723-s 720x480 -pix_fmt yuv411p -r 24000/1001
724-ar 48000 -ac 2
725@end example
726The @code{dv50} target is identical to the @code{dv} target except that the pixel format set is @code{yuv422p} for all three standards.
727
728Any user-set value for a parameter above will override the target preset value. In that case, the output may
729not comply with the target standard.
730
731@item -dn (@emph{input/output})
732As an input option, blocks all data streams of a file from being filtered or
733being automatically selected or mapped for any output. See @code{-discard}
734option to disable streams individually.
735
736As an output option, disables data recording i.e. automatic selection or
737mapping of any data stream. For full manual control see the @code{-map}
738option.
739
740@item -dframes @var{number} (@emph{output})
741Set the number of data frames to output. This is an obsolete alias for
742@code{-frames:d}, which you should use instead.
743
744@item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
745Stop writing to the stream after @var{framecount} frames.
746
747@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
748@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
749Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
750codec-dependent.
751If @var{qscale} is used without a @var{stream_specifier} then it applies only
752to the video stream, this is to maintain compatibility with previous behavior
753and as specifying the same codec specific value to 2 different codecs that is
754audio and video generally is not what is intended when no stream_specifier is
755used.
756
757@anchor{filter_option}
758@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
759Create the filtergraph specified by @var{filtergraph} and use it to
760filter the stream.
761
762@var{filtergraph} is a description of the filtergraph to apply to
763the stream, and must have a single input and a single output of the
764same type of the stream. In the filtergraph, the input is associated
765to the label @code{in}, and the output to the label @code{out}. See
766the ffmpeg-filters manual for more information about the filtergraph
767syntax.
768
769See the @ref{filter_complex_option,,-filter_complex option} if you
770want to create filtergraphs with multiple inputs and/or outputs.
771
772@item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
773This option is similar to @option{-filter}, the only difference is that its
774argument is the name of the file from which a filtergraph description is to be
775read.
776
777@item -reinit_filter[:@var{stream_specifier}] @var{integer} (@emph{input,per-stream})
778This boolean option determines if the filtergraph(s) to which this stream is fed gets
779reinitialized when input frame parameters change mid-stream. This option is enabled by
780default as most video and all audio filters cannot handle deviation in input frame properties.
781Upon reinitialization, existing filter state is lost, like e.g. the frame count @code{n}
782reference available in some filters. Any frames buffered at time of reinitialization are lost.
783The properties where a change triggers reinitialization are,
784for video, frame resolution or pixel format;
785for audio, sample format, sample rate, channel count or channel layout.
786
787@item -filter_threads @var{nb_threads} (@emph{global})
788Defines how many threads are used to process a filter pipeline. Each pipeline
789will produce a thread pool with this many threads available for parallel processing.
790The default is the number of available CPUs.
791
792@item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
793Specify the preset for matching stream(s).
794
795@item -stats (@emph{global})
796Print encoding progress/statistics. It is on by default, to explicitly
797disable it you need to specify @code{-nostats}.
798
799@item -stats_period @var{time} (@emph{global})
800Set period at which encoding progress/statistics are updated. Default is 0.5 seconds.
801
802@item -progress @var{url} (@emph{global})
803Send program-friendly progress information to @var{url}.
804
805Progress information is written periodically and at the end of
806the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
807consists of only alphanumeric characters. The last key of a sequence of
808progress information is always "progress".
809
810The update period is set using @code{-stats_period}.
811
812@anchor{stdin option}
813@item -stdin
814Enable interaction on standard input. On by default unless standard input is
815used as an input. To explicitly disable interaction you need to specify
816@code{-nostdin}.
817
818Disabling interaction on standard input is useful, for example, if
819ffmpeg is in the background process group. Roughly the same result can
820be achieved with @code{ffmpeg ... < /dev/null} but it requires a
821shell.
822
823@item -debug_ts (@emph{global})
824Print timestamp information. It is off by default. This option is
825mostly useful for testing and debugging purposes, and the output
826format may change from one version to another, so it should not be
827employed by portable scripts.
828
829See also the option @code{-fdebug ts}.
830
831@item -attach @var{filename} (@emph{output})
832Add an attachment to the output file. This is supported by a few formats
833like Matroska for e.g. fonts used in rendering subtitles. Attachments
834are implemented as a specific type of stream, so this option will add
835a new stream to the file. It is then possible to use per-stream options
836on this stream in the usual way. Attachment streams created with this
837option will be created after all the other streams (i.e. those created
838with @code{-map} or automatic mappings).
839
840Note that for Matroska you also have to set the mimetype metadata tag:
841@example
842ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
843@end example
844(assuming that the attachment stream will be third in the output file).
845
846@item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
847Extract the matching attachment stream into a file named @var{filename}. If
848@var{filename} is empty, then the value of the @code{filename} metadata tag
849will be used.
850
851E.g. to extract the first attachment to a file named 'out.ttf':
852@example
853ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
854@end example
855To extract all attachments to files determined by the @code{filename} tag:
856@example
857ffmpeg -dump_attachment:t "" -i INPUT
858@end example
859
860Technical note -- attachments are implemented as codec extradata, so this
861option can actually be used to extract extradata from any stream, not just
862attachments.
863@end table
864
865@section Video Options
866
867@table @option
868@item -vframes @var{number} (@emph{output})
869Set the number of video frames to output. This is an obsolete alias for
870@code{-frames:v}, which you should use instead.
871@item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
872Set frame rate (Hz value, fraction or abbreviation).
873
874As an input option, ignore any timestamps stored in the file and instead
875generate timestamps assuming constant frame rate @var{fps}.
876This is not the same as the @option{-framerate} option used for some input formats
877like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
878If in doubt use @option{-framerate} instead of the input option @option{-r}.
879
880As an output option, duplicate or drop input frames to achieve constant output
881frame rate @var{fps}.
882
883@item -fpsmax[:@var{stream_specifier}] @var{fps} (@emph{output,per-stream})
884Set maximum frame rate (Hz value, fraction or abbreviation).
885
886Clamps output frame rate when output framerate is auto-set and is higher than this value.
887Useful in batch processing or when input framerate is wrongly detected as very high.
888It cannot be set together with @code{-r}. It is ignored during streamcopy.
889
890@item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
891Set frame size.
892
893As an input option, this is a shortcut for the @option{video_size} private
894option, recognized by some demuxers for which the frame size is either not
895stored in the file or is configurable -- e.g. raw video or video grabbers.
896
897As an output option, this inserts the @code{scale} video filter to the
898@emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
899directly to insert it at the beginning or some other place.
900
901The format is @samp{wxh} (default - same as source).
902
903@item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
904Set the video display aspect ratio specified by @var{aspect}.
905
906@var{aspect} can be a floating point number string, or a string of the
907form @var{num}:@var{den}, where @var{num} and @var{den} are the
908numerator and denominator of the aspect ratio. For example "4:3",
909"16:9", "1.3333", and "1.7777" are valid argument values.
910
911If used together with @option{-vcodec copy}, it will affect the aspect ratio
912stored at container level, but not the aspect ratio stored in encoded
913frames, if it exists.
914
915@item -vn (@emph{input/output})
916As an input option, blocks all video streams of a file from being filtered or
917being automatically selected or mapped for any output. See @code{-discard}
918option to disable streams individually.
919
920As an output option, disables video recording i.e. automatic selection or
921mapping of any video stream. For full manual control see the @code{-map}
922option.
923
924@item -vcodec @var{codec} (@emph{output})
925Set the video codec. This is an alias for @code{-codec:v}.
926
927@item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
928Select the pass number (1 or 2). It is used to do two-pass
929video encoding. The statistics of the video are recorded in the first
930pass into a log file (see also the option -passlogfile),
931and in the second pass that log file is used to generate the video
932at the exact requested bitrate.
933On pass 1, you may just deactivate audio and set output to null,
934examples for Windows and Unix:
935@example
936ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
937ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
938@end example
939
940@item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
941Set two-pass log file name prefix to @var{prefix}, the default file name
942prefix is ``ffmpeg2pass''. The complete file name will be
943@file{PREFIX-N.log}, where N is a number specific to the output
944stream
945
946@item -vf @var{filtergraph} (@emph{output})
947Create the filtergraph specified by @var{filtergraph} and use it to
948filter the stream.
949
950This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
951
952@item -autorotate
953Automatically rotate the video according to file metadata. Enabled by
954default, use @option{-noautorotate} to disable it.
955
956@item -autoscale
957Automatically scale the video according to the resolution of first frame.
958Enabled by default, use @option{-noautoscale} to disable it. When autoscale is
959disabled, all output frames of filter graph might not be in the same resolution
960and may be inadequate for some encoder/muxer. Therefore, it is not recommended
961to disable it unless you really know what you are doing.
962Disable autoscale at your own risk.
963@end table
964
965@section Advanced Video options
966
967@table @option
968@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
969Set pixel format. Use @code{-pix_fmts} to show all the supported
970pixel formats.
971If the selected pixel format can not be selected, ffmpeg will print a
972warning and select the best pixel format supported by the encoder.
973If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
974if the requested pixel format can not be selected, and automatic conversions
975inside filtergraphs are disabled.
976If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
977as the input (or graph output) and automatic conversions are disabled.
978
979@item -sws_flags @var{flags} (@emph{input/output})
980Set SwScaler flags.
981
982@item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
983Rate control override for specific intervals, formatted as "int,int,int"
984list separated with slashes. Two first values are the beginning and
985end frame numbers, last one is quantizer to use if positive, or quality
986factor if negative.
987
988@item -ilme
989Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
990Use this option if your input file is interlaced and you want
991to keep the interlaced format for minimum losses.
992The alternative is to deinterlace the input stream by use of a filter
993such as @code{yadif} or @code{bwdif}, but deinterlacing introduces losses.
994@item -psnr
995Calculate PSNR of compressed frames.
996@item -vstats
997Dump video coding statistics to @file{vstats_HHMMSS.log}.
998@item -vstats_file @var{file}
999Dump video coding statistics to @var{file}.
1000@item -vstats_version @var{file}
1001Specifies which version of the vstats format to use. Default is 2.
1002
1003version = 1 :
1004
1005@code{frame= %5d q= %2.1f PSNR= %6.2f f_size= %6d s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s}
1006
1007version > 1:
1008
1009@code{out= %2d st= %2d frame= %5d q= %2.1f PSNR= %6.2f f_size= %6d s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s}
1010@item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
1011top=1/bottom=0/auto=-1 field first
1012@item -dc @var{precision}
1013Intra_dc_precision.
1014@item -vtag @var{fourcc/tag} (@emph{output})
1015Force video tag/fourcc. This is an alias for @code{-tag:v}.
1016@item -qphist (@emph{global})
1017Show QP histogram
1018@item -vbsf @var{bitstream_filter}
1019Deprecated see -bsf
1020
1021@item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
1022@item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
1023@item -force_key_frames[:@var{stream_specifier}] source (@emph{output,per-stream})
1024@item -force_key_frames[:@var{stream_specifier}] source_no_drop (@emph{output,per-stream})
1025
1026@var{force_key_frames} can take arguments of the following form:
1027
1028@table @option
1029
1030@item @var{time}[,@var{time}...]
1031If the argument consists of timestamps, ffmpeg will round the specified times to the nearest
1032output timestamp as per the encoder time base and force a keyframe at the first frame having
1033timestamp equal or greater than the computed timestamp. Note that if the encoder time base is too
1034coarse, then the keyframes may be forced on frames with timestamps lower than the specified time.
1035The default encoder time base is the inverse of the output framerate but may be set otherwise
1036via @code{-enc_time_base}.
1037
1038If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
1039the time of the beginning of all chapters in the file, shifted by
1040@var{delta}, expressed as a time in seconds.
1041This option can be useful to ensure that a seek point is present at a
1042chapter mark or any other designated place in the output file.
1043
1044For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
1045before the beginning of every chapter:
1046@example
1047-force_key_frames 0:05:00,chapters-0.1
1048@end example
1049
1050@item expr:@var{expr}
1051If the argument is prefixed with @code{expr:}, the string @var{expr}
1052is interpreted like an expression and is evaluated for each frame. A
1053key frame is forced in case the evaluation is non-zero.
1054
1055The expression in @var{expr} can contain the following constants:
1056@table @option
1057@item n
1058the number of current processed frame, starting from 0
1059@item n_forced
1060the number of forced frames
1061@item prev_forced_n
1062the number of the previous forced frame, it is @code{NAN} when no
1063keyframe was forced yet
1064@item prev_forced_t
1065the time of the previous forced frame, it is @code{NAN} when no
1066keyframe was forced yet
1067@item t
1068the time of the current processed frame
1069@end table
1070
1071For example to force a key frame every 5 seconds, you can specify:
1072@example
1073-force_key_frames expr:gte(t,n_forced*5)
1074@end example
1075
1076To force a key frame 5 seconds after the time of the last forced one,
1077starting from second 13:
1078@example
1079-force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
1080@end example
1081
1082@item source
1083If the argument is @code{source}, ffmpeg will force a key frame if
1084the current frame being encoded is marked as a key frame in its source.
1085
1086@item source_no_drop
1087If the argument is @code{source_no_drop}, ffmpeg will force a key frame if
1088the current frame being encoded is marked as a key frame in its source.
1089In cases where this particular source frame has to be dropped,
1090enforce the next available frame to become a key frame instead.
1091
1092@end table
1093
1094Note that forcing too many keyframes is very harmful for the lookahead
1095algorithms of certain encoders: using fixed-GOP options or similar
1096would be more efficient.
1097
1098@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
1099When doing stream copy, copy also non-key frames found at the
1100beginning.
1101
1102@item -init_hw_device @var{type}[=@var{name}][:@var{device}[,@var{key=value}...]]
1103Initialise a new hardware device of type @var{type} called @var{name}, using the
1104given device parameters.
1105If no name is specified it will receive a default name of the form "@var{type}%d".
1106
1107The meaning of @var{device} and the following arguments depends on the
1108device type:
1109@table @option
1110
1111@item cuda
1112@var{device} is the number of the CUDA device.
1113
1114The following options are recognized:
1115@table @option
1116@item primary_ctx
1117If set to 1, uses the primary device context instead of creating a new one.
1118@end table
1119
1120Examples:
1121@table @emph
1122@item -init_hw_device cuda:1
1123Choose the second device on the system.
1124
1125@item -init_hw_device cuda:0,primary_ctx=1
1126Choose the first device and use the primary device context.
1127@end table
1128
1129@item dxva2
1130@var{device} is the number of the Direct3D 9 display adapter.
1131
1132@item d3d11va
1133@var{device} is the number of the Direct3D 11 display adapter.
1134
1135@item vaapi
1136@var{device} is either an X11 display name or a DRM render node.
1137If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY})
1138and then the first DRM render node (@emph{/dev/dri/renderD128}).
1139
1140@item vdpau
1141@var{device} is an X11 display name.
1142If not specified, it will attempt to open the default X11 display (@emph{$DISPLAY}).
1143
1144@item qsv
1145@var{device} selects a value in @samp{MFX_IMPL_*}. Allowed values are:
1146@table @option
1147@item auto
1148@item sw
1149@item hw
1150@item auto_any
1151@item hw_any
1152@item hw2
1153@item hw3
1154@item hw4
1155@end table
1156If not specified, @samp{auto_any} is used.
1157(Note that it may be easier to achieve the desired result for QSV by creating the
1158platform-appropriate subdevice (@samp{dxva2} or @samp{d3d11va} or @samp{vaapi}) and then deriving a
1159QSV device from that.)
1160
1161Alternatively, @samp{child_device_type} helps to choose platform-appropriate subdevice type.
1162On Windows @samp{d3d11va} is used as default subdevice type.
1163
1164Examples:
1165@table @emph
1166@item -init_hw_device qsv:hw,child_device_type=d3d11va
1167Choose the GPU subdevice with type @samp{d3d11va} and create QSV device with @samp{MFX_IMPL_HARDWARE}.
1168
1169@item -init_hw_device qsv:hw,child_device_type=dxva2
1170Choose the GPU subdevice with type @samp{dxva2} and create QSV device with @samp{MFX_IMPL_HARDWARE}.
1171@end table
1172
1173@item opencl
1174@var{device} selects the platform and device as @emph{platform_index.device_index}.
1175
1176The set of devices can also be filtered using the key-value pairs to find only
1177devices matching particular platform or device strings.
1178
1179The strings usable as filters are:
1180@table @option
1181@item platform_profile
1182@item platform_version
1183@item platform_name
1184@item platform_vendor
1185@item platform_extensions
1186@item device_name
1187@item device_vendor
1188@item driver_version
1189@item device_version
1190@item device_profile
1191@item device_extensions
1192@item device_type
1193@end table
1194
1195The indices and filters must together uniquely select a device.
1196
1197Examples:
1198@table @emph
1199@item -init_hw_device opencl:0.1
1200Choose the second device on the first platform.
1201
1202@item -init_hw_device opencl:,device_name=Foo9000
1203Choose the device with a name containing the string @emph{Foo9000}.
1204
1205@item -init_hw_device opencl:1,device_type=gpu,device_extensions=cl_khr_fp16
1206Choose the GPU device on the second platform supporting the @emph{cl_khr_fp16}
1207extension.
1208@end table
1209
1210@item vulkan
1211If @var{device} is an integer, it selects the device by its index in a
1212system-dependent list of devices.  If @var{device} is any other string, it
1213selects the first device with a name containing that string as a substring.
1214
1215The following options are recognized:
1216@table @option
1217@item debug
1218If set to 1, enables the validation layer, if installed.
1219@item linear_images
1220If set to 1, images allocated by the hwcontext will be linear and locally mappable.
1221@item instance_extensions
1222A plus separated list of additional instance extensions to enable.
1223@item device_extensions
1224A plus separated list of additional device extensions to enable.
1225@end table
1226
1227Examples:
1228@table @emph
1229@item -init_hw_device vulkan:1
1230Choose the second device on the system.
1231
1232@item -init_hw_device vulkan:RADV
1233Choose the first device with a name containing the string @emph{RADV}.
1234
1235@item -init_hw_device vulkan:0,instance_extensions=VK_KHR_wayland_surface+VK_KHR_xcb_surface
1236Choose the first device and enable the Wayland and XCB instance extensions.
1237@end table
1238
1239@end table
1240
1241@item -init_hw_device @var{type}[=@var{name}]@@@var{source}
1242Initialise a new hardware device of type @var{type} called @var{name},
1243deriving it from the existing device with the name @var{source}.
1244
1245@item -init_hw_device list
1246List all hardware device types supported in this build of ffmpeg.
1247
1248@item -filter_hw_device @var{name}
1249Pass the hardware device called @var{name} to all filters in any filter graph.
1250This can be used to set the device to upload to with the @code{hwupload} filter,
1251or the device to map to with the @code{hwmap} filter.  Other filters may also
1252make use of this parameter when they require a hardware device.  Note that this
1253is typically only required when the input is not already in hardware frames -
1254when it is, filters will derive the device they require from the context of the
1255frames they receive as input.
1256
1257This is a global setting, so all filters will receive the same device.
1258
1259@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
1260Use hardware acceleration to decode the matching stream(s). The allowed values
1261of @var{hwaccel} are:
1262@table @option
1263@item none
1264Do not use any hardware acceleration (the default).
1265
1266@item auto
1267Automatically select the hardware acceleration method.
1268
1269@item vdpau
1270Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
1271
1272@item dxva2
1273Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
1274
1275@item d3d11va
1276Use D3D11VA (DirectX Video Acceleration) hardware acceleration.
1277
1278@item vaapi
1279Use VAAPI (Video Acceleration API) hardware acceleration.
1280
1281@item qsv
1282Use the Intel QuickSync Video acceleration for video transcoding.
1283
1284Unlike most other values, this option does not enable accelerated decoding (that
1285is used automatically whenever a qsv decoder is selected), but accelerated
1286transcoding, without copying the frames into the system memory.
1287
1288For it to work, both the decoder and the encoder must support QSV acceleration
1289and no filters must be used.
1290@end table
1291
1292This option has no effect if the selected hwaccel is not available or not
1293supported by the chosen decoder.
1294
1295Note that most acceleration methods are intended for playback and will not be
1296faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
1297will usually need to copy the decoded frames from the GPU memory into the system
1298memory, resulting in further performance loss. This option is thus mainly
1299useful for testing.
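
For example, to test VAAPI-accelerated decoding of a file while discarding the
decoded frames (a minimal sketch; the render node path is an assumption about
the system):
@example
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i input.mp4 -f null -
@end example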
1300
1301@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
1302Select a device to use for hardware acceleration.
1303
1304This option only makes sense when the @option{-hwaccel} option is also specified.
1305It can either refer to an existing device created with @option{-init_hw_device}
1306by name, or it can create a new device as if
1307@samp{-init_hw_device} @var{type}:@var{hwaccel_device}
1308were called immediately before.
1309
1310@item -hwaccels
1311List all hardware acceleration components enabled in this build of ffmpeg.
Actual runtime availability depends on the hardware and on a suitable driver
being installed.
1314
1315@end table
1316
1317@section Audio Options
1318
1319@table @option
1320@item -aframes @var{number} (@emph{output})
1321Set the number of audio frames to output. This is an obsolete alias for
1322@code{-frames:a}, which you should use instead.
1323@item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
1324Set the audio sampling frequency. For output streams it is set by
1325default to the frequency of the corresponding input stream. For input
1326streams this option only makes sense for audio grabbing devices and raw
1327demuxers and is mapped to the corresponding demuxer options.
1328@item -aq @var{q} (@emph{output})
Set the audio quality (codec-specific, VBR). This is an alias for @code{-q:a}.
1330@item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
1331Set the number of audio channels. For output streams it is set by
1332default to the number of input audio channels. For input streams
1333this option only makes sense for audio grabbing devices and raw demuxers
1334and is mapped to the corresponding demuxer options.
1335@item -an (@emph{input/output})
1336As an input option, blocks all audio streams of a file from being filtered or
being automatically selected or mapped for any output. See the @code{-discard}
1338option to disable streams individually.
1339
1340As an output option, disables audio recording i.e. automatic selection or
1341mapping of any audio stream. For full manual control see the @code{-map}
1342option.
1343@item -acodec @var{codec} (@emph{input/output})
1344Set the audio codec. This is an alias for @code{-codec:a}.
1345@item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
1346Set the audio sample format. Use @code{-sample_fmts} to get a list
1347of supported sample formats.
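
For example, to convert an input to 48 kHz stereo 16-bit PCM using these audio
options (a minimal sketch; the file names are placeholders):
@example
ffmpeg -i input.mp3 -ar 48000 -ac 2 -sample_fmt s16 output.wav
@end example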
1348
1349@item -af @var{filtergraph} (@emph{output})
1350Create the filtergraph specified by @var{filtergraph} and use it to
1351filter the stream.
1352
1353This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
1354@end table
1355
1356@section Advanced Audio options
1357
1358@table @option
1359@item -atag @var{fourcc/tag} (@emph{output})
1360Force audio tag/fourcc. This is an alias for @code{-tag:a}.
1361@item -absf @var{bitstream_filter}
Deprecated, see @option{-bsf}.
1363@item -guess_layout_max @var{channels} (@emph{input,per-stream})
1364If some input channel layout is not known, try to guess only if it
1365corresponds to at most the specified number of channels. For example, 2
tells @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
stereo, but not 6 channels as 5.1. The default is to always try to guess. Use
13680 to disable all guessing.
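
For example, to disable channel layout guessing entirely for an input
(a minimal sketch with placeholder file names):
@example
ffmpeg -guess_layout_max 0 -i INPUT OUTPUT
@end example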
1369@end table
1370
1371@section Subtitle options
1372
1373@table @option
1374@item -scodec @var{codec} (@emph{input/output})
1375Set the subtitle codec. This is an alias for @code{-codec:s}.
1376@item -sn (@emph{input/output})
1377As an input option, blocks all subtitle streams of a file from being filtered or
being automatically selected or mapped for any output. See the @code{-discard}
1379option to disable streams individually.
1380
1381As an output option, disables subtitle recording i.e. automatic selection or
1382mapping of any subtitle stream. For full manual control see the @code{-map}
1383option.
1384@item -sbsf @var{bitstream_filter}
Deprecated, see @option{-bsf}.
1386@end table
1387
1388@section Advanced Subtitle options
1389
1390@table @option
1391
1392@item -fix_sub_duration
Fix subtitle durations. For each subtitle, wait for the next packet in the
1394same stream and adjust the duration of the first to avoid overlap. This is
necessary with some subtitle codecs, especially DVB subtitles, because the
1396duration in the original packet is only a rough estimate and the end is
1397actually marked by an empty subtitle frame. Failing to use this option when
1398necessary can result in exaggerated durations or muxing failures due to
1399non-monotonic timestamps.
1400
1401Note that this option will delay the output of all data until the next
1402subtitle packet is decoded: it may increase memory consumption and latency a
1403lot.
1404
1405@item -canvas_size @var{size}
1406Set the size of the canvas used to render subtitles.
1407
1408@end table
1409
1410@section Advanced options
1411
1412@table @option
1413@item -map [-]@var{input_file_id}[:@var{stream_specifier}][?][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
1414
1415Designate one or more input streams as a source for the output file. Each input
stream is identified by the input file index @var{input_file_id} and
the stream index (or, more generally, a stream specifier) within that
file. Both indices start at 0. If specified,
1419@var{sync_file_id}:@var{stream_specifier} sets which input stream
1420is used as a presentation sync reference.
1421
1422The first @code{-map} option on the command line specifies the
1423source for output stream 0, the second @code{-map} option specifies
1424the source for output stream 1, etc.
1425
1426A @code{-} character before the stream identifier creates a "negative" mapping.
1427It disables matching streams from already created mappings.
1428
1429A trailing @code{?} after the stream index will allow the map to be
1430optional: if the map matches no streams the map will be ignored instead
1431of failing. Note the map will still fail if an invalid input file index
is used, such as when the map refers to a non-existent input.
1433
1434An alternative @var{[linklabel]} form will map outputs from complex filter
1435graphs (see the @option{-filter_complex} option) to the output file.
1436@var{linklabel} must correspond to a defined output link label in the graph.
1437
For example, to map ALL streams from the first input file to output:
1439@example
1440ffmpeg -i INPUT -map 0 output
1441@end example
1442
1443For example, if you have two audio streams in the first input file,
1444these streams are identified by "0:0" and "0:1". You can use
1445@code{-map} to select which streams to place in an output file. For
1446example:
1447@example
1448ffmpeg -i INPUT -map 0:1 out.wav
1449@end example
1450will map the input stream in @file{INPUT} identified by "0:1" to
1451the (single) output stream in @file{out.wav}.
1452
1453For example, to select the stream with index 2 from input file
1454@file{a.mov} (specified by the identifier "0:2"), and stream with
1455index 6 from input @file{b.mov} (specified by the identifier "1:6"),
1456and copy them to the output file @file{out.mov}:
1457@example
1458ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
1459@end example
1460
1461To select all video and the third audio stream from an input file:
1462@example
1463ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
1464@end example
1465
To map all the streams except the second audio, use negative mappings:
1467@example
1468ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
1469@end example
1470
1471To map the video and audio streams from the first input, and using the
1472trailing @code{?}, ignore the audio mapping if no audio streams exist in
1473the first input:
1474@example
1475ffmpeg -i INPUT -map 0:v -map 0:a? OUTPUT
1476@end example
1477
1478To pick the English audio stream:
1479@example
1480ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
1481@end example
1482
1483Note that using this option disables the default mappings for this output file.
1484
1485@item -ignore_unknown
1486Ignore input streams with unknown type instead of failing if copying
1487such streams is attempted.
1488
1489@item -copy_unknown
1490Allow input streams with unknown type to be copied instead of failing if copying
1491such streams is attempted.
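
For example, to stream copy all streams and keep unknown-type streams instead of
aborting (a minimal sketch; MPEG-TS is chosen as the output container because it
can carry data streams, and the file names are placeholders):
@example
ffmpeg -i input.ts -map 0 -c copy -copy_unknown output.ts
@end example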
1492
1493@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][?][:@var{output_file_id}.@var{stream_specifier}]
1494Map an audio channel from a given input to an output. If
1495@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
be mapped to all the audio streams.
1497
1498Using "-1" instead of
1499@var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
1500channel.
1501
1502A trailing @code{?} will allow the map_channel to be
1503optional: if the map_channel matches no channel the map_channel will be ignored instead
1504of failing.
1505
1506For example, assuming @var{INPUT} is a stereo audio file, you can switch the
1507two audio channels with the following command:
1508@example
1509ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
1510@end example
1511
1512If you want to mute the first channel and keep the second:
1513@example
1514ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
1515@end example
1516
1517The order of the "-map_channel" option specifies the order of the channels in
1518the output stream. The output channel layout is guessed from the number of
1519channels mapped (mono if one "-map_channel", stereo if two, etc.). Using "-ac"
in combination with "-map_channel" causes the channel gain levels to be updated if
1521input and output channel layouts don't match (for instance two "-map_channel"
1522options and "-ac 6").
1523
1524You can also extract each channel of an input to specific outputs; the following
1525command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
1526to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
1527@example
1528ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
1529@end example
1530
1531The following example splits the channels of a stereo input into two separate
1532streams, which are put into the same output file:
1533@example
1534ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
1535@end example
1536
1537Note that currently each output stream can only contain channels from a single
1538input stream; you can't for example use "-map_channel" to pick multiple input
1539audio channels contained in different streams (from the same or different files)
1540and merge them into a single output stream. It is therefore not currently
1541possible, for example, to turn two separate mono streams into a single stereo
stream. However, splitting a stereo stream into two single-channel mono streams
1543is possible.
1544
1545If you need this feature, a possible workaround is to use the @emph{amerge}
filter. For example, if you need to merge a media file (here @file{input.mkv}) with 2
mono audio streams into a single stereo audio stream (and keep the
1548video stream), you can use the following command:
1549@example
1550ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
1551@end example
1552
1553To map the first two audio channels from the first input, and using the
1554trailing @code{?}, ignore the audio channel mapping if the first input is
1555mono instead of stereo:
1556@example
1557ffmpeg -i INPUT -map_channel 0.0.0 -map_channel 0.0.1? OUTPUT
1558@end example
1559
1560@item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
1561Set metadata information of the next output file from @var{infile}. Note that
1562those are file indices (zero-based), not filenames.
The optional @var{metadata_spec_in/out} parameters specify which metadata to copy.
1564A metadata specifier can have the following forms:
1565@table @option
1566@item @var{g}
1567global metadata, i.e. metadata that applies to the whole file
1568
1569@item @var{s}[:@var{stream_spec}]
1570per-stream metadata. @var{stream_spec} is a stream specifier as described
1571in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
1572matching stream is copied from. In an output metadata specifier, all matching
1573streams are copied to.
1574
1575@item @var{c}:@var{chapter_index}
1576per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
1577
1578@item @var{p}:@var{program_index}
1579per-program metadata. @var{program_index} is the zero-based program index.
1580@end table
If the metadata specifier is omitted, it defaults to global.
1582
1583By default, global metadata is copied from the first input file,
1584per-stream and per-chapter metadata is copied along with streams/chapters. These
1585default mappings are disabled by creating any mapping of the relevant type. A negative
1586file index can be used to create a dummy mapping that just disables automatic copying.
1587
For example, to copy metadata from the first stream of the input file to global metadata
1589of the output file:
1590@example
1591ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
1592@end example
1593
1594To do the reverse, i.e. copy global metadata to all audio streams:
1595@example
1596ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
1597@end example
1598Note that simple @code{0} would work as well in this example, since global
1599metadata is assumed by default.
1600
1601@item -map_chapters @var{input_file_index} (@emph{output})
1602Copy chapters from input file with index @var{input_file_index} to the next
1603output file. If no chapter mapping is specified, then chapters are copied from
1604the first input file with at least one chapter. Use a negative file index to
1605disable any chapter copying.
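
For example, to take the chapters from the second input while copying all streams
from the first (a minimal sketch with placeholder file names):
@example
ffmpeg -i input.mkv -i chapters.mkv -map 0 -map_chapters 1 -c copy output.mkv
@end example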
1606
1607@item -benchmark (@emph{global})
1608Show benchmarking information at the end of an encode.
1609Shows real, system and user time used and maximum memory consumption.
Maximum memory consumption is not supported on all systems;
1611it will usually display as 0 if not supported.
1612@item -benchmark_all (@emph{global})
1613Show benchmarking information during the encode.
1614Shows real, system and user time used in various steps (audio/video encode/decode).
1615@item -timelimit @var{duration} (@emph{global})
1616Exit after ffmpeg has been running for @var{duration} seconds in CPU user time.
1617@item -dump (@emph{global})
1618Dump each input packet to stderr.
1619@item -hex (@emph{global})
1620When dumping packets, also dump the payload.
1621@item -readrate @var{speed} (@emph{input})
1622Limit input read speed.
1623
Its value is a positive floating-point number which represents the maximum duration of
1625media, in seconds, that should be ingested in one second of wallclock time.
1626Default value is zero and represents no imposed limitation on speed of ingestion.
1627Value @code{1} represents real-time speed and is equivalent to @code{-re}.
1628
1629Mainly used to simulate a capture device or live input stream (e.g. when reading from a file).
Should not be used with a low value when the input is an actual capture device or live stream, as
1631it may cause packet loss.
1632
It is useful when the flow speed of output packets is important, such as for live streaming.
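
For example, to feed a live-streaming output at no more than 1.5x of real time
(a minimal sketch; the RTMP URL and file name are placeholders):
@example
ffmpeg -readrate 1.5 -i input.mp4 -c copy -f flv rtmp://example.com/live/stream
@end example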
1634@item -re (@emph{input})
1635Read input at native frame rate. This is equivalent to setting @code{-readrate 1}.
1636@item -vsync @var{parameter} (@emph{global})
1637@itemx -fps_mode[:@var{stream_specifier}] @var{parameter} (@emph{output,per-stream})
1638Set video sync method / framerate mode. vsync is applied to all output video streams
1639but can be overridden for a stream by setting fps_mode. vsync is deprecated and will be
1640removed in the future.
1641
1642For compatibility reasons some of the values for vsync can be specified as numbers (shown
1643in parentheses in the following table).
1644
1645@table @option
1646@item passthrough (0)
1647Each frame is passed with its timestamp from the demuxer to the muxer.
1648@item cfr (1)
1649Frames will be duplicated and dropped to achieve exactly the requested
1650constant frame rate.
1651@item vfr (2)
1652Frames are passed through with their timestamp or dropped so as to
1653prevent 2 frames from having the same timestamp.
1654@item drop
1655As passthrough but destroys all timestamps, making the muxer generate
fresh timestamps based on the frame rate.
1657@item auto (-1)
1658Chooses between cfr and vfr depending on muxer capabilities. This is the
1659default method.
1660@end table
1661
Note that the timestamps may be further modified by the muxer after this,
for example when the format option @option{avoid_negative_ts}
is enabled.
1665
1666With -map you can select from which stream the timestamps should be
1667taken. You can leave either video or audio unchanged and sync the
1668remaining stream(s) to the unchanged one.
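
For example, to force constant frame rate output at 25 fps for the video stream
(a minimal sketch with placeholder file names):
@example
ffmpeg -i INPUT -fps_mode:v cfr -r 25 OUTPUT
@end example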
1669
1670@item -frame_drop_threshold @var{parameter}
Frame drop threshold, which specifies how far behind video frames can
be before they are dropped. It is expressed in frame rate units, so 1.0 is one frame.
The default is -1.1. One possible use case is to avoid frame drops in case
of noisy timestamps, or to increase frame drop precision in case of exact
timestamps.
1676
1677@item -async @var{samples_per_second}
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps;
the parameter is the maximum number of samples per second by which the audio is changed.
1680-async 1 is a special case where only the start of the audio stream is corrected
1681without any later correction.
1682
Note that the timestamps may be further modified by the muxer after this,
for example when the format option @option{avoid_negative_ts}
is enabled.
1686
1687This option has been deprecated. Use the @code{aresample} audio filter instead.
1688
1689@item -adrift_threshold @var{time}
1690Set the minimum difference between timestamps and audio data (in seconds) to trigger
adding/dropping samples to make it match the timestamps. This option is effectively
1692a threshold to select between hard (add/drop) and soft (squeeze/stretch) compensation.
1693@code{-async} must be set to a positive value.
1694
1695@item -apad @var{parameters} (@emph{output,per-stream})
1696Pad the output audio stream(s). This is the same as applying @code{-af apad}.
The argument is a string of filter parameters, composed in the same way as for the @code{apad} filter.
1698@code{-shortest} must be set for this output for the option to take effect.
1699
1700@item -copyts
1701Do not process input timestamps, but keep their values without trying
1702to sanitize them. In particular, do not remove the initial start time
1703offset value.
1704
1705Note that, depending on the @option{vsync} option or on specific muxer
1706processing (e.g. in case the format option @option{avoid_negative_ts}
is enabled) the output timestamps may not match the input
1708timestamps even when this option is selected.
1709
1710@item -start_at_zero
1711When used with @option{copyts}, shift input timestamps so they start at zero.
1712
1713This means that using e.g. @code{-ss 50} will make output timestamps start at
171450 seconds, regardless of what timestamp the input file started at.
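
For example (a minimal sketch with placeholder file names), the following keeps
the original timestamps shifted to a zero origin, so the output timestamps start
at 50 seconds:
@example
ffmpeg -copyts -start_at_zero -ss 50 -i input.ts -c copy output.ts
@end example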
1715
1716@item -copytb @var{mode}
1717Specify how to set the encoder timebase when stream copying.  @var{mode} is an
1718integer numeric value, and can assume one of the following values:
1719
1720@table @option
1721@item 1
1722Use the demuxer timebase.
1723
1724The time base is copied to the output encoder from the corresponding input
demuxer. This is sometimes required to avoid non-monotonically increasing
1726timestamps when copying video streams with variable frame rate.
1727
1728@item 0
1729Use the decoder timebase.
1730
1731The time base is copied to the output encoder from the corresponding input
1732decoder.
1733
1734@item -1
1735Try to make the choice automatically, in order to generate a sane output.
1736@end table
1737
1738Default value is -1.
1739
1740@item -enc_time_base[:@var{stream_specifier}] @var{timebase} (@emph{output,per-stream})
1741Set the encoder timebase. @var{timebase} is a floating point number,
1742and can assume one of the following values:
1743
1744@table @option
1745@item 0
1746Assign a default value according to the media type.
1747
1748For video - use 1/framerate, for audio - use 1/samplerate.
1749
1750@item -1
1751Use the input stream timebase when possible.
1752
1753If an input stream is not available, the default timebase will be used.
1754
1755@item >0
1756Use the provided number as the timebase.
1757
1758This field can be provided as a ratio of two integers (e.g. 1:24, 1:48000)
or as a floating-point number (e.g. 0.04166, 2.0833e-5).
1760@end table
1761
1762Default value is 0.
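
For example, to keep the input stream time base for the video encoder
(a minimal sketch with placeholder file names):
@example
ffmpeg -i input.mkv -c:v libx264 -enc_time_base:v -1 output.mkv
@end example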
1763
1764@item -bitexact (@emph{input/output})
Enable bitexact mode for (de)muxer and (de/en)coder.
1766@item -shortest (@emph{output})
1767Finish encoding when the shortest output stream ends.
1768@item -dts_delta_threshold
1769Timestamp discontinuity delta threshold.
1770@item -dts_error_threshold @var{seconds}
Timestamp error delta threshold. This threshold is used to discard crazy/damaged
timestamps; the default is 30 hours, which is arbitrarily picked and quite
conservative.
1774@item -muxdelay @var{seconds} (@emph{output})
1775Set the maximum demux-decode delay.
1776@item -muxpreload @var{seconds} (@emph{output})
1777Set the initial demux-decode delay.
1778@item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
1779Assign a new stream-id value to an output stream. This option should be
1780specified prior to the output filename to which it applies.
When multiple output files exist, a streamid
may be reassigned to a different value.
1783
1784For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
1785an output mpegts file:
1786@example
1787ffmpeg -i inurl -streamid 0:33 -streamid 1:36 out.ts
1788@end example
1789
1790@item -bsf[:@var{stream_specifier}] @var{bitstream_filters} (@emph{output,per-stream})
1791Set bitstream filters for matching streams. @var{bitstream_filters} is
1792a comma-separated list of bitstream filters. Use the @code{-bsfs} option
1793to get the list of bitstream filters.
1794@example
1795ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
1796@end example
1797@example
1798ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
1799@end example
1800
1801@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
1802Force a tag/fourcc for matching streams.
1803
1804@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
Specify the timecode for writing. @var{SEP} is ':' for non-drop timecode and ';'
(or '.') for drop timecode.
1807@example
1808ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg
1809@end example
1810
1811@anchor{filter_complex_option}
1812@item -filter_complex @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
1814outputs. For simple graphs -- those with one input and one output of the same
1815type -- see the @option{-filter} options. @var{filtergraph} is a description of
1816the filtergraph, as described in the ``Filtergraph syntax'' section of the
1817ffmpeg-filters manual.
1818
1819Input link labels must refer to input streams using the
1820@code{[file_index:stream_specifier]} syntax (i.e. the same as @option{-map}
1821uses). If @var{stream_specifier} matches multiple streams, the first one will be
1822used. An unlabeled input will be connected to the first unused input stream of
1823the matching type.
1824
1825Output link labels are referred to with @option{-map}. Unlabeled outputs are
1826added to the first output file.
1827
1828Note that with this option it is possible to use only lavfi sources without
1829normal input files.
1830
For example, to overlay an image over video:
1832@example
1833ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map
1834'[out]' out.mkv
1835@end example
1836Here @code{[0:v]} refers to the first video stream in the first input file,
1837which is linked to the first (main) input of the overlay filter. Similarly the
1838first video stream in the second input is linked to the second (overlay) input
1839of overlay.
1840
1841Assuming there is only one video stream in each input file, we can omit input
1842labels, so the above is equivalent to
1843@example
1844ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map
1845'[out]' out.mkv
1846@end example
1847
Furthermore, we can omit the output label, and the single output from the filter
graph will be added to the output file automatically, so we can simply write:
1850@example
1851ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
1852@end example
1853
1854As a special exception, you can use a bitmap subtitle stream as input: it
1855will be converted into a video with the same size as the largest video in
1856the file, or 720x576 if no video is present. Note that this is an
1857experimental and temporary solution. It will be removed once libavfilter has
1858proper support for subtitles.
1859
1860For example, to hardcode subtitles on top of a DVB-T recording stored in
1861MPEG-TS format, delaying the subtitles by 1 second:
1862@example
1863ffmpeg -i input.ts -filter_complex \
1864  '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
1865  -sn -map '#0x2dc' output.mkv
1866@end example
(0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of the video,
audio and subtitle streams, respectively; 0:0, 0:3 and 0:7 would have worked too)
1869
To generate 5 seconds of pure red video using the lavfi @code{color} source:
1871@example
1872ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
1873@end example
1874
1875@item -filter_complex_threads @var{nb_threads} (@emph{global})
1876Defines how many threads are used to process a filter_complex graph.
1877Similar to filter_threads but used for @code{-filter_complex} graphs only.
1878The default is the number of available CPUs.
1879
1880@item -lavfi @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with an arbitrary number of inputs and/or
1882outputs. Equivalent to @option{-filter_complex}.
1883
1884@item -filter_complex_script @var{filename} (@emph{global})
1885This option is similar to @option{-filter_complex}, the only difference is that
1886its argument is the name of the file from which a complex filtergraph
1887description is to be read.
1888
1889@item -accurate_seek (@emph{input})
1890This option enables or disables accurate seeking in input files with the
1891@option{-ss} option. It is enabled by default, so seeking is accurate when
1892transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
1893e.g. when copying some streams and transcoding the others.
1894
1895@item -seek_timestamp (@emph{input})
1896This option enables or disables seeking by timestamp in input files with the
1897@option{-ss} option. It is disabled by default. If enabled, the argument
1898to the @option{-ss} option is considered an actual timestamp, and is not
1899offset by the start time of the file. This matters only for files which do
1900not start from timestamp 0, such as transport streams.
1901
1902@item -thread_queue_size @var{size} (@emph{input})
1903This option sets the maximum number of queued packets when reading from the
1904file or device. With low latency / high rate live streams, packets may be
1905discarded if they are not read in a timely manner; setting this value can
1906force ffmpeg to use a separate input thread and read packets as soon as they
1907arrive. By default ffmpeg only does this if multiple inputs are specified.
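
For example, to enlarge the packet queue when grabbing from capture devices
(a minimal sketch; the device paths and file name are placeholders):
@example
ffmpeg -thread_queue_size 1024 -f video4linux2 -i /dev/video0 \
  -thread_queue_size 1024 -f alsa -i hw:0 out.mkv
@end example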
1908
1909@item -sdp_file @var{file} (@emph{global})
Print SDP information for an output stream to @var{file}.
This allows dumping SDP information when at least one output isn't an
RTP stream. (Requires at least one of the output formats to be rtp).
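
For example, to stream video over RTP and save the corresponding SDP description
to a file (a minimal sketch; the address, port and file names are placeholders):
@example
ffmpeg -sdp_file stream.sdp -re -i input.mp4 -an -c:v libx264 -f rtp rtp://127.0.0.1:5004
@end example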
1913
1914@item -discard (@emph{input})
1915Allows discarding specific streams or frames from streams.
Any input stream can be fully discarded, using the value @code{all}, whereas
selective discarding of frames from a stream occurs at the demuxer
1918and is not supported by all demuxers.
1919
1920@table @option
1921@item none
1922Discard no frame.
1923
1924@item default
1925Default, which discards no frames.
1926
1927@item noref
1928Discard all non-reference frames.
1929
1930@item bidir
1931Discard all bidirectional frames.
1932
1933@item nokey
Discard all frames except keyframes.
1935
1936@item all
1937Discard all frames.
1938@end table
1939
1940@item -abort_on @var{flags} (@emph{global})
1941Stop and abort on various conditions. The following flags are available:
1942
1943@table @option
1944@item empty_output
1945No packets were passed to the muxer, the output is empty.
1946@item empty_output_stream
1947No packets were passed to the muxer in some of the output streams.
1948@end table
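
For example, to abort if the muxer receives no packets at all (a minimal sketch
with placeholder file names):
@example
ffmpeg -abort_on empty_output -i INPUT OUTPUT
@end example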
1949
1950@item -max_error_rate (@emph{global})
Set the fraction of decoding frame failures across all inputs which, when crossed,
makes ffmpeg return exit code 69. Crossing this threshold does not terminate
processing. The value is a floating-point number between 0 and 1. Default is 2/3.
1954
1955@item -xerror (@emph{global})
Stop and exit on error.
1957
1958@item -max_muxing_queue_size @var{packets} (@emph{output,per-stream})
1959When transcoding audio and/or video streams, ffmpeg will not begin writing into
1960the output until it has one packet for each such stream. While waiting for that
1961to happen, packets for other streams are buffered. This option sets the size of
1962this buffer, in packets, for the matching output stream.
1963
1964The default value of this option should be high enough for most uses, so only
1965touch this option if you are sure that you need it.
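
For example, to enlarge the muxing queue for the matching output streams
(a minimal sketch with placeholder file names):
@example
ffmpeg -i INPUT -c:v libx264 -c:a copy -max_muxing_queue_size 1024 OUTPUT
@end example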
1966
1967@item -muxing_queue_data_threshold @var{bytes} (@emph{output,per-stream})
1968This is a minimum threshold until which the muxing queue size is not taken into
1969account. Defaults to 50 megabytes per stream, and is based on the overall size
1970of packets passed to the muxer.
1971
1972@item -auto_conversion_filters (@emph{global})
1973Enable automatically inserting format conversion filters in all filter
1974graphs, including those defined by @option{-vf}, @option{-af},
@option{-filter_complex} and @option{-lavfi}. If filter format negotiation
requires a conversion, the initialization of the filters will fail without this option.
1977Conversions can still be performed by inserting the relevant conversion
1978filter (scale, aresample) in the graph.
On by default; to explicitly disable it you need to specify
1980@code{-noauto_conversion_filters}.
1981
1982@item -bits_per_raw_sample[:@var{stream_specifier}] @var{value} (@emph{output,per-stream})
1983Declare the number of bits per raw sample in the given output stream to be
1984@var{value}. Note that this option sets the information provided to the
1985encoder/muxer, it does not change the stream to conform to this value. Setting
1986values that do not match the stream properties may result in encoding failures
1987or invalid output files.
1988
1989@end table
1990
1991@section Preset files
1992A preset file contains a sequence of @var{option}=@var{value} pairs,
1993one for each line, specifying a sequence of options which would be
1994awkward to specify on the command line. Lines starting with the hash
1995('#') character are ignored and are used to provide comments. Check
1996the @file{presets} directory in the FFmpeg source tree for examples.
1997
1998There are two types of preset files: ffpreset and avpreset files.
1999
2000@subsection ffpreset files
2001ffpreset files are specified with the @code{vpre}, @code{apre},
2002@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
2003filename of the preset instead of a preset name as input and can be
2004used for any kind of codec. For the @code{vpre}, @code{apre}, and
2005@code{spre} options, the options specified in a preset file are
2006applied to the currently selected codec of the same type as the preset
2007option.
2008
2009The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
2010preset options identifies the preset file to use according to the
2011following rules:
2012
2013First ffmpeg searches for a file named @var{arg}.ffpreset in the
2014directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
2015the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
or in a @file{ffpresets} folder alongside the executable on win32,
2017in that order. For example, if the argument is @code{libvpx-1080p}, it will
2018search for the file @file{libvpx-1080p.ffpreset}.
2019
2020If no such file is found, then ffmpeg will search for a file named
2021@var{codec_name}-@var{arg}.ffpreset in the above-mentioned
2022directories, where @var{codec_name} is the name of the codec to which
2023the preset file options will be applied. For example, if you select
2024the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
2025then it will search for the file @file{libvpx-1080p.ffpreset}.
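
For illustration, a hypothetical @file{libx264-fast1080p.ffpreset} file placed in
one of the directories above could contain encoder options such as (the option
names and values here are only examples):
@example
preset=fast
crf=22
g=250
@end example
and would then be applied with something like:
@example
ffmpeg -i input.avi -vcodec libx264 -vpre fast1080p output.mkv
@end example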
2026
2027@subsection avpreset files
avpreset files are specified with the @code{pre} option. They work similarly to
ffpreset files, but they only allow encoder-specific options. Therefore, an
2030@var{option}=@var{value} pair specifying an encoder cannot be used.
2031
2032When the @code{pre} option is specified, ffmpeg will look for files with the
2033suffix .avpreset in the directories @file{$AVCONV_DATADIR} (if set), and
2034@file{$HOME/.avconv}, and in the datadir defined at configuration time (usually
2035@file{PREFIX/share/ffmpeg}), in that order.
2036
2037First ffmpeg searches for a file named @var{codec_name}-@var{arg}.avpreset in
2038the above-mentioned directories, where @var{codec_name} is the name of the codec
2039to which the preset file options will be applied. For example, if you select the
2040video codec with @code{-vcodec libvpx} and use @code{-pre 1080p}, then it will
2041search for the file @file{libvpx-1080p.avpreset}.
2042
2043If no such file is found, then ffmpeg will search for a file named
2044@var{arg}.avpreset in the same directories.
2045
2046@c man end OPTIONS
2047
2048@chapter Examples
2049@c man begin EXAMPLES
2050
2051@section Video and Audio grabbing
2052
2053If you specify the input format and device then ffmpeg can grab video
2054and audio directly.
2055
2056@example
2057ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
2058@end example
2059
2060Or with an ALSA audio source (mono input, card id 1) instead of OSS:
2061@example
2062ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
2063@end example
2064
Note that you must activate the right video source and channel before
launching ffmpeg, using any TV viewer such as
2067@uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr. You also
2068have to set the audio recording levels correctly with a
2069standard mixer.
2070
2071@section X11 grabbing
2072
2073Grab the X11 display with ffmpeg via
2074
2075@example
2076ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
2077@end example
2078
0.0 is the display.screen number of your X11 server, the same as
the DISPLAY environment variable.
2081
2082@example
2083ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
2084@end example
2085
0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment
variable. 10 is the x-offset and 20 is the y-offset for the grabbing.
2088
2089@section Video and Audio file format conversion
2090
2091Any supported file format and protocol can serve as input to ffmpeg:
2092
2093Examples:
2094@itemize
2095@item
2096You can use YUV files as input:
2097
2098@example
2099ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
2100@end example
2101
2102It will use the files:
2103@example
2104/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
2105/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
2106@end example
2107
2108The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
2110decoders. You must specify the size of the image with the @option{-s} option
2111if ffmpeg cannot guess it.
2112
2113@item
2114You can input from a raw YUV420P file:
2115
2116@example
2117ffmpeg -i /tmp/test.yuv /tmp/out.avi
2118@end example
2119
2120test.yuv is a file containing raw YUV planar data. Each frame is composed
2121of the Y plane followed by the U and V planes at half vertical and
2122horizontal resolution.
2123
2124@item
2125You can output to a raw YUV420P file:
2126
2127@example
2128ffmpeg -i mydivx.avi hugefile.yuv
2129@end example
2130
2131@item
2132You can set several input files and output files:
2133
2134@example
2135ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
2136@end example
2137
2138Converts the audio file a.wav and the raw YUV video file a.yuv
to the MPEG file a.mpg.
2140
2141@item
2142You can also do audio and video conversions at the same time:
2143
2144@example
2145ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
2146@end example
2147
2148Converts a.wav to MPEG audio at 22050 Hz sample rate.
2149
2150@item
2151You can encode to several formats at the same time and define a
2152mapping from input stream to output streams:
2153
2154@example
2155ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
2156@end example
2157
Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
2159file:index' specifies which input stream is used for each output
2160stream, in the order of the definition of output streams.
2161
2162@item
2163You can transcode decrypted VOBs:
2164
2165@example
2166ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
2167@end example
2168
2169This is a typical DVD ripping example; the input is a VOB file, the
2170output an AVI file with MPEG-4 video and MP3 audio. Note that in this
2171command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps
2173input video. Furthermore, the audio stream is MP3-encoded so you need
2174to enable LAME support by passing @code{--enable-libmp3lame} to configure.
2175The mapping is particularly useful for DVD transcoding
2176to get the desired audio language.
2177
2178NOTE: To see the supported input formats, use @code{ffmpeg -demuxers}.
2179
2180@item
2181You can extract images from a video, or create a video from many images:
2182
2183For extracting images from a video:
2184@example
2185ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
2186@end example
2187
2188This will extract one video frame per second from the video and will
2189output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
2190etc. Images will be rescaled to fit the new WxH values.
2191
2192If you want to extract just a limited number of frames, you can use the
2193above command in combination with the @code{-frames:v} or @code{-t} option,
2194or in combination with -ss to start extracting from a certain point in time.
2195
2196For creating a video from many images:
2197@example
2198ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
2199@end example
2200
2201The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
2202composed of three digits padded with zeroes to express the sequence
2203number. It is the same syntax supported by the C printf function, but
2204only formats accepting a normal integer are suitable.
2205
2206When importing an image sequence, -i also supports expanding
2207shell-like wildcard patterns (globbing) internally, by selecting the
2208image2-specific @code{-pattern_type glob} option.
2209
2210For example, for creating a video from filenames matching the glob pattern
2211@code{foo-*.jpeg}:
2212@example
2213ffmpeg -f image2 -pattern_type glob -framerate 12 -i 'foo-*.jpeg' -s WxH foo.avi
2214@end example
2215
2216@item
2217You can put many streams of the same type in the output:
2218
2219@example
2220ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
2221@end example
2222
2223The resulting output file @file{test12.nut} will contain the first four streams
2224from the input files in reverse order.
2225
2226@item
2227To force CBR video output:
2228@example
2229ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
2230@end example
2231
2232@item
2233The four options lmin, lmax, mblmin and mblmax use 'lambda' units,
2234but you may use the QP2LAMBDA constant to easily convert from 'q' units:
2235@example
2236ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
2237@end example
2238
2239@end itemize
2240@c man end EXAMPLES
2241
2242@include config.texi
2243@ifset config-all
2244@ifset config-avutil
2245@include utils.texi
2246@end ifset
2247@ifset config-avcodec
2248@include codecs.texi
2249@include bitstream_filters.texi
2250@end ifset
2251@ifset config-avformat
2252@include formats.texi
2253@include protocols.texi
2254@end ifset
2255@ifset config-avdevice
2256@include devices.texi
2257@end ifset
2258@ifset config-swresample
2259@include resampler.texi
2260@end ifset
2261@ifset config-swscale
2262@include scaler.texi
2263@end ifset
2264@ifset config-avfilter
2265@include filters.texi
2266@end ifset
2267@include general_contents.texi
2268@end ifset
2269
2270@chapter See Also
2271
2272@ifhtml
2273@ifset config-all
2274@url{ffmpeg.html,ffmpeg}
2275@end ifset
2276@ifset config-not-all
2277@url{ffmpeg-all.html,ffmpeg-all},
2278@end ifset
2279@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
2280@url{ffmpeg-utils.html,ffmpeg-utils},
2281@url{ffmpeg-scaler.html,ffmpeg-scaler},
2282@url{ffmpeg-resampler.html,ffmpeg-resampler},
2283@url{ffmpeg-codecs.html,ffmpeg-codecs},
2284@url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
2285@url{ffmpeg-formats.html,ffmpeg-formats},
2286@url{ffmpeg-devices.html,ffmpeg-devices},
2287@url{ffmpeg-protocols.html,ffmpeg-protocols},
2288@url{ffmpeg-filters.html,ffmpeg-filters}
2289@end ifhtml
2290
2291@ifnothtml
2292@ifset config-all
2293ffmpeg(1),
2294@end ifset
2295@ifset config-not-all
2296ffmpeg-all(1),
2297@end ifset
2298ffplay(1), ffprobe(1),
2299ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
2300ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
2301ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
2302@end ifnothtml
2303
2304@include authors.texi
2305
2306@ignore
2307
2308@setfilename ffmpeg
2309@settitle ffmpeg video converter
2310
2311@end ignore
2312
2313@bye
2314