Record desktop

Make a video recording of what your computer's display is showing (the screen output).

This can be done for free, without having to pay for software. And by free, I mean genuinely free software, even open source, not trial-ware / freemium products (such as Camtasia, for example).

Also, VLC and MPlayer (MEncoder) can be used for live screen-casting / streaming of any raster/pixel/video data. They likely share much of the same open-source codebase as the ffmpeg project.

Windows
When using Microsoft's Windows OS (NT-based), as opposed to a Unix or Unix-like OS (e.g. the BSDs or GNU/Linux, or even Apple's Mac OS X):

obtain / install
Download a free Windows build (a compiled version that can run under Windows) from Zeranoe,

in the form of a 7-zip archive: http://ffmpeg.zeranoe.com/builds/win32/static/ffmpeg-latest-win32-static.7z

- ffmpeg for Windows daily/nightly builds

ffmpeg for Windows can access DirectShow multimedia (audio and video) devices.

run

to query and get a list of what devices are available to ffmpeg.exe as inputs and outputs.
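The standard DirectShow device-listing invocation, as given in ffmpeg's dshow documentation, is:

```shell
# List all DirectShow audio and video devices by name.
# "dummy" is a placeholder input; no recording takes place.
ffmpeg -list_devices true -f dshow -i dummy
```

The device names printed here are the exact strings to pass to the later -i video="..." and -i audio="..." options.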

gdigrab
One possible means of input (it does not use dshow) is "gdigrab",

which makes use of GDI. G.D.I. is an acronym that stands for the Microsoft Windows "Graphics Device Interface".
 * official Microsoft MSDN documentation: Graphics Device Interface

example:
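A minimal gdigrab sketch (the -video_size region and output filename are illustrative; see the note below about -video_size being ignored in my case):

```shell
# Capture the desktop via GDI at 10 frames per second.
# Input options (-framerate, -video_size) must precede -i desktop.
ffmpeg -f gdigrab -framerate 10 -video_size 640x480 -i desktop gdigrab-output.mkv
```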

N.B. "-video_size" doesn't seem to work for me; it captures the entire Desktop display output anyway.

Read about more gdigrab options provided by ffmpeg.exe: http://ffmpeg.org/ffmpeg-devices.html#gdigrab


 * offset
 * "-offset_x"
 * "-offset_y"
 * capturing a single window (task) by its title, i.e. the text in the Title bar of a window that is running within/on your Desktop ("-i title=...")
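For example, grabbing one window by its title (the title string here is hypothetical; substitute the exact text from your window's Title bar):

```shell
# Capture only the window whose title bar matches the given text.
ffmpeg -f gdigrab -framerate 10 -i title="Untitled - Notepad" window-output.mkv
```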

include audio
It is possible to record one or more audio input streams simultaneously, to accompany the video (the raster/pixel data from the display/Desktop):
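One sketch of such a command, assuming the "virtual-audio-capturer" device described later in this article is installed:

```shell
# Video from gdigrab, audio from a DirectShow loop-back device.
ffmpeg -f gdigrab -framerate 15 -i desktop -f dshow -i audio="virtual-audio-capturer" -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -acodec pcm_s16le video-with-audio.mkv
```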

But, in my experience, this created a lag upon playback: a latency mis-synchronisation between the audio and video streams, in which the video lagged behind the audio. (See further below in this article for the colon (":") syntax, used with "screen-capture-recorder" rather than gdigrab.)

Also, I managed to combine audio from two sources -- make sure the value assigned to "amix=inputs=" is not '1' (particularly, in this case, that value is '2'). Recording from 2 simultaneous audio inputs is possible with "-filter_complex amix=inputs=2".

The primary thing to add to your ffmpeg command line is another DirectShow input:

example command-line: C:\Downloads\ffmpeg.zeranoe.com__builds\bin\ffmpeg -f gdigrab -framerate 15 -i desktop -f dshow -i audio="Headset Microphone (Logitech US" -f dshow -i audio="virtual-audio-capturer" -filter_complex amix=inputs=2 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr -acodec pcm_s16le muxed-video-file.mkv

screen-capture-recorder
In order to record the desktop this way, the following free, open-source * software needs to be installed, to make Desktop recording available as one of the possible input devices: Screen Capturer Recorder.

* http://GitHub.com/rdp/screen-capture-recorder-to-video-windows-free

Obtain the installer file from:

http://SourceForge.net/projects/screencapturer/files/

Installing that package will provide two additional (virtual, if you will) input devices to DirectShow (for use by ffmpeg.exe):

"screen-capture-recorder" (for visual/raster/pixel grab)

and

"virtual-audio-capturer", which is a loop-back audio device that takes the output of the sound card (sound device) and allows it to be recorded (as an input stream source).

example of use
Here is an example command-line to run, making use of the new "screen-capture-recorder" DirectShow input device:

ffmpeg.exe -f dshow -framerate 10 -i video="screen-capture-recorder" -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr output-video.mkv

Another example -- record simultaneous audio, as well:
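A reconstructed sketch of such a command, assembled from the breakdown that follows (the device names are carried over from the earlier examples):

```shell
# Video from the virtual screen-capture device, audio from a microphone.
# amix=inputs=1 because only one audio input is mixed here.
ffmpeg -f dshow -framerate 10 -i video="screen-capture-recorder" -f dshow -i audio="Headset Microphone (Logitech US" -filter_complex amix=inputs=1 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -vsync vfr -acodec pcm_s16le output-video.mkv
```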

Break that command-line down, to better understand it (and how to tweak it for your individual case):

ffmpeg (binary executable)


 * next specifies the INPUT
 * video (visual, raster, pixels)
 * -f dshow
 * -i video="screen-capture-recorder"
 * additional options (modifiers), such as specifying the frame-rate explicitly by adding "-framerate 10". That means 10 frames per second. (Do NOT use the "-r" switch for this input option!)
 * audio
 * -f dshow (you start off this, the audio, component of the command line with that string)
 * -i audio="Headset Microphone (Logitech US" (apparently no closing parenthesis character at the end of the name, inside that pair of double-quotation marks; DirectShow appears to truncate long device names)
 * an additional option can go here, such as:
 * -filter_complex amix=inputs=1 (set that last integer value to match the number of devices, e.g. '2', if you specify more than one simultaneous audio input device)
 * next, the OUTPUT (keep in mind that certain options specified in the input stage of the command line can be independently controlled for the output; e.g. one region of the source display can be chosen as input into ffmpeg, but a different region of what the input stream provides can be chosen for encoding into the output/product file)
 * video
 * -vcodec libx264 (in this case, using the open-source H.264 encoding library that is linked into the ffmpeg binary executable)
 * -pix_fmt yuv420p (I read that this ensures maximum playback compatibility, on most systems, of the file that ffmpeg will create here)
 * (additional options for the encoding)
 * -vsync vfr
 * audio
 * -acodec pcm_s16le
 * The last argument on the command line is the location within the filesystem hierarchy where ffmpeg should store (save) the output it generates; this is usually a file on a mounted filesystem volume. At minimum, specify a filename (any text string that the filesystem, and the command interpreter, supports without mangling, "expanding", or otherwise interpreting it). The filename suffix/extension does matter to ffmpeg: it cannot be arbitrary, because ffmpeg uses it to determine which kind of container file format will house the output video and audio streams.

better synchronization
Notice the colon (":") syntax: both a video and an audio device are named in a single "-f dshow -i" input, as opposed to a separate "-f dshow -i" for each input device (accessed through Windows's DirectShow layer).
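A sketch of that colon syntax, using the two virtual devices this article installs:

```shell
# One dshow input naming both a video and an audio device, joined by ":".
ffmpeg -f dshow -i video="screen-capture-recorder":audio="virtual-audio-capturer" -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -acodec pcm_s16le colon-syntax-output.mkv
```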

It may not give as good a video+audio sync as the separate-input form that was shown previously, above.

And, either way (regardless of which of those two possibilities/variants, shown directly above, is used),

an additional (concurrent, simultaneous) audio input device/stream cannot be daisy-chained using the same colon (":") syntax.

Instead, it must be separately specified as part of the command line:
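A sketch combining the colon-syntax input with a second, separately specified audio device (the microphone name is carried over from the earlier example):

```shell
# The second audio device gets its own -f dshow -i input;
# amix=inputs=2 mixes the two audio streams together.
ffmpeg -f dshow -i video="screen-capture-recorder":audio="virtual-audio-capturer" -f dshow -i audio="Headset Microphone (Logitech US" -filter_complex amix=inputs=2 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -acodec pcm_s16le two-audio-output.mkv
```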

Also take careful note of the presence of the integer value of '2' for "amix=inputs=",

since ffmpeg will be combining audio from two different input streams/sources.

If the following software is not installed, "screen-capture-recorder" and "virtual-audio-capturer" will not be available as input devices for ffmpeg (through the DirectShow layer)...

software solution download

The software program that offers that is called Screen Capturer Recorder. Free binaries (installer packages) : http://SourceForge.net/projects/screencapturer/files/

N.B.

The offset values that I successfully used with an X11/X.org GNU/Linux-based system did not work here, and neither did the "-s" switch for the width and height of the capture region area.

Linux
Or indeed any Unix-like operating system (*nix), including GNU/Linux distros and BSD-based systems (including OS X?) -- any graphical environment based upon / powered by the X server (X11, X.org).

ffmpeg, in compiled (runnable) form, is available in all of the major distros' software repositories.

apt-get
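On Debian/Ubuntu-style systems, the package can be installed along these lines:

```shell
# Install ffmpeg from the distro's repositories
# (on older Debian/Ubuntu releases this pulls in the Libav fork instead).
sudo apt-get install ffmpeg
```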

N.B.: Actually, Debian and derivative distros like Ubuntu use a fork of the ffmpeg codebase called Libav. The following command-lines (commands, switches, and syntax), examples, and such in this article should work the same as with official ffmpeg itself.

Another note: ffmpeg is an open-source project. It is often compiled into library form; however, a front-end piece of application software (a binary executable) is necessary to make use of the functionality in those libraries. An example is the Windows build "ffmpeg.exe". With ffmpeg installed, simply type "ffmpeg" at the terminal (emulator, command-line interpreter/processor). If the Libav fork is installed instead of official ffmpeg, the "ffmpeg" command should be linked to a binary executable file (program) called "avconv".

record
-ac : audio channels ('1' for mono, '2' for stereo)

-r : frame rate of the video. 30 is standard US/Japan TV, 25 is European. I recommend 10 or 15 for this purpose.
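Putting those options together, a baseline capture command (matching the fuller example later in this section) might look like:

```shell
# Mono audio from PulseAudio via ALSA, video grabbed from X display :0.0.
ffmpeg -f alsa -ac 1 -i pulse -f x11grab -r 10 -s 1024x720 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 output-file.mkv
```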

If the "pulse" audio (layer) is not available, try:

-i hw:0


 * TIP: Disable your screen saver. It -will- interfere with the video that is captured: the capture is not semantic, it simply grabs pixels (raster), so the screen saver would be recorded too.

Another tip is to first use the command "sleep" (on *nix / GNU/Linux / Unix-like OSes), followed by an integer argument specifying how many seconds to wait before executing the next command -- which, in this case, is ffmpeg.

e.g.

sleep 5 && ffmpeg -f alsa -ac 1 -i pulse -f x11grab -r 10 -s 1024x720 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 output-file1.mkv

Once that long compound command line is invoked (by pressing the | Enter | key), the system (command-line interpreter / console terminal) will wait 5 seconds and then launch ffmpeg, which will begin recording (making the video file).

offset
If the area that you want visually captured does not start at the display's top-left corner, add off-set co-ordinates (x and y) to the x11grab input, using the form "-i :0.0+x,y":

ffmpeg -f alsa -ac 1 -i 'hw:1,0' -f x11grab -r 10 -s 1084x704 -i :0.0+62,168 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -threads 0 section1.mkv

will capture a 1084x704 region whose top-left corner (position) is 62 pixels in from the left edge of the display/desktop/screen and 168 lines (pixels) down from the top.



xwininfo
To choose a region of the X server's (X11) output -- one particular window or box, let's say -- get its co-ordinates (the boundaries of the pixel range) using a tool called xwininfo.

xwininfo (X Win Info; or, think of it as meaning X.org/X11 Window Information)

It should ship with any X11/X.org installation (which comes with most GNU/Linux distros, and often with other Unix-like OSes).

example usage
$ xwininfo

xwininfo: Please select the window about which you would like information by clicking the mouse in that window.

xwininfo: Window id: 0x4600898 "Title of webpage that is in web browser's window"

Absolute upper-left X:  38
Absolute upper-left Y:  43
Relative upper-left X:  0
Relative upper-left Y:  0
Width: 1002
Height: 709
Depth: 24
Visual: 0x21
Visual Class: TrueColor
Border width: 0
Class: InputOutput
Colormap: 0x20 (installed)
Bit Gravity State: NorthWestGravity
Window Gravity State: NorthWestGravity
Backing Store State: NotUseful
Save Under State: no
Map State: IsViewable
Override Redirect State: no
Corners:  +38+43  -240+43  -240-272  +38-272
-geometry 1002x709+35+20

recordMyDesktop
Another piece of application software (for GNU/Linux only) is recordMyDesktop, which encodes its output to Theora-format video. Theora can be thought of as the open-source equivalent of (on par with) the original MPEG-4 Part 2 spec from the early 2000s; VP8 (or now VP9) is more competitive with H.264 and maybe H.265. The codebase for recordMyDesktop has not been updated since 2008. But it works!

Use it thusly:
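A typical invocation might look like the following (option names per recordMyDesktop's man page; the bitrate value is illustrative):

```shell
# Record the whole desktop at 15 fps to a Theora/Ogg file.
recordmydesktop --fps 15 --v_bitrate 2000000 -o output.ogv
```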

That bitrate value is high.

related
How to convert media files using FFmpeg

How to find basic codec and compression info of a media file in Linux