This manual was last updated 10 October 2011 for version 1.2.1 of Bino.
Copyright © 2011 Martin Lambers (marlam@marlam.de), Stefan Eilemann (eile@eyescale.ch), Frédéric Devernay (Frederic.Devernay@inrialpes.fr)
Copying and distribution of this file and the referenced image files, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. These files are offered as-is, without any warranty.
Bino is a 3D video player with multi-display support.
3D videos are more accurately called stereoscopic videos. Such videos have separate views for the left and right eye and thus allow depth perception through stereopsis.
The left and right view of a stereoscopic video can be stored using different layouts. Sometimes the two views are stored as two separate video streams, but most often both views are packed into a single video stream and need to be unpacked by the video player. Bino supports all commonly used layouts. See Input Layouts.
To display a stereoscopic video, the left and right view have to be prepared in a special way so that the left eye sees the left view and the right eye sees the right view. Different display techniques use different approaches to achieve this separation of the two views. Bino supports a wide variety of such techniques. See Output Techniques.
This section describes the command line interface of Bino.
Synopsis:
bino [option...] [file...]
Bino combines all input files into one media source which is then played. This means you can have video, audio, and subtitle streams in separate files. The files are decoded with the FFmpeg libraries, so URLs and other special constructs are supported.
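For example, if the left and right views are stored in two separate video files, with subtitles in a third file, you can list all three (a sketch: the file names are hypothetical, and the exact --input layout names accepted by your build are listed by bino --help):

$ bino --input separate-left-right left.mp4 right.mp4 subtitles.srt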
The left and right view of a stereoscopic video can be stored using different layouts. Sometimes they are stored in separate files, sometimes in separate streams inside the same file, and often they are packed into a single video on top of each other or next to each other, with or without a reduction of resolution.
By default, Bino autodetects the input layout from meta data stored in the file (for example, the Matroska and WebM formats have a StereoMode field for that purpose). If no meta data is available, Bino tries to autodetect the input layout based on the file name. See File Name Conventions. If that fails, too, Bino guesses based on the resolution of the input.
If the meta data stored in a file does not indicate the input layout, Bino tries to guess it by looking at the last part of the file name before the file name extension (.ext).
Several file name forms are recognized, based on a suffix before the extension.
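For example, the test patterns used later in this manual follow this convention: gamma-pattern-tb.png and crosstalk-pattern-tb.png carry the -tb suffix to mark a top-bottom layout. A left-right video would use the -lr suffix (the file name below is hypothetical):

$ bino movie-lr.mp4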
The left and right view of a stereoscopic video need to be displayed in a way that ensures that the left view is only seen by the left eye of the user and the right view is only seen by the right eye. There are many different techniques to achieve this separation of left and right view, and Bino supports most of them.
With some display techniques, part of the right view may also be seen by the left eye and vice versa. This is called crosstalk, and it leads to ghosting artefacts that reduce image quality. For some display types, Bino can reduce such artefacts. See Crosstalk Ghostbusting.
One of the simplest output techniques is anaglyph glasses. Such glasses use color filters to separate the left and right view. Anaglyph glasses are cheap and work with every display, but the view separation and color reproduction are of relatively low quality. Still, depending on the video material, Bino can achieve high quality results using the Dubois method to produce video output for anaglyph glasses.
With OpenGL, the default method to display stereoscopic 3D content is OpenGL quad buffered stereo, often used with active shutter glasses. However, graphics card manufacturers tend to enable this output technique only on expensive high end hardware.
Many 3D computer displays use polarized glasses to separate left and right view, and some autostereoscopic displays do not require any glasses at all. Most of these 3D computer displays expect left and right view packed in a single video frame, e.g. on top of each other or next to each other or partitioned into even and odd pixel lines or columns. Bino supports all variants of such modes; refer to the manual of your display to find out which mode is required.
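On the command line, the output technique is selected with the -o option. A few examples follow; note that the only -o values spelled out later in this manual are equalizer and equalizer-3d, so check bino --help for the exact mode names your build accepts:

$ bino -o red-cyan-dubois movie.mp4    # anaglyph glasses, Dubois method
$ bino -o stereo movie.mp4             # OpenGL quad buffered stereo
$ bino -o even-odd-rows movie.mp4      # row-interleaved 3D display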
Another common way to display 3D stereoscopic content is to use two conventional 2D displays or projectors for the left and right view and combine both views either using a half-transparent mirror, or a single screen with polarized glasses. This is supported in Bino using multiscreen output. See Basic Multi Display Support.
For more complex setups, such as powerwalls or virtual reality installations driven by render clusters, Bino supports distributed video rendering via Equalizer. See Advanced Multi Display Support.
The default output technique for stereoscopic 3D input is OpenGL quad buffered stereo if the graphics card supports it, otherwise red/cyan anaglyph glasses.
Many stereoscopic display devices suffer from crosstalk between the left and right view. This results in ghosting artifacts that can degrade the viewing quality, depending on the video content.
Bino can optionally reduce the ghosting artifacts. For this, it needs to know the crosstalk levels of your display device.
Please note that ghostbusting does not work with anaglyph glasses.
To measure the display crosstalk, do the following:
1. Display the image gamma-pattern-tb.png and correct the display gamma settings according to the included instructions. You need to have correct gamma settings before measuring crosstalk.
2. Display the image crosstalk-pattern-tb.png and determine the crosstalk levels using the included instructions.
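Both patterns are ordinary stereoscopic images in top-bottom layout, so you can display them with Bino itself:

$ bino gamma-pattern-tb.png
$ bino crosstalk-pattern-tb.png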
You now have three crosstalk values, for the red, green, and blue channels. Tell Bino about them using the --crosstalk option. For example, if you have measured 8% crosstalk for red, 12% for green, and 10% for blue, use
$ bino --crosstalk 0.08,0.12,0.10
Once you know the crosstalk levels of your display device, you can set the amount of ghostbusting that Bino should apply using the --ghostbust option. This will vary depending on the content you want to watch. Movies with very dark scenes should be viewed with at least 50% ghostbusting (--ghostbust 0.5), whereas overall bright movies, where crosstalk is less disturbing, could be viewed with a lower level (e.g. --ghostbust 0.1).
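For example, to watch a dark movie with the crosstalk values measured above and 50% ghostbusting (movie.mp4 is a placeholder):

$ bino --crosstalk 0.08,0.12,0.10 --ghostbust 0.5 movie.mp4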
To check whether your crosstalk calibration is correct, display the crosstalk patterns with full ghostbusting, like this:
$ bino --crosstalk 0.08,0.12,0.10 --ghostbust 1.0 crosstalk-pattern-tb.png
The remaining crosstalk should ideally be 0%.
For basic multi display support, Bino requires that all displays are connected to a single computer and are configured to display one large desktop. For such a setup, you can configure which screens Bino should use in fullscreen mode.
For example, if you have two projectors L and R that project onto a single screen with polarization filters, and you have configured your desktop to cover both projectors next to each other (LR), then you can configure fullscreen mode to use both projectors and select the left-right output technique.
For similar setups, it is sometimes useful to mirror either the left or the right view horizontally or vertically. This, too, can be configured in Bino's fullscreen settings.
Of course, you can also combine multiple monitors to form one large display and use this with some other output technique, e.g. anaglyph glasses.
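Depending on your Bino version, the fullscreen screens may also be selectable on the command line. The following is an unverified sketch (the option name is an assumption; check bino --help):

$ bino --fullscreen-screens 0,1 -o left-right movie.mp4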
For more advanced setups, e.g. involving multiple computers and/or graphics cards or non-planar projection surfaces, you can use Bino's advanced multi display support via Equalizer.
Bino supports distributed multi-display output via the Equalizer framework.
This is how it works:
First, install Equalizer 1.0 or later. See http://www.equalizergraphics.com/. Verify that it works by running the included eqHello example.
Then, build Bino with Equalizer support. The output of configure should contain the following line:
Equalizer: yes
Now you need an Equalizer configuration file for your display setup.
Bino needs a two-dimensional Equalizer canvas (= combined screen area), subdivided into segments (= single display areas). For example, if you have two projectors that project onto a 2m x 1m screen side-by-side, then your canvas is 2m x 1m large, and you have two segments: the first segment fills the left half of the canvas, and the second segment fills the right half.
Next, Equalizer needs to know how to render into each segment. For this purpose, you define several hierarchical objects: nodes (= processes, possibly on different systems), pipes (= graphics cards), windows (= output windows with OpenGL contexts), and channels (= parts of windows). The video output happens at the channel level: each channel is assigned to one segment of the canvas. Most probably you just have one fullscreen window per pipe, and a single output channel per window.
Note that one node is special: the application node, which is the node that you initially start (the other nodes are started automatically by Equalizer). The application node is called 'appNode' in the Equalizer configuration, and Bino will play audio only on the application node. All video output is then synchronized to this audio output.
Once you have your configuration file (examples are given below), you can check if it works correctly using the eqHello example:
$ eqHello --eq-config configuration.eqc
Once you have made sure that this works, you can start Bino using this command:
$ bino -o equalizer --eq-config configuration.eqc video.mp4
Note that all your nodes need access to the video file using the same name, so a shared filesystem is helpful if you use multiple systems.
To play live video from a webcam or TV card, you can set up a streaming server using ffserver (part of FFmpeg) or vlc, and then give the appropriate URL to Bino. You can use multicast to stream the video to multiple systems efficiently.
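As a sketch (the VLC streaming options and the multicast address below are assumptions, not taken from this manual), you could stream a Video4Linux webcam with VLC over UDP multicast and let each node play it:

$ vlc v4l2:///dev/video0 --sout \
  '#transcode{vcodec=mp2v,vb=4096}:std{access=udp,mux=ts,dst=239.255.0.1:1234}'
$ bino -o equalizer --eq-config configuration.eqc udp://239.255.0.1:1234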
The output mode -o equalizer-3d allows you to configure non-planar projections. Bino projects the video onto a virtual screen in 3D space. The virtual screen is placed at the distance of the largest front-facing segment and sized to fill that wall optimally. By configuring the output segments accordingly, various advanced display configurations can be realized, e.g. displays rotated around the Z axis by an arbitrary angle, or non-planar screens.
In this example, you have a 2m x 1m screen and two projectors: one for the left half of the screen, and one for the right half. The two projectors are connected to two graphics cards on the same system.
In this situation, you have one node with two pipes, and each pipe has a fullscreen window with a single output channel. The first output channel is assigned to the left segment, and the second output channel is assigned to the right segment. The resulting configuration looks like this:
server
{
    config
    {
        appNode
        {
            pipe
            {
                device 0
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "left" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "right" }
                }
            }
        }
        observer {}
        layout { view { observer 0 }}
        canvas
        {
            layout 0
            wall
            {
                bottom_left  [ 0.0 0.0 -1 ]
                bottom_right [ 2.0 0.0 -1 ]
                top_left     [ 0.0 1.0 -1 ]
            }
            segment { channel "left"  viewport [ 0.0 0.0 0.5 1.0 ] }
            segment { channel "right" viewport [ 0.5 0.0 0.5 1.0 ] }
        }
        compound
        {
            compound { channel ( view 0 segment 0 ) swapbarrier {} }
            compound { channel ( view 0 segment 1 ) swapbarrier {} }
        }
    }
}
In the following example, you have a 4m x 3m screen for 3D projection via passive stereo (e.g. polarization). You have two systems, "render1" and "render2", each equipped with two graphics cards. The two cards on "render1" generate two images for the left half of the screen: one for the left eye view and one for the right eye view. The two cards on "render2" generate left and right view for the right half of the screen. Additionally, you have a system called "master" which has a sound card and should display a small control window.
This setup is very similar to the situation shown in the image multi-display-vrlab.jpg.
The configuration looks like this:
server
{
    connection { hostname "master" }
    config
    {
        appNode
        {
            connection { hostname "master" }
            pipe
            {
                window
                {
                    viewport [ 100 100 400 300 ]
                    channel { name "control" }
                }
            }
        }
        node
        {
            connection { hostname "render1" }
            pipe
            {
                device 0
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render1left" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render1right" }
                }
            }
        }
        node
        {
            connection { hostname "render2" }
            pipe
            {
                device 0
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render2left" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    attributes { hint_fullscreen ON }
                    channel { name "render2right" }
                }
            }
        }
        observer {}
        layout { view { observer 0 }}
        canvas
        {
            layout 0
            wall
            {
                bottom_left  [ 0.0 0.0 -1 ]
                bottom_right [ 4.0 0.0 -1 ]
                top_left     [ 0.0 3.0 -1 ]
            }
            segment { channel "render1left"  viewport [ 0.0 0.0 0.5 1.0 ] }
            segment { channel "render1right" viewport [ 0.0 0.0 0.5 1.0 ] }
            segment { channel "render2left"  viewport [ 0.5 0.0 0.5 1.0 ] }
            segment { channel "render2right" viewport [ 0.5 0.0 0.5 1.0 ] }
            segment { channel "control"      viewport [ 0.0 0.0 1.0 1.0 ] }
        }
        compound
        {
            compound { eye [ LEFT ]  channel ( view 0 segment 0 ) swapbarrier {} }
            compound { eye [ RIGHT ] channel ( view 0 segment 1 ) swapbarrier {} }
            compound { eye [ LEFT ]  channel ( view 0 segment 2 ) swapbarrier {} }
            compound { eye [ RIGHT ] channel ( view 0 segment 3 ) swapbarrier {} }
            compound { channel ( view 0 segment 4 ) swapbarrier {} }
        }
    }
}
The -o equalizer-3d mode allows you to set up arbitrarily oriented screens, using either the wall-based or the projection-based 3D frustum description.
In this example we set up two 16:10 displays side by side which have been rotated around their Z axis by 1.3 radians (~74 degrees). The image multi-display-rotated.jpg illustrates this setup. Other setups include distortion-correct projection onto curved screens, or arbitrarily placed screens in 3D space.
First, we rotate a normally aligned screen by 1.3 radians and output the result:
eq::Matrix4f matrix( eq::Matrix4f::IDENTITY );
matrix.rotate( 1.3f, eq::Vector3f::FORWARD );
wall.bottomLeft  = matrix * wall.bottomLeft;
wall.bottomRight = matrix * wall.bottomRight;
wall.topLeft     = matrix * wall.topLeft;
std::cout << wall << std::endl;
yields a rotated screen centered on the origin:
bottom_left  [ -0.69578  0.6371 -1 ]
bottom_right [ -0.26778 -0.9046 -1 ]
top_left     [  0.26778  0.9046 -1 ]
This screen has to be moved along the X axis by 0.5195 m, to the left for the left screen and to the right for the right screen, which places the edges of the screens on the origin. The resulting wall descriptions are used for the left and right segments, as shown in the configuration below.
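A hedged sketch of this translation, continuing the C++ snippet above (it assumes Equalizer's eq::Wall type with the members used there; the 0.5195 m offset is taken from the text):

// Shift the rotated wall along the X axis to obtain the walls of the
// left segment (-0.5195 m) and the right segment (+0.5195 m).
const eq::Vector3f shift( 0.5195f, 0.0f, 0.0f );
eq::Wall leftWall( wall ), rightWall( wall );
leftWall.bottomLeft   -= shift;
leftWall.bottomRight  -= shift;
leftWall.topLeft      -= shift;
rightWall.bottomLeft  += shift;
rightWall.bottomRight += shift;
rightWall.topLeft     += shift;
std::cout << leftWall << std::endl << rightWall << std::endl;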
The configuration references full-screen output on two GPUs of a single system. By changing the node resource section, the outputs can be mapped to two computers instead. If you disable fullscreen mode and set 'device 0' for the second pipe, two windows simulate this setup on a single machine.
global
{
    EQ_WINDOW_IATTR_HINT_FULLSCREEN ON
}
server
{
    config
    {
        appNode
        {
            pipe
            {
                device 0
                window
                {
                    viewport [ .215 .5 .4 .4 ]
                    channel { name "channel1" }
                }
            }
            pipe
            {
                device 1
                window
                {
                    viewport [ .285 .1 .4 .4 ]
                    attributes { hint_drawable window }
                    channel { name "channel2" }
                }
            }
        }
        layout { view {} }
        canvas
        {
            layout 0
            segment
            {
                channel "channel1"
                wall
                {
                    bottom_left  [ -1.21528  0.6371 -1 ]
                    bottom_right [ -0.78728 -0.9046 -1 ]
                    top_left     [ -0.25172  0.9046 -1 ]
                }
            }
            segment
            {
                channel "channel2"
                wall
                {
                    bottom_left  [ -0.17628  0.6371 -1 ]
                    bottom_right [  0.25172 -0.9046 -1 ]
                    top_left     [  0.78728  0.9046 -1 ]
                }
            }
        }
        compound
        {
            compound { channel ( segment 0 ) swapbarrier {} }
            compound { channel ( segment 1 ) swapbarrier {} }
        }
    }
}
Bino reacts to a number of keyboard shortcuts during playback.
The following shortcuts are recognized:
Bino supports remote controls via LIRC.
Use the client name ‘bino’ in your LIRC configuration. The default LIRC configuration file usually is ~/.lircrc. You can use the ‘--lirc-config’ option to use one or more custom LIRC configuration files instead.
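For example (the configuration file name is hypothetical):

$ bino --lirc-config ~/.bino.lircrc movie.mp4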
The following commands are available:
Example LIRC configuration file excerpt:
begin
    remote = ...
    button = ...
    prog = bino
    config = adjust-brightness +0.05
end
Bino supports several types of camera devices, including firewire and x11 devices. For X11 devices, the device specification has the form [hostname]:display.screen[+x,y]. For example, smith:0.0+160,120 would capture from X11 display :0.0 on host smith, starting at position 160,120.
Note: for firewire and x11 devices to work, your FFmpeg libraries must have libdc1394 and x11grab support enabled.
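A usage sketch (the --device-type option name and the way the device is passed are assumptions to verify with bino --help):

$ bino --device-type x11 smith:0.0+160,120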