

Requirements and recommendations

Configuring the demo

There are two configuration files: a setup file (selected with --config-setup, e.g. setup.cfg) and an estimation file (selected with --config-estimation, e.g. estimation.cfg).

You can use the '@' character to refer to <data-path> when specifying the location of the config file to use: --config-estimation=@/estimation.cfg
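As a sketch of this shorthand (not RT-SLAM code; the paths are made up for the illustration), the '@' simply stands for the value of --data-path:

```shell
# Hypothetical illustration of the '@' shorthand: '@' expands to the
# value given to --data-path, so with --data-path=/data/run1 the option
# --config-estimation=@/estimation.cfg points at the file printed below.
data_path=/data/run1
config=@/estimation.cfg
echo "${config/@/$data_path}"   # prints /data/run1/estimation.cfg
```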

The parameters are documented in the main.hpp file, with the definition of the data structures ConfigSetup and ConfigEstimation.

Offline processing

If you already have some data and you want to process them with RT-SLAM:

Sample demo

You can download here a sample image and IMU sequence to quickly test RT-SLAM offline. Pass the path where the archive has been unzipped with the --data-path option.

Note that this sequence was meant to illustrate the highly dynamic motions that the system can withstand thanks to IMU-vision fusion, so it is recommended to enable the use of IMU data with the --robot=1 option. The run will still diverge at the very end, because the motion goes beyond the IMU's limits (300 deg/s).

As an example, you can use a command similar to this one to run the demo (from $JAFAR_DIR/build/modules/rtslam):

    demo_suite/x86_64-linux-gnu/demo_slam --disp-2d=1 --disp-3d=1 --robot=1 --camera=1 --map=1 --render-all=0 --replay=1 --rand-seed=1 --pause=1 --config-setup=@/setup.cfg --config-estimation=@/estimation.cfg --data-path=/home/cyril/2011-02-15_inertial-high-dyn/

Running the demo

    cd $JAFAR_DIR/build/modules/rtslam
    demo_suite/<platform>/demo_slam <options>

Control options



Options are given as --option=value.

--disp-2d=0/1 (disabled/enabled): use 2D display. Requires Qt installed.

--disp-3d=0/1 (disabled/enabled): use 3D display. Requires GDHE installed.

--render-all=0/1 (disabled/enabled): force rendering display for all frames. Requires --replay 1.

--replay=0/1/2/3 (offline/online/online no slam/offline replay): replay mode. Offline runs from offline data; online runs from live sensors; online no slam disables SLAM processing for pure data dumps; offline replay replays an online run exactly, selecting the same data. Set --data-path to the path with the dumped data. Warning: do not run online (--replay 0/2) with --data-path pointing to a data set you want to keep, because the run starts by cleaning it!

--dump=0/1 (disabled/enabled): dump the images (with --replay=0) or the rendered views (with --replay=1). Set --data-path to a free directory.

--export=0/1/2 (Off/Socket/Poster): export the pose of the robot.

--log=0/1/<filename> (disabled/enabled to rtslam.log/enabled to <filename>): log the result output in a text file in --data-path.

--rand-seed=0/1/n (generate a new one/replay uses the saved one/use this value): random seed to use.

--pause=0/n (disabled/pause after frame n): pause after each piece of data is integrated, waiting for the space key in the 2D display window, or in the console if there is no 2D display. Requires --replay 1/3.

--data-path=<path>: path where data is stored or read.

--config-setup=<setup config file>: use this file for the setup configuration.

--config-estimation=<estimation config file>: use this file for the estimation configuration.

--verbose=0/1/2/3/4/5 (Off/Trace/Warning/Debug/VerboseDebug/VeryVerboseDebug): verbosity level of the debug output. Requires compilation in debug mode.

--help: prints help.

--usage: prints usage.
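Because an online run cleans --data-path before dumping, one cautious pattern, sketched below with an arbitrary directory prefix, is to always dump into a freshly created directory:

```shell
# Create a fresh, empty dump directory so an online run (--replay=0)
# cannot wipe an existing data set; the name prefix is arbitrary.
dump_dir=$(mktemp -d "${TMPDIR:-/tmp}/rtslam-dump.XXXXXX")
ls -A "$dump_dir"    # empty: nothing for demo_slam to clean
# demo_slam ... --replay=0 --data-path="$dump_dir"   (actual run, not executed here)
```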

Slam options



--robot=0/1/1n (constant velocity/inertial/order n constant): motion model used for the prediction step.

--camera=<cams>[<mode>]: <cams>=sum(2^id) defines which cameras are used (for instance 1 for the left camera, 2 for the right one, 3 for both left and right cameras); <mode> (0=Raw/1=Rectify/2=Stereo) defines how the cameras are used, only Raw being implemented right now.

--map=0/1/2 (visual odometry/global/local): type of map used, i.e. how landmark memory is managed. Visual odometry forgets landmarks soon after they are lost; use it for long ranges. Global keeps landmarks in the whole environment to allow loop closures; use it for small ranges. Local is optimized for submaps, but is not fully implemented yet.

--odom=0/1 (disabled/enabled): use odometry data from a genom poster or log file as observations in the filter.

--gps=0/1/2/3 (disabled/position/position+velocity/position+orientation): use an absolute sensor such as GPS or MoCap from a genom poster or log file as observations in the filter.

--heading=<angle>: initial absolute heading, required for the use of GPS (overrides the value in setup.cfg).
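The <cams> value for --camera is a bitmask, i.e. a sum of powers of two over the selected camera ids. A quick sketch of the arithmetic (the left=id 0 / right=id 1 assignment follows the left/right example above):

```shell
# <cams> = sum of 2^id over the selected cameras:
# here the left camera has id 0 and the right camera id 1.
left=0; right=1
echo $(( 1 << left ))                   # 1: left camera only
echo $(( 1 << right ))                  # 2: right camera only
echo $(( (1 << left) | (1 << right) ))  # 3: both cameras
```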

Raw data options



--simu=0/<environment id><trajectory id>: use the ad hoc simulator and specify which trajectory and which environment to use; does not work with inertial.

--trigger=0/1/2 (internal/external with shutter control/external without shutter control): configure the trigger used for the cameras.

--freq=<freq>: cameras frequency.

--shutter=<time>: shutter time, 0.0 meaning automatic.

One more option controls image decimation: only use one image every <div>, with a shift of <mod-camX> for each camera.
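The decimation above keeps one image out of every <div>. Assuming the per-camera shift acts as a simple modulus on the frame index (an assumption about the exact convention, not stated on this page; the div/mod values are made up), the kept frame indices would be:

```shell
# Sketch: keep frames whose index i satisfies i % div == mod.
# div=3, mod=1 are example values for the illustration only.
div=3; mod=1
kept=""
for i in 0 1 2 3 4 5 6 7 8 9; do
  if [ $(( i % div )) -eq "$mod" ]; then kept="$kept$i "; fi
done
kept="${kept% }"
echo "$kept"   # prints: 1 4 7
```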


For instance, to start a simple SLAM process:

    demo_suite/<platform>/demo_slam --disp-2d=1 --disp-3d=1 --camera=10

Controlling the demo

When doing a replay, you can control how SLAM runs by interacting with the 2D viewer:

Interpreting the demo

Color code:

Another way to remember the color code:

2D landmark representation:

Reasons for a landmark not being observed:


If you want to submit a contribution, you can either:

OpenrobotsWiki: rtslam/Using (last edited 2017-05-22 11:39:53 by matthieu)