Semantic camera

A smart camera that retrieves the objects in its field of view

This sensor emulates a high-level camera that outputs the names of the objects located within its field of view.

The sensor first determines which objects are to be tracked: those marked with a Logic Property called Object (see the documentation on passive objects for details). If the Label property is defined, it is used as the exported name; otherwise, the Blender object name is used. If the Type property is set, it is exported as well.
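
For instance, an object can be marked for tracking from a Builder script like this (a sketch: the asset path and the property values are illustrative):

from morse.builder import *

# a passive object for the semantic camera to track
table = PassiveObject('props/objects', 'SmallTable')
table.translate(x=3.0, y=1.0)

# mark it as trackable, with an exported name and type
table.properties(Object=True, Label='TABLE', Type='furniture')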

A test is then made to identify which of these objects are inside the view frustum of the camera. Finally, a single visibility test is performed per object by casting a ray from the center of the camera to the center of the object. If anything other than the tested object is hit first by the ray, the object is considered occluded by something else, even if only its center is blocked. This last check is somewhat costly and can be disabled by setting the sensor property noocclusion to True, as in the sketch below.
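
A minimal Builder sketch that disables the occlusion test (robot and sensor names are illustrative):

from morse.builder import *

robot = ATRV()

semanticcamera = SemanticCamera()
# frustum check only: skip the per-object ray-casting test
semanticcamera.properties(noocclusion=True)

robot.append(semanticcamera)
env = Environment('empty')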

See also Generic Camera for general information about MORSE cameras.

Configuration parameters for semantic camera

You can set these properties in your scripts with <component>.properties(<property1>=..., <property2>=...); a short example follows the list below.

  • cam_width (default: 256)

    Width of the image rendered by the camera, in pixels.

  • cam_height (default: 256)

    Height of the image rendered by the camera, in pixels.

  • cam_focal (default: 25.0)

    Focal length of the camera, as defined in Blender.

  • cam_near (default: 0.1)

    Distance, in meters, to the near clipping plane of the camera frustum.

  • cam_far (default: 100.0)

    Distance, in meters, to the far clipping plane of the camera frustum: objects beyond it are not seen.

  • Vertical_Flip (default: False)

    If True, the rendered image is flipped vertically.

  • retrieve_depth (default: False)

    If True, the camera also retrieves per-pixel depth information (see the Generic Camera documentation).

  • retrieve_zbuffer (default: False)

    If True, the camera also retrieves the Z-buffer (see the Generic Camera documentation).

  • noocclusion (bool, default: False)

    Do not check whether objects occlude one another (faster, but less realistic behaviour)
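
Building on the Builder sketch above, several properties can be set in a single call (the values here are arbitrary):

semanticcamera.properties(cam_width=512, cam_height=512, cam_far=150.0)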

Data fields

This sensor exports these datafields at each simulation step:

  • timestamp (float, initial value: 0.0)

    number of milliseconds in simulated time

  • visible_objects (list<objects>, initial value: [])

    A list of the objects currently visible to the camera. Each object is represented by a dictionary composed of:

    • name (String): the name of the object
    • type (String): the type of the object
    • position (vec3<float>): the position of the object, in meters, in the Blender frame
    • orientation (quaternion): the orientation of the object, in the Blender frame
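
These fields can be read from an external Python script with pymorse (a sketch: it assumes the robot and sensor are named robot and semanticcamera, as in the Examples section below, and that the sensor streams over the socket interface):

import pymorse

with pymorse.Morse() as simu:
    # block until the next sample from the semantic camera arrives
    data = simu.robot.semanticcamera.get()
    for obj in data['visible_objects']:
        print('%s (%s) at %s' % (obj['name'], obj['type'], obj['position']))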

Services for Semantic camera

  • get_local_data() (blocking)

    Returns the current data stored in the sensor.

    • Return value

      a dictionary of the current sensor’s data

  • get_configurations() (blocking)

    Returns the configurations of a component (parsed from the properties).

    • Return value

      a dictionary of the current component’s configurations

  • get_properties() (blocking)

    Returns the properties of a component.

    • Return value

      a dictionary of the current component’s properties
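
These services can likewise be called through pymorse (same naming assumptions as above; pymorse service calls return futures, so result() blocks until the reply arrives):

import pymorse

with pymorse.Morse() as simu:
    camera = simu.robot.semanticcamera

    # component properties, parsed from the Blender logic properties
    print(camera.get_properties().result())

    # the latest data stored in the sensor
    print(camera.get_local_data().result())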

Examples

The following examples show how to use this component in a Builder script:

from morse.builder import *

robot = ATRV()

# creates a new instance of the sensor
semanticcamera = SemanticCamera()

# place your component at the correct location
semanticcamera.translate(<x>, <y>, <z>)
semanticcamera.rotate(<rx>, <ry>, <rz>)

robot.append(semanticcamera)

# define one or several communication interfaces, like 'socket'
semanticcamera.add_interface(<interface>)

env = Environment('empty')

(This page has been auto-generated from MORSE module morse.sensors.semantic_camera.)
