collect_calibration_data

Skill class for the ai.intrinsic.collect_calibration_data skill.

The collect_calibration_data skill automates the process of capturing data points for calibration. It iterates through a provided list of robot waypoints. At each waypoint, it performs the following steps:

  1. Moves the robot to the target pose using the move_robot skill.
  2. Waits for motion to settle and senses the actual joint positions using update_robot_joint_positions.
  3. Calls CaptureData on the unified CalibrationService, passing the sensed robot pose. The service handles image acquisition and pattern detection.
  4. If requested, returns the robot to a safe initial position before proceeding to the next waypoint, to avoid collisions.

The following typical camera cases are supported:

  • STATIONARY_CAMERA: The camera has a fixed position in the workcell and the calibration object is mounted on the robot's flange.

  • MOVING_CAMERA: The camera is mounted on the robot's flange and the calibration object has a fixed position in the workcell.

Data collection can be performed with or without collision detection. The mode without collision detection is useful when the world model is not accurate but the provided waypoints are known to be safely reachable.

Prerequisites

This skill depends on the update_robot_joint_positions and move_robot skills. Make sure these skills have been installed to your solution before running collect_calibration_data.

Additionally, for collect_calibration_data to work, a calibration session must have been initialized in the CalibrationService by calling the initialize_calibration skill. This setup step ensures that the service is ready to collect data points that will be later used for intrinsic, camera-to-robot and/or camera-to-camera calibration.

Optionally, if you wish to apply a pseudo-random sampling strategy for selecting waypoints, you can call the sample_calibration_poses skill to generate the waypoints passed to collect_calibration_data.
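A toy illustration of pseudo-random waypoint sampling is shown below. It assumes nothing about the actual sample_calibration_poses skill; it only demonstrates the general idea of drawing waypoints with a seeded random generator so that runs are reproducible.

```python
import random


def sample_waypoints(center, spread, n, seed=0):
    """Toy pseudo-random sampler: jitters a nominal position (x, y, z).

    Not the sample_calibration_poses skill; just an illustration of how
    a seeded sampler yields a reproducible list of n waypoints, each
    within +/- spread of the nominal center.
    """
    rng = random.Random(seed)
    return [
        tuple(c + rng.uniform(-spread, spread) for c in center)
        for _ in range(n)
    ]
```

Because the generator is seeded, calling the sampler twice with identical arguments yields the same waypoint list, which is convenient when re-running a calibration.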

Usage Example

This skill does not have any usage example yet.
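In the absence of an official example, the sketch below only illustrates assembling and sanity-checking the parameters listed in the next section. The dictionary keys mirror the documented parameter names, but the real skill takes a proto message via the Intrinsic SDK, and the helper shown here is hypothetical.

```python
VALID_CALIBRATION_CASES = {"STATIONARY_CAMERA", "MOVING_CAMERA"}


def make_params(pose_estimator, calibration_case, calibration_object,
                waypoints, minimum_margin=0.05,
                disable_collision_checking=False):
    """Assemble a parameter dict mirroring the documented fields.

    Purely illustrative; the actual skill parameters are a proto
    message, not a Python dict.
    """
    if calibration_case not in VALID_CALIBRATION_CASES:
        raise ValueError(
            f"calibration_case must be one of "
            f"{sorted(VALID_CALIBRATION_CASES)}")
    if not waypoints:
        raise ValueError("waypoints must not be empty")
    return {
        "pose_estimator": pose_estimator,
        "calibration_case": calibration_case,
        "calibration_object": calibration_object,
        "waypoints": list(waypoints),
        "minimum_margin": minimum_margin,
        "disable_collision_checking": disable_collision_checking,
    }
```

Validating the calibration case and the non-empty waypoint list up front mirrors the constraints stated in the Parameters section below.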

Parameters

pose_estimator

Id of the pose estimator to use for calibration pattern detection.

pattern_detection_config

Pattern detection configuration.

calibration_case

Must be either STATIONARY_CAMERA or MOVING_CAMERA.

calibration_object

Uniquely identifies the object used for calibration. Typically this is a calibration pattern, but with an appropriate pose estimator any kind of object can be used.

waypoints

Robot waypoints that will be used for data collection.

minimum_margin

Minimum margin between the moving object (calibration pattern for the STATIONARY_CAMERA case, and camera for the MOVING_CAMERA case) and all other world objects.

Set this parameter to a higher value if you are unsure about the exact positions of the objects in your world.

disable_collision_checking

Set to true to run without collision checking. This enables cases where the exact camera position is unknown, or the world is not accurately modelled.

ensure_same_branch_ik

If true, restricts the robot configurations used at the waypoints to the same IK branch.

motion_type

Determines the type of robot motion used to collect the data. Defaults to a planned move.

skip_return_to_base_between_waypoints

Set to true to skip returning to the base robot pose after visiting each waypoint. Using the return-to-base strategy can help increase robustness, e.g. in the presence of a dress pack.

arm_part

Name of the ICON arm part to control. If not provided and only a single arm part is present in the ICON instance, that part will be used.

use_unified_calibration_service

This field will soon be deprecated and default to true.

Capabilities

camera

Resource with capability CameraConfig

robot

Resource having all of the following capabilities:

  • Icon2Connection

  • Icon2PositionPart

Returns

hand_eye_calibration_request

This return value will soon be deprecated.

Error Codes

The skill does not define any error codes yet.