SLAM Module

The SLAM (Simultaneous Localization and Mapping) module estimates the vehicle’s pose (position and orientation) while simultaneously building a map of cone landmarks on the track. This module is critical for autonomous navigation, providing both the vehicle state and track map to downstream planning and control systems.

Overview

Our system implements GraphSLAM, a full SLAM approach that optimizes the entire trajectory and map jointly. We provide two distinct implementations that differ in how they handle sensor fusion:

  1. Early Stage Fusion: Receives pre-fused cone observations from the perception module

  2. Late Stage Fusion (MWEKF): Receives separate camera and LiDAR observations and fuses them within SLAM
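
Conceptually, GraphSLAM accumulates odometry edges (constraints between consecutive poses) and measurement edges (constraints between poses and cone landmarks) in one sparse least-squares problem and solves for all poses and cones jointly. The following minimal numpy sketch illustrates that idea for a translation-only 2-D case with two poses and one cone; it is not the project's solver (see GraphSLAMSolve below), just the shape of the computation.

    import numpy as np

    # Unknowns: two 2-D poses p0, p1 and one cone landmark l, stacked as
    # [p0x, p0y, p1x, p1y, lx, ly].
    dx_weight, z_weight = 2.0, 1.0   # same roles as the solver's dx_weight / z_weight

    rows, rhs = [], []

    def add_edge(i, j, meas, w):
        """Edge meas ~= state[j] - state[i] (translation-only), weighted by w."""
        for axis in range(2):
            row = np.zeros(6)
            row[2 * j + axis] = w
            row[2 * i + axis] = -w
            rows.append(row)
            rhs.append(w * meas[axis])

    # Anchor p0 at the origin with a prior.
    for axis in range(2):
        prior = np.zeros(6)
        prior[axis] = 1.0
        rows.append(prior)
        rhs.append(0.0)

    add_edge(0, 1, [1.0, 0.0], dx_weight)   # odometry: p1 is ~1 m ahead of p0
    add_edge(0, 2, [2.0, 1.0], z_weight)    # cone seen from p0 at (2, 1)
    add_edge(1, 2, [1.1, 0.9], z_weight)    # the same cone seen from p1 at (1.1, 0.9)

    A, b = np.vstack(rows), np.array(rhs)
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(estimate.reshape(-1, 2))          # rows: p0, p1, cone

The solver documented below maintains this kind of system incrementally: update_graph appends new edges and solve_graph re-solves for every pose and cone.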

Published Topics

  • /slam/state (feb_msgs/State): Vehicle state (x, y, velocity, heading, lap_count); see the subscriber sketch after this list

  • /slam/map/global (feb_msgs/Map): Complete cone map (published after first lap)

  • /slam/map/local (feb_msgs/Map): Local subset of cones near vehicle

  • /slam/conemap (sensor_msgs/PointCloud): Visualization of cone map (if enabled)

  • /slam/pose (geometry_msgs/PoseStamped): Vehicle pose for RViz (if enabled)

  • /slam/guessed_positions (sensor_msgs/PointCloud): Historical vehicle positions (if enabled)
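
As a quick illustration of consuming the state output, the sketch below subscribes to /slam/state with rclpy. The field accesses (x, y, velocity, heading, lap_count) follow the description above, but the exact feb_msgs/State definition and import path should be checked against the message package.

    import rclpy
    from rclpy.node import Node
    from feb_msgs.msg import State  # assumed import path for feb_msgs/State


    class StateListener(Node):
        def __init__(self):
            super().__init__('slam_state_listener')
            self.create_subscription(State, '/slam/state', self.on_state, 10)

        def on_state(self, msg):
            # Field names taken from the topic description above; verify against
            # the actual feb_msgs/State definition.
            self.get_logger().info(
                f'pose=({msg.x:.2f}, {msg.y:.2f}) v={msg.velocity:.2f} '
                f'heading={msg.heading:.2f} lap={msg.lap_count}')


    def main():
        rclpy.init()
        rclpy.spin(StateListener())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()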

Subscribed Topics

Early Stage Fusion:

  • /fusion/cones or /perception_cones (eufs_msgs/ConeArrayWithCovariance or feb_msgs/ConesCartesian): Fused cone observations

  • /imu (sensor_msgs/Imu): Orientation and acceleration

  • /odometry/wheelspeeds or /ground_truth/wheel_speeds (Float64MultiArray or eufs_msgs/WheelSpeedsStamped): Wheel speed measurements

Late Stage Fusion (MWEKF):

  • /camera/cones (feb_msgs/ConesCartesian): Camera-only cone detections

  • /lidar/cones (feb_msgs/ConesCartesian): LiDAR-only cone detections

  • /imu (sensor_msgs/Imu): Orientation measurements

  • /cmd (ackermann_msgs/AckermannDriveStamped): MPC control commands

  • /odometry/wheelspeeds or /ground_truth/wheel_speeds: Wheel speed measurements
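
For bench testing either pipeline without the full vehicle stack, individual inputs can be stubbed. The sketch below publishes dummy wheel-speed measurements on /odometry/wheelspeeds as a std_msgs Float64MultiArray; the element count and ordering used here (four wheels, FL/FR/RL/RR) are assumptions and must match what the SLAM node actually expects.

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import Float64MultiArray


    class WheelSpeedStub(Node):
        def __init__(self):
            super().__init__('wheelspeed_stub')
            self.pub = self.create_publisher(Float64MultiArray, '/odometry/wheelspeeds', 10)
            self.create_timer(0.02, self.tick)  # publish at 50 Hz

        def tick(self):
            msg = Float64MultiArray()
            # Assumed layout: [FL, FR, RL, RR] wheel speeds; confirm against the
            # SLAM node's wheel-speed callback before relying on this ordering.
            msg.data = [10.0, 10.0, 10.0, 10.0]
            self.pub.publish(msg)


    def main():
        rclpy.init()
        rclpy.spin(WheelSpeedStub())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()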

API Reference

For detailed implementation documentation including class docstrings and method descriptions, see the source code in slam_ws/graphslam_global/graphslam_global/.

GraphSLAM Solver

class GraphSLAMSolve.GraphSLAMSolve(x0=numpy.array, initial_rows=100, initial_cols=100, dx_weight=2.0, z_weight=1.0, dclip=2.0, expansion_ratio=1.7, local_radius=100000.0)[source]

Bases: object

Parameters
  • x0 (numpy.ndarray) –

  • initial_rows (int) –

  • initial_cols (int) –

  • dx_weight (float) –

  • z_weight (float) –

  • dclip (float) –

  • expansion_ratio (float) –

  • local_radius (float) –

icp(z, color)[source]

Rudimentary data-association method that rotates and translates z until it lines up optimally with self.lhat.

Parameters
  • z (np.ndarray) – measurements, in global reference frame

  • color (np.ndarray) – color of measurements

Returns

The measurements, optimally translated and rotated.

Return type

np.ndarray
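
The kind of alignment described here can be sketched as one nearest-neighbour correspondence step followed by a closed-form (Kabsch/SVD) fit of a 2-D rotation and translation. The snippet below is an illustrative stand-alone version of such an iteration, not the method's actual implementation.

    import numpy as np

    def icp_step(z, lhat):
        """One ICP-style iteration: match each measurement in z (n x 2, global
        frame) to its nearest landmark in lhat (m x 2), then apply the rigid
        transform that best aligns the matched pairs."""
        # Nearest-neighbour correspondences.
        d = np.linalg.norm(z[:, None, :] - lhat[None, :, :], axis=2)
        matches = lhat[d.argmin(axis=1)]

        # Kabsch: optimal rotation/translation between the two centred point sets.
        mu_z, mu_m = z.mean(axis=0), matches.mean(axis=0)
        H = (z - mu_z).T @ (matches - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:     # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_z
        return z @ R.T + t           # measurements, rotated and translated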

data_assocation_lap_2(z, color)[source]

Use ICP for loop closure with every new set of cones in lap 2 and beyond.

Parameters
  • z (numpy.ndarray) –

  • color (numpy.ndarray) –

Return type

numpy.ndarray

lap_completion()[source]

Checks whether the car’s latest position estimate crosses the line between the orange cone position estimates.

Args: N/A (uses SLAM position data)

Updates:

lap_counter: number of times the car has passed the orange cones

Returns: N/A

Return type

bool
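
In outline, a check of this kind amounts to testing whether the segment between the two most recent position estimates intersects the segment spanned by the two orange cone estimates, for example with the segments_intersect helper documented under Utility Functions. The sketch below shows that shape of check; the import path and the exact bookkeeping are assumptions.

    from utility_functions import segments_intersect  # assumed importable from this package

    def check_lap(prev_pos, curr_pos, orange_left, orange_right, lap_counter):
        """Illustrative lap check: increment lap_counter if the car's last step
        (prev_pos -> curr_pos) crosses the segment between the two orange cones."""
        if segments_intersect(prev_pos, curr_pos, orange_left, orange_right):
            lap_counter += 1
        return lap_counter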

update_graph(dx, new_cones, matched_cones, first_update=False)[source]

Add edges to the graph corresponding to a movement and new vision data.

Parameters
  • dx (ndarray) – difference in position since the last update, [dx, dy]

  • new_cones (ndarray) – cones not yet in the SLAM map, in the car’s local frame, [x, y, color]

  • matched_cones (ndarray) – cones already in the SLAM map, with their map indices, in the local frame, [map index, x, y, color]

Return type

None

update_cones(new_cones, matched_cones, first_update=False)[source]
Parameters
  • new_cones (ndarray) – cones not yet in the SLAM map, in the car’s local frame, [x, y, color]

  • matched_cones (ndarray) – cones already in the SLAM map, with their map indices, in the local frame, [map index, x, y, color]

Returns: indices of the new cones in the global SLAM map

solve_graph()[source]

Solve the graph. Does not return results; use get_positions() and get_cones() to retrieve the updated estimates.

Return type

None

get_cones(color=None, indices=None)[source]
Return type

numpy.ndarray

get_positions()[source]
Return type

numpy.ndarray

get_colors(indices=None)[source]
check_resize(len_message)[source]
update_position(dx)[source]
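
Tying the solver methods together, a typical update cycle might look like the hedged sketch below. The array layouts follow the update_graph docstring, but the import path, the shape of x0, and whether first_update needs special handling are assumptions to verify against the source.

    import numpy as np
    from GraphSLAMSolve import GraphSLAMSolve   # import path assumed from the module name above

    solver = GraphSLAMSolve(x0=np.array([0.0, 0.0]))    # assumed 2-D start position

    # One odometry + vision update (layouts per the update_graph docstring).
    dx = np.array([1.0, 0.0])                        # movement since last update [dx, dy]
    new_cones = np.array([[3.0, 1.5, 0.0]])          # [x, y, color], car's local frame
    matched_cones = np.array([[2, 2.5, -1.0, 1.0]])  # [map index, x, y, color], local frame

    solver.update_graph(dx, new_cones, matched_cones)
    solver.solve_graph()                             # optimise; nothing is returned

    trajectory = solver.get_positions()              # ndarray of position estimates
    cone_map = solver.get_cones()                    # ndarray of cone estimates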

Utility Functions

utility_functions.compute_timediff(slam, header)[source]

Takes in a message header and computes the difference in time since the last state message.

Input: header (std_msgs/Header) – uint32 seq, time stamp, string frame_id

Output: timediff (float)

Parameters

header (std_msgs.msg.Header) –

Return type

float
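
The computation reduces to converting the header stamp to seconds and subtracting the previously stored time. A stand-alone sketch (with the previous time held by the caller rather than on the slam object) is shown below.

    from std_msgs.msg import Header

    def timediff_from_header(header: Header, last_time_s: float) -> float:
        """Elapsed time in seconds between this header's stamp and last_time_s."""
        now_s = header.stamp.sec + header.stamp.nanosec * 1e-9
        return now_s - last_time_s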

utility_functions.quat_to_euler(quat)[source]

Takes in a quaternion and converts it to Euler angles.

Input: quat (Quaternion) – float x, float y, float z, float w

Output: roll, pitch, yaw
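
For reference, the standard quaternion-to-Euler conversion (ZYX convention, with the quaternion ordered x, y, z, w as above) looks like this; the module's own implementation may differ in convention.

    import math

    def quat_to_euler(x: float, y: float, z: float, w: float):
        """Roll, pitch, yaw in radians from a unit quaternion (x, y, z, w)."""
        roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
        pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))  # clamped for safety
        yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
        return roll, pitch, yaw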

utility_functions.cartesian_to_polar(car_state, cone)[source]
utility_functions.compareAngle(self, a, b, threshold)[source]
utility_functions.compute_delta_velocity(self, acc, dt, imu_direction)[source]

Takes in linear acceleration and dt and outputs the change in linear velocity.

Input: linear_acceleration (from the IMU message), dt (calculated at each time step)

Output: linear_velocity

Note: linear acceleration is assumed to be in the car’s frame, i.e. x is the longitudinal direction (positive = forwards) and y is the lateral direction (positive = to the right); adjust according to how the IMU provides this data.

Parameters
  • acc (geometry_msgs.msg.Vector3) –

  • dt (float) –

  • imu_direction (str) –

Return type

float
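
The underlying computation is a single integration step of the longitudinal acceleration. The sketch below follows the sign convention described above; the imu_direction handling is illustrative, since the accepted values of that flag are not documented here.

    from geometry_msgs.msg import Vector3

    def delta_velocity(acc: Vector3, dt: float, imu_direction: str = 'forward') -> float:
        """Change in longitudinal velocity over dt, assuming acc is expressed in the
        car frame with x longitudinal (positive = forwards). Flip the sign if the
        IMU reports x pointing backwards (hypothetical 'forward'/'backward' flag)."""
        a_long = acc.x if imu_direction == 'forward' else -acc.x
        return a_long * dt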

utility_functions.rotate_and_translate_cones(msg, camera)[source]
Parameters

msg (feb_msgs.msg.ConesCartesian) –

utility_functions.adjust_cone_poses_to_previous_pose(previous_states, cones, timestamp)[source]

Args:

previous_states: nx3 array of [timestamp, x, y]

cones: nx3 array of [x, y, color] (the cone message we received)

timestamp: float, the time at which the CAMERA received the cones

Returns:

cones: nx3 array of [x, y, color] of adjusted cones

idx_to_remove: latest index to remove (old poses)

utility_functions.ccw(A, B, C)[source]
utility_functions.segments_intersect(A, B, C, D)[source]
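
These two helpers implement the classic counter-clockwise orientation test for 2-D segment intersection, which is the kind of check the lap-completion logic relies on. A textbook version (ignoring degenerate collinear cases) looks like this:

    def ccw(A, B, C):
        """True if the points A, B, C are in counter-clockwise order."""
        return (C[1] - A[1]) * (B[0] - A[0]) > (B[1] - A[1]) * (C[0] - A[0])

    def segments_intersect(A, B, C, D):
        """True if segment AB properly intersects segment CD (collinear cases ignored)."""
        return ccw(A, C, D) != ccw(B, C, D) and ccw(A, B, D) != ccw(A, B, C)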