Localization and Mapping

These classes support simulation of vehicle and map estimation in a simple planar world with point landmarks.

State estimation

Two state estimators are included.

Extended Kalman filter

The EKF is capable of vehicle localization, map estimation or SLAM.

class roboticstoolbox.mobile.EKF(robot, sensor=None, map=None, P0=None, x_est=None, joseph=True, animate=True, x0=[0, 0, 0], verbose=False, history=True, workspace=None)[source]

Bases: object

__init__(robot, sensor=None, map=None, P0=None, x_est=None, joseph=True, animate=True, x0=[0, 0, 0], verbose=False, history=True, workspace=None)[source]

Extended Kalman filter

Parameters:
  • robot (2-tuple) – robot motion model

  • sensor (2-tuple, optional) – vehicle mounted sensor model, defaults to None

  • map (LandmarkMap, optional) – landmark map, defaults to None

  • P0 (ndarray(n,n), optional) – initial covariance matrix, defaults to None

  • x_est (array_like(n), optional) – initial state estimate, defaults to None

  • joseph (bool, optional) – use Joseph update of covariance, defaults to True

  • animate (bool, optional) – show animation of vehicle motion, defaults to True

  • x0 (array_like(n), optional) – initial EKF state, defaults to [0, 0, 0]

  • verbose (bool, optional) – display extra debug information, defaults to False

  • history (bool, optional) – retain step-by-step history, defaults to True

  • workspace (scalar, array_like(2), array_like(4)) – dimension of workspace, see expand_dims()

This class solves several classical robotic estimation problems, which are selected according to the arguments:

Problem                 len(x)  robot       sensor      map   P0
======================  ======  ==========  ==========  ====  ====
Dead reckoning          3       (veh,V)     None        None  P0
Map-based localization  3       (veh,V)     (smodel,W)  yes   P0
Map creation            2N      (veh,None)  (smodel,W)  None  None
SLAM                    3+2N    (veh,V)     (smodel,W)  None  P0

where:

  • veh models the robotic vehicle kinematics and odometry and is a VehicleBase subclass

  • V is the estimated odometry (process) noise covariance as an ndarray(3,3)

  • smodel models the robot mounted sensor and is a SensorBase subclass

  • W is the estimated sensor (measurement) noise covariance as an ndarray(2,2)

The state vector has different lengths depending on the particular estimation problem, see below.

At each iteration of the EKF:

  • the step method of the robot is invoked

  • the next control input is obtained from the driver agent and applied as the vehicle control input

  • the vehicle returns a noisy odometry estimate

  • the state prediction is computed

  • the true pose is used to determine a noisy sensor observation

  • the state is corrected and new landmarks are added to the map

The working area of the robot is defined by workspace or inherited from the landmark map attached to the sensor (see expand_dims()):

workspace     x-range  y-range
============  =======  =======
A (scalar)    -A:A     -A:A
[A, B]        A:B      A:B
[A, B, C, D]  A:B      C:D

Dead-reckoning localization

The state \(\vec{x} = (x, y, \theta)\) is the estimated vehicle configuration.

Create a vehicle with odometry covariance V, add a driver to it, then run the Kalman filter with estimated covariance V and initial vehicle state covariance P0:

import numpy as np
import matplotlib.pyplot as plt
from roboticstoolbox import Bicycle, RandomPath, LandmarkMap, RangeBearingSensor, EKF

V = np.diag([0.02, np.radians(0.5)]) ** 2
robot = Bicycle(covar=V)
robot.control = RandomPath(workspace=10)

x_sdev = [0.05, 0.05, np.radians(0.5)]
P0 = np.diag(x_sdev) ** 2
ekf = EKF(robot=(robot, V), P0=P0)

ekf.run(T=20)  # run the simulation for 20 seconds

robot.plot_xy(color="b")  # plot the true vehicle path
ekf.plot_xy(color="r")    # overlay the estimated path
ekf.plot_ellipse(filled=True, facecolor="g", alpha=0.3)  # overlay uncertainty ellipses

# plot the covariance against time
t = ekf.get_t()
pn = ekf.get_Pnorm()
plt.plot(t, pn)

Map-based vehicle localization

The state \(\vec{x} = (x, y, \theta)\) is the estimated vehicle configuration.

Create a vehicle with odometry covariance V, add a driver to it, create a map with 20 point landmarks, create a sensor that uses the map and vehicle state to estimate landmark range and bearing with covariance W, then create the Kalman filter with estimated covariances V and W and initial vehicle state covariance P0:

V = np.diag([0.02, np.radians(0.5)]) ** 2
robot = Bicycle(covar=V)
robot.control = RandomPath(workspace=10)

map = LandmarkMap(20, workspace=10)

W = np.diag([0.1, np.radians(1)]) ** 2
sensor = RangeBearingSensor(robot=robot, map=map, covar=W, angle=[-np.pi/2, np.pi/2], range=4, animate=True)

x_sdev = [0.05, 0.05, np.radians(0.5)]
P0 = np.diag(x_sdev) ** 2
ekf = EKF(robot=(robot, V), P0=P0, map=map, sensor=(sensor, W))

ekf.run(T=20)  # run the simulation for 20 seconds

map.plot()  #  plot the map
robot.plot_xy(color="b")  # plot the true vehicle path
ekf.plot_xy(color="r")    # overlay the estimated path
ekf.plot_ellipse()  # overlay uncertainty ellipses

# plot the covariance against time
t = ekf.get_t()
pn = ekf.get_Pnorm()
plt.plot(t, pn)

Vehicle-based map making

The state \(\vec{x} = (x_0, y_0, \dots, x_{N-1}, y_{N-1})\) is the estimated landmark positions where \(N\) is the number of landmarks. The state vector is initially empty, and is extended by 2 elements every time a new landmark is observed.

Create a vehicle with perfect odometry (no covariance), add a driver to it, create a map with 20 point landmarks, create a sensor that uses the map and vehicle state to estimate landmark range and bearing with covariance W, then create the Kalman filter with estimated sensor covariance W and run the filter:

robot = Bicycle()
robot.add_driver(RandomPath(20, 2))

map = LandmarkMap(20, workspace=10, seed=0)

W = np.diag([0.1, np.radians(1)]) ** 2
sensor = RangeBearingSensor(robot, map, W)

ekf = EKF(robot=(robot, None), sensor=(sensor, W))

ekf.run(T=20)  # run the simulation for 20 seconds

map.plot()  #  plot the map
robot.plot_xy(color="b")  # plot the true vehicle path

Simultaneous localization and mapping (SLAM)

The state \(\vec{x} = (x, y, \theta, x_0, y_0, \dots, x_{N-1}, y_{N-1})\) is the estimated vehicle configuration followed by the estimated landmark positions where \(N\) is the number of landmarks. The state vector is initially of length 3, and is extended by 2 elements every time a new landmark is observed.

Create a vehicle with odometry covariance V, add a driver to it, create a map with 20 point landmarks, create a sensor that uses the map and vehicle state to estimate landmark range and bearing with covariance W, then create the Kalman filter with estimated covariances V and W and initial state covariance P0, and run the filter to estimate the vehicle state and the map at each time step:

V = np.diag([0.02, np.radians(0.5)]) ** 2
robot = Bicycle(covar=V)
robot.control = RandomPath(workspace=10)

map = LandmarkMap(20, workspace=10)

W = np.diag([0.1, np.radians(1)]) ** 2
sensor = RangeBearingSensor(robot=robot, map=map, covar=W, angle=[-np.pi/2, np.pi/2], range=4, animate=True)

P0 = np.diag([0.05, 0.05, np.radians(0.5)]) ** 2
ekf = EKF(robot=(robot, V), P0=P0, sensor=(sensor, W))

ekf.run(T=20)  # run the simulation for 20 seconds

map.plot()          # plot true map
ekf.plot_map()      # plot estimated landmark positions

robot.plot_xy()     # plot true path
ekf.plot_xy()       # plot estimated robot path
ekf.plot_ellipse()  # plot estimated covariance

# plot the covariance against time
t = ekf.get_t()
pn = ekf.get_Pnorm()
plt.plot(t, pn)

Seealso:

run()

property x_est

Get EKF state

Returns:

state vector

Return type:

ndarray(n)

Returns the value of the estimated state vector at the end of simulation. The dimensions depend on the problem being solved.

property P_est

Get EKF covariance

Returns:

covariance matrix

Return type:

ndarray(n,n)

Returns the value of the estimated covariance matrix at the end of simulation. The dimensions depend on the problem being solved.

property P0

Get initial EKF covariance

Returns:

covariance matrix

Return type:

ndarray(n,n)

Returns the value of the covariance matrix passed to the constructor.

property V_est

Get estimated odometry covariance

Returns:

odometry covariance

Return type:

ndarray(2,2)

Returns the value of the estimated odometry covariance matrix passed to the constructor

property W_est

Get estimated sensor covariance

Returns:

sensor covariance

Return type:

ndarray(2,2)

Returns the value of the estimated sensor covariance matrix passed to the constructor

property robot

Get robot object

Returns:

robot used in simulation

Return type:

VehicleBase subclass

property sensor

Get sensor object

Returns:

sensor used in simulation

Return type:

SensorBase subclass

property map

Get map object

Returns:

map used in simulation

Return type:

LandmarkMap subclass

property verbose

Get verbosity state

Returns:

verbosity

Return type:

bool

property history

Get EKF simulation history

Returns:

simulation history

Return type:

list of namedtuples

At each simulation timestep a namedtuple is appended to the history list. It contains, for that time step, the estimated state and covariance, and the sensor observation.

Seealso:

get_t() get_xyt() get_map() get_P() get_Pnorm()

property workspace

Size of robot workspace

Returns:

workspace bounds [xmin, xmax, ymin, ymax]

Return type:

ndarray(4)

Returns the bounds of the workspace as specified by the constructor option workspace

property landmarks

Get landmark information

Returns:

landmark information

Return type:

dict

The dictionary is indexed by the landmark id and gives a 3-tuple:

  • order in which the landmark was first seen

  • number of times seen

The first observed landmark has order 0, the next has order 1, and so on.

Seealso:

landmark()

landmark(id)[source]

Landmark information

Parameters:

id (int) – landmark index

Returns:

order in which it was first seen, number of times seen

Return type:

int, int

The first observed landmark has order 0 and so on.

Seealso:

landmarks() landmark_index() landmark_mindex()

landmark_index(id)[source]

Landmark index in complete state vector

Parameters:

id (int) – landmark index

Returns:

index in the state vector

Return type:

int

The return value j is the index of the x-coordinate of the landmark in the EKF state vector, and j+1 is the index of the y-coordinate.

Seealso:

landmark()

landmark_mindex(id)[source]

Landmark index in map state vector

Parameters:

id (int) – landmark index

Returns:

index in the state vector

Return type:

int

The return value j is the index of the x-coordinate of the landmark in the map vector, and j+1 is the index of the y-coordinate.

Seealso:

landmark()

landmark_x(id)[source]

Landmark position

Parameters:

id (int) – landmark index

Returns:

landmark position \((x,y)\)

Return type:

ndarray(2)

Returns the landmark position from the current state vector.
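
As an illustration, after a SLAM run (see the example above) the estimate and uncertainty of a particular landmark can be extracted with these methods; a minimal sketch, assuming an EKF instance ekf and that landmark id 5 has been observed:

j = ekf.landmark_index(5)         # index of landmark 5 in the full state vector
p5 = ekf.x_est[j:j+2]             # estimated (x, y), same as ekf.landmark_x(5)
P5 = ekf.P_est[j:j+2, j:j+2]      # associated 2x2 covariance block
order, count = ekf.landmark(5)    # when it was first seen and how often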

init()[source]
run(T, animate=False)[source]

Run the EKF simulation

Parameters:
  • T (float) – maximum simulation time in seconds

  • animate (bool, optional) – animate motion of vehicle, defaults to False

Simulates the motion of a vehicle (under the control of a driving agent) and the EKF estimator. The steps are:

  • initialize the filter, vehicle and vehicle driver agent, sensor

  • for each time step:

    • step the vehicle and its driver agent, obtain odometry

    • take a sensor reading

    • execute the EKF

    • save information as a namedtuple to the history list for later display

Seealso:

history() landmark() landmarks() get_xyt() get_t() get_map() get_P() get_Pnorm() plot_xy() plot_ellipse() plot_error() plot_map() run_animation()

run_animation(T=10, x0=None, control=None, format=None, file=None)[source]

Run the EKF simulation

Parameters:
  • T (float) – maximum simulation time in seconds

  • format (str, optional) – Output format

  • file (str, optional) – File name

Returns:

Matplotlib animation object

Return type:

matplotlib.animation.FuncAnimation()

Simulates the motion of a vehicle (under the control of a driving agent) and the EKF estimator for T seconds and returns an animation in various formats:

format    file       description
========  =========  ==============================
"html"    str, None  return HTML5 video
"jshtml"  str, None  return JS+HTML video
"gif"     str        return animated GIF
"mp4"     str        return MP4/H264 video
None                 return a FuncAnimation object

The allowable types for file are given in the second column. A str value is a file name. If None is allowed and given, the video is returned as a string; otherwise file must be a file name.
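
A minimal usage sketch, assuming an EKF instance ekf configured as in the examples above; the file name is illustrative, and saving MP4 typically requires a working ffmpeg installation:

anim = ekf.run_animation(T=20)                         # FuncAnimation object, keep a reference
ekf.run_animation(T=20, format="mp4", file="ekf.mp4")  # save the run as an MP4 file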

The simulation steps are:

  • initialize the filter, vehicle and vehicle driver agent, sensor

  • for each time step:

    • step the vehicle and its driver agent, obtain odometry

    • take a sensor reading

    • execute the EKF

    • save information as a namedtuple to the history list for later display

Seealso:

history() landmark() landmarks() get_xyt() get_t() get_map() get_P() get_Pnorm() plot_xy() plot_ellipse() plot_error() plot_map() run_animation()

step(pause=None)[source]

Execute one timestep of the simulation

get_t()[source]

Get time from simulation

Returns:

simulation time vector

Return type:

ndarray(n)

Return simulation time vector, starts at zero. The timestep is an attribute of the robot object.

Seealso:

run() history()

get_xyt()[source]

Get estimated vehicle trajectory

Returns:

vehicle trajectory where each row is configuration \((x, y, \theta)\)

Return type:

ndarray(n,3)

Seealso:

plot_xy() run() history()

plot_xy(*args, block=None, **kwargs)[source]

Plot estimated vehicle position

Parameters:
  • args – position arguments passed to plot()

  • kwargs – keywords arguments passed to plot()

  • block (bool, optional) – hold plot until figure is closed, defaults to None

Plot the estimated vehicle path in the xy-plane.

Seealso:

get_xyt() plot_error() plot_ellipse() plot_P() run() history()

plot_ellipse(confidence=0.95, N=10, block=None, **kwargs)[source]

Plot uncertainty ellipses

Parameters:
  • confidence (float, optional) – ellipse confidence interval, defaults to 0.95

  • N (int, optional) – number of ellipses to plot, defaults to 10

  • block (bool, optional) – hold plot until figure is closed, defaults to None

  • kwargs – arguments passed to spatialmath.base.graphics.plot_ellipse()

Plot N uncertainty ellipses spaced evenly along the trajectory.

Seealso:

get_P() run() history()

plot_error(bgcolor='r', confidence=0.95, ax=None, block=None, **kwargs)[source]

Plot error with uncertainty bounds

Parameters:
  • bgcolor (str, optional) – background color, defaults to ‘r’

  • confidence (float, optional) – confidence interval, defaults to 0.95

  • block (bool, optional) – hold plot until figure is closed, defaults to None

Plot the error between the actual and estimated vehicle path \((x, y, \theta)\) versus time as three stacked plots. Heading error is wrapped into the range \([-\pi,\pi)\).

Behind each line a shaded polygon of color bgcolor shows the specified confidence bounds based on the covariance at each time step. Ideally each line should lie within its shaded polygon for the given confidence fraction of the time.

Note

Observations will decrease the uncertainty while periods of dead-reckoning increase it.

Seealso:

get_P() run() history()

get_map()[source]

Get estimated map

Returns:

landmark coordinates \((x, y)\)

Return type:

ndarray(n,2)

Landmarks are returned in the order they were first observed.

Seealso:

landmarks() run() history()

plot_map(marker=None, ellipse=None, confidence=0.95, block=None)[source]

Plot estimated landmarks

Parameters:
  • marker (dict, optional) – plot marker for landmark, arguments passed to plot(), defaults to “r+”

  • ellipse (dict, optional) – arguments passed to plot_ellipse(), defaults to None

  • confidence (float, optional) – ellipse confidence interval, defaults to 0.95

  • block (bool, optional) – hold plot until figure is closed, defaults to None

Plot a marker and covariance ellipses for each estimated landmark.

Seealso:

get_map() run() history()

get_P(k=None)[source]

Get covariance matrices from simulation

Parameters:

k (int, optional) – timestep, defaults to None

Returns:

covariance matrix

Return type:

ndarray(n,n) or list of ndarray(n,n)

If k is given return covariance from simulation timestep k, else return a list of all covariance matrices.
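
For example, in the SLAM case the upper-left 3×3 block of each covariance matrix corresponds to the vehicle state; a minimal sketch, assuming a completed run on an EKF instance ekf:

import numpy as np

P_hist = ekf.get_P()                        # one covariance matrix per timestep
P_vehicle = P_hist[-1][:3, :3]              # final vehicle (x, y, theta) block
sigma_xy = np.sqrt(np.diag(P_vehicle)[:2])  # 1-sigma position uncertainty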

Seealso:

get_Pnorm() run() history()

get_Pnorm(k=None)[source]

Get covariance norm from simulation

Parameters:

k (int, optional) – timestep, defaults to None

Returns:

covariance matrix norm

Return type:

float or ndarray(n)

If k is given return covariance norm from simulation timestep k, else return all covariance norms as a 1D NumPy array.

Seealso:

get_P() run() history()

disp_P(P, colorbar=False)[source]

Display covariance matrix

Parameters:
  • P (ndarray(n,n)) – covariance matrix

  • colorbar – add a colorbar

Type:

bool or dict

Plot the elements of the covariance matrix as an image. If colorbar is True add a color bar, if colorbar is a dict add a color bar with these options passed to colorbar.

Note

A log scale is used.
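
A minimal usage sketch, assuming a completed run on an EKF instance ekf:

P_final = ekf.get_P()[-1]           # covariance at the last timestep
ekf.disp_P(P_final, colorbar=True)  # off-diagonal blocks show vehicle-landmark correlation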

Seealso:

imshow() matplotlib.pyplot.colorbar()

get_transform(map)[source]

Transformation from estimated map to true map frame

Parameters:

map (LandmarkMap) – known landmark positions

Returns:

transform from map to estimated map frame

Return type:

SE2 instance

Uses a least squares technique to find the transform between the landmarks in the world frame and the estimated landmarks in the SLAM reference frame.
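
To compare an estimated SLAM map against the true map, the two can first be brought into a common frame; a minimal sketch, assuming the SLAM example above and that a spatialmath SE2 instance can transform a 2×N array of points:

T = ekf.get_transform(map)  # least-squares SE2 fit between the two landmark sets
p_est = ekf.get_map().T     # estimated landmarks as a 2xN array
p_aligned = T * p_est       # estimated landmarks expressed in the other frame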

Seealso:

points2tr2()

Particle filter

The particle filter is capable of map-based vehicle localization.

class roboticstoolbox.mobile.ParticleFilter(robot, sensor, R, L, nparticles=500, seed=0, x0=None, verbose=False, animate=False, history=True, workspace=None)[source]

Bases: object

__init__(robot, sensor, R, L, nparticles=500, seed=0, x0=None, verbose=False, animate=False, history=True, workspace=None)[source]

Particle filter

Parameters:
  • robot (VehicleBase subclass,) – robot motion model

  • sensor (SensorBase subclass) – vehicle mounted sensor model

  • R (ndarray(3,3)) – covariance of the zero-mean Gaussian noise added to the particles at each step (diffusion)

  • L (ndarray(2,2)) – covariance used in the sensor likelihood model

  • nparticles (int, optional) – number of particles, defaults to 500

  • seed (int, optional) – random number seed, defaults to 0

  • x0 (array_like(3), optional) – initial state, defaults to [0, 0, 0]

  • verbose (bool, optional) – display extra debug information, defaults to False

  • history (bool, optional) – retain step-by-step history, defaults to True

  • workspace (scalar, array_like(2), array_like(4)) – dimension of workspace, see expand_dims()

This class implements a Monte-Carlo estimator or particle filter for vehicle state, based on odometry, a landmark map, and landmark observations. The state of each particle is a possible vehicle configuration \((x, y, \theta)\). Bootstrap particle resampling is used.
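
For intuition, the sketch below implements one such predict-weight-resample step directly in NumPy, independently of this class; the function name pf_step and the single-landmark observation are illustrative assumptions, but R and L play the same roles as the constructor arguments above and the range-bearing model is the one used throughout this page.

import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, odo, z, landmark, R, L):
    # particles: ndarray(N,3) of (x, y, theta) pose hypotheses
    # weights:   ndarray(N) of normalized importance weights
    # odo:       (distance, heading change) odometry for this step
    # z:         (range, bearing) observation of a known landmark
    # landmark:  (x, y) position of that landmark
    # R, L:      diffusion and likelihood covariances, as for the constructor
    N = len(particles)

    # predict: apply the odometry to every particle, then diffuse with noise R
    d, dtheta = odo
    theta = particles[:, 2]
    particles = particles + np.column_stack(
        [d * np.cos(theta), d * np.sin(theta), np.full(N, dtheta)])
    particles = particles + rng.multivariate_normal(np.zeros(3), R, size=N)

    # weight: Gaussian likelihood of the range-bearing observation per particle
    dx = landmark[0] - particles[:, 0]
    dy = landmark[1] - particles[:, 1]
    z_pred = np.column_stack(
        [np.hypot(dx, dy), np.arctan2(dy, dx) - particles[:, 2]])
    innov = np.asarray(z) - z_pred
    innov[:, 1] = np.arctan2(np.sin(innov[:, 1]), np.cos(innov[:, 1]))  # wrap angle
    Linv = np.linalg.inv(L)
    weights = weights * np.exp(-0.5 * np.einsum("ni,ij,nj->n", innov, Linv, innov)) + 1e-300
    weights = weights / weights.sum()

    # resample: draw N particles with probability proportional to their weights
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)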

The working area is defined by workspace or inherited from the landmark map attached to the sensor (see expand_dims()):

workspace     x-range  y-range
============  =======  =======
A (scalar)    -A:A     -A:A
[A, B]        A:B      A:B
[A, B, C, D]  A:B      C:D

Particles are initially distributed uniform randomly over this area.

Example:

import numpy as np
import matplotlib.pyplot as plt
from roboticstoolbox import Bicycle, RandomPath, LandmarkMap, RangeBearingSensor, ParticleFilter

V = np.diag([0.02, np.radians(0.5)]) ** 2
robot = Bicycle(covar=V, animation="car", workspace=10)
robot.control = RandomPath(workspace=robot)

map = LandmarkMap(20, workspace=robot.workspace)

W = np.diag([0.1, np.radians(1)]) ** 2
sensor = RangeBearingSensor(robot, map, covar=W, plot=True)

R = np.diag([0.1, 0.1, np.radians(1)]) ** 2
L = np.diag([0.1, 0.1])
pf = ParticleFilter(robot, sensor, R, L, nparticles=1000)

pf.run(T=10)

map.plot()
robot.plot_xy()
pf.plot_xy()

plt.plot(pf.get_std()[:100,:])

Note

Set seed=0 to get different behaviour from run to run.

Seealso:

run()

property robot

Get robot object

Returns:

robot used in simulation

Return type:

VehicleBase subclass

property sensor

Get sensor object

Returns:

sensor used in simulation

Return type:

SensorBase subclass

property map

Get map object

Returns:

map used in simulation

Return type:

LandmarkMap subclass

property verbose

Get verbosity state

Returns:

verbosity

Return type:

bool

property history

Get EKF simulation history

Returns:

simulation history

Return type:

list of namedtuples

At each simulation timestep a namedtuple is appended to the history list. It contains, for that time step, the estimated state and covariance, and the sensor observation.

Seealso:

get_t() get_xy() get_std() get_Pnorm()

property workspace

Size of robot workspace

Returns:

workspace bounds [xmin, xmax, ymin, ymax]

Return type:

ndarray(4)

Returns the bounds of the workspace as specified by constructor option workspace

property random

Get private random number generator

Returns:

NumPy random number generator

Return type:

numpy.random.Generator

Has methods including:

  • integers(low, high, size, endpoint)

  • random(size)

  • uniform

  • normal(mean, std, size)

  • multivariate_normal(mean, covar, size)

The generator is initialized with the seed provided at constructor time every time init is called.

run(T=10, x0=None)[source]

Run the particle filter simulation

Parameters:
  • T (float) – maximum simulation time in seconds

  • x0 (array_like(3) or array_like(2)) – Initial state, defaults to value given to Vehicle constructor

Simulates the motion of a vehicle (under the control of a driving agent) and the particle-filter estimator. The steps are:

  • initialize the filter, vehicle and vehicle driver agent, sensor

  • for each time step:

    • step the vehicle and its driver agent, obtain odometry

    • take a sensor reading

    • execute the particle filter

    • save information as a namedtuple to the history list for later display

Seealso:

history() landmark() landmarks() get_xy() get_t() get_std() plot_xy()

run_animation(T=10, x0=None, format=None, file=None)[source]

Run the particle filter simulation

Parameters:
  • T (float) – maximum simulation time in seconds

  • x0 (array_like(3) or array_like(2)) – Initial state, defaults to value given to Vehicle constructor

  • format (str, optional) – Output format

  • file (str, optional) – File name

Returns:

Matplotlib animation object

Return type:

matplotlib.animation.FuncAnimation()

Simulates the motion of a vehicle (under the control of a driving agent) and the particle-filter estimator and returns an animation in various formats:

format    file       description
========  =========  ==============================
"html"    str, None  return HTML5 video
"jshtml"  str, None  return JS+HTML video
"gif"     str        return animated GIF
"mp4"     str        return MP4/H264 video
None                 return a FuncAnimation object

The allowable types for file are given in the second column. A str value is a file name. If None is allowed and given, the video is returned as a string.

For the last case, a reference to the returned animation object must be held if it is used for animation in a Jupyter cell:

anim = robot.run_animation(T=20)

The steps are:

  • initialize the filter, vehicle and vehicle driver agent, sensor

  • for each time step:

    • step the vehicle and its driver agent, obtain odometry

    • take a sensor reading

    • execute the particle filter

    • save information as a namedtuple to the history list for later display

Seealso:

history() landmark() landmarks() get_xy() get_t() get_std() plot_xy()

plot_pdf()[source]

Plot particle PDF

Displays a discrete PDF of vehicle position. Creates a 3D plot where the x- and y-axes are the estimated vehicle position and the z-axis is the particle weight. Each particle is represented by a vertical line segment of height equal to the particle weight.

get_t()[source]

Get time from simulation

Returns:

simulation time vector

Return type:

ndarray(n)

Return simulation time vector, starts at zero. The timestep is an attribute of the robot object.

get_xyt()[source]

Get estimated vehicle trajectory

Returns:

vehicle trajectory where each row is configuration \((x, y, \theta)\)

Return type:

ndarray(n,3)

Seealso:

plot_xy() run() history()

get_std()[source]

Get standard deviation of particles

Returns:

standard deviation of vehicle position estimate

Return type:

ndarray(n,2)

Return the standard deviation \((\sigma_x, \sigma_y)\) of the particle cloud at each time step.

Seealso:

get_xyt()

plot_xy(block=None, **kwargs)[source]

Plot estimated vehicle position

Parameters:
  • args – position arguments passed to plot()

  • kwargs – keywords arguments passed to plot()

  • block (bool, optional) – hold plot until figure is closed, defaults to None

Plot the estimated vehicle path in the xy-plane.

Seealso:

get_xy()

Sensor models

class roboticstoolbox.mobile.RangeBearingSensor(robot, map, line_style=None, poly_style=None, covar=None, range=None, angle=None, plot=False, seed=0, **kwargs)[source]

Bases: SensorBase

__init__(robot, map, line_style=None, poly_style=None, covar=None, range=None, angle=None, plot=False, seed=0, **kwargs)[source]

Range and bearing angle sensor

Parameters:
  • robot (VehicleBase subclass) – model of robot carrying the sensor

  • map (LandmarkMap instance) – map of landmarks

  • polygon (dict, optional) – polygon style for sensing region, see plot_polygon, defaults to None

  • covar (ndarray(2,2), optional) – covariance matrix for sensor readings, defaults to None

  • range (float or array_like(2), optional) – maximum range \(r_{max}\) or range span \([r_{min}, r_{max}]\), defaults to None

  • angle (float, optional) – angular field of view \([-\theta, \theta]\), defaults to None

  • plot (bool, optional) – animate the sensor beams, defaults to False

  • seed (int, optional) – random number seed, defaults to 0

  • kwargs – arguments passed to SensorBase

Sensor object that returns the range and bearing angle \((r, \beta)\) to a point landmark from a robot-mounted sensor. The sensor measurements are corrupted with zero-mean Gaussian noise with covariance covar.

The sensor can have a maximum range, or a minimum and maximum range. The sensor can also have a restricted angular field of view.

The sensing region can be displayed by setting the polygon parameter which can show an outline or a filled polygon. This is updated every time reading() is called, based on the current configuration of the robot.

>>> from roboticstoolbox import Bicycle, LandmarkMap, RangeBearingSensor
>>> from math import pi
>>> robot = Bicycle()
>>> map = LandmarkMap(20)
>>> sensor = RangeBearingSensor(robot, map, range=(0.5, 20), angle=pi/4)
>>> print(sensor)
RangeBearingSensor sensor class
  LandmarkMap object with 20 landmarks, workspace=(-10.0: 10.0, -10.0: 10.0)
  W = [ 0, 0 | 0, 0 ]
  sampled every 1 samples
  range: (0.5: 20.0)
  angle: (-0.785: 0.785)
Seealso:

LandmarkMap EKF

init()[source]

Initialize sensor

  • reseed the random number generator

  • reset the counter for handling the every and fail options

  • reset the landmark log

  • initialize plots

Seealso:

SensorBase.init()

property W

Get sensor covariance

Returns:

sensor covariance

Return type:

ndarray(2,2)

Returns the value of the sensor covariance matrix passed to the constructor.

reading()[source]

Choose landmark and return observation

Returns:

range and bearing angle to a landmark, and landmark id

Return type:

ndarray(2), int

Returns an observation of a random visible landmark (range, bearing) and the id of that landmark. The landmark is chosen randomly from the set of all visible landmarks, those within the angular field of view and range limit.

If the constructor argument every is set then a valid reading is returned only on every every-th call.

If the constructor argument fail is set then no reading is returned during the specified time interval.

If no valid reading is available then (None, None) is returned.

>>> from roboticstoolbox import Bicycle, LandmarkMap, RangeBearingSensor
>>> from math import pi
>>> robot = Bicycle()
>>> map = LandmarkMap(20)
>>> sensor = RangeBearingSensor(robot, map, range=(0.5, 20), angle=pi/4)
>>> print(sensor.reading())
(array([ 7.2691, -0.039 ]), 16)

Note

  • Noise with covariance W (set by the constructor) is added to the reading

  • If the animate option is set then a line is shown from the vehicle to the landmark

  • If the animate option is set and the angular and distance limits are set then the sensor field of view is displayed as a polygon

Seealso:

h()

visible()[source]

List of all visible landmarks

Returns:

list of visible landmarks

Return type:

list of int

Return a list of the ids of all landmarks that are visible, that is, those that lie within the sensing range and field of view of the sensor at the robot’s current configuration.

Seealso:

isvisible() h()

isvisible(id)[source]

Test if landmark is visible

Parameters:

id (int) – landmark id

Returns:

visibility

Return type:

bool

The landmark id is visible if it lies within the sensing range and field of view of the sensor at the robot’s current configuration.

Seealso:

visible() h()

h(x, landmark=None)[source]

Landmark observation function

Parameters:
  • x (array_like(3), array_like(N,3)) – vehicle state \((x, y, \theta)\)

  • landmark (int or array_like(2), optional) – landmark id or position, defaults to None

Returns:

range and bearing angle to landmark \((r, \beta)\)

Return type:

ndarray(2) or ndarray(N,2)

Return the range and bearing to a landmark:

  • .h(x) is range and bearing to all landmarks, one row per landmark

  • .h(x, id) is range and bearing to landmark id

  • .h(x, p) is range and bearing to landmark with coordinates p

>>> from roboticstoolbox import Bicycle, LandmarkMap, RangeBearingSensor
>>> from math import pi
>>> robot = Bicycle()
>>> map = LandmarkMap(20)
>>> sensor = RangeBearingSensor(robot, map, range=(0.5, 20), angle=pi/4)
>>> z = sensor.h((1, 2, pi/2), 3)
>>> print(z)
[10.7111  1.4826]
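
For reference, the noise-free part of such a reading is conventionally modelled as

\[r = \sqrt{(x_i - x_v)^2 + (y_i - y_v)^2}, \qquad \beta = \operatorname{atan2}(y_i - y_v,\; x_i - x_v) - \theta_v\]

where \((x_i, y_i)\) is the landmark position and \((x_v, y_v, \theta_v)\) is the vehicle configuration, with the bearing wrapped to \([-\pi, \pi)\). The Jacobian formulas quoted below are stated relative to this standard model.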

Note

  • Noise with covariance (property W) is added to each row of z.

  • Performs fast vectorized operation where x is an ndarray(n,3).

  • The landmark is assumed to be visible, field of view and range limits are not applied.

Seealso:

reading() Hx() Hw() Hp()

Hx(x, landmark)[source]

Jacobian dh/dx

Parameters:
  • x (array_like(3)) – vehicle state \((x, y, \theta)\)

  • arg (int or array_like(2)) – landmark id or coordinate

Returns:

Jacobian matrix

Return type:

ndarray(2,3)

Compute the Jacobian of the observation function with respect to vehicle configuration \(\partial h/\partial x\)

  • sensor.Hx(x, id) is the Jacobian for landmark id

  • sensor.Hx(x, p) is the Jacobian for a landmark with coordinates p
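
For the standard range-bearing model sketched under h(), writing \(\delta_x = x_i - x_v\), \(\delta_y = y_i - y_v\) and \(r = \sqrt{\delta_x^2 + \delta_y^2}\), this Jacobian has the well-known form

\[\frac{\partial h}{\partial x} = \begin{bmatrix} -\delta_x/r & -\delta_y/r & 0 \\ \delta_y/r^2 & -\delta_x/r^2 & -1 \end{bmatrix}\]

shown here for orientation only; the method evaluates it for the requested landmark.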

Seealso:

h() Hp() Hw()

Hp(x, landmark)[source]

Jacobian dh/dp

Parameters:
  • x (array_like(3)) – vehicle state \((x, y, \theta)\)

  • arg (int or array_like(2)) – landmark id or coordinate

Returns:

Jacobian matrix

Return type:

ndarray(2,2)

Compute the Jacobian of the observation function with respect to landmark position \(\partial h/\partial p\)

  • sensor.Hp(x, id) is Jacobian for landmark id

  • sensor.Hp(x, p) is Jacobian for landmark with coordinates p

Seealso:

h() Hx() Hw()

Hw(x, landmark)[source]

Jacobian dh/dw

Parameters:
  • x (array_like(3)) – vehicle state \((x, y, \theta)\)

  • arg (int or array_like(2)) – landmark id or coordinate

Returns:

Jacobian matrix

Return type:

ndarray(2,2)

Compute the Jacobian of the observation function with respect to sensor noise \(\partial h/\partial w\)

  • sensor.Hw(x, id) is Jacobian for landmark id

  • sensor.Hw(x, p) is Jacobian for landmark with coordinates p

Note

x and landmark are not used to compute this.

Seealso:

h() Hx() Hp()

g(x, z)[source]

Landmark position from sensor observation

Parameters:
  • x (array_like(3)) – vehicle state \((x, y, \theta)\)

  • z (array_like(2)) – landmark observation \((r, \beta)\)

Returns:

landmark position \((x, y)\)

Return type:

ndarray(2)

Compute the world coordinate of a landmark given the observation z from vehicle state x.
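
For the standard range-bearing model this inverse mapping is

\[x_i = x_v + r \cos(\theta_v + \beta), \qquad y_i = y_v + r \sin(\theta_v + \beta)\]

given here for reference only.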

Seealso:

h() Gx() Gz()

Gx(x, z)[source]

Jacobian dg/dx

Parameters:
  • x (array_like(3)) – vehicle state \((x, y, \theta)\)

  • z (array_like(2)) – landmark observation \((r, \beta)\)

Returns:

Jacobian matrix

Return type:

ndarray(2,3)

Compute the Jacobian of the landmark position function with respect to vehicle state \(\partial g/\partial x\)

Seealso:

g()

Gz(x, z)[source]

Jacobian dg/dz

Parameters:
  • x (array_like(3)) – vehicle state \((x, y, \theta)\)

  • z (array_like(2)) – landmark observation \((r, \beta)\)

Returns:

Jacobian matrix

Return type:

ndarray(2,2)

Compute the Jacobian of the landmark position function with respect to sensor observation \(\partial g/\partial z\)

Seealso:

g()

property map

Landmark map associated with sensor (superclass)

Returns:

landmark map

Return type:

LandmarkMap

plot(id)

Plot sensor observation

Parameters:

id (int) – landmark id

Draws a line from the robot to landmark id.

Note

  • The line is drawn using the line_style given at constructor time

property random

Get private random number generator (superclass)

Returns:

NumPy random number generator

Return type:

numpy.random.Generator

The generator is initialized with the seed provided at constructor time every time init() is called.

Seealso:

init()

property robot

Robot associated with sensor (superclass)

Returns:

robot

Return type:

VehicleBase subclass

property verbose

Get verbosity state

Returns:

verbosity

Return type:

bool

Map models

class roboticstoolbox.mobile.landmarkmap.LandmarkMap(map, workspace=10, verbose=True, seed=0)[source]

Bases: object

Map of planar point landmarks

Parameters:
  • map (ndarray(2, N) or int) – map or number of landmarks

  • workspace (scalar, array_like(2), array_like(4), optional) – workspace or map bounds, defaults to 10

  • verbose (bool, optional) – display debug information, defaults to True

  • seed (int, optional) – random number seed, defaults to 0

Returns:

a landmark map object

Return type:

LandmarkMap

A LandmarkMap object represents a rectangular 2D environment with a number of point landmarks.

The landmarks can be specified explicitly or be uniform randomly positioned inside a region defined by the workspace. The workspace can be numeric:

workspace     x-range  y-range
============  =======  =======
A (scalar)    -A:A     -A:A
[A, B]        A:B      A:B
[A, B, C, D]  A:B      C:D

or any object that has a workspace attribute.

Example:

>>> from roboticstoolbox import LandmarkMap
>>> map = LandmarkMap(20)
>>> print(map)
LandmarkMap object with 20 landmarks, workspace=(-10.0: 10.0, -10.0: 10.0)
>>> print(map[3])  # coordinates of landmark 3
[-9.6694  2.9438]

The object is an iterator that returns consecutive landmark coordinates.
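
A minimal sketch of these access patterns, creating one map from explicit coordinates (the values are illustrative) and iterating over a randomly generated one:

import numpy as np
from roboticstoolbox import LandmarkMap

map1 = LandmarkMap(np.array([[2.0, -3.0, 5.0],
                             [1.0,  4.0, -2.0]]), workspace=10)
print(len(map1))               # 3 landmarks, one per column
print(map1[1])                 # coordinates of landmark 1

map2 = LandmarkMap(20, workspace=10, seed=0)
for i, lm in enumerate(map2):  # iterate over consecutive landmark coordinates
    print(i, lm)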

Reference:

Robotics, Vision & Control, Chap 6, Peter Corke, Springer 2011

See also RangeBearingSensor EKF

__init__(map, workspace=10, verbose=True, seed=0)[source]
__len__()[source]

Number of landmarks in map

Returns:

number of landmarks in the map

Return type:

int

property landmarks

xy-coordinates of all landmarks

Returns:

xy-coordinates for landmark points

Return type:

ndarray(2, n)

property workspace

Size of map workspace

Returns:

workspace bounds [xmin, xmax, ymin, ymax]

Return type:

ndarray(4)

Returns the bounds of the workspace as specified by constructor option workspace.

__getitem__(k)[source]

Get landmark coordinates from map

Parameters:

k (int) – landmark index

Returns:

coordinate \((x,y)\) of k’th landmark

Return type:

ndarray(2)

plot(labels=False, block=None, **kwargs)[source]

Plot landmark map

Parameters:
  • labels (bool, optional) – number the points on the plot, defaults to False

  • block (bool, optional) – hold plot until figure is closed, defaults to None

  • kwargs – plot() options

Plot landmark points using Matplotlib options. Default style is black crosses.