Detection Module

Module for generating camera configurations and detection data.

Cameras can be placed in a virtual world through a CameraConfiguration. Given a camera configuration and movement data, a detection matrix and further detection information can be obtained through the Detection class.

Occupancy and other state variables can then be estimated with such detection data.

class ollin.core.detection.CameraConfiguration(positions, directions, site, cone_range=None, cone_angle=None)[source]

Camera configuration class holding camera positions and directions.

positions

array – Array of shape [num_cams, 2] to indicate coordinates of each camera.

directions

array – Array of shape [num_cams, 2] holding a vector of camera direction for each camera.

cone_angle

float – Viewing angle of cameras in degrees.

cone_range

float – Distance to camera at which detection is possible.

range

array – Array of shape [2] specifying the dimensions of the virtual world.

site

Site – Site object holding information about the virtual world.

num_cams

int – Number of cameras.

detect(mov)[source]

Use camera configuration to detect movement history.

Parameters:mov (Movement) – Movement data object to be detected by the camera configuration.
Returns:data – Camera detection information.
Return type:MovementDetection
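The geometric test used by detect() is not documented here, but conceptually a camera detects a position when it falls inside the camera’s viewing cone. A minimal numpy sketch of such a point-in-cone test follows; the function name and the half-angle convention are assumptions for illustration, not the library’s code:

```python
import numpy as np

def in_cone(position, direction, point, cone_range, cone_angle):
    """Return True if point lies inside the camera's detection cone.

    cone_angle is the full viewing angle in degrees; direction need
    not be normalized.
    """
    offset = np.asarray(point, dtype=float) - np.asarray(position, dtype=float)
    distance = np.linalg.norm(offset)
    if distance == 0:
        return True
    if distance > cone_range:
        return False
    direction = np.asarray(direction, dtype=float)
    cos_angle = offset @ direction / (distance * np.linalg.norm(direction))
    # Inside the cone if the angle to the camera axis is at most half
    # the full opening angle.
    return cos_angle >= np.cos(np.radians(cone_angle / 2))

# A camera at the origin looking along the x axis with a 60 degree cone:
in_cone([0, 0], [1, 0], [0.5, 0.1], cone_range=1.0, cone_angle=60)  # True
in_cone([0, 0], [1, 0], [0.0, 0.5], cone_range=1.0, cone_angle=60)  # False
```

Applying such a test to every camera and every step of a movement history yields exactly the binary matrices the Detection classes below describe.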
classmethod make_grid(distance, site, cone_range=None, cone_angle=None)[source]

Place grid of cameras in virtual world.

Place cameras in a square grid configuration spanning the range of the virtual world, with rows parallel to the x and y axes and adjacent cameras separated by the given distance. Camera directions will be random.

Parameters:
  • distance (float) – Distance between adjacent cameras in grid.
  • site (Site) – Site in which to place cameras.
  • cone_range (float, optional) – Distance to camera at which detection is possible. If not provided, it will be taken from the global constants; see GLOBAL_CONSTANTS.
  • cone_angle (float, optional) – Viewing angle of camera in degrees. Default behaviour is as with cone_range.
Returns:

camera – Camera configuration object in square grid configuration.

Return type:

CameraConfiguration
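As an illustration of what a square-grid placement amounts to, the sketch below computes grid positions and random unit directions with numpy, assuming a rectangular site with its origin at a corner. It is an assumed reconstruction, not the library’s implementation:

```python
import numpy as np

def grid_positions(distance, site_range):
    """Positions of a square camera grid with the given spacing.

    Returns an array of shape [num_cams, 2] covering a rectangular
    site of dimensions site_range = [range_x, range_y].
    """
    xs = np.arange(0, site_range[0], distance)
    ys = np.arange(0, site_range[1], distance)
    xx, yy = np.meshgrid(xs, ys)
    return np.stack([xx.ravel(), yy.ravel()], axis=1)

def random_directions(num_cams, rng=None):
    """Unit direction vectors with uniformly random angles."""
    rng = np.random.default_rng(rng)
    angles = rng.uniform(0, 2 * np.pi, size=num_cams)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

positions = grid_positions(1.0, [4, 4])   # 16 cameras on a 4 x 4 grid
directions = random_directions(len(positions))
```

The resulting arrays have the same shapes as the positions and directions attributes documented above.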

classmethod make_random(num, site, min_distance=None, cone_range=None, cone_angle=None)[source]

Place cameras randomly in range.

Creates a number of cameras placed at random positions with random directions. If the min_distance option is passed, camera positions will be chosen so that no two cameras are closer than min_distance.

Parameters:
  • num (int) – Number of cameras to place.
  • site (Site) – Site in which to place cameras.
  • min_distance (float, optional) – Minimum distance in between cameras.
  • cone_range (float, optional) – Distance to camera at which detection is possible. If not provided it will be extracted from the global constants, see GLOBAL_CONSTANTS.
  • cone_angle (float, optional) – Viewing angle of camera in degrees. Default behaviour is as with cone_range.
Returns:

camera – Camera configuration object with random positions and directions.

Return type:

CameraConfiguration
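The min_distance constraint can be pictured as rejection sampling: draw a uniform candidate position and keep it only if it is far enough from every camera already placed. The sketch below is an assumed illustration of that idea, not ollin’s actual placement code:

```python
import numpy as np

def random_positions(num, site_range, min_distance=None, rng=None,
                     max_tries=10000):
    """Sample num camera positions uniformly in a rectangular site,
    rejecting candidates closer than min_distance to any camera
    already placed (when min_distance is given)."""
    rng = np.random.default_rng(rng)
    positions = []
    tries = 0
    while len(positions) < num:
        tries += 1
        if tries > max_tries:
            raise RuntimeError('could not satisfy min_distance constraint')
        candidate = rng.uniform([0, 0], site_range)
        if min_distance is not None and positions:
            dists = np.linalg.norm(np.array(positions) - candidate, axis=1)
            if dists.min() < min_distance:
                continue
        positions.append(candidate)
    return np.array(positions)

cams = random_positions(10, [5, 5], min_distance=0.5, rng=0)
```

Rejection sampling is simple but can stall when min_distance is large relative to the site, hence the max_tries guard.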

plot(ax=None, figsize=(10, 10), include=None, cone_length=0.4, camera_alpha=0.3, camera_color=None, **kwargs)[source]

Draw camera positions and orientations.

This method will make a plot representing the camera positions and orientations in the virtual world. To help visualize each camera’s corresponding territory, the Voronoi cells of the camera points are also plotted.

Camera configuration plotting adds two components:
  1. “cameras”:
    If present in the include list, camera positions with their detection cones will be added to the plot.
  2. “camera_voronoi”:
    If present in the include list, the Voronoi cell of each camera will be added to the plot.

All other components in the include list will be handed down to the Site’s plotting method. See Site.plot() to consult all plotting components defined at that level.

Parameters:
  • ax (matplotlib.axes.Axes, optional) – Axes in which to plot camera info. New axes will be created if none are provided.
  • figsize (tuple or list, optional) – Size of figure, if ax is not provided. See figsize argument in matplotlib.pyplot.figure().
  • include (list or tuple, optional) – List of components to plot. The components list will be passed first to the Site object to add the corresponding components; then the components corresponding to CameraConfiguration will be plotted.
  • cone_length (float, optional) – Length of camera cone for visualization purposes. Defaults to 0.4 km.
  • camera_alpha (float, optional) – Alpha value for camera cones.
  • camera_color (str, optional) – Color for camera position points.
  • kwargs (dict) – Other keyword arguments will be passed down to the Site plotting method.
Returns:

ax – Axes of plot for further processing.

Return type:

matplotlib.axes.Axes

class ollin.core.detection.Detection(cam, detections)[source]

Class holding camera detection information.

Cameras left at a site (virtual or real) will make detections at different time steps. Which cameras made detections at which steps can be stored in a binary matrix. This matrix can then be used to estimate state variables.

camera_config

CameraConfiguration – The camera configuration for the detection data.

range

array – Array of shape [2] that holds the dimensions of site.

detections

array – Array of shape [steps, num_cams] containing the detection information. If:

detections[j, i] = 1

then the i-th camera had a detection event at the j-th time step.

detection_nums

array – Array of shape [num_cams] with the total number of detections for each camera.
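The relationship between detections and detection_nums is a sum over the time-step axis; a toy example:

```python
import numpy as np

# Toy detection matrix for 4 time steps and 3 cameras:
# detections[j, i] = 1 means camera i had a detection at step j.
detections = np.array([
    [1, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [0, 0, 0],
])

# Total number of detections per camera, shape [num_cams]:
detection_nums = detections.sum(axis=0)
print(detection_nums)  # [2 0 2]
```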

estimate_occupancy(model='single_species', method='MAP', priors=None)[source]

Estimate occupancy and detectability from detection data.

Use one of the estimation methods to estimate occupancy and detectability; see the estimation documentation.

Parameters:
  • model (str, optional) – Name of the estimation model to use. See the estimation documentation for a full list. Defaults to ‘single_species’.
  • method (str, optional) – Name of the estimation method to use. Defaults to ‘MAP’.
  • priors (optional) – Priors to pass to the estimation method, if any.
Returns:estimate – Estimate object containing estimation information.
Return type:Estimate
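The actual models (e.g. ‘single_species’) live in the estimation module and correct for imperfect detectability. As a point of reference, the crudest occupancy proxy, which such models improve upon, is just the fraction of cameras with at least one detection:

```python
import numpy as np

def naive_occupancy(detections):
    """Fraction of cameras with at least one detection.

    This is the crudest occupancy proxy; proper estimation models
    correct it for imperfect detectability, which this sketch
    deliberately does not attempt.
    """
    detected = detections.sum(axis=0) > 0
    return detected.mean()

detections = np.array([
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 0],
])
naive_occupancy(detections)  # 0.5
```

Because a camera can fail to detect an occupying animal, this naive value systematically underestimates true occupancy, which is precisely why model-based estimation is needed.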
plot(ax=None, figsize=(10, 10), include=None, detection_cmap='Purples', detection_alpha=0.2, **kwargs)[source]

Plot camera detection data.

Plots the number of detections per camera by coloring the corresponding Voronoi cell. Detection numbers are transformed to the [0, 1] scale and mapped to colors using a colormap.

Detection plotting adds the following optional components to the plot:
  1. “detection”:
    If present in the include list, Voronoi regions with a color fill encoding the corresponding detection numbers will be added to the plot.
  2. “detection_colorbar”:
    If present in the include list, a colorbar representing the mapping between detection numbers and colors will be added to the plot.

All other components in the include list will be handed down to the CameraConfiguration plotting method. See CameraConfiguration.plot() to see all plotting components defined there.

Parameters:
  • ax (matplotlib.axes.Axes, optional) – Axes object in which to plot detection information.
  • figsize (list or tuple, optional) – Size of figure to create if no axes object was given.
  • include (list or tuple, optional) – List of components to plot. The components list will be passed first to the CameraConfiguration object to add the corresponding components; then the components corresponding to Detection will be plotted.
  • detection_cmap (str, optional) – Colormap with which to encode detection numbers. See matplotlib.cm to see all options. Defaults to ‘Purples’.
  • detection_alpha (float, optional) – Alpha value of Voronoi region’s color fill.
  • kwargs (dict, optional) – Any additional keyword arguments will be passed to the CameraConfiguration plot method.
Returns:

ax – Plot axes for further plotting.

Return type:

matplotlib.axes.Axes

class ollin.core.detection.MovementDetection(mov, cam)[source]

Class holding detection data arising from movement data.

From camera placement and movement data, camera detection data can be calculated and collected into an array of shape [num_individuals, time_steps, num_cameras]. Here:

array[j, i, k] = 1

indicates that at the i-th step the j-th individual was detected by the k-th camera. Hence more detailed analysis is possible from such data.
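The camera-level detections matrix described below is this per-individual array collapsed over the individual axis; a small numpy illustration (the collapsing rule is inferred from the array descriptions, not taken from the source):

```python
import numpy as np

# grid[j, i, k] = 1: individual j was detected by camera k at step i.
# Shape: [num_individuals, time_steps, num_cameras].
grid = np.zeros((2, 3, 2), dtype=int)
grid[0, 1, 0] = 1  # individual 0 seen by camera 0 at step 1
grid[1, 1, 0] = 1  # individual 1 seen by camera 0 at step 1
grid[1, 2, 1] = 1  # individual 1 seen by camera 1 at step 2

# Camera-level matrix of shape [steps, num_cams]: a camera detected
# *something* at a step if it detected any individual then.
detections = grid.any(axis=0).astype(int)
print(detections)
# [[0 0]
#  [1 0]
#  [0 1]]
```

Note that the collapse loses information: two individuals seen by the same camera at the same step count as a single detection event in the camera-level matrix.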

camera_config

CameraConfiguration – The camera configuration for the detection data.

range

array – Array of shape [2] that holds the dimensions of site.

detections

array – Array of shape [steps, num_cams] containing the detection information. Here:

detections[j, i] = 1

means that the i-th camera had a detection event at the j-th time step.

detection_nums

array – Array of shape [num_cams] with the total number of detections for each camera.

movement

Movement – Movement data being detected.

grid

array – Array of shape [num_individuals, time_steps, num_cameras] holding all detection data.

plot(ax=None, figsize=(10, 10), include=None, **kwargs)[source]

Plot camera detection data.

Plots the number of detections per camera by coloring the corresponding Voronoi cell. Detection numbers are transformed to the [0, 1] scale and mapped to colors using a colormap.

Detection plotting adds the following optional components to the plot:
  1. “detection”:
    If present in the include list, Voronoi regions with a color fill encoding the corresponding detection numbers will be added to the plot.
  2. “detection_colorbar”:
    If present in the include list, a colorbar representing the mapping between detection numbers and colors will be added to the plot.

All other components in the include list will be passed down to the Movement plotting method. See Movement.plot() for all plot components defined at that level.

Parameters:
  • ax (matplotlib.axes.Axes, optional) – Axes object in which to plot detection information.
  • figsize (list or tuple, optional) – Size of figure to create if no axes object was given.
  • include (list or tuple, optional) – List of components to plot. The components list will be passed first to the Movement data object to add the corresponding components; then the components corresponding to MovementDetection will be plotted.
  • kwargs (dict, optional) – All other keyword arguments will be passed to Detection and Movement plotting methods.
Returns:

ax – Returns axes for further plotting.

Return type:

matplotlib.axes.Axes