VTC Logo

VTC Parameter Documentation


CompensationGain: The increase in object covariance values, per frame, when a measurement is missed. This value should be 30 for most applications.

A higher compensation gain value indicates that the tracking system should look at a wider area if it cannot detect an object in the expected location. A lower compensation gain value indicates that the tracking system should continue to search a relatively narrow region for the object when it is not detected for a frame.
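
To make the idea concrete, here is a minimal sketch (not VTC's actual implementation) of covariance inflation on a missed measurement; the state layout and the `compensate_missed_measurement` helper are assumptions for illustration:

```python
import numpy as np

def compensate_missed_measurement(P, compensation_gain=30.0):
    """Inflate the state covariance after a frame with no measurement.

    P is the Kalman filter covariance matrix; adding a constant to the
    diagonal widens the region searched for the object on the next frame.
    (Illustrative sketch only; VTC's internal update may differ.)
    """
    return P + compensation_gain * np.eye(P.shape[0])

# Example: after three consecutive misses the search region has grown.
P = np.diag([50.0, 50.0, 1000.0, 1000.0])   # hypothetical x, y, vx, vy covariance
for _ in range(3):
    P = compensate_missed_measurement(P)
print(np.diag(P))   # [ 140.  140. 1090. 1090.]
```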

FTPpassword: FTP server password used for periodically uploading video background images for live monitoring.

FTPusername: FTP server username used for periodically uploading video background images for live monitoring.

FrameUploadIntervalMinutes: Interval, in minutes, between uploads of the video background frame to the server.
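
As an illustration of what these three settings drive, here is a hedged sketch of a periodic background-frame upload using Python's standard ftplib; the host name, file name, and upload loop are assumptions, not VTC's actual code:

```python
import time
from ftplib import FTP

FTP_HOST = "ftp.example.com"          # assumed host, for illustration only
FTP_USERNAME = "monitor"              # corresponds to FTPusername
FTP_PASSWORD = "secret"               # corresponds to FTPpassword
FRAME_UPLOAD_INTERVAL_MINUTES = 5     # corresponds to FrameUploadIntervalMinutes

def upload_background_frame(image_path="background.png"):
    """Upload the current background frame to the monitoring server."""
    with FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USERNAME, FTP_PASSWORD)
        with open(image_path, "rb") as f:
            ftp.storbinary("STOR background.png", f)

while True:
    upload_background_frame()
    time.sleep(FRAME_UPLOAD_INTERVAL_MINUTES * 60)
```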

IntersectionId: Arbitrary user-chosen label to uniquely identify this intersection. Can be left at 1 without affecting functionality.

KHypotheses: The tracking algorithm considers multiple possibilities at each frame. The KHypotheses parameter affects how many possibilities are considered. A higher value can yield better tracking results at the expense of computational cost. Use k=2 for real-time situations or computers with slower CPUs. Values up to k=8 have been used to yield better tracking, but can result in very long processing times. The tracking algorithm is exponential with respect to the number of hypotheses generated.
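
The sketch below is a simplified stand-in for a multiple-hypothesis tracker, showing why the work grows exponentially: each surviving hypothesis branches into many child assignments per frame, and only the k best are kept. The `expand_hypotheses` helper and its scoring are illustrative assumptions, not VTC's algorithm:

```python
from itertools import permutations

def expand_hypotheses(hypotheses, detections, k):
    """Branch every surviving hypothesis on all detection-to-track assignments,
    then keep only the k highest-scoring children (illustrative scoring only)."""
    children = []
    for score, tracks in hypotheses:
        # Each ordering of detections is one candidate assignment to the tracks.
        for assignment in permutations(detections, len(tracks)):
            child_score = score + sum(
                -abs(track - det) for track, det in zip(tracks, assignment)
            )
            children.append((child_score, list(assignment)))
    # Without this pruning step the hypothesis count grows exponentially per frame.
    children.sort(key=lambda h: h[0], reverse=True)
    return children[:k]

# Two tracks (1-D positions), three detections per frame, k = 2 hypotheses kept.
hypotheses = [(0.0, [10.0, 40.0])]
for detections in ([11.0, 39.0, 70.0], [12.0, 38.0, 71.0]):
    hypotheses = expand_hypotheses(hypotheses, detections, k=2)
print(hypotheses)
```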

LambdaF: Probability of a false assignment, per detection. Typical value is 4E-07.

LambdaN: Probability of a new object, per detection. Typical value is 5E-07.

MaxHypTreeDepth: The tracking algorithm considers possibilities using a number of frames of history. This parameter dictates the number of history frames used to make a tracking decision. Typical values are 2-5. For a slower CPU or real-time tracking, start with 2. Using a value of 5 can yield better tracking results in some situations.

MaxObjectCount: The maximum number of objects that may be tracked at one time. If the scene includes few objects with many false positives, a lower max-object count can prevent tracking background noise. For scenes with many vehicles and low background noise, the maximum object count can be increased. Tracking more objects creates higher CPU demand. Typical values are 2-10.

MaxTargets: The maximum number of objects that can be detected in a single frame. This parameter differs from MaxObjectCount in that MaxObjectCount dictates the number of objects that can be tracked (over multiple frames), while MaxTargets dictates the number of objects that can be detected in a single frame. Typically MaxTargets is equal to or slightly higher than MaxObjectCount. Typical values are 2-10.

MinObjectSize: Minimum blob size (in pixels²) which can be detected and tracked. Typical values are 100-500. For cameras close to the moving objects, start with 400. For objects at a large distance from the camera, start with 100.
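
A hedged sketch of how a minimum-blob-size filter is commonly applied with OpenCV; the `filter_small_blobs` helper and the synthetic mask are assumptions for illustration, and the foreground mask is presumed to come from a background-subtraction stage:

```python
import cv2
import numpy as np

def filter_small_blobs(foreground_mask, min_object_size=400):
    """Return contours whose area (in pixels^2) is at least min_object_size."""
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(
        foreground_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    return [c for c in contours if cv2.contourArea(c) >= min_object_size]

# Example: a 30x30 white square passes the default threshold, a 5x5 square does not.
mask = np.zeros((200, 200), dtype=np.uint8)
mask[10:40, 10:40] = 255
mask[100:105, 100:105] = 255
print(len(filter_small_blobs(mask)))   # 1
```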

MinPathLength: Minimum path length to be counted as a movement, in pixel units. Typical values are 50-200. For objects moving at a greater distance from the camera, or to ensure that partial tracks are counted, start with 50. For nearby objects or to reduce false-positive track counting, start with 200.
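
One plausible reading of path length is the summed pixel distance along the trajectory; the sketch below uses that interpretation (VTC may instead measure end-to-end displacement), and the `path_length` helper is illustrative only:

```python
import math

def path_length(trajectory):
    """Total pixel distance along a list of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))

track = [(100, 200), (110, 205), (130, 210), (160, 212)]
print(path_length(track) >= 50)   # compare against MinPathLength; True here
```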

MissThreshold: A tracked object is deleted if a tracker does not detect it (a tracking miss) for some number of frames. Typical values are 2-10. To ensure that briefly-occluded objects can be tracked during occlusion, start with a threshold of 10. If trackers are jumping to different objects after the originally-tracked object disappears, try using a threshold of 2.

Pd: The probability of detection of an object, if the object exists. Typical values are between 0.9 and 0.9999 depending on object detection accuracy. For scenes with consistent and reliable object detection, start with Pd=0.999. For scenes with frequent misses, start with Pd=0.9.

PushToServer: This checkbox dictates whether the software should transmit tracking information to a server for live monitoring. For counting existing video, do not check this box.

Px: Probability of object deletion/exit per-frame, for a given object. Typical values are between 0.0001 and 0.01. For scenes with long-lasting objects, start with Px=0.0001. For scenes with short-lived objects, start with Px=0.01.
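
Pd, Px, LambdaF, and LambdaN all feed the score that ranks competing hypotheses. The sketch below shows one conventional way such a score can be assembled in log space; the `hypothesis_log_score` helper and its exact weighting are assumptions, not necessarily VTC's formulation:

```python
import math

def hypothesis_log_score(n_detected, n_missed, n_exited, n_false, n_new,
                         pd=0.999, px=0.001, lambda_f=4e-7, lambda_n=5e-7):
    """Log-likelihood contribution of one frame's assignment decisions.

    n_detected: existing tracks matched to a detection
    n_missed:   existing tracks with no detection this frame
    n_exited:   tracks hypothesized to have left the scene
    n_false:    detections explained as false alarms
    n_new:      detections explained as new objects
    Defaults use the typical values quoted in this document.
    """
    return (n_detected * math.log(pd)
            + n_missed * math.log(1.0 - pd)
            + n_exited * math.log(px)
            + n_false * math.log(lambda_f)
            + n_new * math.log(lambda_n))

# A hypothesis that explains a detection as a real, tracked object scores far
# higher than one that explains it as a false alarm or a brand-new object.
print(hypothesis_log_score(n_detected=2, n_missed=0, n_exited=0, n_false=1, n_new=0))
```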

Q_color: Kalman filter movement covariance for color dimensions. Typical values are 50-50,000. A value of 50 indicates that the colors of each object have a low variance as the object travels through the frame. A value of 50,000 indicates that each object's color is likely to change as it travels through the frame, and discourages using color as an important factor in object identity.
For scenes with low shadow interference and bright lighting, a lower value (50-100) can be selected to yield improved object-identity tracking. For scenes with high shadow interference or low light (evening and night-time), try a higher value, around 30,000.

Q_position: Kalman filter movement covariance for position dimensions. Typical values are 10 to 5000. For scenes with smoother object movement or objects at a large distance from the camera, start with Q_position=20. For scenes with rapid object direction changes (fast turns) or very nearby objects, start with Q_position=1000.

Q_size: Kalman filter movement covariance for size dimension. Typical values are 1000 to 20,000. For scenes with consistent object image sizes (objects at a large distance from the camera), start with Q_size=1000. For scenes with objects near to the camera or frequent segmentation errors, start with Q_size=10,000.

R_color: Kalman filter measurement covariance for color dimension. Typical values are 10 to 5000. For scenes with consistent object color (clear, bright overhead lighting and low shadow interference) start with R_color=20. For scenes with high amounts of color interference, or to discourage the tracker from treating color as a meaningful signal, start with R_color=5000.

R_position: Kalman filter measurement covariance for position dimension. Typical values are 10 to 250. For scenes with consistent and accurate object segmentation, start with R_position=10. For scenes with frequent segmentation errors, start with R_position=250.

R_size: Kalman filter measurement covariance for size dimension. Typical values are 100 to 500,000. For scenes with consistent, accurate detection and faraway objects, start with R_size=100. For scenes with nearby objects and frequent segmentation errors, start with R_size=200,000.
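
To make the Q_* and R_* parameters concrete, here is a hedged sketch of how a per-object Kalman filter could assemble a diagonal process covariance Q and measurement covariance R from them; the state and measurement orderings are assumptions, not necessarily VTC's:

```python
import numpy as np

def build_covariances(q_position=20.0, q_size=1000.0, q_color=50.0,
                      r_position=10.0, r_size=100.0, r_color=20.0):
    """Assemble diagonal Q (process) and R (measurement) covariance matrices.

    Assumed state:       [x, y, vx, vy, size, red, green, blue]
    Assumed measurement: [x, y, size, red, green, blue]
    """
    # Velocity terms reuse q_position here; that is an assumption of this sketch.
    Q = np.diag([q_position, q_position, q_position, q_position,
                 q_size, q_color, q_color, q_color])
    R = np.diag([r_position, r_position, r_size,
                 r_color, r_color, r_color])
    return Q, R

Q, R = build_covariances()
```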

Region polygons: Region polygons contain the location of each approach and exit. These polygons are used for movement classification.

ROI mask: The region-of-interest (ROI) mask is used to select a region of the image where objects of interest appear. The purpose of the ROI is to exclude unwanted background objects from tracking.

ServerURL: The root URL of the server used to upload tracking and image information. For the case of processing existing video, this parameter is unused.

StateUploadIntervalMs: Interval (in milliseconds) for periodic tracking information upload. Typical value is 2000. This parameter is unused for the case of processing existing video.

Timestep: The length of time of an average frame in the video to be processed. Typical values are 0.04-0.1 depending on framerate. Determine the value of this parameter by calculating 1/framerate. For example, a video recorded at 25 FPS should use a timestep of 0.04 s, and a 24 FPS video approximately 0.042 s. This parameter can affect tracking by making objects appear to move too quickly or too slowly. If trackers appear to lag behind objects, double-check that this parameter has been set correctly.
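
For example, the timestep can be derived from the video's framerate; the OpenCV retrieval and file name below are illustrative assumptions:

```python
import cv2

capture = cv2.VideoCapture("intersection.mp4")   # hypothetical input file
fps = capture.get(cv2.CAP_PROP_FPS)
capture.release()

# e.g. 25 FPS -> 0.04 s, 24 FPS -> ~0.042 s
timestep = 1.0 / fps if fps > 0 else 0.04        # fall back to a typical value
print(timestep)
```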

Title: User-determined name for this scene.

ValRegDeviation: This parameter affects the size of the region to be searched for tracked objects. Typical values are 3-7. For scenes with consistent and accurate object detection, start with a value of 3. For scenes with frequent occlusion or segmentation errors, start with a value of 7. Too high a value can cause trackers to jump between objects.
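
ValRegDeviation behaves like a gating threshold expressed in standard deviations. The sketch below shows a conventional Mahalanobis-distance gate as an illustration of the idea; the `inside_validation_region` helper is an assumption, not necessarily VTC's exact formulation:

```python
import numpy as np

def inside_validation_region(measurement, predicted, S, val_reg_deviation=3.0):
    """Accept a detection only if it lies within val_reg_deviation standard
    deviations of the predicted measurement (Mahalanobis distance gate).

    S is the innovation covariance from the Kalman filter.
    """
    innovation = np.asarray(measurement) - np.asarray(predicted)
    d2 = innovation @ np.linalg.inv(S) @ innovation
    return d2 <= val_reg_deviation ** 2

# Example with a 2-D position measurement.
S = np.diag([25.0, 25.0])
print(inside_validation_region([105.0, 198.0], [100.0, 200.0], S))  # True
```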

VehicleInitialCovB: Kalman filter initialization covariance for B (blue color) dimension. Typical value is 50. This parameter should not be adjusted by end users.

VehicleInitialCovG: Kalman filter initialization covariance for G (green color) dimension. Typical value is 50. This parameter should not be adjusted by end users.

VehicleInitialCovR: Kalman filter initialization covariance for R (red color) dimension. Typical value is 50. This parameter should not be adjusted by end users.

VehicleInitialCovSize: Kalman filter initialization covariance for size dimension. Typical values are 200 to 5000. For scenes with distant objects, start with a value of 200. For scenes with nearby objects, start with a value of 5000.

VehicleInitialCovVX: Kalman filter initialization covariance for object speed in the X (horizontal) direction. Typical values are 1000 to 70,000. For nearby objects entering the frame at high velocity, start with a value of 70,000. For distant or slowly-moving objects, start with a value of 1000.

VehicleInitialCovVY: Kalman filter initialization covariance for object speed in the Y (vertical) direction. Typical values are 1000 to 70,000. For nearby objects entering the frame at high velocity, start with a value of 70,000. For distant or slowly-moving objects, start with a value of 1000.

VehicleInitialCovX: Kalman filter initialization covariance for object position in the X (horizontal) dimension. Typical values are 50 to 5000. For nearby objects entering the frame at high velocity, start with a value of 5000. For small, low-density objects, start with a value of 50.

VehicleInitialCovY: Kalman filter initialization covariance for object position in the Y (vertical) dimension. Typical values are 50 to 5000. For nearby objects entering the frame at high velocity, start with a value of 5000. For small, low-density objects, start with a value of 50.
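
Taken together, the VehicleInitialCov* values seed the covariance of a newly created track. A hedged sketch, with the same assumed state ordering as above:

```python
import numpy as np

def initial_covariance(cov_x=50.0, cov_y=50.0, cov_vx=1000.0, cov_vy=1000.0,
                       cov_size=200.0, cov_r=50.0, cov_g=50.0, cov_b=50.0):
    """Initial covariance for a newly created track, built from the
    VehicleInitialCov* values.

    Assumed state ordering: [x, y, vx, vy, size, red, green, blue].
    """
    return np.diag([cov_x, cov_y, cov_vx, cov_vy,
                    cov_size, cov_r, cov_g, cov_b])

P0 = initial_covariance()
```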