Analytics usage limitations

-----------------------------------

Related Digital Watchdog VMS Apps:  DW Spectrum IPVMS

Last Edit:  November 14, 2024

Analytics Overview

DW Spectrum ships with plugins pre-installed for the most popular manufacturers' devices. These plugins provide information about objects detected in images: they output bounding rectangles showing the locations of the detected objects, along with custom tags carrying metadata.

The object database can be searched by area of interest (as detected by motion detection), object types and tags provided by the plugin, date/time, and camera where the object appeared. Plugins can inject analytics events into the VMS with basic attributes: date, time, and text tags.
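
To make the shape of this metadata concrete, here is a minimal sketch of what one detected-object record might carry. The field names and values are illustrative assumptions for this example, not DW Spectrum's actual schema.

    from dataclasses import dataclass, field

    # Illustrative record for one detected object; every field name here
    # is an assumption made for this example, not the VMS's real schema.
    @dataclass
    class DetectedObject:
        camera_id: str
        object_type: str        # e.g., "person" or "vehicle"
        start_time_ms: int      # when the object first appeared
        duration_ms: int        # how long it stayed in frame
        rect: tuple             # bounding box: x, y, width, height
        tags: dict = field(default_factory=dict)  # plugin-supplied tags

    obj = DetectedObject(
        camera_id="cam-01",
        object_type="vehicle",
        start_time_ms=1_700_000_000_000,
        duration_ms=3_300,
        rect=(0.25, 0.40, 0.10, 0.20),  # relative coordinates, 0.0-1.0
        tags={"color": "red"},
    )
    print(obj)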

Part of the object metadata is stored in a Structured Query Language (SQL) database, a file named object_detection.sqlite located in the storage root catalog. The remainder of the metadata is kept in a proprietary database consisting of files stored in the subdirectory archive/metadata//YYYY/MM/analytics, with file names matching *.bin.

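Since part of the metadata is a standard SQLite file, it can be inspected offline with any SQLite client. Below is a minimal read-only sketch in Python; the storage-root path is a placeholder, and because the table layout is not documented, the sketch only enumerates the tables present.

    import sqlite3

    # Placeholder path: substitute the storage root of your server.
    DB_PATH = "/path/to/storage/root/object_detection.sqlite"

    # Open read-only so a live server's database is left undisturbed.
    conn = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)

    # The schema is proprietary, so just list the tables it contains.
    for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ):
        print(name)

    conn.close()
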
By default, DW Spectrum uses the largest available local (non-system) storage for analytics data. However, you can choose where metadata is stored in the Server Settings menu, on the Storage Management tab, using the Use to store analytics data selection.

What hardware did we use during our testing?

Because the VMS is used in a wide range of network and hardware contexts, each situation has its own capacity considerations for object storage and access. Consider a few popular usage scenarios. The figures shown below were obtained by running the Server application with the stub plugin in a virtualized environment with the following specifications:

  • Host CPU: Intel Core i7-6800K, 3.4 GHz, 6 cores, 15M cache, VT-x and VT-d enabled
  • Host chipset: Intel C610
  • Host memory: 32 GB, DRAM Freq 1066.5 MHz
  • Host HDD: WDC WD40EFRX-68N32N0
  • Host OS: Windows 10 Home 10.0.18362 N/A Build 18362
  • Guest CPU: 6 cores
  • Guest memory: 4 GB
  • Guest OS: Ubuntu 18.04 LTS

How do I create an object?

Object creation is initiated by the analytics plugin and may involve intense CPU usage if the plugin is designed that way. With the stub plugin there is minimal CPU overhead, but real systems must account for the CPU and memory requirements of the plugin, even though those are separate from the VMS itself.

Creating objects implies write operations. The amount of data written depends on how long the object appears in the camera stream: the longer the object's duration, the more metadata must be stored on the drive. For objects with an average duration of 3.3 seconds, approximately 26 KB is required to store the metadata. This results in a roughly 7 KB/s stream overhead, which is insignificant compared to the video stream itself.

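The overhead figure follows directly from those numbers, as the short calculation below shows; the per-hour object rate used for the daily estimate is an assumed example value, not a figure from this article.

    # Figures quoted above.
    avg_duration_s = 3.3   # average object duration, seconds
    metadata_kb = 26.0     # metadata stored per object, KB

    # 26 KB spread over 3.3 s is roughly the ~7 KB/s overhead cited above.
    print(f"write rate: {metadata_kb / avg_duration_s:.1f} KB/s")

    # Assumed example: a camera detecting 1000 objects per hour would
    # accumulate about this much object metadata per day.
    objects_per_hour = 1000
    daily_mb = objects_per_hour * 24 * metadata_kb / 1024
    print(f"daily metadata per camera: {daily_mb:.0f} MB")
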
How do I search through the database?

Object search is initiated primarily by the Desktop client. A user can specify a time frame on the timeline and other parameters on the right panel. Search results are available on the notifications panel and are also presented on the timeline as yellow chunks. This scenario is more complex, since it can result in read requests across the System's entire metadata.

Searching over 2000 objects belonging to a single camera within a 1-hour timeframe causes the VMS server to read 3 MB of data from the analytics storage. Database indices typically allow for better performance; however, to keep it simple, let's assume the I/O amount scales linearly with the timespan, average object intensity, and camera count. For example, storing the object database on a typical HDD with a 60 MB/s random read speed and allowing a search request latency of 1500 ms yields a maximum 30-hour search timeframe.

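The 30-hour figure comes from that linear-scaling assumption: a 1500 ms latency budget at 60 MB/s allows about 90 MB of reads, and at 3 MB per camera-hour that covers 30 hours. The sketch below generalizes the same arithmetic to multiple cameras; the camera counts are illustrative.

    # Example figures from this article.
    random_read_mb_s = 60.0   # HDD random read throughput, MB/s
    latency_budget_s = 1.5    # acceptable search latency, seconds
    mb_per_camera_hour = 3.0  # read per camera-hour (~2000 objects/hour)

    def max_search_hours(cameras: int) -> float:
        """Largest searchable timeframe within the latency budget,
        assuming I/O scales linearly with timespan and camera count."""
        read_budget_mb = random_read_mb_s * latency_budget_s  # 90 MB
        return read_budget_mb / (mb_per_camera_hour * cameras)

    # One camera reproduces the 30-hour figure above; ten cameras
    # shrink the searchable window to 3 hours.
    for n in (1, 10):
        print(f"{n} camera(s): {max_search_hours(n):.0f} h")
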
CPU performance does not significantly affect this scenario. In contrast, RAM availability may provide a performance boost for searches with partially repeated criteria, as the OS caches recently read data. This usage scenario also benefits when SSDs are used to store object metadata. Due to high latency and usually unstable throughput, remote drives (CIFS or NFS) do not perform well enough for this usage scenario.

How do I search metadata stored across multiple servers?

When a camera is moved from one server to another within a System, its metadata ends up stored on multiple servers. In this case, the server that the client connects to queries all the other servers for any metadata that matches the filter.

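As a rough illustration of that fan-out, the sketch below queries several servers in parallel and merges the results. The server addresses, endpoint path, and query string are hypothetical; DW Spectrum's real inter-server exchange is internal to the VMS.

    import json
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Hypothetical peer servers and search endpoint.
    SERVERS = ["http://192.168.1.10:7001", "http://192.168.1.11:7001"]
    QUERY = "/api/objects?cameraId=cam-01&period=last-hour"

    def search_one(base_url: str) -> list:
        # Each peer returns only the metadata it stores locally.
        with urlopen(base_url + QUERY, timeout=5) as resp:
            return json.load(resp)

    # The server the client connected to fans the query out to every
    # peer in parallel, then merges the answers before responding.
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        matches = [m for batch in pool.map(search_one, SERVERS) for m in batch]

    print(f"{len(matches)} matching objects from {len(SERVERS)} servers")
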
Evaluating performance in this scenario is highly complex, since it relies on multiple factors: the number of servers involved, how the metadata is distributed across those servers, and the network throughput between them. As before, the most likely bottleneck is the random read I/O throughput of the metadata storage. Additionally, if a remote server responds with tens of thousands of objects over a congested connection, there may be network issues as well.

To prevent false failover triggers and to maintain throughput and responsiveness, it is critical to ensure stable, consistent network connectivity between the servers at all times.

For More Information or Technical Support

DW Technical Support:  866.446.3595 (option 4)

https://www.digital-watchdog.com/contact-tech-support/

______________________________________________________________________________

DW Sales:  866.446.3595                   [email protected]            www.digital-watchdog.com
