Hot Seat: Megh Computing CEO on Fulfilling the Promise of Intelligent Video Analytics


Megh Computing provides a fully customizable, cross-platform video analytics solution that delivers real-time, actionable insights. The company was established in 2017 and is based in Portland, Ore., with development offices in Bangalore, India.

Co-founder and CEO PK Gupta joins the conversation to talk analytics deployment, customization and more.

As technology continually moves to the edge with video analytics and smart sensors, what are the tradeoffs versus Cloud deployment?

GUPTA: The demand for edge analytics is increasing rapidly with the explosion of streaming data from sensors, cameras and other sources. Of these, video remains the dominant data source with over a billion cameras deployed globally. Enterprises want to extract intelligence from these data streams using analytics to create business value.

Most of this processing is increasingly being done at the edge close to the data source. Moving the data to the Cloud for processing incurs transmission costs, potentially increases security risks and introduces latencies in the response time. Hence intelligent video analytics [IVA] is moving to the edge.

[Headshot: Prabhat K. Gupta]

Many end users are concerned about sending video data off-premises; what options are there for processing on-premises yet leveraging Cloud benefits?

GUPTA: Many IVA solutions force users to choose between deploying their solution on-premises at the edge or hosted in the Cloud. Hybrid models allow on-premises deployments to benefit from the scalability and flexibility of Cloud computing. In this model, the video processing pipeline is split between on-premises processing and Cloud processing.

In a simple implementation, only the metadata is forwarded to the Cloud for storage and search. In another, data ingestion and transformation are done at the edge, and only frames with activity are forwarded to the Cloud for analytics. This model offers a good compromise, balancing latency and costs between edge processing and Cloud computing.
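As a rough sketch of that split, the edge stage can be as simple as a motion gate that ingests frames locally and forwards only frames with activity. The example below assumes a hypothetical Cloud endpoint and uses OpenCV background subtraction; it illustrates the pattern rather than any specific product.

```python
# Minimal sketch of an edge-side activity filter (endpoint and threshold are
# hypothetical). Frames are read locally; only frames with detected motion
# are forwarded to a Cloud service for analytics.
import cv2
import requests

CLOUD_URL = "https://analytics.example.com/frames"  # hypothetical endpoint
MOTION_THRESHOLD = 5000  # changed-pixel count; tune per camera and scene

def run_edge_filter(rtsp_url: str) -> None:
    cap = cv2.VideoCapture(rtsp_url)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Skip frames with no meaningful activity; nothing leaves the edge
        if cv2.countNonZero(mask) < MOTION_THRESHOLD:
            continue
        _, jpeg = cv2.imencode(".jpg", frame)
        requests.post(CLOUD_URL, data=jpeg.tobytes(),
                      headers={"Content-Type": "image/jpeg"})
    cap.release()
```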

Image-based video analytics has historically required filtering services due to false positives; how does deep learning reduce those?

GUPTA: Traditional attempts at IVA have not met the expectations of enterprises because of limited functionality and poor accuracy. These solutions use image-based video analytics with computer vision processing for object detection and classification. Those techniques are prone to errors, necessitating the deployment of filtering services.

In contrast, techniques using optimized deep learning models trained to detect people or objects coupled with analytics libraries for the business rules can essentially eliminate false positives. Special deep learning models can be created for custom use cases like PPE compliance, collision avoidance, etc.
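The pattern Gupta describes can be pictured as a detector paired with a business rule: an alert fires only when a confidently detected person actually violates the rule. The sketch below assumes a hypothetical detect_people model wrapper and a restricted-zone rule; it is illustrative, not Megh's pipeline.

```python
# Illustrative pairing of a deep learning detector with a business rule.
# detect_people() is a hypothetical wrapper around any person-detection model
# that yields (x, y, w, h, confidence) boxes for one frame.
import cv2
import numpy as np

RESTRICTED_ZONE = np.array([[100, 100], [500, 100], [500, 400], [100, 400]],
                           dtype=np.int32)
MIN_CONFIDENCE = 0.6  # reject low-confidence detections outright

def zone_violations(frame, detect_people):
    alerts = []
    for (x, y, w, h, conf) in detect_people(frame):
        if conf < MIN_CONFIDENCE:
            continue
        # Use the bottom-center of the box as the person's ground position
        foot = (float(x + w / 2), float(y + h))
        # pointPolygonTest >= 0 means inside or on the zone boundary
        if cv2.pointPolygonTest(RESTRICTED_ZONE, foot, False) >= 0:
            alerts.append({"bbox": (x, y, w, h), "confidence": conf})
    return alerts
```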

We hear “custom use case” frequently with video AI; what does it mean?

GUPTA: Most use cases must be customized to meet the functional and performance requirements of IVA. The first level of customization, required universally, includes the ability to configure monitoring zones in the camera's field of view, set thresholds for the analytics, configure alarms, and set the frequency and recipients of notifications. These configuration capabilities should be provided via a dashboard with graphical interfaces so users can set up the analytics for proper operation.
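As a rough picture of that first level, the dashboard settings reduce to a per-camera configuration along these lines (all field names are invented for illustration):

```python
# Hypothetical per-camera configuration mirroring the dashboard settings above:
# monitoring zones, analytic thresholds, alarms, and notification routing.
camera_config = {
    "camera_id": "lobby-01",
    "zones": [
        {"name": "entrance",
         "polygon": [(120, 80), (620, 80), (620, 440), (120, 440)]},
    ],
    "thresholds": {
        "min_confidence": 0.6,   # detection confidence cutoff
        "dwell_seconds": 30,     # loitering threshold before an alarm
    },
    "alarms": {
        "loitering": {"enabled": True, "severity": "high"},
    },
    "notifications": {
        "frequency_seconds": 300,  # at most one notification per 5 minutes
        "recipients": ["security-ops@example.com"],
    },
}
```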

The second level of customization involves updating the video analytics pipeline with new deep learning models or new analytics libraries to improve the performance. The third level includes training and deploying new deep learning models to implement new use cases, e.g., a model to detect PPE for worker safety, or to count inventory items in a retail store.
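The second and third levels amount to keeping the pipeline pluggable, so a new deep learning model or analytics library replaces one stage rather than the whole system. A minimal sketch of that idea, with all names hypothetical:

```python
# Minimal sketch of a pluggable video analytics pipeline: each stage is a
# swappable callable, so a new model (e.g., a PPE detector) or analytics
# library replaces one stage, not the pipeline. All names are hypothetical.
from typing import Callable, Iterable

Stage = Callable[[dict], dict]  # each stage consumes and enriches a context dict

def run_pipeline(frames: Iterable, stages: list[Stage]) -> None:
    for frame in frames:
        ctx = {"frame": frame}
        for stage in stages:
            ctx = stage(ctx)

# Deploying a new use case means swapping one stage:
#   pipeline = [decode, person_detector, loitering_rules, notifier]
#   pipeline = [decode, ppe_detector, ppe_compliance_rules, notifier]
```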

Can smart sensors such as lidar, presence detection, radar, etc. be integrated into an analytics platform?

GUPTA: IVA typically processes video data from cameras only and delivers insights based on analyzing the images, while data from lidar, radar and other sensors are analyzed by separate systems. A human operator is then inserted in the loop to combine results from the disparate platforms and reduce false positives for specific use cases like tailgating, employee authentication, etc.

An IVA platform that can ingest data from cameras and sensors using the same pipeline and use machine learning-based contextual analytics can deliver insights for these and other use cases. The contextual analytics component can be configured with simple rules and then it can learn to improve the rules over time to deliver highly accurate and meaningful insights.
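One way to picture such contextual fusion is a tailgating rule that correlates badge-reader events with the camera's person count in the same time window. The sketch below is illustrative, with invented event names and structure, not a description of Megh's platform:

```python
# Illustrative contextual rule fusing sensor and camera events in one pipeline:
# flag tailgating when one badge swipe coincides with more than one person
# passing through the door zone. Event names and fields are invented.
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # "badge_reader" or "camera"
    timestamp: float  # seconds since epoch
    payload: dict

def detect_tailgating(events: list[Event], window: float = 5.0) -> list[Event]:
    """Return badge events where the camera saw >1 person within the window."""
    alerts = []
    swipes = [e for e in events if e.source == "badge_reader"]
    entries = [e for e in events if e.source == "camera"]
    for swipe in swipes:
        people = sum(e.payload.get("person_count", 0)
                     for e in entries
                     if abs(e.timestamp - swipe.timestamp) <= window)
        if people > 1:
            alerts.append(swipe)
    return alerts
```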
