For the past few years, the major focus for video surveillance has been the migration from analog to IP and what’s needed to make this switch possible. It’s an important conversation, but other emerging trends and technologies are redefining not only the infrastructure behind video surveillance but also how enterprises and organizations fundamentally use this technology.

According to IHS, by 2015 more than 70 percent of all network camera shipments will have megapixel resolution. This shift is occurring for the following reasons:

  • Improved compression codecs (H.264)
  • Higher-performing, purpose-built networks (networking hardware, servers and storage) supporting IP surveillance
  • Better image quality: higher resolutions give more usable image detail
  • Improved performance of high-resolution cameras in difficult lighting conditions (e.g., low light and WDR)
  • Reduced storage costs
  • Better coverage, potentially reducing camera counts

High-definition video provides up to five times the resolution of standard analog. The two most widely used high-definition standards come from the Society of Motion Picture and Television Engineers (SMPTE): 296M (720p) and 274M (1080). 296M defines a resolution of 1280x720 pixels with high color fidelity in a 16:9 format using progressive scanning at 25/30 Hz and 50/60 Hz. 274M defines a resolution of 1920x1080 pixels with high color fidelity in a 16:9 format using either interlaced or progressive scanning at 25/30 Hz and 50/60 Hz. A camera that adheres to one of these standards delivers HDTV quality, with the corresponding benefits in resolution, color fidelity and frame rate.
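
To put those resolutions in perspective, the pixel-count arithmetic is straightforward. The short Python sketch below compares the HDTV formats against an assumed analog baseline of PAL D1 (720x576); the baseline is an illustrative assumption, and NTSC or CIF baselines would give somewhat different ratios.

```python
# Quick pixel-count comparison behind the "up to five times" figure.
# The analog baseline (PAL D1, 720x576) is an assumption for illustration.
resolutions = {
    "Analog D1 (PAL)": (720, 576),
    "HDTV 720p (SMPTE 296M)": (1280, 720),
    "HDTV 1080p (SMPTE 274M)": (1920, 1080),
}

baseline = resolutions["Analog D1 (PAL)"][0] * resolutions["Analog D1 (PAL)"][1]
for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {pixels:,} pixels ({pixels / baseline:.1f}x analog)")
```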

In addition, new ultra-high-definition standards from SMPTE are just starting to enter the market. A few manufacturers released products at the recent ISC West show. The two new formats are 4K (roughly 4,000 horizontal pixels) and 8K (roughly 8,000 horizontal pixels), though the latter is still far from being used in the market.

With the increase in resolution comes a need for higher-performing video compression codecs to reduce video bandwidth and storage requirements. In early 2013, the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) released the High Efficiency Video Coding (HEVC) or H.265 algorithm. The Advanced Video Coding (AVC) or H.264 codec was one of the catalysts for the adoption of higher resolutions; the hope is that newer codecs such as H.265 will further improve efficiency and push adoption of higher resolutions.
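
As a rough illustration of why codec efficiency matters, the sketch below converts a stream’s bitrate into storage consumed per day. The bitrates and the assumed 50 percent H.265 savings are hypothetical ballpark figures for illustration, not vendor specifications.

```python
# Illustrative storage estimate for one camera stream; the bitrates and the
# assumed H.265 saving are hypothetical figures, not vendor specifications.

def storage_gb_per_day(bitrate_mbps: float) -> float:
    """Convert a constant stream bitrate (Mbit/s) into gigabytes per day."""
    seconds_per_day = 24 * 60 * 60
    megabits = bitrate_mbps * seconds_per_day
    return megabits / 8 / 1000  # Mbit -> MB -> GB (decimal units)

h264_bitrate = 4.0                 # assumed 1080p H.264 stream, Mbit/s
h265_bitrate = h264_bitrate * 0.5  # assumed ~50% bitrate saving with H.265

print(f"H.264: {storage_gb_per_day(h264_bitrate):.1f} GB/day")
print(f"H.265: {storage_gb_per_day(h265_bitrate):.1f} GB/day")
```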

The move to high resolution and the corresponding decline in standard definition are resulting in the use of resolutions somewhere between standard definition and 2 megapixel or HDTV 1080p. Resolutions in this range can address the majority of end-user applications, with resolutions above 2 megapixels reserved for specific applications that demand higher detail (e.g., license plate recognition or other high-detail imaging). As a result, most manufacturers have shifted focus from resolution to image quality by developing network cameras with better performance in depth of field, low light and wide dynamic range (WDR) applications.

The increase in resolution and image quality has also improved the performance of video analytics software. As video is used in new and creative ways, capturing that data and analyzing it in real time is essential for monitoring situations and understanding behavior. This type of intelligence, known as video content analytics, is expected to be a growing market, hitting $600 million in sales by 2015.

Right now, there are two principal types of video analytics: real-time and search/analyze. Real-time analytics monitor live video streams and provide instant alerts based on user-predefined events. Search/analyze analytics allow a user to quickly analyze archived video data for events of interest. Depending upon complexity, video analytics can be edge-based or server-based. Many edge-based analytics, such as camera tampering detection and object direction monitoring, are becoming standard features, and as processor speeds continue to increase, more analytics will move to the edge (a simple edge-style example appears below). Server-based analytics is typically more advanced because more processing power is available, multiple video analytic algorithms can run simultaneously, and massive amounts of archived video data can be analyzed.
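
For a sense of what a basic real-time analytic looks like, the following Python sketch uses OpenCV frame differencing to flag motion in a live stream. It is a generic illustration under assumed values: the RTSP URL and the sensitivity thresholds are placeholders, not settings from any particular camera or video management system.

```python
# Minimal sketch of a real-time, edge-style analytic: frame differencing to
# flag motion in a live stream. The stream URL and thresholds are placeholders.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical stream URL
previous = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if previous is None:
        previous = gray
        continue
    delta = cv2.absdiff(previous, gray)                  # change since last frame
    _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:                    # arbitrary sensitivity
        print("Motion event detected")                   # in practice: raise an alert
    previous = gray

cap.release()
```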

Video analytics isn’t the only thing being pushed to the edge. Thanks to advancements in storage, cameras can now record video and audio to an on-board SD memory card. Edge storage, as it’s called, increases system reliability and provides uninterrupted recording. It is proving useful in applications such as transportation (buses and trains) and large wireless installations, and it adds redundancy both in large, mission-critical installations with failover recording servers and in small installations without them. Much like edge analytics, improvements in technology will only accelerate edge storage.
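
The failover idea behind edge storage can be sketched in a few lines. The Python below buffers clips on the camera’s SD card when the recording server is unreachable and flushes them once connectivity returns; the host, port, paths and the upload stub are hypothetical placeholders rather than any real camera or VMS API.

```python
# Sketch of edge-storage failover logic. Hostnames, ports, paths and the
# upload stub are illustrative assumptions, not a real camera or VMS API.
import os
import shutil
import socket

SD_CARD_DIR = "/var/sdcard/recordings"   # assumed on-board (edge) storage path
SERVER = ("vms.example.local", 554)      # assumed recording-server endpoint

def server_reachable(timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the recording server succeeds."""
    try:
        with socket.create_connection(SERVER, timeout=timeout):
            return True
    except OSError:
        return False

def upload_to_server(clip_path: str) -> None:
    """Stub: a real system would hand the clip to the VMS through its own API."""
    print(f"uploading {clip_path} to {SERVER[0]}")

def store_clip(clip_path: str) -> None:
    """Send the clip to the server if possible; otherwise buffer it locally."""
    if server_reachable():
        upload_to_server(clip_path)
    else:
        os.makedirs(SD_CARD_DIR, exist_ok=True)
        shutil.move(clip_path, SD_CARD_DIR)  # recording continues uninterrupted

def flush_buffered_clips() -> None:
    """Once connectivity returns, push everything buffered on the SD card."""
    if not server_reachable():
        return
    for name in os.listdir(SD_CARD_DIR):
        upload_to_server(os.path.join(SD_CARD_DIR, name))
```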


For more information contact your local Anixter representative or call 1.800.ANIXTER.