The approach presented in this patent applies computer vision modeling techniques to video compression in a powerful way that hasn’t been done before.
Boston, MA (PRWEB) November 11, 2014
EuclidIQ’s patent, titled “Video Compression Repository and Model Reuse,” covers reusing and distributing models to achieve additional compression gains across multiple videos.
The patent outlines a revolutionary “Smart Model” encoding framework that categorizes, saves, and reuses feature and object models from one video to improve compression in other videos. These Smart Models can be personalized and downloaded to a user’s device during off-peak times, ready to help decode that user’s videos, and they fit naturally with cloud-based video distribution systems.
“The approach presented in this patent applies computer vision modeling techniques to video compression in a powerful way that hasn’t been done before, by finding and exploiting redundancies across videos in addition to the traditional video compression technique of finding redundancies within videos,” said Nigel Lee, EuclidIQ’s Chief Science Officer.
From the patent abstract:
“Systems and methods of improving video encoding/decoding efficiency may be provided. A feature-based processing stream is applied to video data having a series of video frames. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks, and each track is given a representative, characteristic feature. Similar characteristic features are clustered and then stored in a model library, for reuse in the compression of other videos. A model-based compression framework makes use of the preserved model data by detecting features in a new video to be encoded, relating those features to specific blocks of data, and accessing similar model information from the model library. The formation of model libraries can be specialized to include personal, "smart" model libraries, differential libraries, and predictive libraries. Predictive model libraries can be modified to handle a variety of demand scenarios.”
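The abstract describes a pipeline: detect features, associate them into tracks, reduce each track to a characteristic feature, cluster those into a model library, and look up the nearest stored model when encoding a new video. As a rough illustration of that library idea only, here is a toy sketch in Python; the feature vectors, distance metric, threshold, and all names are hypothetical and not drawn from the patent.

```python
# Illustrative sketch only (not EuclidIQ's implementation): a toy model
# library that clusters characteristic features from one video's tracks
# and returns the nearest stored model for a feature seen in another video.
import math

def characteristic_feature(track):
    """Reduce a track (per-frame feature vectors) to one representative
    vector by averaging -- a stand-in for the patent's 'characteristic
    feature' step."""
    n = len(track)
    return tuple(sum(v[i] for v in track) / n for i in range(len(track[0])))

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ModelLibrary:
    def __init__(self, threshold=1.0):
        self.models = []          # clustered characteristic features
        self.threshold = threshold

    def add_track(self, track):
        feat = characteristic_feature(track)
        # Cluster: merge into an existing model if one is close enough,
        # otherwise store a new model.
        for i, m in enumerate(self.models):
            if distance(feat, m) < self.threshold:
                self.models[i] = tuple((a + b) / 2 for a, b in zip(m, feat))
                return
        self.models.append(feat)

    def lookup(self, feat):
        """Return the stored model nearest to a new video's feature."""
        return min(self.models, key=lambda m: distance(feat, m))

# Build the library from tracks detected in a first video...
lib = ModelLibrary()
lib.add_track([(0.0, 1.0), (0.2, 1.1)])   # two frames of one feature track
lib.add_track([(5.0, 5.0), (5.1, 4.9)])
# ...then reuse the closest model when a similar feature appears elsewhere.
nearest = lib.lookup((0.1, 1.0))
```

In a real codec the "models" would be compact parameterizations of detected features or objects, and the lookup would feed a model-based prediction for specific blocks of data, as the abstract outlines.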
“As video repositories experience exponential growth, innovation like this will provide both the much needed compression for the repository and the ability to use off-peak bandwidth for model distribution,” said Richard Wingard, EuclidIQ’s Chief Executive.
Additionally, a second patent was granted, titled “Computer Method And Apparatus For Processing Image Data.” This patent relates to detecting components of interest in a video signal that use a disproportionate amount of bandwidth. The detected components are normalized spatially using a deformable mesh and normalized with respect to whole-mesh movement. A structure-from-motion analysis determines changes in object pose and position from one video frame to the next, and motion estimation constrains estimates of object motion with deformation models, structural models, and illumination models.
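One way to read “normalized on whole-mesh movement” is that the global motion of the mesh is removed before per-vertex deformation is modeled. The minimal sketch below illustrates that reading only, using centroid alignment as a stand-in for global-motion removal; the function names and the choice of centroid translation are assumptions, not the patented method.

```python
# Illustrative sketch only: remove whole-mesh (global) translation between
# two frames' mesh vertices, leaving residual per-vertex deformation to be
# modeled separately. Centroid alignment is a hypothetical simplification.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def remove_global_motion(frame_a, frame_b):
    """Shift frame_b's mesh so its centroid matches frame_a's; what
    remains is the deformation component of the motion."""
    ca, cb = centroid(frame_a), centroid(frame_b)
    dx, dy = ca[0] - cb[0], ca[1] - cb[1]
    return [(x + dx, y + dy) for x, y in frame_b]

mesh_a = [(0, 0), (2, 0), (0, 2), (2, 2)]
mesh_b = [(3, 4), (5, 4), (3, 6), (5, 6)]   # same mesh translated by (3, 4)
aligned = remove_global_motion(mesh_a, mesh_b)
```

Because `mesh_b` is a pure translation of `mesh_a`, the aligned mesh coincides with `mesh_a` and the residual deformation is zero; a real system would go on to fit deformation, structural, and illumination models to whatever residual remains.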
EuclidIQ continues to advance and protect its intellectual property rights with patent filings in the US and around the world.
EuclidIQ is a video technology development company focused on the delivery of HD video over constrained networks. The company has tackled this challenge with an innovative, patented technology that applies unique modeling and video analysis techniques to deliver improved video compression with no loss of visual quality. To learn more about EuclidIQ and its video processing technology, visit the company website, or contact info(at)euclidiq(dot)com.