Unstructured data, that is, information such as documents, videos and sensor measurements that does not reside in a formal structure such as a database, is growing by leaps and bounds. Factors such as higher resolution, higher frame rates, multi-camera video projects and the Internet of Things will drive roughly 56% annual growth in this type of data, according to a keynote presentation by Dave Elliot of Google Cloud at the 2017 Creative Storage Conference (CS 2017). This growth will create huge challenges in managing unstructured content and finding what you need, when you need it.
Various technologies under the general name of Artificial Intelligence (AI) can help organize this avalanche of unstructured data. Using technologies such as Machine Learning (ML), content can be surfaced to meet individualized needs. In a world where people have come to expect instant responses to queries from their personal electronic devices, AI technologies offer a way to deliver what people want, when they want it.
The cloud is emerging as a way to provide services such as storage and AI that can make content, including valuable video content, available for repurposing and monetization. Cloud storage converts a capital expense (Capex) into an ongoing operating expense (Opex), provides the ability to scale on demand and makes other online resources available to understand and use this content. For video content, Machine Learning can impact the entire supply chain using various cloud-based resources, as shown in the figure below:
Google Presentation at CS 2017
Google Slide Showing How Deep Learning Is Used in a Video Supply Chain
Google’s ML tools include TensorFlow and its Cloud Machine Learning Engine. These tools enable speech recognition and conversion to text, image recognition, translation and various APIs that use ML data for further actions. The company’s Video Intelligence API finds entities, content and moments, which can be broken down to the video shot or frame level—all available with a simple REST API call. ML technologies can even be used to observe the audience at a live event to create metadata (information about the data being displayed) reflecting human reactions to what is happening. This enables advanced video content recommendation and filtering as well as building a full media portal using a Content Delivery Network (CDN) as shown below.
Google Presentation at CS 2017
Google Media Portal with a Content Delivery Network
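To give a sense of how simple that REST call is, here is a minimal sketch of building an annotation request for Google's Video Intelligence API. The endpoint and feature names come from Google's public REST documentation; the Cloud Storage bucket path is a hypothetical placeholder, and the authenticated POST is shown only in a comment.

```python
import json

# Public endpoint for asynchronous video annotation (videos:annotate).
ENDPOINT = "https://videointelligence.googleapis.com/v1/videos:annotate"

def build_annotate_request(gcs_uri, features):
    """Build the JSON body asking the API to analyze a stored video."""
    return {
        "inputUri": gcs_uri,   # video already uploaded to Cloud Storage
        "features": features,  # e.g. LABEL_DETECTION, SHOT_CHANGE_DETECTION
    }

# Hypothetical bucket and file, for illustration only.
body = build_annotate_request(
    "gs://example-bucket/example.mp4",
    ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
)
payload = json.dumps(body)

# Sending it is a single authenticated POST, e.g. with the requests library:
#   requests.post(ENDPOINT, data=payload,
#                 headers={"Authorization": "Bearer <access-token>"})
```

The response is an operation name that can be polled for labels and shot-change timestamps, which is what makes frame- and shot-level metadata available to downstream tools.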
In addition to the Google keynote, there was an entire session at CS 2017 looking at the role of AI in creating metadata for video content. Aaron Edell from GrayMeta said that, according to a December 2013 Harvard Business Review article, knowledge workers were wasting up to 50% of their time hunting for data, identifying and correcting errors and seeking confirmatory sources for data they do not trust. Aaron suggested applying cognitive services to index, extract and classify data to make “dark data” discoverable. As in the Google keynote, GrayMeta said that much of this big data analysis will happen in the cloud.
GrayMeta Presentation at CS 2017
Tools for Turning Dark Data into Discoverable Data
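The idea of indexing dark data can be illustrated with a toy sketch (not GrayMeta's actual product): walk a directory tree, extract simple metadata from each file, and build a keyword index so content can be searched for rather than hunted for.

```python
import os
import re
from collections import defaultdict

def index_tree(root):
    """Walk a directory tree, cataloging files and indexing name keywords."""
    index = defaultdict(set)   # keyword -> set of file paths
    catalog = {}               # path -> simple metadata record
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            stem, ext = os.path.splitext(name)
            catalog[path] = {
                "name": name,
                "type": ext.lstrip(".").lower(),   # file type from extension
                "size": os.path.getsize(path),      # size in bytes
            }
            # Split the filename into keywords and index each one.
            for token in re.split(r"[\W_]+", stem.lower()):
                if token:
                    index[token].add(path)
    return index, catalog

def search(index, keyword):
    """Return all indexed file paths matching a keyword."""
    return sorted(index.get(keyword.lower(), ()))
```

A production system would extract far richer metadata (speech-to-text, faces, logos) with cognitive services, but the principle is the same: once an index exists, previously dark content becomes discoverable with a simple query.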