Highlight

Interesting experiences with Viettel's artificial intelligence

Internet users in Vietnam often turn to "Ms. Google" for... entertainment. When "she" reads a text aloud or gives directions to...

Saturday, November 26, 2016

Google Brings Machine Learning to the Staffing Industry



Just a couple of weeks ago, Facebook made industry news when technology reporters discovered a Jobs tab on company business pages. The social network later confirmed that it was experimenting with a suite of sourcing tools to capitalize on the boom in social recruiting. Workforce industry experts believe that Facebook may be trying to muscle in on LinkedIn’s sacred ground. However, even bigger revelations came November 15 when Google announced its own foray into the realm of talent acquisition — a move that staffing insiders have been predicting for some time. The fascinating twist with Google is that the Internet giant has no plans to build a standalone technology, such as a branded applicant tracking system (ATS), online recruitment platform or vendor management system (VMS). Instead, Google is offering “Cloud Jobs API,” which allows workforce technology developers to integrate robust machine learning features into their systems.

What Is Cloud Jobs API?

Google is promoting its targeted API as a powerful job search and discovery platform dedicated to improving processes in the talent industry. Machine learning forms the core of the technology and its potential boon to staffing professionals. Google describes the innovation on its site:

Company career sites, job boards and applicant tracking systems can improve candidate experience and company hiring metrics with job search and discovery powered by sophisticated machine learning. The Cloud Jobs API provides highly intuitive job search that anticipates what job seekers are looking for and surfaces targeted recommendations that help them discover new opportunities.
To ensure that users receive the most relevant search results and recommendations, the API relies on Google’s advances in machine learning to understand how job titles and skills correlate. It then compiles the data to determine the closest match between job content, location and seniority.
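To make the description above concrete, here is a hypothetical search payload of the kind a career site might send to such a service. The field names are invented for this sketch and are not the documented Cloud Jobs API schema.

```python
import json

# Hypothetical search payload a career site might send to a job-search
# backend. Field names are illustrative only, not the real API schema.
def build_search_request(query, location, distance_miles):
    """Assemble a job-search request with a free-text query and a
    location filter, the two inputs the article highlights."""
    return {
        "query": query,                          # job seeker's free text
        "location_filter": {
            "name": location,                    # city, region, address...
            "distance_in_miles": distance_miles, # commute-style radius
        },
        "enable_broadening": True,               # let the service relax filters
    }

request = build_search_request("software engineer", "Bay Area", 25)
print(json.dumps(request, indent=2))
```

The point of the sketch is that the caller only states intent (query, location, radius); the matching between titles, skills and seniority happens server-side.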

The Value of Cloud Jobs API

Meaningful Search Results

When Google first launched its search functionality, it revolutionized the Internet. It masked cutting-edge algorithms in a simple interface, which delivered the most comprehensive and relevant results available — and disrupted an entire industry. Google is no longer just a technology solutions provider; it has become a cultural institution. Now, it’s expanding that reach into our industry.
Recruiters today still face challenges with cumbersome job boards, muddled sourcing systems and attempts to cull vital information from a lot of digital noise. That’s precisely what Google intends to help them overcome.
As Google explains: “Job postings are often worded in industry- and company- specific jargon that job seekers don’t search for. Cloud Jobs API provides intuitive job search that surfaces relevant opportunities by leveraging a complete graph of how job titles, skills, and seniority relate to one another.”

Dynamic Job Discovery

The inherent machine learning in Cloud Jobs API promises to bring big data directly to users, while compacting it into right-sized data. Google’s platform will monitor job searching behavior and the data present in career path progressions to suggest opportunities. It will also recommend additional roles that are aligned to a job seeker’s skill sets and interests. As the API collects more information, it can render more accurate assessments.
Because cultural fit has taken precedence with candidates, recruiters, hiring managers and contingent workforce leaders alike, Cloud Jobs API could streamline the process of creating ideal matches.

Simple Integration and Other Features

Simplicity in user functionality has always been a hallmark of Google. The company assures staffing industry technology providers that integrating the API is a fairly effortless undertaking. Despite this ease, however, the API will continue to support a wide range of critical features.
  • Synonym and acronym expansion: The system can incorporate relevant results even when job postings are written in company specific terms, industry jargon or acronyms that candidates may not be familiar with.
  • Job enrichment: Results are optimized with enhanced content that includes additional details about location, employment type, benefits and more.
  • Geographical data: Tapping into the power of Google’s robust geolocation systems, Cloud Jobs API will interpret countless forms of location data to help users refine their searches. “From street address to colloquial regions (Bay Area, Research Triangle) to precise geo-coordinates,” Google writes, the platform will enable “fine grained job filtering based on distance and commute times.”
  • Seniority alignment: Google says the API understands the seniority requirements of positions and returns only the relevant results.
  • Dynamic recommendations: The API encourages users to mark which jobs they liked and which they considered poor fits. The “recommendation engine” records these inputs and factors them into future searches for optimized suggestions.
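The first feature above, synonym and acronym expansion, can be pictured with a toy lookup table. The mapping below is hand-written purely for illustration; the real service learns these relationships from data rather than from a fixed table.

```python
# Toy version of synonym/acronym expansion. The entries are invented
# examples, not the API's actual learned vocabulary.
SYNONYMS = {
    "pt": ["part time", "part-time"],
    "rn": ["registered nurse"],
    "swe": ["software engineer"],
}

def expand_query(query):
    """Return the original query plus any known expansions, so the
    search can match postings written with either wording."""
    return [query] + SYNONYMS.get(query.lower(), [])

print(expand_query("PT"))  # the search would then match all variants
```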

How Does Cloud Jobs API Work?

Ontological Structure


“At the heart of Cloud Jobs API,” Google explains, “there are two main proprietary ontologies that encode knowledge about occupations and skills, as well as relational models between these ontologies.” For the sake of avoiding confusing tech terms, let’s just call these “ontologies” data models.
The first data model deals with occupation. It includes about 30 general job categories (e.g., accounting and finance, human resources, hospitality, etc.), over 1,000 occupational families (e.g., database administrators), and 250,000 specific job titles.
The second data model focuses on skills. Google claims that it can define and organize about 50,000 hard and soft skill sets with different types of relationships. Basically, it seems to produce an analogy along the lines of “this hard skill is related to this soft skill.” Or, as an example, we could say “customer service is related to strong interpersonal communication skills.”
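The two data models might be pictured, very loosely, like this. Every entry below is illustrative and not taken from Google's actual ontologies.

```python
# Loose sketch of the two "data models": a three-level occupation
# hierarchy and a skills graph. All entries are invented examples.
OCCUPATIONS = {
    "Accounting and Finance": {                  # ~30 broad categories
        "Database Administrators": [             # ~1,000 occupation families
            "Oracle Database Administrator",     # ~250,000 specific titles
            "SQL Server DBA",
        ],
    },
}

# Skill relationships, e.g. linking a hard skill to a soft skill.
SKILL_RELATIONS = {
    "customer service": ["interpersonal communication"],
}

def related_skills(skill):
    """Look up skills related to the given one, if any are known."""
    return SKILL_RELATIONS.get(skill, [])

print(related_skills("customer service"))
```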

Machine Learning

Of course, the breakthrough of Cloud Jobs API comes from access to Google’s machine learning algorithms, which play a predominant role in driving the performance of talent acquisition strategies. The first step involves cleansing job titles. Google’s API is programmed to standardize the titles it sees, which represents a huge benefit to job seekers and job posters.
The API initially removes language not directly related to the occupation’s definition. This includes location, employment type, salary details, company names, advertising phrases and “administrative jargon.”
Next, the API takes the scrubbed job descriptions and attempts to recognize actual occupations from vague expressions. Google illustrates the process with an example: “The title ‘retail sales’ maps to the broad category ‘Sales and Retail,’ while a ‘flooring installer’ job title encompasses ‘carpet installer,’ ‘floor layer,’ and ‘tile and marble setter’ occupation families.” The system can detect titles that are not occupations and assign a confidence score to the mappings.
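The cleansing step can be imagined as a series of pattern removals. The patterns below are invented stand-ins for whatever Google's system actually strips; they exist only to show the shape of the idea.

```python
import re

# Sketch of the title-cleansing step: strip location, employment type
# and advertising language before any occupation mapping happens.
# The patterns are illustrative, not the API's real rules.
NOISE_PATTERNS = [
    r"\b(remote|part[- ]time|full[- ]time)\b",   # employment type
    r"\burgent(ly)?( hiring)?\b",                # advertising phrases
    r"-\s*\w+,\s*[A-Z]{2}$",                     # trailing "City, ST"
]

def cleanse_title(raw_title):
    """Remove non-occupation language and tidy up the remainder."""
    title = raw_title
    for pattern in NOISE_PATTERNS:
        title = re.sub(pattern, "", title, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", title).strip(" :-")

print(cleanse_title("Urgent hiring: Retail Sales - Part-Time - Austin, TX"))
# → Retail Sales
```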

After these processes, machine learning takes over and supports the following functions:
  • Identifying an occupation based on a candidate’s search queries and then matching them to the occupation data model with a confidence score.
  • Mapping job titles in postings to relevant roles in the occupation data model with a confidence score.
  • Detecting specific skills in candidate search queries and matching them to values in the skills data model.
  • Extracting relevant skills from job postings and matching them to values in the skills data model.
  • Computing the relationship between occupations and skills.
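The mapping-with-a-confidence-score idea that recurs in the list above can be mimicked with a deliberately crude stand-in: token overlap against a few hand-made occupation families, instead of the API's learned models.

```python
# Stand-in for title-to-occupation mapping with a confidence score.
# The families and keywords are invented for the example.
FAMILIES = {
    "Carpet Installer": {"carpet", "installer"},
    "Floor Layer": {"floor", "layer"},
    "Tile and Marble Setter": {"tile", "marble", "setter"},
}

def map_title(title):
    """Return (best-matching family, confidence in [0, 1])."""
    tokens = set(title.lower().split())
    best, best_score = None, 0.0
    for family, keywords in FAMILIES.items():
        score = len(tokens & keywords) / len(keywords)  # fraction matched
        if score > best_score:
            best, best_score = family, score
    return best, best_score

print(map_title("flooring installer"))  # partial match, lower confidence
```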

Who’s Already Using Cloud Jobs API?

According to ERE Editor-in-Chief Todd Raphael, three big industry firms have already incorporated the API into their systems: Dice, CareerBuilder and Jibe. In his article, Todd provides insights from this early batch of Google partners. The results enjoyed by CareerBuilder are particularly compelling and seem to hint at great potential for the API.
Dice. The company says it was selected “specifically for its technology focus and position in the technology community.”
CareerBuilder. According to Google, CareerBuilder built a prototype in just 48 hours and found improved, more accurate results compared with its existing search algorithm. Google cites a search for “part time,” whose results are “richer” because of synonyms such as “PT.” CareerBuilder appears to be testing the tool for now, with plans to expand it to more of its customers later.
Jibe. CEO Joe Essenfeld says, “With the launch of Google Cloud Jobs API, Google machine learning will become the standard for career sites,” and the API is a cornerstone of the company’s candidate experience platform. Jibe says career site/ATS users — job candidates, in other words — will get better search results as Jibe integrates Google with career sites.

A New, Data-Driven Frontier for Hiring and Staffing

Google’s Cloud Jobs API is definitely a gift to staffing industry professionals and candidates. Workforce solutions providers will be able to integrate the platform into their own systems to create a more vivid way to hire exceptional talent. However, I believe Google’s approach has accomplished something even more important.
Staffing technology companies are facing fierce competition these days. Big VMS providers, for example, are acquiring other tech offerings and growing to levels where smaller players struggle to stay afloat. Meanwhile, lean and savvy startups have circulated new offerings into the market, such as freelance management systems, online recruitment platforms and a variety of tools that target niche groups. Existing workforce technology providers are scrambling to incorporate the latest modules and remain relevant.
Google probably could have built its own machine and just dominated the space. It didn’t. Instead, Google chose to become a great equalizer. The API gives tech players of all sizes the chance to benefit from its enhanced functionality. It will be exciting to see how this offering helps reshape our industry in the coming months and drives performance to unprecedented heights.

Read more at http://www.business2community.com/human-resources/google-brings-machine-learning-staffing-industry-01709535#RoIu0yk4wqLPxpoo.99

Friday, November 25, 2016

These jobs will soon be done by artificial intelligence

Artificial intelligence will increasingly take over the work of university-educated professionals

Robots are replacing humans ever more often, and not just on the assembly line. Engineers, doctors and journalists are getting competition too. The march of the machines is changing the world of work.
At first glance, the coming generation of the Airbus A320 airliner looks like a perfectly ordinary aircraft. But behind the plastic cladding of the rear cabin partition hides a surprise: a strange pattern of dozens of interwoven struts.
The structure looks as if it had been spun by a spider; it could also serve as a set piece in a science-fiction film about aliens, or as an exhibit in a museum of very abstract art. No ordinary person would guess that such a structure could form part of an aircraft. And probably no human engineer would ever think of making a load-bearing component so complex.
That is exactly the plan. What is being tested in the new Airbus is in fact not the work of human hands, but a design conceived beforehand by an artificial intelligence. It is, if you like, the next step toward Industry 4.0.
This partition was devised by a computer. It has many advantages
Source: Airbus
A world in which machines no longer merely help, as robots, to paint car parts or assemble machinery, to speed up assembly-line work or to handle complex database queries, but in which software, to a far greater extent than before, takes over the thinking, and with it the inventing.

In the US, 47 per cent of jobs could disappear

Should the vision become reality, the new industrial revolution would no longer replace humans with machines only in production, as has so far been the case in many industries. Many supposedly secure graduate jobs would suddenly face unexpected competition: from a colleague who never tires, never sleeps and is never in a bad mood.
In the US alone, 47 per cent of jobs could be digitized away in future, a recent study by Oxford University found. The further the cost of industrial robots falls and the more powerful algorithms become, the more sweeping the revolution in the world of work is likely to be.
For Carl Bass, this world opens up entirely new possibilities. Bass is CEO of the software company Autodesk. His company grew big developing so-called CAD design software; the abbreviation stands for computer-aided design. Until now, his software helped human engineers turn their ideas into products.
Now Autodesk's developers are going a decisive step further: their software designs the products itself. "This is a fundamental change in how we solve complex problems," Bass says of his approach in conversation with Die Welt. "Instead of designing things ourselves, we simply specify requirements and constraints. The artificial intelligence then delivers a solution that is very close to the absolute achievable optimum."
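The "specify requirements and constraints, let the algorithm search" workflow Bass describes can be sketched as a toy optimization. The cost and strength formulas below are invented for the example; real generative design uses far richer physical models and search strategies.

```python
import random

# Toy version of generative design: random search over candidate panel
# designs, minimizing weight subject to a strength requirement.
# The weight and strength formulas are invented for this sketch.
random.seed(0)

def weight(thickness, ribs):
    return thickness * 10 + ribs * 2        # lighter is better

def strength(thickness, ribs):
    return thickness * 5 + ribs * 4         # must meet the requirement

def search(min_strength, trials=10000):
    """Return the lightest sampled design meeting the strength bound."""
    best = None
    for _ in range(trials):
        t = random.uniform(0.5, 5.0)        # candidate thickness
        r = random.randint(0, 40)           # candidate rib count
        if strength(t, r) >= min_strength:
            if best is None or weight(t, r) < weight(*best):
                best = (t, r)
    return best

t, r = search(min_strength=100)
print(round(weight(t, r), 1), round(strength(t, r), 1))
```

The engineer states only the constraint (`min_strength`); the search explores designs a human would not bother to enumerate, which is the essence of the Airbus partition story.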

The computer-designed structure saves weight

The partition in the new Airbus is one such example for him. The tangled spider's web of struts was designed by a computer. Compared with the classic human-made design, the computer-devised variant saves over 40 per cent in weight, and yet proved significantly stronger and more resilient in structural tests. Too good to be true? Or perhaps rather: not good at all, should it come true?
What is certain is that in future it will probably not be only engineers who have to learn to work with algorithms. Doctors, programmers, journalists, bankers, managers: many professions whose members have so far considered themselves irreplaceable because of their high qualifications will in future hand over at least part of their tasks to algorithms, says Bob Lord, Chief Digital Officer of IBM, in conversation with Die Welt. "Wherever patterns are sought in complex data collections, algorithms may be able to solve problems more efficiently than humans."
Lord does not want to stoke fears, which may be why he adds that progress in software does not mean that computers will decide on their own in future. On the contrary: such cognitive computers are intended as "assistants to humans." IBM has such an assistant in its line-up, fittingly named "Watson" after the sidekick of the famous fictional detective Sherlock Holmes. Its creators intend Watson to help doctors make treatment decisions.

"Watson" can compare patient data with studies

The thinking behind it is that no human can keep track of all the patient data and studies on a disease such as breast cancer. Watson, by contrast, having reviewed all the available data and studies, keeps all of these findings in view and, after examining a patient's digital medical record, can recommend individual treatment plans, relieving and supporting the doctors in the hospital.
At an internet conference in Jiaxing, China, the advantages of the Watson system were presented in mid-November
Source: VCG via Getty Images/Visual China Group
IBM's algorithms are not used only in medicine. They are also employed in production planning at factories of the agricultural machinery maker John Deere, in online marketing for the outdoor brand The North Face, and in monitoring compliance rules in the financial industry: all tasks previously carried out by highly qualified managers. Even so, Bob Lord sees no threat to traditional graduate jobs: "The same was claimed when computers were introduced in offices. I believe the technology is more likely to create new jobs."
A recent study by the consultancy Accenture suggests Bob Lord's prediction may well prove right. In the medium term, our economy should grow significantly faster thanks to artificial intelligence. Because software takes over many routine activities, people could in future concentrate on tasks with higher added value.

Routine graduate work will soon be done by computers

Even so, many jobs will change completely. Anyone who currently performs routine graduate-level work, in whatever industry, must expect their skills to lose value on the labour market. Or even that one day algorithms could replace the entire university-educated middle tier of companies. The world of work as we know it today would then be history.
Also affected are activities in which the use of artificial intelligence seemed impossible until recently. Aircraft construction at Airbus stands in here for many industries. Instead of drawing designs and building models like a human engineer, the software tries out millions of possible solutions and gradually homes in on an optimum. The result is so complex that it can only be produced with a metal-powder 3D printer. Airbus is now testing the finished component in the air.
"Dozens of companies are currently working on how they will use this technology," says Autodesk chief Carl Bass, showing further strangely organic-looking designs produced by the algorithm: a simple heat exchanger that, thanks to the complex winding of its internal coolant channels, is significantly more efficient than human designs; or a chair optimized by artificial intelligence that is considerably lighter and more comfortable than the original design.
"Human engineers will take on new tasks in future. They will have to specify which components they need, but the actual design will be done by the artificial intelligence," says Bass. He is convinced: "The AI revolution has already begun. It will create hundreds of thousands of new jobs, but it will also change or render obsolete millions of jobs."

Artificial intelligences are always fully attentive

No doctor is equally attentive at all times and for every patient, no manager always plans with equal care, no engineer puts the brainpower for dozens of alternative designs into utterly routine products. The algorithms, by contrast, work with the same unwavering drive for perfection every time. That alone is an argument for using them.
Autodesk's engineering software is just one sign that artificial intelligence could trigger the next revolution in the labour market. "The university-educated middle class is facing an upheaval," Bass is convinced. "Not only simple but also highly qualified activities could in future be taken over by artificial intelligence."
That includes not only engineering tasks or management jobs but even creative work: Phillip Renger of the Stuttgart software specialist AX Semantics is openly promoting an algorithm that could in future completely replace journalists and translators.

Journalistic work will disappear

"Football match reports, weather reports and stock-market texts can already be written entirely by our software," Renger explains. AX Semantics specializes in building multilingual authoring software. The program writes any text that can be produced purely on the basis of data analysis.
"The algorithm creates entirely new possibilities for online publishers. It can, for example, write individual match reports for supporters of different football clubs, or formulate product descriptions for online shops tailored to the individual customer," says Renger.
For him the advantages are obvious: the software never gets tired and never phrases anything sloppily. It makes no spelling mistakes, and is just as focused and intent on perfection for the thousandth DAX market report as for the first.
Bass is not yet willing to go quite that far, despite his software's daring aircraft design. He is convinced that computers still cannot replace human creativity: "Designs for completely new products, for which no blueprint and no requirements profile yet exist, will continue to come from human designers." For now.

Thursday, November 24, 2016

This 'major flaw' has been discovered in the 66-year-old Turing test

Artificial intelligence writes a book

Coventry University study discovers 'flaw' in famous test designed to distinguish humans from machines.

The Turing test, developed by legendary computer scientist Alan Turing and used to test the artificial intelligence of computers, has a major flaw.
A new study, published in the Journal of Experimental and Theoretical Artificial Intelligence, points out that the test, which was devised in the 1950s, could be successfully passed if the computer pleaded the Fifth Amendment and remained silent.
Authors Kevin Warwick and Huma Shah from Coventry University argue that a machine could plausibly pass the test by saying very little, or nothing at all. Previous attempts at passing the Turing test have seen computers pretend to be children, but no matter how good their general knowledge, they often fail to convince a human interacting with them (over typed messages) that they too are human.
A critical point raised by the study is how the Turing test is based around a machine being discovered as a human based on what it does wrong, rather than what it does right. Its authors argue that a machine could know it isn't smart enough to act as a human, so pleads the Fifth Amendment as a way to hide its inabilities.
Warwick said: "This begs the question, what exactly does it mean to pass the Turing test? Turing introduced his imitation game as a replacement for the question 'Can machines think?' and the end conclusion of this is that if an entity passes the test then we have to regard it as a thinking entity."
However, Warwick argues, if artificial intelligence can pass the test by remaining silent, just as a human could choose to do in the same situation, this "cannot be seen as an indication it is a thinking entity, otherwise objects such as stones or rocks...could pass the test."
It isn't possible, the authors argue, for a human to determine whether they are talking to another human, a computer or a stone wall if they receive no reply. It could be a wall, a human choosing to say nothing, or a computer pretending to be human by pleading the Fifth and saying no more.
To conclude, Warwick says that 'taking the Fifth' "fleshes out a serious flaw in the Turing test."

Wednesday, November 23, 2016

Google’s DeepMind AI can lip-read TV shows better than a pro

Watch my lips


Artificial intelligence is getting its teeth into lip reading. A project by Google’s DeepMind and the University of Oxford applied deep learning to a huge data set of BBC programmes to create a lip-reading system that leaves professionals in the dust.
The AI system was trained using some 5,000 hours of footage from six different TV programmes, including Newsnight, BBC Breakfast and Question Time. In total, the videos contained 118,000 sentences.
First the University of Oxford and DeepMind researchers trained the AI on shows that aired between January 2010 and December 2015. Then they tested its performance on programmes broadcast between March and September 2016.
By only looking at each speaker’s lips, the system accurately deciphered entire phrases, with examples including “We know there will be hundreds of journalists here as well” and “According to the latest figures from the Office of National Statistics”.


[Video: a clip from the database without subtitles, followed by the same clip with subtitles provided by the AI system.]

AI shows the way

The AI vastly outperformed a professional lip-reader who attempted to decipher 200 randomly selected clips from the data set.
The professional annotated just 12.4 per cent of words without any error. But the AI annotated 46.8 per cent of all words in the March to September data set without any error. And many of its mistakes were small slips, like missing an ‘s’ at the end of a word. With these results, the system also outperforms all other automatic lip-reading systems.
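The "words annotated without any error" figures can be read as a simple word-level accuracy. The sketch below aligns words by position for simplicity; real evaluations typically use an edit-distance-based word error rate, and the example sentences are invented.

```python
# Word-level accuracy: the fraction of reference words the transcriber
# got exactly right. Position-by-position alignment keeps it simple.
def word_accuracy(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    correct = sum(r == h for r, h in zip(ref, hyp))
    return correct / len(ref)

ref = "according to the latest figures"
hyp = "according to the latest figure"   # a small slip: missing 's'
print(word_accuracy(ref, hyp))  # → 0.8
```

Under a measure like this, the professional's 12.4 per cent and the AI's 46.8 per cent are directly comparable, and "small slips" such as a dropped 's' still count against the score.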
“It’s a big step for developing fully automatic lip-reading systems,” says Ziheng Zhou at the University of Oulu in Finland. “Without that huge data set, it’s very difficult for us to verify new technologies like deep learning.”
Two weeks ago, a similar deep learning system called LipNet – also developed at the University of Oxford – outperformed humans on a lip-reading data set known as GRID. But where GRID only contains a vocabulary of 51 unique words, the BBC data set contains nearly 17,500 unique words, making it a much bigger challenge.
In addition, the grammar in the BBC data set comes from a wide diversity of real human speech, whereas the grammar in GRID’s 33,000 sentences follows the same pattern and so is far easier to predict.
The DeepMind and Oxford group says it will release its BBC data set as a training resource. Yannis Assael, who is working on LipNet, says he is looking forward to using it.

Lining up the lips

To make the BBC data set suitable for automatic lip reading in the study, video clips had to be prepared using machine learning. The problem was that the audio and video streams were sometimes out of sync by almost a second, which would have made it impossible for the AI to learn associations between the words said and the way the speaker moved their lips.
But by assuming that most of the video was correctly synced to its audio, a computer system was taught the correct links between sounds and mouth shapes. Using this information, the system figured out how much the feeds were out of sync when they didn’t match up, and realigned them. It then automatically processed all 5000 hours of the video and audio ready for the lip-reading challenge – a task that would have been onerous by hand.
The question now is how to use AI’s new lip-reading capabilities. We probably don’t need to fear computer systems eavesdropping on our conversations by reading our lips because long-range microphones are better for spying in most situations.
Instead, Zhou thinks lip-reading AIs are most likely to be used in consumer devices to help them figure out what we are trying to say.
“We believe that machine lip readers have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces (Siri will never have to hear your voice again) and speech recognition in noisy environments,” says Assael.

Monday, November 21, 2016

Intel lays out its AI strategy until 2020



Intel has flexed its AI muscles and beefed up its services with a bunch of new products and collaborations, in an effort to adapt to the technological upheaval of intelligent software.
At Intel’s first “AI Day” in San Francisco, Brian Krzanich, CEO, said the company is “continuing to evolve” and working to provide an “end-to-end AI solution” to allow companies to easily integrate intelligence into their infrastructures.
As data generated by companies continues to pile up, the interest in analyzing that data using machine learning and AI has been piqued. The largest technology companies are all making big investments and staking their claims in AI.
But while companies such as Google and Microsoft have developed libraries of machine learning tools such as TensorFlow and Cognitive Toolkit, Intel is more focused on updating servers to cope with the intense computation required to process and train AI systems.
It has a long history in the semiconductor business, and is making a bold move to retain its "Chipzilla" status by developing its own AI chips. Unlike its main rival Nvidia, Intel has decided to steer clear of GPUs (graphics processing units) – an area where it has little influence – and is instead offering its “Lake Crest” chips, which will be available in 2017.
A more powerful chip code-named “Knights Crest” is also in development to be integrated with its Xeon processor series. Both chips are geared toward powering neural networks for deep learning, and promise to optimize performance.
Diane Bryant, EVP and GM of the Data Center Group at Intel, announced that Intel plans to slash the time it takes to train neural networks 100-fold by 2020 through its chips and processors.
It’s an ambitious goal, and heavily supported by Nervana – a deep learning company that Intel acquired in August.
A preliminary version of Intel’s latest Xeon processor, known as “Skylake,” has begun shipping to select cloud service providers. The next upgrade to its Xeon Phi processors code-named “Knights Mill” will be available in the next year.
With the help of Google, Intel is moving into the cloud space, as both companies announced a partnership to adapt Google’s open-source container cluster system Kubernetes and its TensorFlow machine learning library to Intel’s architecture. Working together will also strengthen security between Intel’s IoT products and Google Cloud, Bryant said.
Following the lead of other AI companies, Intel also announced it had launched an AI strategy board made up of four researchers: Yoshua Bengio (University of Montreal), Bruno Olshausen (University of California, Berkeley), Jan Rabaey (University of California, Berkeley) and Ron Dror (Stanford University).
To do its part in “democratizing” AI, Intel has partnered with Coursera, an online education website originally set up by Andrew Ng, Chief Scientist at Baidu Research, and Daphne Koller, AI researcher at Stanford University, to bring a range of AI courses to the public.
“Intel can offer crucial technologies to drive the AI revolution, but ultimately we must work together as an industry – and as a society – to achieve the ultimate potential of AI,” said Doug Fisher, SVP and GM of the Software and Services Group at Intel.

Sunday, November 20, 2016

DEEP LEARNING 101 WITH ORIOL VINYALS, RESEARCH SCIENTIST AT DEEPMIND

Original

In the last week alone DeepMind have hit the headlines for teaching computers to dream and encouraging AI to play like children, all to improve the learning capabilities and intelligence of machines. The company has become renowned for its huge successes in the AI world, achieved on seemingly simple tasks like teaching computers to play games, which can have impressive real-world applications - such as their work with the NHS to fight blindness.
Oriol Vinyals is a Research Scientist at DeepMind, having previously worked with the Google Brain team. This year, MIT Technology Review named him one of its 35 Innovators Under 35 for his pioneering work creating new techniques for language translation and pushing the edge of science.
At the 2016 Deep Learning Summit in London, Oriol presented 'Generative Models 101', exploring how generative models can help guide our intuitions towards better architectures for text, images and beyond. We caught up with him at the summit to learn more; view his interview with Nathan Benaich of Playfair Capital below.

Couldn't attend the Deep Learning Summit in London? You can still view presentations, slides and interviews from the summit - purchase post-event presentation access here.