Highlight

The fun of using Viettel's artificial intelligence

Internet users in Vietnam often turn to “Ms. Google” for… entertainment. When “she” reads text aloud or gives directions to people taking part in…

Saturday, May 27, 2017

Google is reportedly launching yet another venture group to invest in A.I.

Google has established a new organization to invest in artificial intelligence startups, according to a new report. The new effort shows Google taking its experience with venture capital and applying it to AI, a type of computing that it has been increasingly using across its applications.
The new organization will be separate from Google parent company Alphabet's funding activity within GV (formerly Google Ventures) and CapitalG (formerly Google Capital), Axios reported on Friday. Google has also made venture investments directly.
Google chief executive Sundar Pichai in the past year has started positioning the company as "AI first" as opposed to "mobile first." Having an investment group focused on AI makes sense given that thesis.
Google has dedicated AI research groups including DeepMind, whose AlphaGo AI Go player earlier this week beat the top human Go player, Ke Jie. Google uses AI in its search engine, Gmail, Google Maps, Google Photos, and Google Translate, among other things.
Startups that take funding from the new group can receive mentorship, as well as services and space in which to operate, Axios said.
AI startups have many places to go to for money, including other corporate venture arms and institutional venture capital firms. But having backing from the new group could provide a particular mark of legitimacy at a time when so many startups today claim to be using AI.
Google declined to comment.

Apple Is Working on a Dedicated Chip to Power AI on Devices

Apple Inc. got an early start in artificial intelligence software with the 2011 introduction of Siri, a tool that lets users operate their smartphones with voice commands.
Now the electronics giant is bringing artificial intelligence to chips.
Apple is working on a processor devoted specifically to AI-related tasks, according to a person familiar with the matter. The chip, known internally as the Apple Neural Engine, would improve the way the company’s devices handle tasks that would otherwise require human intelligence -- such as facial recognition and speech recognition, said the person, who requested anonymity discussing a product that hasn’t been made public. Apple declined to comment.
Engineers at Apple are racing to catch their peers at Amazon.com Inc. and Alphabet Inc. in the booming field of artificial intelligence. While Siri gave Apple an early advantage in voice-recognition, competitors have since been more aggressive in deploying AI across their product lines, including Amazon’s Echo and Google’s Home digital assistants. An AI-enabled processor would help Cupertino, California-based Apple integrate more advanced capabilities into devices, particularly cars that drive themselves and gadgets that run augmented reality, the technology that superimposes graphics and other information onto a person’s view of the world.
“Two of the areas that Apple is betting its future on require AI," said Gene Munster, former Apple analyst and co-founder of venture capital firm Loup Ventures. “At the core of augmented reality and self-driving cars is artificial intelligence.”

Improved Performance

Apple devices currently handle complex artificial intelligence processes with two different chips: the main processor and the graphics chip. The new chip would let Apple offload those tasks onto a dedicated module designed specifically for demanding artificial intelligence processing, allowing Apple to improve battery performance.
Should Apple bring the chip out of testing and development, it would follow other semiconductor makers that have already introduced dedicated AI chips. Qualcomm Inc.’s latest Snapdragon chip for smartphones has a module for handling artificial intelligence tasks, while Google announced its first chip, called the Tensor Processing Unit (TPU), in 2016. That chip worked in Google’s data centers to power search results and image-recognition. At its I/O conference this year, Google announced a new version that will be available to clients of its cloud business. Nvidia Corp. also sells a similar chip to cloud customers.
The Apple AI chip is designed to make significant improvements to Apple’s hardware over time, and the company plans to eventually integrate the chip into many of its devices, including the iPhone and iPad, according to the person with knowledge of the matter. Apple has tested prototypes of future iPhones with the chip, the person said, adding that it’s unclear if the component will be ready this year.
Apple’s operating systems and software features would integrate with devices that include the chip. For example, Apple has considered offloading facial recognition in the photos application, some parts of speech recognition, and the iPhone’s predictive keyboard to the chip, the person said. Apple also plans to offer developer access to the chip so third-party apps can also offload artificial intelligence-related tasks, the person said.

Developer Conference

Apple may choose to discuss some of its latest advancements in AI at its annual developers conference in June. At the same conference, Apple plans to introduce iOS 11, its new operating system for iPhones and iPads, with an updated user interface, people with knowledge of the matter said last month. The company is also expected to discuss updated laptops with faster chips from Intel Corp.
An AI chip would join a growing list of processors that Apple has created in-house. The company began designing its own main processors for the iPhone and iPad in 2010 with the A4 chip. It has since released dedicated processors to power the Apple Watch, the motion sensors across its products, the wireless components inside of its AirPods, and the fingerprint scanner in the MacBook Pro. The company has also tested a chip to run the low-power mode on Mac laptops.
In 2015, Bloomberg reported that Apple’s culture of secrecy stymied the iPhone maker’s ability to attract top AI research talent. Since then, Apple has acquired multiple companies with deep ties to artificial intelligence, has begun publishing papers related to AI research, has joined a key research group and has made hires from the field. In October 2016, Apple hired Russ Salakhutdinov from Carnegie Mellon University as its director of AI research.

Curious AI learns by exploring game worlds and making mistakes



I wonder what will happen if I press this button? Algorithms armed with a sense of curiosity are teaching themselves to discover and solve problems they’ve never encountered before.
Faced with level one of Super Mario Bros, a curiosity-driven AI learned how to explore, avoid pits, and dodge and kill enemies. This might not sound impressive – algorithms have been thrashing humans at video games for a few years now – but this AI's skills were all learned thanks to an inbuilt desire to discover more about the game world.
Conventional AI algorithms are taught through positive reinforcement. They are rewarded for achieving some kind of external goal, like upping the score in a video game by one point. This encourages them to perform actions that increase their score – such as stomping on enemies in the case of Mario – and discourages them from performing actions that don’t increase the score, like falling into a pit.
This type of approach, called reinforcement learning, was used to create AlphaGo, the Go-playing computer from Google DeepMind that beat Korean master Lee Sedol by four games to one last year. Over thousands of real and simulated games, the AlphaGo algorithm learned to pursue strategies that led to the ultimate reward: a win.
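The reward loop described above can be sketched with tabular Q-learning on a hypothetical one-dimensional "level" (the states, rewards, and hyperparameters here are illustrative toys, not AlphaGo's actual training setup):

```python
import random

random.seed(0)

# States 0..5: state 5 is the goal (+1 reward), state 0 is a pit (-1).
# Actions move left (-1) or right (+1). Rewarded actions are reinforced.
N_STATES, ACTIONS = 6, [-1, +1]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    if s2 == N_STATES - 1:
        return s2, 1.0, True    # reached the goal: positive reward
    if s2 == 0:
        return s2, -1.0, True   # fell into the pit: negative reward
    return s2, 0.0, False

for _ in range(500):            # training episodes, starting mid-level
    s, done = 2, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)              # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, moving toward the goal is preferred mid-level:
assert Q[(2, +1)] > Q[(2, -1)]
```

The same external-reward principle scales from this toy chain up to Go: only the size of the state space and the function approximating Q change.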
But the real world isn’t full of rewards, says Deepak Pathak, who led the study at the University of California, Berkeley. “Instead, humans have an innate curiosity which helps them learn,” he says, which may be why we are so good at mastering a wide range of skills without necessarily setting out to learn them.
So Pathak set out to give his own reinforcement learning algorithm a sense of curiosity to see whether that would be enough to let it learn a range of skills. Pathak’s algorithm experienced a reward when it increased its understanding of its environment, particularly the parts that directly affected it. So, rather than looking for a reward in the game world, the algorithm was rewarded for exploring and mastering skills that led to it discovering more about the world.
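Pathak's intrinsic reward can be sketched as the prediction error of a model the agent fits to its own experience: transitions the model cannot yet predict are "surprising" and therefore rewarding. The linear model and toy feature vector below are assumptions for illustration; the actual work learns deep feature representations of the game screen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: predicts the next state from (state, action) features.
W = rng.normal(size=(4, 4)) * 0.1  # hypothetical linear model weights

def curiosity_reward(state, action, next_state, lr=0.01):
    """Intrinsic reward = squared prediction error; the model also learns."""
    global W
    features = np.concatenate([state[:2], action[:2]])  # toy feature vector
    predicted = W @ features
    error = next_state - predicted
    W += lr * np.outer(error, features)  # gradient step: reduce the error
    return float(np.sum(error ** 2))     # reward = how surprised we were

state, action, next_state = (rng.normal(size=4) for _ in range(3))
r1 = curiosity_reward(state, action, next_state)
r2 = curiosity_reward(state, action, next_state)
# Revisiting an already-seen transition is less surprising, so less rewarding:
assert r2 < r1
```

Because the reward shrinks wherever the model is already accurate, the agent is pushed toward unexplored parts of the world rather than toward any external score.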
This type of approach can speed up learning times and improve the efficiency of algorithms, says Max Jaderberg at Google’s AI company DeepMind. The company used a similar technique last year to teach an AI to explore a virtual maze. Its algorithm learned much more quickly than conventional reinforcement learning approaches. “Our agent is far quicker and requires a lot less experience from the world to train, making it much more data efficient,” he says.

Fast learner

Imbued with a sense of curiosity, Pathak's own AI learned to stomp on enemies and jump over pits in Mario, and also learned to explore faraway rooms and walk down hallways in another game similar to Doom. It was also able to apply its newly acquired skills to further levels of Mario despite never having seen them before.
But curiosity could only take the algorithm so far in Mario. On average, it explored only 30 per cent of level one as it couldn’t find a way past a series of pits that could only be overcome through a sequence of more than 15 button presses. Rather than jump to its death, the AI learned to turn back on itself and stop when it reached that point.
The AI may have been flummoxed because it had no idea that there was more of the level to explore beyond the pit, says Pathak. It didn't learn to consistently take useful shortcuts in the game either, since they led it to discover less of the level and so didn't satisfy its urge to explore.
Pathak is now working on seeing whether robotic arms can learn through curiosity to grasp new objects. “Instead of it acting randomly, you could use this to help it move meaningfully,” he says. He also plans to see whether a similar algorithm could be used in household robots similar to the Roomba vacuum cleaner.
But Jaderberg isn’t so sure that this kind of algorithm is ready to be put to use just yet. “It’s too early to talk about real-world applications,” he says.

Friday, May 26, 2017

Artificial Intelligence: Authors and titles for recent submissions

Fri, 26 May 2017

[1]  arXiv:1705.09231 [pdf, other]
Neural Attribute Machines for Program Generation
Subjects: Artificial Intelligence (cs.AI); Programming Languages (cs.PL)
[2]  arXiv:1705.09218 [pdf, other]
Finding Robust Solutions to Stable Marriage
Comments: Accepted for IJCAI 2017
Subjects: Artificial Intelligence (cs.AI)
[3]  arXiv:1705.09058 [pdf, other]
An Empirical Analysis of Approximation Algorithms for the Euclidean Traveling Salesman Problem
Comments: 4 pages, 5 figures
Subjects: Artificial Intelligence (cs.AI)
[4]  arXiv:1705.09045 [pdf, other]
Cross-Domain Perceptual Reward Functions
Comments: A shorter version of this paper was accepted to RLDM (this http URL)
Subjects: Artificial Intelligence (cs.AI)
[5]  arXiv:1705.08997 [pdf, other]
State Space Decomposition and Subgoal Creation for Transfer in Deep Reinforcement Learning
Comments: 5 pages, 6 figures; 3rd Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2017), Ann Arbor, Michigan
Subjects: Artificial Intelligence (cs.AI); Learning (cs.LG); Machine Learning (stat.ML)
[6]  arXiv:1705.08968 [pdf, other]
Logic Tensor Networks for Semantic Image Interpretation
Comments: 14 pages, 2 figures, IJCAI 2017
Subjects: Artificial Intelligence (cs.AI)
[7]  arXiv:1705.08961 [pdf, other]
Efficient, Safe, and Probably Approximately Complete Learning of Action Models
Journal-ref: International Joint Conference on Artificial Intelligence (IJCAI) 2017
Subjects: Artificial Intelligence (cs.AI)
[8]  arXiv:1705.08926 [pdf, other]
Counterfactual Multi-Agent Policy Gradients
Subjects: Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)
[9]  arXiv:1705.09279 (cross-list from cs.LG) [pdf, other]
Filtering Variational Objectives
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)
[10]  arXiv:1705.09026 (cross-list from cs.LG) [pdf, ps, other]
Online Edge Grafting for Efficient MRF Structure Learning
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
[11]  arXiv:1705.09011 (cross-list from cs.LG) [pdf, other]
Principled Hybrids of Generative and Discriminative Domain Adaptation
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI)
[12]  arXiv:1705.08982 (cross-list from cs.LG) [pdf, other]
Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks
Comments: Accepted at Thirty-First AAAI Conference on Artificial Intelligence (AAAI17)
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
[13]  arXiv:1705.08927 (cross-list from quant-ph) [pdf, other]
Compiling Quantum Circuits to Realistic Hardware Architectures using Temporal Planners
Journal-ref: related to proceedings of IJCAI 2017, and ICAPS SPARK Workshop 2017
Subjects: Quantum Physics (quant-ph); Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET); Systems and Control (cs.SY)
[14]  arXiv:1705.08804 (cross-list from cs.IR) [pdf, ps, other]
Beyond Parity: Fairness Objectives for Collaborative Filtering
Subjects: Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Learning (cs.LG); Machine Learning (stat.ML)

Thu, 25 May 2017 (showing first 11 of 14 entries)

[15]  arXiv:1705.08844 [pdf, other]
How a General-Purpose Commonsense Ontology can Improve Performance of Learning-Based Image Retrieval
Comments: Accepted in IJCAI-17
Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Information Retrieval (cs.IR)
[16]  arXiv:1705.08807 [pdf, other]
When Will AI Exceed Human Performance? Evidence from AI Experts
Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
[17]  arXiv:1705.08690 [pdf, other]
Continual Learning with Deep Generative Replay
Comments: Submitted to NIPS 2017
Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG)
[18]  arXiv:1705.08520 [pdf]
An effective algorithm for hyperparameter optimization of neural networks
Subjects: Artificial Intelligence (cs.AI); Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
[19]  arXiv:1705.08509 [pdf]
Predictive Analytics for Enhancing Travel Time Estimation in Navigation Apps of Apple, Google, and Microsoft
Subjects: Artificial Intelligence (cs.AI)
[20]  arXiv:1705.08492 [pdf, other]
Uplift Modeling with Multiple Treatments and General Response Types
Subjects: Artificial Intelligence (cs.AI)
[21]  arXiv:1705.08868 (cross-list from cs.LG) [pdf, other]
Flow-GAN: Bridging implicit and prescribed learning in generative models
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)
[22]  arXiv:1705.08850 (cross-list from cs.LG) [pdf, other]
Improved Semi-supervised Learning with GANs using Manifold Invariances
Comments: 16 pages, 7 figures
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
[23]  arXiv:1705.08551 (cross-list from stat.ML) [pdf, other]
Safe Model-based Reinforcement Learning with Stability Guarantees
Subjects: Machine Learning (stat.ML); Artificial Intelligence (cs.AI); Learning (cs.LG); Systems and Control (cs.SY)
[24]  arXiv:1705.08508 (cross-list from cs.CY) [pdf, other]
Vehicle Traffic Driven Camera Placement for Better Metropolis Security Surveillance
Comments: 10 pages, 2 figures, under review for IEEE Intelligent Systems
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT)
[25]  arXiv:1705.08500 (cross-list from cs.LG) [pdf, other]
Selective Classification for Deep Neural Networks
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI)

Monitoring with Artificial Intelligence and Machine Learning

Artificial intelligence and machine learning (AI and ML) are so over-hyped today that I usually don’t talk about them. But there are real and valid uses for these technologies in monitoring and performance management. Some companies have already been employing ML and AI with good results for a long time. VividCortex’s own adaptive fault detection uses ML, a fact we don’t generally publicize.
AI and ML aren’t magic, and I think we need a broader understanding of this. And understanding that there are a few typesof ML use cases, especially for monitoring, could be useful to a lot of people.
Artificial Intelligence and Machine Learning
I generally think about AI and ML in terms of three high-level results they can produce, rather than classifying them in terms of how they achieve those results.

1. Predictive Machine Learning

Predictive machine learning is the most familiar use case in monitoring and performance management today. When used in this fashion, a data scientist creates algorithms that can learn how systems normally behave. The result is a model of normal behavior that can predict a range of outcomes for the next data point to be observed. If the next observation falls outside the bounds, it’s typically considered an anomaly. This is the basis of many types of anomaly detection.
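A minimal sketch of that idea, with a rolling statistical baseline standing in for the learned model of normal behavior (the window size and threshold below are arbitrary illustrative choices, not any vendor's actual algorithm):

```python
import statistics

def detect_anomalies(series, window=20, k=3.0):
    """Flag points outside mean +/- k*stdev of the trailing window.

    The trailing window plays the role of the 'model of normal behavior':
    it predicts a range for the next observation, and points outside that
    range are treated as anomalies.
    """
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(series[i] - mean) > k * stdev:
            anomalies.append(i)
    return anomalies

# A noisy-but-steady metric with one spike at index 30:
metric = [100.0, 101.0, 99.0, 100.5, 99.5] * 5 \
    + [100.0, 101.0, 99.0, 100.5, 99.5, 400.0, 100.0]
print(detect_anomalies(metric))  # → [30]
```

A real predictive model replaces the rolling mean with something that can learn seasonality and trends, but the detection step, comparing each observation against a predicted range, is the same.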
Preetam Jinka and I wrote the book on using anomaly detection for monitoring. Although we didn’t write extensively about machine learning, machine learning is just a better way (in some cases) to do the same techniques. It isn’t a fundamentally different activity.
Who’s using machine learning to predict how our systems should behave? There’s a long list of vendors and monitoring projects: Netuitive, DataDog, Netflix, Facebook, Twitter, and many more. Anomaly detection through machine learning is par for the course these days.

2. Descriptive Machine Learning

Descriptive machine learning examines data and determines what it means, then describes that in ways that humans or other machines can use. Good examples of this are fairly widespread. Image recognition, for example, uses descriptive machine learning and AI to decide what’s in a picture and then express it in a sentence. You can look at captionbot.ai to see this in action.
What would descriptive ML and AI look like in monitoring? Imagine diagnosing a crash: “I think MySQL got OOM-killed because the InnoDB buffer pool grew larger than memory.” Are any vendors doing this today? I’m not aware of any. I think it’s a hard problem, perhaps not easier than captioning images.

3. Generative Machine Learning

Generative machine learning is descriptive in reverse. Google’s software famously performs this technique, the results of which you can see on their inceptionism gallery.
I can think of a very good use for generative machine learning: creating realistic load tests. Current best practices for evaluating system performance when we can’t observe the systems in production are to run artificial benchmarks and load tests. These clean-room, sterile tests leave a lot to be desired. Generating realistic load to test applications might be commercially useful. Even generating realistic performance data is hard and might be useful.
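A toy sketch of what generating realistic load might look like: fit a traffic model to observed data, then sample synthetic load from it. The exponential inter-arrival model and the observed values here are deliberate oversimplifications for illustration; a real generative model would also capture bursts, daily cycles, and request-type mixes.

```python
import random

random.seed(42)

# Hypothetical observed production inter-arrival times, in seconds:
observed = [0.12, 0.05, 0.31, 0.08, 0.22, 0.15, 0.04, 0.27]

# "Learn" the traffic model: here, just the mean gap of an exponential
# distribution fitted to the observations.
mean_gap = sum(observed) / len(observed)

def generate_load(n):
    """Sample n synthetic inter-arrival gaps from the fitted model."""
    return [random.expovariate(1.0 / mean_gap) for _ in range(n)]

synthetic = generate_load(1000)
avg = sum(synthetic) / len(synthetic)
# The synthetic traffic roughly matches the observed request rate:
assert abs(avg - mean_gap) < 0.05
```

Replaying such samples against a test system gives load whose statistics resemble production, which is exactly what sterile clean-room benchmarks lack.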

Artificial intelligence on Hadoop: Does it make sense?



MapR just announced QSS, a new offering that enables the training of complex deep learning algorithms. We take a look at what QSS can offer, and examine AI on the Hadoop landscape.
Hadoop is becoming a substrate for artificial intelligence
Image: Getty Images/iStockphoto
This week MapR announced a new solution called Quick Start Solution (QSS), focusing on deep learning applications. MapR touts QSS as a distributed deep learning (DL) product and services offering that enables the training of complex deep learning algorithms at scale.
Here's the idea: deep learning requires lots of data, and it is complex. If MapR's Converged Data Platform is your data backbone, then QSS gives you what you need to use your data for DL applications. It makes sense, and it is in line with MapR's strategy.
MapR is the first Hadoop vendor with an offering that is marketed as what we'd call artificial intelligence (AI) on Hadoop. But does AI on Hadoop make sense more broadly? And what are other Hadoop vendors doing there?

MAPR DOES DEEP LEARNING

Remember when Hadoop first came out? It was a platform with many advantages, but required its users to go the extra mile to be able to use it. That has changed. Now Hadoop is a burgeoning ecosystem, and a big part of its success is due to what we call SQL-on-Hadoop.
Hadoop has always been able to store and process lots of data cheaply. But it was not until support for accessing that data via SQL became good enough that Hadoop became a serious contender as the enterprise data backbone. SQL was, and still is, the de facto standard for accessing data, so supporting it meant that Hadoop could be used by almost everyone.
AI and SQL are different in one way: SQL support was a backwards-compatibility, commodity feature, while AI is a forward-looking, trending field. But even if AI is a differentiator today for those who have it, it looks like it will soon become something of a commodity as well: those who do not have it will not be able to compete.
AI and SQL are also similar: If you are a Hadoop vendor, this is not really what you do. This is something others do -- you just need to make sure that it can run on your platform, where all the data is. This is what MapR is out to achieve with QSS too.
MapR leverages open source container technology (think Docker), and orchestration technology (think Kubernetes) to deploy deep learning tools (think TensorFlow) in a distributed fashion. None of this technology has to do with MapR, but the value QSS brings is in making sure everything works together seamlessly.
The distributed deep learning MapR's QSS proposes has three layers. The bottom layer is the data layer, the middle layer is the orchestration layer, and the top layer is the application layer.
Image: MapR
Ted Dunning, MapR chief application architect, explains: "The best approach for pursuing AI/Deep learning is to deploy a scalable converged data platform that supports the latest deep learning technologies with an underlying enterprise data fabric with virtually limitless scale."
He also notes that "almost all of the machine learning software is being developed independently of Hadoop and Spark. This requires a platform like MapR that is capable of supporting both Hadoop/Spark workloads as well as traditional file system APIs."
And since that works, why not also use MapR-DB, MapR Streams, and MapR-FS to feed your data, and the MapR Persistent Application Client Container (PACC) to deploy your model? Oh, and we've got services for you too -- we'll help you. That is MapR's message with QSS.
Anil Gadre, MapR chief product officer, says: "DL can provide profound transformational opportunities for an organization. Our expertise...coupled with [our] unique design...form the foundation for [QSS]. QSS will enable companies to quickly take advantage of modern GPU-based architectures and set them on the right path for scaling their DL efforts."

AI ON HADOOP

So, is AI on Hadoop a thing? Unlike SQL, there is no standard for AI. There is no widely accepted and understood definition even. DL is only a part of machine learning (ML) which is only a part of AI. And even within DL, while there may be some shared concepts, there is no such thing as a common API. So QSS is DL on Hadoop, but not really AI on Hadoop.
There is more to AI than machine learning, and there is more to machine learning than deep learning.
Image: Nvidia
The notion of using a data and compute platform like Hadoop as the substrate for AI is a natural one. But being able to run ML or DL on Hadoop does not really make a Hadoop vendor an AI vendor too. This is a discussion we've been having with many Hadoop vendor executives over the last few months.
For Cloudera CEO Tom Reilly, "ML is very real and very active, it's here and now and it's doing great things in practice. Our customers are trying to understand AI and what lies in their journey to the future. We are helping them with ML, our platform already supports ML and will continue to add support for it. We think of our platform as the host of the data people will use for AI".
Cloudera has been criticized for trying to pose as an AI company in its recent IPO filing. To the best of our knowledge, Cloudera does not have extensive internal expertise on AI. There is a data science team, comprised of a handful of people, and there is also the recent acquisition of sense.io.
Sense.io has been integrated in Cloudera's stack and repurposed as Cloudera Data Science Workbench (CDSW). In a recent discussion with Sean Owen, Cloudera Data Science Director, Owen compared sense.io to IBM's DataWorks.
"By providing ready access to data, CDSW decreases time to value of AI applications delivered with our automated ML platform," notes Jeremy Achin, DataRobot CEO. This is great, but it's not really AI, is it?
For Scott Gnau, Hortonworks CTO, AI is comprised of two key components: loads of data plus packaging and algorithms to traverse the data. Hortonworks supports both, and as AI wins, Hortonworks wins as well. Gnau, however, emphasizes what he sees as Hortonworks' strengths, namely enterprise governance and security.
Gnau believes AI technologies we have not even dreamt of yet are still to emerge. So Hortonworks' approach is to invest in infrastructure and to be the trusted purveyor of data, while keeping an eye on emerging killer technologies and applications it can plug in from an application perspective.
Each vendor's approach has to be seen in the context of where they are now and how they see themselves evolving. AI is a new battlefield that vendors approach in line with their philosophy and goals. We will continue with an analysis of how these are manifested in AI in a subsequent post.