
Saturday, December 31, 2016

These Were The Best Machine Learning Breakthroughs Of 2016



What were the main advances in machine learning/artificial intelligence in 2016? originally appeared on Quora, the knowledge-sharing network where compelling questions are answered by people with unique insights.
Answer by Xavier Amatriain, VP of Engineering at Quora, formerly of Netflix recommendations, researcher, and professor, on Quora.
2016 may very well go down in history as the year of “the Machine Learning hype”. Everyone now seems to be doing machine learning, and if they are not, they are thinking of buying a startup to claim they do.
Now, to be fair, there are reasons for much of that “hype”. Can you believe that it has been only a year since Google announced they were open sourcing TensorFlow? TF is already a very active project that is being used for anything ranging from drug discovery to generating music. Google has not been the only company open sourcing their ML software, though; many followed suit. Microsoft open sourced CNTK, Baidu announced the release of PaddlePaddle, and Amazon just recently announced that they will back MXNet in their new AWS ML platform. Facebook, for its part, is basically supporting the development of not one but two Deep Learning frameworks: Torch and Caffe. Google, meanwhile, is also supporting the highly successful Keras, so things are at least even between Facebook and Google on that front.
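To give a sense of why these frameworks caught on so quickly, here is a minimal, illustrative Keras sketch (the data is synthetic and invented for the example): a handful of lines is enough to define, compile, and train a small classifier.

# A minimal sketch of the workflow these frameworks made easy: define,
# compile, and train a small feed-forward classifier in Keras. The data
# here is synthetic and exists only to illustrate the API.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(1000, 20)             # 1,000 synthetic examples, 20 features
y = (X.sum(axis=1) > 10).astype(int)     # toy binary label

model = Sequential([
    Dense(64, activation="relu", input_dim=20),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32)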
Besides the “hype” and the outpouring of support from companies for machine learning open source projects, 2016 has also seen a great deal of applications of machine learning that were almost unimaginable a few months back. I was particularly impressed by the quality of WaveNet’s audio generation. Having worked on similar problems in the past, I can appreciate those results. I would also highlight some of the recent results in lip reading, a great application of video recognition that is likely to be very useful (and maybe scary) in the near future. I should also mention Google’s impressive advances in machine translation. It is amazing to see how much this area has improved in a year.
As a matter of fact, machine translation is not the only interesting advance we have seen in machine learning for language technologies this past year. I think it is very interesting to see some of the recent approaches that combine deep sequential networks with side information in order to produce richer language models. In “A Neural Knowledge Language Model”, Bengio’s team combines knowledge graphs with RNNs, and in “Contextual LSTM models for Large scale NLP Tasks”, the DeepMind folks incorporate topics into the LSTM model. We have also seen a lot of interesting work in modeling attention and memory for language models. As an example, I would recommend “Ask Me Anything: Dynamic Memory Networks for NLP”, presented at this year’s ICML.
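To make the side-information idea concrete, here is a simplified sketch in the spirit of the contextual-LSTM approach; it is not the papers' actual architecture, and the layer sizes and names are illustrative. A topic vector (for example, from LDA) is concatenated with the recurrent summary before predicting the next word.

# Simplified sketch of conditioning a recurrent language model on side
# information (here a topic vector, e.g. from LDA), in the spirit of the
# contextual-LSTM idea. Sizes and layout are illustrative, not from the papers.
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense, Concatenate

vocab_size, seq_len, topic_dim = 10000, 30, 50

words = Input(shape=(seq_len,), dtype="int32", name="word_ids")
topic = Input(shape=(topic_dim,), name="topic_vector")   # side information

h = Embedding(vocab_size, 128)(words)    # word ids -> dense vectors
h = LSTM(256)(h)                         # summary of the sequence so far
h = Concatenate()([h, topic])            # inject the side information
next_word = Dense(vocab_size, activation="softmax")(h)

model = Model(inputs=[words, topic], outputs=next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")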
Also, I should at least mention a couple of things from NIPS 2016 in Barcelona. Unfortunately, I had to miss the conference the one time it was happening in my hometown. I did follow from a distance, though, and from what I gathered, the two hottest topics were probably Generative Adversarial Networks (including the very popular tutorial by Ian Goodfellow) and the combination of probabilistic models with Deep Learning.
Let me also mention some of the advances in my main area of expertise: Recommender Systems. Of course, Deep Learning has also impacted this area. While I would still not recommend DL as the default approach to recommender systems, it is interesting to see how it is already being used in practice, and at large scale, by products like YouTube. That said, there has been interesting research in the area that is not related to Deep Learning. The best paper award at this year’s ACM RecSys went to “Local Item-Item Models For Top-N Recommendation”, an interesting extension to Sparse Linear Methods (i.e., SLIM) using an initial unsupervised clustering step. Also, “Field-aware Factorization Machines for CTR Prediction”, which describes the winning approach to the Criteo CTR Prediction Kaggle Challenge, is a good reminder that Factorization Machines are still a good tool to have in your ML toolkit.
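Since Factorization Machines keep proving their worth, here is a minimal NumPy sketch of a second-order FM prediction using the well-known O(k·n) reformulation from Rendle's original FM paper; the data is random and purely for illustration. The field-aware variant that won the Criteo challenge extends this by learning a separate latent vector per (feature, field) pair.

# Minimal NumPy sketch of a second-order Factorization Machine prediction,
# using the O(k*n) identity: sum_{i<j} <v_i, v_j> x_i x_j
#   = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
import numpy as np

def fm_predict(x, w0, w, V):
    """x: (n,) features; w0: bias; w: (n,) linear weights; V: (n, k) factors."""
    linear = w0 + w @ x
    s = V.T @ x                        # (k,) per-factor sums of v_if * x_i
    s_sq = (V.T ** 2) @ (x ** 2)       # (k,) per-factor sums of v_if^2 * x_i^2
    return linear + 0.5 * np.sum(s ** 2 - s_sq)

n, k = 8, 4
rng = np.random.RandomState(0)
print(fm_predict(rng.rand(n), 0.1, rng.randn(n), rng.randn(n, k) * 0.01))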
I could probably go on for several other paragraphs just listing impactful advances in machine learning in the last twelve months. Note that I haven’t even listed any of the breakthroughs related to image recognition or deep reinforcement learning, or obvious applications such as self-driving cars, chat bots, or game playing, which all saw huge advances in 2016. Not to mention all the controversy around how machine learning is having or could have negative effects on society and the rise of discussions around algorithmic bias and fairness.

AI Technology can Help NASDAQ Monitor the Market


The NASDAQ exchange creates intense data traffic, with an impressive 14 million trades a day. As a result, it is extremely difficult to track down market abuses, but NASDAQ is finding a unique solution to that problem with the use of AI technology.

It’s easy to go unnoticed in a crowd, and with 14 million trades a day it is incredibly difficult for a major exchange like NASDAQ to monitor the market and catch everyone who isn’t following the rules. This is a major issue faced by NASDAQ’s senior vice president and head of risk management, Valerie Bannert-Thurner. In finding market rule violators, Bannert-Thurner knows what to look for and how to find it. Yet the sheer number of transactions means only a portion of potential violators can be caught.
Until now, NASDAQ has used its proprietary SMART trade surveillance program, and it has proven successful. According to Bannert-Thurner, they look for people who are “excessively profitable,” analyze their market data, and determine whether the profits are good fortune or bad business.

The NASDAQ SMART Platform

Without some assurance of the integrity and fairness of its market, NASDAQ would not be as successful. It operates 25 exchanges across the U.S., Canada, and Europe, and this is in no small part due to its reputation for reliability and integrity.
Its SMART trade surveillance program is at the heart of that reputation, and its use by 45 outside exchanges and 13 regulators is a testament to its quality.
The SMART platform detects red flags in behavior by simultaneously analyzing what traders are saying in their emails or chats and what their actual trades are. Large teams pair up the two sources of data and cross-reference them to find any wrongdoing.
The system works, but it requires a lot of manpower and it still can’t analyze the market fast enough to catch everything. To address this, Bannert-Thurner is turning to AI technology to help.

How AI can Help NASDAQ to be More Secure

The success of the SMART program notwithstanding, there is still room for improvement. AI might be the perfect way to lighten the workload on SMART analysts.
AI combines language processing software with machine intelligence, which means it can learn as it analyzes market data and communications. If nefarious traders were using certain code language in their communications, an AI could recognize key terms and trends in the communication and alert the SMART platform to the issue.
The AI would not replace the human members of SMART teams. In fact, it would improve the efficiency of every worker by doing the grunt work of filtering through loads of data. People can do this, too, but an AI can do it in a fraction of the time.
The SMART teams would rely on the AI to present alerts and evidence of wrongdoing so that further action can be pursued. This would boost efficiency and security for the exchange.
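As a toy illustration of the kind of triage such a system could automate (the phrases, messages, and threshold below are all invented for this sketch; a real surveillance pipeline would be far more sophisticated), one could score messages against known suspicious language and surface only the matches for human review:

# Toy illustration only: score trader messages against known suspicious
# phrases and flag close matches for human review. Phrases, messages, and
# the threshold are all invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

suspicious_phrases = ["move the price before the close", "park the shares"]
messages = [
    "lunch at noon?",
    "let's park the shares with the usual account",
    "client wants the report by friday",
]

vec = TfidfVectorizer().fit(suspicious_phrases + messages)
scores = cosine_similarity(vec.transform(messages),
                           vec.transform(suspicious_phrases)).max(axis=1)

for msg, score in zip(messages, scores):
    if score > 0.3:                    # arbitrary cut-off for the sketch
        print("FLAG for human review:", msg)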
NASDAQ holds itself to a high standard, but such standards can be compromised over time. For NASDAQ’s sake, let’s hope that modern AI technology will help it fight back.

Dear engineers: Please build friendly robots

IEEE pushes for ethics in AI design


Anyone who has read science fiction can tell you that killer robots are a problem. The Institute of Electrical and Electronics Engineers, better known as the IEEE, wants to do something about it.
On Tuesday, the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems released a report aimed at encouraging engineers and researchers to think ethically when they’re building new intelligent software and hardware.
The report, titled “Ethically Aligned Design,” is aimed at bringing concern for human well-being into the creation of artificial intelligence. It clocks in at over a hundred pages, but a few key themes surface from it: calls for more transparency about how automated systems work, for increased human involvement, and for care about the consequences of systems design.
Ethical considerations in artificial intelligence and automated systems will only become more important as companies and governments expand their use of the technology. While a lot of pop-culture discussion revolves around incredibly intelligent systems, there are already algorithms in use today that can have significant impacts on business and political decision making.
Raja Chatila, the chair of the initiative, said in an interview that he didn’t think engineers and companies were aware of the issues at play.
“I think — and this is my personal opinion — most engineers and companies are not yet really aware or really open to those ethical issues,” he said. “It’s because they weren’t trained like that. They were trained to develop efficient systems that work, and they weren’t trained in taking into account ethical issues.”
One of the key issues that’s already showing up is algorithmic bias. Computer systems can and do reflect the worldview of their creators, which can become an issue when those values don’t match a customer’s.
When it comes to transparency, the report repeatedly calls for the creation of automated systems that can report why they made particular decisions. That’s difficult to do with some of the state-of-the-art machine learning techniques in use today.
What’s more, it’s not uncommon for companies to keep the exact details of their machine learning algorithms’ inner workings under wraps, which also flies in the face of a push for transparency.
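A simple contrast makes the transparency point concrete: a linear model can "report" its decision as a weighted sum of inputs, which most deep models cannot. The sketch below uses synthetic data purely for illustration.

# Synthetic illustration of the transparency gap: a linear model's decision
# decomposes into per-feature weights it can "report", unlike most deep nets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

clf = LogisticRegression().fit(X, y)
for name, coef in zip(["feature_a", "feature_b", "feature_c"], clf.coef_[0]):
    print("%s carries weight %+.2f in the decision" % (name, coef))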
Transparency is key for not only understanding things like image recognition algorithms but also the future of how we fight wars. The IEEE report’s discussion of autonomous weapon systems is full of calm yet terrifying language like a description of effectively anonymous death machines leading to “unaccountable violence and societal havoc.”
To a similar end, the report pushes for greater human involvement in those intelligent systems. The IEEE group wants people to be able to appeal to another human in the case of autonomous systems making decisions, among other things.
In the future, this work should lead to the creation of IEEE standards around ethics and intelligent systems. One such standard around addressing ethical concerns in system design is already in the works, and two more are on the way.
But even if this process leads to the creation of standards, organizations will still have to choose to adopt them. It could be that the creation of ethical systems would be seen as a marketing differentiator — a robot that’s 65 percent less likely to murder someone should be more appealing to customers.
It’s also possible that the tech industry could just sidestep the ethics question, and continue on its merry way.
The IEEE team is now opening the document up for feedback from the public [PDF]. Anyone can submit thoughts on its contents, though the group is looking for referenced, actionable feedback that’s less than two pages long. Submissions are due on March 6.

Big Data To Help Public Sector Build 360 Degree View of Consumer


In today’s day and age, nearly every customer is on social media or interacting with businesses through some channel, whether by commenting on Facebook, posting a tweet about a company’s product, or sharing feedback on an organisation’s services online. This means the consumer is engaged with a business in one way or another, and the business always has an opportunity to capture this personal feedback and use it in its future strategy to achieve growth. These big data sets, if optimally utilized, can be of great importance and can lead to better customer satisfaction and profitability. Many firms now provide Big Data Development Services, given the field’s expanding horizon worldwide.

What Is a 360 Degree View of a Customer?
A 360 degree view of a customer means that a business possesses complete knowledge of the customer’s preferences and habits when they purchase from it. Customers these days usually expect personalized service based on their previous purchasing behaviour and preferences. They expect the firm to be well versed in their purchase history and to treat them accordingly.
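As a purely illustrative sketch (the customer, categories, and amounts below are invented for the example), assembling even a minimal such profile amounts to aggregating a customer's transaction history into summary preferences:

# Purely illustrative: aggregate a customer's (invented) purchase history
# into a minimal "360 degree" profile usable for personalization.
from collections import Counter

purchases = [
    {"customer": "c42", "category": "books", "amount": 12.0},
    {"customer": "c42", "category": "books", "amount": 30.0},
    {"customer": "c42", "category": "garden", "amount": 55.0},
]

profile = {
    "customer": "c42",
    "total_spend": sum(p["amount"] for p in purchases),
    "favourite_category":
        Counter(p["category"] for p in purchases).most_common(1)[0][0],
}
print(profile)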

Challenges faced by the Public Sector in Utilizing Big Data
While the private sector has been somewhat successful in utilizing customers’ personal data, interacting with them based on the feedback they share on social media and other platforms, the public sector has failed miserably.

Public sector departments and organisations hold huge relational databases of citizens but lack the big data processing applications that can handle such large chunks of data at once. They need to become more data driven, as customer satisfaction in the public sector is almost zero.

Hadoop can provide a solution here, as it is capable of ingesting huge amounts of citizen data coming from different sources, including departments such as Tax, Health, Law, and Public Welfare. Spark, too, is capable of processing these huge data volumes and automating work on them so as to derive value from them.
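As a hypothetical sketch of the kind of cross-department consolidation described above (the HDFS paths, dataset layouts, and column names are invented for illustration), a PySpark job might join per-citizen records from tax and health sources into a single view:

# Hypothetical PySpark sketch of the cross-department join described above.
# The HDFS paths, dataset layouts, and column names are invented for the
# example; real government data would need far more care (and governance).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("citizen-360-view").getOrCreate()

tax = spark.read.parquet("hdfs:///gov/tax/returns")        # assumed dataset
health = spark.read.parquet("hdfs:///gov/health/visits")   # assumed dataset

view_360 = (
    tax.groupBy("citizen_id").agg(F.sum("tax_paid").alias("total_tax"))
       .join(health.groupBy("citizen_id").count()
                   .withColumnRenamed("count", "health_visits"),
             on="citizen_id", how="outer")
)
view_360.write.mode("overwrite").parquet("hdfs:///gov/analytics/citizen_360")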

How 360 Degree View Methodology Can Help The Public Sector
Analytics, data science, machine learning, and Spark are a few of the technologies that could be deployed in the public sector to implement the 360 degree view methodology and make its services customer-centric. Government agencies can make better use of electronic information about citizens by capturing and storing every citizen’s personal information in electronic databases, so as to provide public services on par with those of private sector firms. Later, this unique citizen information could be shared across all government and public sector agencies electronically to maximize its use and achieve optimum results.

Many offshore software companies with expertise in big data offer Big Data Application Development Services to clients all around the world, growing together with their clients in the technology space.
  
Citizen-centric thinking in government agencies could also prove beneficial for implementing economic schemes related to public welfare. Big data discovery and analytics platform services, if adopted rigorously and efficiently, have the capability to completely transform the public sector, with a customer-centric approach applied to its services across all public agencies.