Highlight

Interesting things about using Viettel's Artificial Intelligence

Internet users in Vietnam often bring up "chị Google" ("Ms. Google") for... entertainment. When "she" reads text aloud or gives directions to road...

Saturday, June 3, 2017

Google looks to machine learning to boost security in Gmail

On Wednesday, Google announced a host of new machine learning-enabled tools to help keep Gmail data more secure. In a blog post by Andy Wen, senior product manager for Counter Abuse Technology, the search giant unveiled its new offerings, including early phishing detection, click-time warnings for fraudulent links, and "unintended external reply warnings and built-in defenses against new threats."
According to the post, 50-70% of the emails a Gmail inbox receives are spam. Using machine learning, Google can pinpoint and block these messages with more than 99.9% accuracy. With its new early phishing detection, the company "selectively delays messages (less than 0.05 percent of messages on average) to perform rigorous phishing analysis and further protect user data from compromise." Its spam detection integrates with Google Safe Browsing, which spots suspicious-looking URLs. According to Google, the models produce click-time warnings for phishing and malware links, and because they use machine learning, their accuracy improves over time.
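Google hasn't published the internals of these models, but the underlying technique, a classifier trained on labelled mail, can be sketched in a few lines of Python. The tiny training set, the naive Bayes model, and the use of scikit-learn below are illustrative assumptions, not Gmail's actual stack.

# A minimal sketch of ML-based spam filtering, not Gmail's production model.
# The four training messages are illustrative; real filters train on billions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_messages = [
    "Win a free prize, click here now",                        # spam
    "Your account is compromised, verify your password here",  # spam
    "Lunch meeting moved to 1pm tomorrow",                     # ham
    "Here are the slides from today's review",                 # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

# TF-IDF features feeding a naive Bayes classifier: a classic spam baseline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_messages, train_labels)

print(model.predict(["Click here now to claim your free prize"]))  # -> ['spam']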
To support employees who want to be proactive in securing their data, Gmail now also displays unintended external reply warnings to prevent data loss. How does it work? If a user tries to email someone outside of the company, they receive a warning asking whether the message was intentional. If the recipient is a regular contact outside the company, Gmail remembers this and stops sending warnings.
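Gmail's implementation isn't public, but the decision logic described above can be sketched in a few lines; the company domain and the contact store below are hypothetical stand-ins.

# Hypothetical sketch of an "unintended external reply" check.
# COMPANY_DOMAIN and the contact set are illustrative, not Gmail internals.
COMPANY_DOMAIN = "example.com"
known_external_contacts = {"partner@supplier.example"}  # previously confirmed recipients

def needs_external_warning(recipient: str) -> bool:
    """Warn when a recipient is outside the company and not a regular contact."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain == COMPANY_DOMAIN:
        return False  # internal mail: no warning
    return recipient not in known_external_contacts

def confirm_recipient(recipient: str) -> None:
    """After the user confirms intent, remember the contact and stop warning."""
    known_external_contacts.add(recipient)

print(needs_external_warning("stranger@other.example"))  # True: show the warning
confirm_recipient("stranger@other.example")
print(needs_external_warning("stranger@other.example"))  # False: now remembered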
Google's new tools also offer protection against the millions of messages that threaten users, through built-in defenses against ransomware and polymorphic malware. According to Wen's post, the company can "classify new threats by combining thousands of spam, malware and ransomware signals with attachment heuristics (emails that could be threats based on signals) and sender signatures (already marked malware)."
This addition to Gmail's security offerings comes on top of several other recent measures which, according to the release, include data loss prevention, encrypted email, and other alerts.
While Gmail has millions of individual users across the globe, it is also a staple for the enterprise. Companies such as CBS, Whirlpool, PwC, and Woolworths are among the businesses that use the program for work-related purposes.
Image: ZDNet

The 3 big takeaways for TechRepublic readers

  1. On Wednesday, Google announced new security features for Gmail to keep emails safer from phishing and threats, enabled by machine learning.
  2. Using machine learning, Gmail can pinpoint and block spam messages with more than 99.9% accuracy, and its new early phishing detection improves over time.
  3. The update is the latest Google initiative to make the platform safer for the enterprise. Other efforts include data loss prevention, encrypted email, and other alerts.

Why AI will force businesses to rethink balance between the work of humans and machines



At the MIT CIO Symposium in Cambridge, MA, thought leaders in AI explained how we have entered 'the second wave of the second machine age' and what it means for enterprise leaders.
In their groundbreaking book The Second Machine Age, Erik Brynjolfsson and Andrew McAfee pointed to employment trends to illustrate a workforce that is inarguably affected by automation—and how companies must transform in order to remain relevant.
Brynjolfsson, director of the MIT Initiative on the Digital Economy (IDE), and McAfee, principal research scientist and co-director at the MIT IDE, have a follow-up book out in June. This one pinpoints specific qualities of the "second machine age," which the authors argue is maturing to a point at which technologies are now replacing workplace tasks once considered routine. We are now, they write in Machine, Platform, Crowd: Harnessing Our Digital Future, at the "second wave of the second machine age."
In a keynote panel at MIT's CIO Symposium in Cambridge, MA, Jason Pontin, editor in chief and publisher of the MIT Technology Review, moderated a session with Brynjolfsson and McAfee that addressed questions such as: How can businesses harness AI and machine learning to stay ahead of the curve? What is the importance of platforms in a company's overall strategy? And what is the role of the CIO in ensuring the smooth transition towards an innovative future?
So, what is the "second wave" of the second machine age? It's when machines get smart enough to learn on their own.
"We don't have to specify step-by-step how to recognize a face, or how to understand speech," said Brynjolfsson. "Instead, machine learning systems are beginning to open up a much broader set of activities for machines to be able to do. This is the most important thing affecting the economy and society over the coming decade."
McAfee believes the power behind these forces is currently underestimated. "Even though we're all really enamored of machine learning and artificial intelligence and autonomous vehicles, I think we're still low-balling what's actually coming at us," he said. For example, McAfee brought up Go, the highly complex strategy game, based mainly on intuition, that has now been mastered by a machine.
"Go has been intently studied by people for 3,000 years," he said. "And after playing AlphaGo, the Chinese Go champion said, 'I don't think that a single human has touched the edge of the game of Go,'" said McAfee. "Basically, what he's trying to say is that 3,000 years of accumulated knowledge and study have got us to this level, and the machines are telling us that there is this entire additional stakes up above over here."
Why is machine learning success in Go so important? Because, according to McAfee, this game isn't the only domain where that's the case.
Over the last ten years, "we basically went from talking to machines, to them routinely talking to us," said McAfee. At Google I/O recently, he noted, the company said its voice recognition error rate has improved from 8.5% to about 4%. The catch? That improvement came not over a span of ten years, but over the past ten months.
Jason Pontin, Erik Brynjolfsson, and Andrew McAfee discussed the future of work in the "second wave of the second machine age" during MIT's CIO Symposium in Cambridge, MA (May 2017).
Image: Hope Reese/TechRepublic
The panel also touched on the problem of biases in machine learning. The algorithms, based on massive data sets, have "biases that appear in the system, and are hard to disentangle," said Brynjolfsson. "It's hard to get a machine to explain what it's doing," he said, which is a primary reason so many researchers are working on explainable AI.

Researchers design AI system to assess pain levels in sheep



An artificial intelligence system designed by researchers at the University of Cambridge is able to detect pain levels in sheep, which could aid in early diagnosis and treatment of common, but painful, conditions in animals. 

The researchers have developed an AI system which uses five different facial expressions to recognise whether a sheep is in pain, and estimate the severity of that pain. The results could be used to improve sheep welfare, and could be applied to other types of animals, such as rodents used in animal research, rabbits or horses.
Building on earlier work which teaches computers to recognise emotions and expressions in human faces, the system is able to detect the distinct parts of a sheep's face and compare them with a standardised measurement tool developed by veterinarians for diagnosing pain. The results will be presented today (1 June) at the 12th IEEE International Conference on Automatic Face and Gesture Recognition in Washington, DC.
Severe pain in sheep is associated with conditions such as foot rot, an extremely painful and contagious condition which causes the foot to rot away; or mastitis, an inflammation of the udder in ewes caused by injury or bacterial infection. Both of these conditions are common in large flocks, and early detection will lead to faster treatment and pain relief. Reliable and efficient pain assessment would also help with early diagnosis.
As with most animals, facial expressions in sheep can be used to assess pain. In 2016, Dr Krista McLennan, a former postdoctoral researcher at the University of Cambridge who is now a lecturer in animal behaviour at the University of Chester, developed the Sheep Pain Facial Expression Scale (SPFES). The SPFES is a tool to measure pain levels based on facial expressions of sheep, and has been shown to recognise pain with high accuracy. However, training people to use the tool can be time-consuming and individual bias can lead to inconsistent scores.
In order to make the process of pain detection more accurate, the Cambridge researchers behind the current study used the SPFES as the basis of an AI system which uses machine learning techniques to estimate pain levels in sheep. Professor Peter Robinson, who led the research, normally focuses on teaching computers to recognise emotions in human faces, but a meeting with Dr McLennan got him interested in exploring whether a similar system could be developed for animals.
“There’s been much more study over the years with people,” said Robinson, of Cambridge’s Computer Laboratory. “But a lot of the earlier work on the faces of animals was actually done by Darwin, who argued that all humans and many animals show emotion through remarkably similar behaviours, so we thought there would likely be crossover between animals and our work in human faces.”
According to the SPFES, when a sheep is in pain, there are five main things which happen to their faces: their eyes narrow, their cheeks tighten, their ears fold forwards, their lips pull down and back, and their nostrils change from a U shape to a V shape. The SPFES then ranks these characteristics on a scale of one to 10 to measure the severity of the pain.
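The article doesn't include the researchers' scoring code, but the idea of mapping the five facial action units onto the 1-10 scale can be sketched as follows; the per-unit scores and the equal weighting are illustrative assumptions, not the SPFES's actual arithmetic.

# Illustrative SPFES-style scoring, not the researchers' actual method.
# Each action unit gets a strength between 0.0 (absent) and 1.0 (fully
# expressed); weighting all five units equally is an assumption made here.
ACTION_UNITS = [
    "eye_narrowing",
    "cheek_tightening",
    "ears_folded_forward",
    "lips_pulled_down_and_back",
    "nostrils_u_to_v",
]

def pain_score(unit_scores):
    """Map five facial action unit strengths onto a 1-10 severity scale."""
    mean = sum(unit_scores[u] for u in ACTION_UNITS) / len(ACTION_UNITS)
    return 1 + 9 * mean  # 0.0 -> 1 (no pain), 1.0 -> 10 (severe)

observed = {
    "eye_narrowing": 0.8,
    "cheek_tightening": 0.6,
    "ears_folded_forward": 0.9,
    "lips_pulled_down_and_back": 0.4,
    "nostrils_u_to_v": 0.7,
}
print(round(pain_score(observed), 1))  # -> 7.1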
“The interesting part is that you can see a clear analogy between these actions in the sheep’s faces and similar facial actions in humans when they are in pain – there is a similarity in terms of the muscles in their faces and in our faces,” said co-author Dr Marwa Mahmoud, a postdoctoral researcher in Robinson’s group. “However, it is difficult to ‘normalise’ a sheep’s face in a machine learning model. A sheep’s face is totally different in profile than looking straight on, and you can’t really tell a sheep how to pose.”
To train the model, the Cambridge researchers used a small dataset consisting of approximately 500 photographs of sheep, which had been gathered by veterinarians in the course of providing treatment. Yiting Lu, a Cambridge undergraduate in Engineering and co-author on the paper, trained the model by labelling the different parts of the sheep’s faces on each photograph and ranking their pain levels according to SPFES.
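The paper's code isn't reproduced in the article, but the general shape of such a supervised pipeline, landmark features in and pain labels out, might look like the sketch below. The synthetic data, the landmark count, and the choice of a support vector machine are assumptions for illustration, not the authors' actual setup.

# Illustrative supervised pipeline: facial landmark coordinates -> pain label.
# Not the Cambridge code; the SVM and the synthetic data are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_photos, n_landmarks = 500, 25  # roughly 500 labelled photos, as in the study

# Each photo becomes a vector of flattened (x, y) landmark coordinates; labels
# are coarse pain levels. Random values stand in for real annotations, so
# accuracy here is chance level; the point is the pipeline's shape.
X = rng.normal(size=(n_photos, n_landmarks * 2))
y = rng.integers(0, 3, size=n_photos)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")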
Early tests of the model showed that it was able to estimate pain levels with about 80% accuracy, a sign that the system is learning. While the results with still photographs have been promising, the researchers will need much larger datasets to make the system more robust.
The next plans for the system are to train it to detect and recognise sheep faces from moving images, and to train it to work when the sheep is in profile or not looking directly at the camera. Robinson says that if they are able to train the system well enough, a camera could be positioned at a water trough or other place where sheep congregate, and the system would be able to recognise any sheep which were in pain. The farmer would then be able to retrieve the affected sheep from the field and get it the necessary medical attention.
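A deployment like the one Robinson sketches could be wired together roughly as follows; the stub detector, the threshold, and the alert are all hypothetical placeholders, not an existing system.

# Hypothetical monitoring loop for the water-trough camera Robinson describes.
# The stub detector and the alert threshold are illustrative placeholders;
# a real system would run the trained face and pain models on camera frames.
from dataclasses import dataclass

@dataclass
class SheepFace:
    sheep_id: int
    pain_level: float  # 1-10 estimate that would come from the trained model

PAIN_THRESHOLD = 7.0  # illustrative cutoff on the 1-10 scale

def detect_sheep_faces(frame):
    # Stub: returns canned detections instead of running a real model.
    return [SheepFace(sheep_id=1, pain_level=8.2), SheepFace(sheep_id=2, pain_level=2.1)]

def monitor(frame):
    for face in detect_sheep_faces(frame):
        if face.pain_level >= PAIN_THRESHOLD:
            print(f"alert: sheep {face.sheep_id} may be in pain (level {face.pain_level})")

monitor(frame=None)  # -> alert: sheep 1 may be in pain (level 8.2)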
“I do a lot of walking in the countryside, and after working on this project, I now often find myself stopping to talk to the sheep and make sure they’re happy,” said Robinson.
Reference
Yiting Lu, Marwa Mahmoud and Peter Robinson. 'Estimating sheep pain level using facial action unit detection.' Paper presented at the 12th IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, 30 May – 3 June 2017. http://www.fg2017.org/.
Inset image: Left: Localised facial landmarks; Right: Normalised sheep face marked with feature bounding boxes. 

The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Physicists uncover similarities between classical and quantum machine learning

Diagram representing a generic quantum learning protocol. Credit: Monràs et al. ©2017 American Physical Society
(Phys.org)—Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. Classical machine learning algorithms are currently used for performing complex computational tasks, such as pattern recognition or classification in large amounts of data, and constitute a crucial part of many modern technologies. The aim of quantum learning algorithms is to bring these features into scenarios where information is in a fully quantum form.
The scientists, Alex Monràs at the Autonomous University of Barcelona, Spain; Gael Sentís at the University of the Basque Country, Spain, and the University of Siegen, Germany; and Peter Wittek at ICFO-The Institute of Photonic Sciences, Spain, and the University of Borås, Sweden, have published a paper on their results in a recent issue of Physical Review Letters.
"Our work unveils the structure of a general class of  learning algorithms at a very fundamental level," Sentís told Phys.org. "It shows that the potentially very complex operations involved in an optimal quantum setup can be dropped in favor of a much simpler operational scheme, which is analogous to the one used in classical algorithms, and no performance is lost in the process. This finding helps in establishing the ultimate capabilities of quantum learning algorithms, and opens the door to applying key results in  to quantum scenarios."
In their study, the physicists focused on a specific type of machine learning called inductive supervised learning. Here, the algorithm is given training instances from which it extracts general rules, and then applies these rules to a variety of test (or problem) instances, which are the actual problems that the algorithm is trained for. The scientists showed that both classical and quantum inductive supervised learning algorithms must have these two phases (a training phase and a test phase) that are completely distinct and independent. While in the classical setup this result follows trivially from the nature of classical information, the physicists showed that in the quantum case it is a consequence of the quantum no-cloning theorem—a theorem that prohibits making a perfect copy of a quantum state.
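In classical terms, that two-phase structure means the rule is fixed once training ends and is then applied, unchanged, to every test instance. A minimal classical analogue, using synthetic data rather than anything from the paper, makes the separation concrete:

# Classical illustration of the two distinct phases of inductive supervised
# learning; the data and model are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training phase: extract a general rule from labelled training instances.
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
rule = LogisticRegression().fit(X_train, y_train)  # the rule is now fixed

# Test phase: apply the fixed rule to fresh problem instances. Nothing from
# this phase feeds back into training; the two phases are independent.
X_test = rng.normal(size=(5, 2))
print(rule.predict(X_test))

In the quantum case, the authors show this separation is forced rather than merely conventional: the no-cloning theorem stops the learner from carrying a usable copy of the quantum training data into the test phase.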
By revealing this similarity, the new results generalize some key ideas in classical statistical learning theory to quantum scenarios. Essentially, this generalization reduces complex protocols to simpler ones without losing performance, making it easier to develop and implement them. For instance, one potential benefit is the ability to access the state of the learning algorithm in between the training and test phases. Building on these results, the researchers expect that future work could lead to a fully quantum theory of risk bounds in quantum statistical learning.
"Inductive supervised quantum learning algorithms will be used to classify information stored in quantum systems in an automated and adaptable way, once trained with sample systems," Sentís said. "They will be potentially useful in all sorts of situations where information is naturally found in a quantum form, and will likely be a part of future quantum information processing protocols. Our results will help in designing and benchmarking these algorithms against the best achievable performance allowed by quantum mechanics."
More information: Alex Monràs et al. "Inductive Supervised Quantum Learning." Physical Review Letters 118, 190503 (2017). DOI: 10.1103/PhysRevLett.118.190503


Read more at: https://phys.org/news/2017-05-physicists-uncover-similarities-classical-quantum.html

Artificial intelligence, robotics and biometrics on JetBlue’s technology watch list

Maryssa Miller, JetBlue’s Head of Digital Commerce, was speaking at the SITA Air Transport IT Summit in Brussels.
FTE was in attendance at the SITA Air Transport IT Summit in Brussels last week, where digital transformation was among the key topics of conversation.
Among the speakers was Maryssa Miller, JetBlue’s Head of Digital Commerce, who spoke openly during a panel discussion about some of the technologies and trends that have the potential to reshape the customer experience in the coming years. Here are some of the highlights from that discussion.

Accelerating innovation 

Miller suggested that while most organisations recognise the potential of various new and emerging technologies, the speed at which they’re being deployed could be accelerated. Referring to the industry as a whole, she said: “I don’t think anyone would say that we’re implementing technology fast enough because we keep getting requests for it to be faster, for the time to market (to be) faster. We all want that, whether it’s the customer that wants it or the internal folks that want it. We all believe there is a faster way to do it.”
She highlighted the fact that Amazon claims to be doing 50 million deployments per year, and contrasted this to the air transport sector, where quarterly releases are not uncommon for industry suppliers. “We’re far away from being able to get to that Amazon model, but I think that we are getting there and we have to keep focusing on getting the code from the keyboard to the customer much quicker,” Miller added.

Artificial intelligence

It was no surprise that artificial intelligence (AI) was a hot topic at the Summit. During the past year or so, a number of airlines have invested in developing AI-powered chatbots for platforms such as Facebook Messenger and Amazon Echo. Miller revealed that AI is very much on JetBlue's agenda. AI, she explained, can streamline the process of answering passengers' more basic questions, freeing up members of staff to spend more time dealing with more complex issues.
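Miller didn't describe an implementation, but the kind of triage she outlines, answering routine questions automatically and handing everything else to staff, can be sketched as a simple matcher; the FAQ entries and the similarity threshold below are hypothetical.

# Hypothetical chatbot triage: answer basic questions, escalate the rest.
# The FAQ content and the 0.6 threshold are illustrative assumptions.
from difflib import SequenceMatcher

FAQ = {
    "what is the checked bag allowance": "Each checked bag may weigh up to 50 lb.",
    "can i change my flight online": "Yes, flights can be changed under Manage Trips.",
    "what time does check-in open": "Online check-in opens 24 hours before departure.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the closest FAQ answer, or escalate to a human below the threshold."""
    q = question.lower().strip("?! ")
    best, score = max(((k, SequenceMatcher(None, q, k).ratio()) for k in FAQ),
                      key=lambda kv: kv[1])
    return FAQ[best] if score >= threshold else "Let me connect you to an agent."

print(answer("What time does check-in open?"))                     # matched FAQ
print(answer("My connection was cancelled and I need rebooking"))  # escalated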
She said the air transport industry must be aware of the fact that people are now becoming used to interacting with the likes of Google Home and Amazon’s Alexa. “That’s really going to change the overall airline travel industry in other ways as well,” she said.

Robotics and automation

During the panel discussion, which was led by APEX and IFSA CEO Joe Leader, the future role of robotics in the industry was explored. While recognising the fact that robots and automation are likely to become increasingly prevalent in both operational and customer-facing roles in the future, Miller played down concerns about any potential impact on jobs. Instead, the technology will “transform the jobs that are being created”, she said.
The introduction of self-tagging, for instance, has empowered JetBlue's crew members to roam the check-in hall and proactively assist passengers, she explained. In back-of-house roles such as baggage handling, robotics could also provide assistance by easing the burden on staff members, she suggested. "I think (robotics) is actually going to increase jobs in other areas, as much as it might eliminate the more transactional ones," Miller added.

Biometrics

As biometric technology continues to gain traction, and airlines, airports and suppliers further explore the idea of single token travel, the technology is on JetBlue’s agenda. “I think overall we’d like to use biometrics from the beginning of the customer journey all the way through to the end,” Miller said, hinting at the potential of biometrics to eventually replace physical boarding passes and identification documents.
Following the conference, JetBlue announced that it will launch a trial of biometric-enabled self-boarding at Logan International Airport in a project that will make use of facial recognition technology. Something tells us we will be hearing a lot more about biometric processing in the coming months.