
Thursday, August 30, 2018

Facial Recognition in Stores Can Spot Angry Customers



When shopping at almost any large retail store, we know security cameras are watching us. Understandably, these are there to help stop shoplifting and keep workers and shoppers safe.
Some stores are taking things to the next level by using facial recognition technology. Facial recognition can give security a heads-up by identifying people who have previously shoplifted or are banned outright from the store.
credit: Dominic Lorrimer/https://www.smh.com.au

Facial Recognition With Your Coffee

One company is turning to facial recognition to reward good customers, not to catch the bad ones. Sydney’s Bahista Café has implemented the NoahFace tool, which quickly recognizes customers’ faces. Mounted on an iPad, it tracks repeat visits and signals workers to reward regulars with perks such as a free coffee or a special deal. It also helps employees remember the names and faces of their best customers. Customers have been receptive so far, as they don’t have to carry loyalty cards and can opt out of the program if they want to.
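The loyalty logic described above can be sketched in a few lines. This is a hypothetical illustration, not NoahFace’s actual implementation; the reward threshold and the function names are invented for the example.

```python
from collections import defaultdict

REWARD_EVERY = 10  # hypothetical threshold: every 10th visit earns a perk

visit_counts = defaultdict(int)

def record_visit(customer_id, opted_out=None):
    """Log a recognized face; return a reward message when one is due."""
    if opted_out and customer_id in opted_out:
        return None  # opted-out customers are never tracked
    visit_counts[customer_id] += 1
    if visit_counts[customer_id] % REWARD_EVERY == 0:
        return f"Reward {customer_id}: free coffee"
    return None
```

Calling `record_visit("alice")` on her tenth recognized visit would return the reward message; anyone on the opt-out list is simply never counted.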

Walmart is Developing Facial Recognition Tech to Spot Angry Shoppers

For Walmart, it’s not just about security or rewarding customers. It’s about collecting information and keeping customers happy which ensures that they are spending money in their stores. According to a patent filing, the largest brick-and-mortar retailer in the world is developing a facial recognition technology that can identify whether customers are unhappy or frustrated.
As reported by Business Insider, Walmart hopes the new solution will help stores respond proactively to customers’ issues. For instance, if the tech notices an unhappy shopper, it can alert a store associate to go over and investigate. The associate can attempt to fix the problem before the customer complains or goes to spend their money someplace else. Walmart actually trialed facial-recognition technology in 2015 to detect shoplifters and prevent theft but abandoned the program because it proved ineffective.

Tracking Spending Trends

The patent filing also revealed that the technology will allow for analysis of shoppers’ purchasing behavior over time, helping stores detect changes in spending habits driven by customer satisfaction or dissatisfaction.
A company like Walmart utilizing this tech can ask, “What in the store is making customers angry?” or “What is making them happy?” This would ultimately influence how it does business and arranges its stores. Over time the company can collect information on its customers to help tailor the shopping experience to fit specific needs.
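A crude version of this kind of spending-trend analysis can be sketched as a moving-average comparison: flag a customer whose recent average spend has shifted sharply from their earlier baseline. The window size and threshold below are invented for illustration; Walmart’s patent does not specify a method.

```python
def spending_shift(weekly_spend, window=4, threshold=0.25):
    """Compare a customer's recent average spend to their baseline and
    flag a sharp fractional change in either direction."""
    if len(weekly_spend) < 2 * window:
        return None  # not enough history to judge
    baseline = sum(weekly_spend[:-window]) / (len(weekly_spend) - window)
    recent = sum(weekly_spend[-window:]) / window
    change = (recent - baseline) / baseline
    if change <= -threshold:
        return "dissatisfied?"   # spending dropped sharply
    if change >= threshold:
        return "satisfied?"      # spending rose sharply
    return "stable"
```

For example, a customer whose weekly spend halves over the last month would be flagged for follow-up, while steady spending reads as "stable".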
Photo By: L.A. Shively

1984 or Business as Usual?

The problem is that artificial intelligence may be able to detect emotions but cannot accurately tell what is causing those emotions. A person may look frustrated, but it might be because they just lost their job, not because they are waiting in a long line. A person could appear happy, but it could have nothing to do with their shopping experience.
This goes far beyond the acceptance of security cameras to deter crime. Will customers be comfortable with retailers possibly knowing almost everything about them? Eventually, their emotions and spending habits will be connected to real-world identities.
After collecting a large amount of info over time, could some stores act on data and preconceived notions about their customers based on appearance, gender, race or income levels? What if you resemble another person who shoplifted and are mistakenly kicked out of the store? This could not only be embarrassing for the customer but open a retailer up to a barrage of lawsuits.

Robots May Soon Be Able to Smell Your Body Odor and Taste Your Food



In the near future, devices in your home may be able to tell if you haven’t showered in a few days or if your lunch has gone bad. Robots that can taste and smell could also be running around supermarkets making sure our groceries are perfect.
We’re already at the point where artificial intelligence can accurately recognize faces and voices. But how about smell and taste? It seems almost impossible; Smell-O-Vision never really made it at the movies, after all. Jokes aside, having artificial intelligence recognize pixels or sound waves is one thing; processing and identifying something like smell or taste takes the perfect combination of engineering, programming, biology, and chemistry.

credit: Aromyx

Aromyx and the EssenceChip™

Palo Alto-based Aromyx might be accomplishing the seemingly impossible. Partnered with the robotics-focused venture studio Rewired and working with scientists from Stanford, they have seemingly figured out how to reproduce the signals that our organs send to the brain when we smell or taste something. Called the EssenceChip™, it’s a disposable biochip that clones the 400-plus receptors from the nose and tongue. It then measures the taste and smell and presents the information in a digital readout.
As far as the process goes, a well plate is exposed to a smell or taste; the bio-assay absorbs the smell or taste molecules and activates a signal cascade exactly as in the human nose. To put it simply, Aromyx has put a nose and tongue on a computer chip. For an in-depth overview of how it works, check out the Aromyx website.

credit: Aromyx

Robots that Can Taste and Smell?

The purpose of the EssenceChip™ is to let customers measure, digitally capture, archive and edit tastes or scents. It can be used locally or in the field; there is no need to send anything out of house for analysis, as the data is read with a standard commercial plate reader. The data is then uploaded to the Aromyx cloud, a database of tastes and scents. The Aromyx Allegory software can be used for taste and scent identification, creation, comparison, ingredient substitution, and more. Some of Aromyx’s initial customers include chemical, agriculture, food, defense, and automotive companies.

“Aromyx is Google Maps for Taste and Scent. It provides the flavor directions to get to any scent or taste.” – Dr. J. Bruce German, Professor and Chemist, Food Science and Technology Dept., University of California at Davis


credit: Aromyx

The Next Breakthrough in Robotics and Artificial Intelligence?

The possibilities for something like the EssenceChip™ are endless. Just as pictures are edited in Photoshop, food chains could digitally manipulate taste and smell to get that perfect flavor. Supermarkets and restaurants could use the technology for impeccable quality control.
Imagine robots that can taste and smell working in your home. Your fridge could tell you the second your milk goes bad, or Alexa could let you know when your baby has soiled his or her diaper.
Along with improving the quality of our food and fragrances, the technology could make products less harmful to ourselves and the environment without affecting taste and smell. For instance, a popular perfume could be reformulated with less dangerous chemicals without changing the scent. Those French fries you get at McDonald’s could be made healthier and cheaper without sacrificing taste.

“As with the personal computer revolution of the 1980’s, and the birth of the world wide web of the 1990s, the digital taste and smell revolution is set to advance.” – Aromyx



Wednesday, August 29, 2018

This AI system can identify cancer tumours better than humans



Washington, Aug 25: Scientists have developed an artificial intelligence system that can accurately detect tiny specks of lung cancer in CT scans, which radiologists often have a difficult time identifying.

The AI system is about 95 per cent accurate, compared to 65 per cent when done by human eyes, researchers said.

"We used the brain as a model to create our system," said Rodney LaLonde, a doctoral candidate at the University of Central Florida in the US.
The approach is similar to the algorithms that facial-recognition software uses. It scans thousands of faces looking for a particular pattern to find its match.


The group fed more than 1,000 CT scans into the software they developed to help the computer learn to look for the tumours.


They had to teach the computer different things to help it learn properly. The system was taught to ignore other tissue, nerves and other masses it encountered in the CT scans and analyse lung tissues.
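The pattern-scanning at the heart of such a system rests on convolution: a small filter slides across the image and responds strongly wherever the underlying pixels match it. Below is a minimal pure-Python sketch of that operation on a toy "scan" with a single bright speck; it illustrates the principle only and is not the UCF team’s code.

```python
def convolve2d(image, kernel):
    """Slide `kernel` over `image` (lists of lists) and return the
    response map; high responses mark regions matching the pattern."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 5x5 "scan" with one tiny bright speck, and a 2x2 bright-spot detector.
scan = [[0] * 5 for _ in range(5)]
scan[2][3] = 1
kernel = [[1, 1], [1, 1]]
response = convolve2d(scan, kernel)
peak = max(max(row) for row in response)
```

A real CNN learns thousands of such filters from labeled scans instead of hand-coding them, and stacks many layers of them, but the scanning operation is the same.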


Researchers are fine-tuning the AI's ability to identify cancerous versus benign tumours.

Rise of the algorithms: How AI is already shaping our everyday lives

Sci-Fi movies have long played on our fear of artificially intelligent robot armies taking over the world, but hidden and often secret algorithms are already busy at work in society, delivering sometimes life-changing consequences.
Humanoid robots, capable of thinking, moving and talking, are the most common imagery when it comes to artificial intelligence, but incredibly powerful computer algorithms, which silently churn through oceans of data, also fall under the AI umbrella.
Computers are already making important decisions and influential judgements in the key societal pillars of justice, policing, employment and finance.
NSW Police are known to be using a controversial algorithm which claims to predict youth crime before it happens. But critics of NSW Police's Suspect Target Management Plan (STMP) system argue the software is racist, and unfairly blacklists and targets Aboriginal youths.
In the US an algorithm used by judges to assist parole and bail decisions has also been accused of race-bias against black defendants.
Facial recognition software is now helping recruitment firms make hiring decisions, judging whether a candidate's facial movements reveal the desired levels of trust, confidence and an ability to cope under pressure.
Algorithms and big data will go way beyond your subconscious facial tics to try and assist companies and officialdom learn more about you and your personality, and predict your behaviours.
Spelling mistakes in your emails, and the time of day you send those emails, can be harvested by highly complex algorithms to assess whether you can be trusted to repay a bank loan or not.
Ellen Broad, an Australian data expert and specialist in AI ethics, believes the biggest concern about the relentless rise of the algorithm is transparency.
Speaking to nine.com.au, Broad said many of the algorithms which make potentially life-altering decisions are developed by private companies who will fight tooth and nail to keep their proprietary software top secret.
The problem with that, Broad says, is people should have the right to know how and why these tightly guarded algorithms make a decision.
Private companies armed with trade secret algorithms are increasingly enmeshed in public institutions whose inner-workings have traditionally always been open to scrutiny by media and society.
A computer program called Compas, operating in the US justice system, recently became a lightning rod over this issue.
Designed to help judges assess bail applications, Compas drew fierce criticism after an investigation by Pro Publica in 2016 claimed the algorithm was racist and generated negative outcomes for black defendants.
Northpointe Inc, the company which built Compas, rejected the report but refused to divulge what datasets the algorithm analysed, or how it calculated if a judge should deem someone was a reoffending risk.
"One of the key criticisms of Compas was this is a software tool being used to make decisions that affects a person's life," Broad said.

Ellen Broad is an independent data consultant who wrote the book, Made by Humans: The AI Condition. (Supplied)
"Our justice system is built on principles like you have a right to know the charges against you, to know the evidence that will be laid out in relation to those charges, and you need the opportunity to challenge and present your own evidence.
"Compas uses hundreds of variables and you don't know how it works."
Northpointe said its algorithm had learned from a large dataset of convicted criminals to make predictions about who was likely to reoffend.
The Pro Publica investigation alleged the Compas program was found to over-predict black defendants as being likely to reoffend, and under-predict on white defendants.
NSW Police have run into similar accusations of racial bias with its STMP software, an intelligence-based risk assessment program that aims to prevent crime by targeting recidivist offenders.
Research led by UNSW in 2017 said STMP generated a kind of secret police blacklist which disproportionately singled out youth under the age of 25 and Aboriginal people.
The report said people, some as young as 13, were being repeatedly detained and searched while going about their everyday lives, and being visited at home at all hours of the day.
UNSW researchers said NSW Police would not divulge the inner workings of STMP. Nine.com.au contacted NSW Police for comment for this story, but have not received a response.
"We need to know how accurate these systems are before we allow programs to make highly important decisions for us," Broad said.
A computer algorithm called Security Risk Assessment Tool is used in Australian immigration detention centres to assess the security risk of asylum seekers, criminals and overstayers.
Because algorithms learn from the data humans feed it, there is a real risk of automating the exact cultural biases these programs are supposed to eliminate, Broad says.
"If there are structural inequalities in your underlying data, and unfair trends in real-world demographics and behaviours, there is only so much a machine can do."
Australian company Lodex is just one of many start-ups which have developed algorithms that can trawl through people's social media and email accounts to make predictive decisions.
Lodex founder Michael Phillipou told nine.com.au his company's algorithm analyses 12,000 "data points" to calculate credit scores that help banks and lenders decide if a customer will default on a loan.
Accurate credit scoring is all about data, Phillipou says.
Phillipou says someone's email inbox and the way they operate their email account are important factors in the Lodex algorithm.
If given access by loan applicants, Lodex will make judgements on your character and financial risk based on grammar and spelling mistakes in your emails, how long it takes you to reply to an email, what time of day you send mail and if you populate a subject field.
"If someone is not very timely in the way they respond, it provides an indication of something," Phillipou says.
"If someone is very active between the hours of 1am to 4am, it indicates something.
"Because of the proliferation of information available, be it on people's mobile phones, their laptops or social media, we can build highly predictive models," Phillipou says.
Phillipou says digital footprint data, which is used to create a "social score", is currently used to complement robust information held by traditional credit bureaus.
However, Phillipou forecasts that digital footprint data is set to become more and more important, and influential, to banks and lending institutions.
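A toy version of this kind of behavioral scoring can be sketched as a weighted sum of email-behavior features squashed into a 0–1 score by a logistic function. The features, weights and function name below are invented for illustration; Lodex’s actual 12,000-data-point model is proprietary and unknown.

```python
import math

# Hypothetical feature weights -- illustrative only, not Lodex's model.
WEIGHTS = {
    "spelling_errors_per_email": -0.8,
    "mean_reply_hours":          -0.3,
    "late_night_activity":       -0.5,  # 1 if most mail is sent 1am-4am
    "uses_subject_field":         0.6,
}
BIAS = 1.0

def social_score(features):
    """Squash a weighted sum of behavioral features into a 0-1 score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Under these made-up weights, a tidy correspondent scores higher than one with many spelling errors and slow, late-night replies, which is exactly the kind of correlation-not-causation judgement Broad warns about below.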
Since launching in late 2017, Phillipou says, more than $400m in loan auctions have been run on the Lodex app.
Broad, author of a book which explores our increasing reliance on AI, Made By Humans, is wary of companies using endless pools of data to make specific, targeted decisions.
She acknowledges a link could exist between spelling errors in emails and defaulting on a financial loan, but Broad warns: "That is not causation".
"Sometimes the more we try to use a dataset that is unrelated to the thing we are actually trying to measure the more risk we have of introducing noise or making [far-reaching] correlations."
Algorithms that can decide our liberty, shape our career prospects or bolster our financial security will only become more prevalent in the future.
Against that backdrop, one of the biggest AI questions we should be asking, Broad says, is what kind of Artificial Intelligence do we want making decisions?
"AI can't necessarily offer us a better and fairer future," she says.
One thing Artificial Intelligence is very good at, Broad says, is helping humans quantify and understand if our systems and institutions are biased.
Garbage In, Garbage Out is an old maxim in the computing world, expressing the idea that poor quality input will result in poor quality output.
Broad says ensuring and increasing the transparency of algorithms will be vital for modern society.

Monday, August 27, 2018

Changes in Technology and Human Life Because of AI.


Have you seen the films I, Robot and Terminator? Both films share a common storyline: the rise of machines and robots against the human mind. Yes, the robots were designed by the human brain and mind, and then those same robots outsmart and become enemies of the human race. But that is a long way in the future. At present, AI promises to save manual labor and valuable time. In this article on changes in technology and human life because of AI, we give you the details.
The AI industry is going to grow huge. By one estimate, it will reach $46 billion in 2020. Most important of all, the rise shows no signs of slowing down. In the next few years, AI will be the next best step to advance products and services, and there will also be an improvement in performance.
Does it mean AI will cause changes in technology and human life in the future? Let us look into the points one by one.
  1. Improvement In Efficiency and Output Because of AI
New technologies may have flaws. Take, for example, the automobile industry: many years went by before regulations for human safety were put in place. But when AI comes to the market, it can enhance results through efficiency. It can also give rise to new opportunities for revenue generation.
  2. Manual Labor Time
Different strata of society work in the form of manual labor. With the right technology, machines can complete half of those human tasks, so the human brain can focus on creative and interpersonal issues.
Businesses stand to benefit, as there will be a major decrease in operating costs. Automation will push human labor toward greener pastures, more related to conserving nature and developing greener technology.
  3. Strengthens The Economy
On one side are the optimists, who say AI will create more jobs. On the other, the pessimists counter that AI will destroy human jobs. Sounds like fact or fiction, doesn’t it? The industry is still in its nascent stage, so the job market will evolve gradually. With the right preparation, people can still work, but with greater efficiency and better precision. The human mind and AI will become a potent combination, and the two together are sure to become the workforce of the future.
  4. Can Lead To Monitoring With Precision
It is a fact that AI can be used in places where humans cannot survive. Just imagine the iron-rod manufacturing industries: do you think a worker can do manual jobs in that heat? A machine coupled with AI can work in the same environment with ease, thanks to its development and design. But yes, there is a challenge. It is a machine developed by man: you can program the working system to execute a task, but if a minor error occurs, there will be faults in all products. So close monitoring is essential to make the AI give its best performance.
  5. Human Lifestyle
In the future, more advanced technology will arise. There are already personal virtual assistants such as Siri and the snooze option in emails. In Western countries, AI has helped the home improvement industry in all quarters, for example in building greener homes, providing better security and much more. Even human health stands to benefit, as there may be robot nurses in hospitals.
  6. Telemedicine
AI is also used in medicine to prescribe treatment at speed. In the US, many hospitals advise telemedicine screenings, so patients in remote areas get access to doctors and can receive the best medical treatment.
With these smart machines, the human lifespan will increase along with the quality of life.
  7. Social Media Platforms
Social media and digital marketing are here to stay. Period. But there are reports of data sabotage. AI is designed by humans, and a small twist here and there can send the entire product for a toss. Have you seen the famous Tamil film Robot? The scientist designs a robot for positive reasons, while the villain turns it into a monster. This should not happen. Do you want an example? Facebook’s algorithm influenced an election in one of the most powerful nations of the world.
  8. AI in Agriculture
Are you, our loyal reader, surprised that we have included agriculture in the list? Come on, it must be a joke: where do AI and agriculture meet? No, it is not a joke. In fact, agriculture faces a shortage of manual laborers in many Asian countries. The future will bring agricultural tools with automation and efficiency, which is sure to make a positive impact on the agriculture market.
  9. AI in Call Centers
In a few years, call centers may no longer exist. The reason: AI. Chatbots have arrived, and they have made a tremendous change in customer care service. Many companies with call centers have automated 50% of customers’ questions; the other fifty percent, arising from new situations, is handled by professionals.
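The split between automated answers and human escalation can be sketched with simple keyword matching: the bot answers questions it recognizes and hands novel ones to an agent. The intents and canned answers below are hypothetical; production chatbots use far richer intent classifiers.

```python
# Hypothetical canned answers keyed by a trigger keyword.
CANNED_ANSWERS = {
    "opening hours": "We are open 9am-9pm, seven days a week.",
    "refund": "Refunds are processed within 5 business days.",
    "delivery": "Standard delivery takes 2-4 days.",
}

def handle_query(text):
    """Route a customer question to the bot or escalate to a human."""
    text = text.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in text:
            return ("bot", answer)
    return ("human", "Transferring you to an agent.")
```

A familiar question like "How do I get a refund?" is answered automatically; an unfamiliar one falls through to the human half of the workforce.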

  10. Energy and Mining

Consider deep-sea oil rigs. Coupled with machine learning, this industry can improve its drilling operations and draw accurate information from seismic vibrations. In all, the technology can help scientists with their analysis: they no longer have to manually identify equipment in need of repair, spot failures, or determine the location of new oil wells.
  11. AI in Retail
AI can definitely store information about customer preferences for businesses, giving them a competitive advantage over others who have not yet embraced the new technology. Retailers can tailor their advertising strategy to customer preferences and thereby win more customers.
Conclusion
There is a famous proverb: change is permanent and stagnation is death. The proverb applies all the more in the business field. To be honest, the mantra “innovate or perish” exists in all facets of life, not to mention business. Now AI has stepped even into the home improvement industry. Visualize the smart home appliances for two minutes: the smart fridge, smart washing machine, and smart mobile. Why are they needed? Villages have become cities, and urbanites have little time for themselves. In these circumstances, the human race is lucky to have these appliances.
The lifespan of elders has risen all over the globe, and in old age memory loss is a serious issue. Smart home appliances can help the elderly live their lives peacefully: there is no need for them to remember when to switch off a machine, as they can program alerts. In recent times, home maintenance companies have sprung up in major cities such as Bangalore, Pune and Hyderabad. So, if the washing machine develops a problem, you do not have to search for referrals. You can call the home maintenance company, a chatbot will answer your questions, and you can place a request for repair or service. A qualified professional from a washing machine service center in Hyderabad will then come to your home to rectify the problem.
Please note that adoption of AI will vary by industry. Till we meet again, please subscribe to our newsletter for future updates.

More efficient security for cloud-based machine learning

Novel combination of two encryption techniques protects private data, while keeping neural networks running quickly

A novel encryption method devised by MIT researchers secures data used in online neural networks, without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data.
Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks, such as, say, running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.
But what if there are leaks of private data? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validating) sluggish -- sometimes as much as a million times slower -- limiting their wider adoption.
In a paper presented at this week's USENIX Security Conference, MIT researchers describe a system that blends two conventional techniques -- homomorphic encryption and garbled circuits -- in a way that helps the networks run orders of magnitude faster than they do with conventional approaches.
The researchers tested the system, called GAZELLE, on two-party image-classification tasks. A user sends encrypted image data to an online server evaluating a CNN running on GAZELLE. After this, both parties share encrypted information back and forth in order to classify the user's image. Throughout the process, the system ensures that the server never learns any uploaded data, while the user never learns anything about the network parameters. GAZELLE ran 20 to 30 times faster than state-of-the-art secure-computation systems, while reducing the required network bandwidth by an order of magnitude.
One promising application for the system is training CNNs to diagnose diseases. Hospitals could, for instance, train a CNN to learn characteristics of certain medical conditions from magnetic resonance images (MRI) and identify those characteristics in uploaded MRIs. The hospital could make the model available in the cloud for other hospitals. But the model is trained on, and further relies on, private patient data. Because there are no efficient encryption models, this application isn't quite ready for prime time.
"In this work, we show how to efficiently do this kind of secure two-party communication by combining these two techniques in a clever way," says first author Chiraag Juvekar, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). "The next step is to take real medical data and show that, even when we scale it for applications real users care about, it still provides acceptable performance."
Co-authors on the paper are Vinod Vaikuntanathan, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory, and Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.
Maximizing performance
CNNs process image data through multiple linear and nonlinear layers of computation. Linear layers do the complex math, called linear algebra, and assign some values to the data. At a certain threshold, the data is outputted to nonlinear layers that do some simpler computation, make decisions (such as identifying image features), and send the data to the next linear layer. The end result is an image with an assigned class, such as vehicle, animal, person, or anatomical feature.
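One linear-then-nonlinear step of such a network can be sketched in plain Python: the linear layer is a matrix-vector product plus a bias (the linear algebra), and the nonlinear layer here is a ReLU, which simply zeroes out negative values. The toy weights below are arbitrary; this is an illustration of the layer structure, not GAZELLE's network.

```python
def linear(x, weights, bias):
    """Linear layer: matrix-vector product plus bias (the heavy math)."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    """Nonlinear layer: a simple per-value thresholding decision."""
    return [max(0.0, v) for v in x]

# One linear -> nonlinear step of a toy network.
x = [1.0, 2.0]
W = [[0.5, 0.25], [-1.0, 1.0]]
b = [0.0, 0.5]
hidden = relu(linear(x, W, b))
```

In GAZELLE, the expensive `linear` step is handled under homomorphic encryption on the server, while the cheap `relu`-style step is handled with garbled circuits on the user's side.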
Recent approaches to securing CNNs have involved applying homomorphic encryption or garbled circuits to process data throughout an entire network. These techniques are effective at securing data. "On paper, this looks like it solves the problem," Juvekar says. But they render complex neural networks inefficient, "so you wouldn't use them for any real-world application."
Homomorphic encryption, used in cloud computing, receives and executes computation all in encrypted data, called ciphertext, and generates an encrypted result that can then be decrypted by a user. When applied to neural networks, this technique is particularly fast and efficient at computing linear algebra. However, it must introduce a little noise into the data at each layer. Over multiple layers, noise accumulates, and the computation needed to filter that noise grows increasingly complex, slowing computation speeds.
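The homomorphic property itself is easy to demonstrate with Paillier, a classic additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is only an illustration of the property; GAZELLE uses lattice-based homomorphic encryption (which is where the noise comes from, and which Paillier does not have), and the small primes below are for demonstration, not security.

```python
import random

# Toy Paillier cryptosystem. Real deployments use ~2048-bit moduli.
p, q = 1000003, 1000033          # small primes, demo only
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)             # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)   # assume gcd(r, n) == 1 for these primes
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

def add_encrypted(c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    return (c1 * c2) % n2
```

A server holding only `encrypt(a)` and `encrypt(b)` can compute `add_encrypted(...)` without ever learning `a`, `b`, or their sum.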
Garbled circuits are a form of secure two-party computation. The technique takes an input from both parties, does some computation, and sends two separate inputs to each party. In that way, the parties send data to one another, but they never see the other party's data, only the relevant output on their side. The bandwidth needed to communicate data between parties, however, scales with computation complexity, not with the size of the input. In an online neural network, this technique works well in the nonlinear layers, where computation is minimal, but the bandwidth becomes unwieldy in math-heavy linear layers.
The MIT researchers, instead, combined the two techniques in a way that gets around their inefficiencies.
In their system, a user will upload ciphertext to a cloud-based CNN. The user must have garbled circuits technique running on their own computer. The CNN does all the computation in the linear layer, then sends the data to the nonlinear layer. At that point, the CNN and user share the data. The user does some computation on garbled circuits, and sends the data back to the CNN. By splitting and sharing the workload, the system restricts the homomorphic encryption to doing complex math one layer at a time, so data doesn't become too noisy. It also limits the communication of the garbled circuits to just the nonlinear layers, where it performs optimally.
"We're only using the techniques for where they're most efficient," Juvekar says.
Secret sharing
The final step was ensuring both homomorphic and garbled circuit layers maintained a common randomization scheme, called "secret sharing." In this scheme, data is divided into separate parts that are given to separate parties. All parties synch their parts to reconstruct the full data.
In GAZELLE, when a user sends encrypted data to the cloud-based service, it's split between both parties. Added to each share is a secret key (random numbers) that only the owning party knows. Throughout computation, each party will always have some portion of the data, plus random numbers, so it appears fully random. At the end of computation, the two parties synch their data. Only then does the user ask the cloud-based service for its secret key. The user can then subtract the secret key from all the data to get the result.
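Additive secret sharing itself is simple to sketch: a value is masked with a random number so each party's share looks random on its own, yet the shares sum back to the original, and the parties can add shared values without ever seeing them. This toy sketch uses a 32-bit modulus and is illustrative only, not GAZELLE's implementation.

```python
import random

MOD = 2 ** 32  # arithmetic shares live modulo a fixed public modulus

def share(value):
    """Split `value` into two additive shares, each random-looking alone."""
    r = random.randrange(MOD)            # the random mask ("secret key")
    return r, (value - r) % MOD

def reconstruct(share_a, share_b):
    """Only when both parties sync their shares is the value revealed."""
    return (share_a + share_b) % MOD

def add_shared(a_shares, b_shares):
    """Each party adds its own shares locally; nothing is revealed."""
    return ((a_shares[0] + b_shares[0]) % MOD,
            (a_shares[1] + b_shares[1]) % MOD)
```

Neither party can learn anything from its half alone; subtracting the mask at the end, as the article describes, is exactly the `reconstruct` step.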
"At the end of the computation, we want the first party to get the classification results and the second party to get absolutely nothing," Juvekar says. Additionally, "the first party learns nothing about the parameters of the model."
Story Source:
Materials provided by Massachusetts Institute of Technology. Original written by Rob Matheson. Note: Content may be edited for style and length.