
Friday, September 14, 2018

How to extract building footprints from satellite images using deep learning

I work with our partners and other researchers inside Microsoft to develop new ways to use machine learning and other AI approaches to solve global environmental challenges. In this post, we highlight a sample project of using Azure infrastructure for training a deep learning model to gain insight from geospatial data. Such tools will finally enable us to accurately monitor and measure the impact of our solutions to problems such as deforestation and human-wildlife conflict, helping us to invest in the most effective conservation efforts.


Applying machine learning to geospatial data


When we looked at the most widely-used tools and datasets in the environmental space, remote sensing data in the form of satellite images jumped out.

Today, subject matter experts working on geospatial data go through such collections manually with the assistance of traditional software, performing tasks such as locating, counting and outlining objects of interest to obtain measurements and trends. As high-resolution satellite images become readily available on a weekly or daily basis, it becomes essential to engage AI in this effort so that we can take advantage of the data to make more informed decisions.

Geospatial data and computer vision, an active field in AI, are natural partners: the work involves visual tasks that traditional algorithms cannot automate, an abundance of labeled data, and even more unlabeled data waiting to be understood in a timely manner. The geospatial data and machine learning communities have joined efforts on this front, publishing several datasets such as Functional Map of the World (fMoW) and the xView Dataset for people to create computer vision solutions on overhead imagery.

An example of infusing geospatial data and AI into applications that we use every day is using satellite images to add street map annotations of buildings. In June 2018, our colleagues at Bing announced the release of 124 million building footprints in the United States in support of the Open Street Map project, an open data initiative that powers many location based services and applications. The Bing team was able to create so many building footprints from satellite images by training and applying a deep neural network model that classifies each pixel as building or non-building. Now you can do exactly that on your own!

With the sample project that accompanies this blog post, we walk you through how to train such a model on an Azure Deep Learning Virtual Machine (DLVM). We use labeled data made available by the SpaceNet initiative to demonstrate how you can extract information from visual environmental data using deep learning. For those eager to get started, you can head over to our repo on GitHub to read about the dataset, storage options and instructions on running the code or modifying it for your own dataset.

Semantic segmentation


In computer vision, the task of masking out pixels belonging to different classes of objects, such as background or people, is referred to as semantic segmentation. The semantic segmentation model we are training (a U-Net implemented in PyTorch, different from what the Bing team used) can be applied to other tasks in analyzing satellite, aerial or drone imagery: you can use the same method to extract roads from satellite imagery, infer land use and monitor sustainable farming practices, as well as in a wide range of other domains, such as locating lungs in CT scans for lung disease prediction or evaluating a street scene.
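The full model used in the sample project lives in the GitHub repo; purely as an illustration of the architecture family, here is a minimal two-level U-Net-style network in PyTorch. The channel sizes, depth and chip size are arbitrary choices for the sketch, not the repo's configuration; it outputs per-pixel logits for the three classes used later (background, building interior, building boundary).

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Illustrative two-level U-Net for 3-class segmentation
    (background, building interior, building boundary)."""
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # (N, num_classes, H, W) logits

# Per-pixel class scores for a batch of two 256x256 RGB chips
logits = TinyUNet()(torch.randn(2, 3, 256, 256))
print(logits.shape)  # torch.Size([2, 3, 256, 256])
```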

Illustration from slides by Tingwu Wang, University of Toronto

Satellite imagery data


The data from SpaceNet consists of 3-channel, high-resolution (31 cm) satellite images over four cities where buildings are abundant: Paris, Shanghai, Khartoum and Vegas. In the sample code we make use of the Vegas subset, consisting of 3854 images of 650 x 650 pixels. About 17.37 percent of the training images contain no buildings. Since this is a reasonably small percentage of the data, we did not exclude or resample these images. In addition, 76.9 percent of all pixels in the training data are background, 15.8 percent are interior of buildings and 7.3 percent are border pixels.

Original images are cropped into nine smaller chips with some overlap using utility functions provided by SpaceNet. The labels are released as polygon shapes defined using well-known text (WKT), a markup language for representing vector geometry objects on maps. These are transformed to 2D labels of the same dimension as the input images, where each pixel is labeled as one of background, boundary of building or interior of building.
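SpaceNet's own utilities handle this conversion; purely to illustrate the idea, the sketch below rasterizes WKT polygons (assumed to already be expressed in pixel coordinates) into a three-class mask using shapely and rasterio. The `boundary_width` knob and the example polygon are made up for this sketch.

```python
import numpy as np
from shapely import wkt
from rasterio import features

def wkt_to_mask(wkt_strings, size=650, boundary_width=2):
    """Rasterize building polygons (already in pixel coordinates) into a
    label mask: 0 = background, 1 = building interior, 2 = building boundary."""
    polygons = [wkt.loads(s) for s in wkt_strings]
    mask = np.zeros((size, size), dtype=np.uint8)
    if not polygons:
        return mask  # empty chip: all background
    # Burn interiors first, then overwrite a thin band around each outline
    interiors = [(p, 1) for p in polygons]
    boundaries = [(p.boundary.buffer(boundary_width), 2) for p in polygons]
    mask = features.rasterize(interiors, out_shape=(size, size), fill=0, dtype="uint8")
    mask = features.rasterize(boundaries, out=mask)
    return mask

# Hypothetical single-building example
labels = wkt_to_mask(["POLYGON ((100 100, 200 100, 200 180, 100 180, 100 100))"])
print(np.unique(labels))  # [0 1 2]
```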


Some chips are partially or completely empty, like the examples below. This is an artifact of the original satellite images, and the model should be robust enough not to propose building footprints in empty regions.


Training and applying the model


The sample code contains a walkthrough of carrying out the training and evaluation pipeline on a DLVM. The following segmentation results are produced by the model at various epochs during training for the input image and label pair shown above. This image features buildings with roofs of different colors, roads, pavements, trees and yards. We observe that initially the network learns to identify edges of building blocks and buildings with red roofs (different from the color of roads), followed by buildings of all roof colors after epoch 5. After epoch 7, the network has learnt that building pixels are enclosed by border pixels, separating them from road pixels. After epoch 10, smaller, noisy clusters of building pixels begin to disappear as the shape of buildings becomes more defined.


A final step is to produce the polygons: all pixels predicted to be building boundary are reassigned to the background class, isolating blobs of building pixels. Blobs of connected building pixels are then described in polygon format, subject to a minimum polygon area threshold, a parameter you can tune to reduce false-positive proposals.
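A minimal sketch of that post-processing step, assuming a predicted mask with classes 0/1/2 for background/interior/boundary and using rasterio's shape extraction; the repo's implementation may differ in detail.

```python
import numpy as np
from rasterio import features
from shapely.geometry import shape

def mask_to_polygons(pred, min_area=200):
    """Turn a predicted label mask (0 = background, 1 = interior, 2 = boundary)
    into building polygons. Boundary pixels are treated as background so that
    touching buildings separate into distinct blobs."""
    building = (pred == 1).astype(np.uint8)
    polygons = []
    # features.shapes yields (geometry, value) for each connected region
    for geom, value in features.shapes(building, mask=building.astype(bool)):
        poly = shape(geom)
        if poly.area >= min_area:  # area threshold in square pixels
            polygons.append(poly)
    return polygons

# Usage on a single predicted chip (per-pixel class indices):
# pred = logits.argmax(dim=1)[0].cpu().numpy()
# footprints = mask_to_polygons(pred, min_area=200)
```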

Training and model parameters


There are a number of parameters for the training process, the model architecture and the polygonization step that you can tune. We chose a learning rate of 0.0005 for the Adam optimizer (default settings for other parameters) and a batch size of 10 chips, which worked reasonably well.
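For concreteness, here is roughly what that training configuration might look like in PyTorch, reusing the toy U-Net sketched earlier and random stand-in tensors in place of the real SpaceNet chips and masks; this is an illustrative setup, not the repo's training script.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 20 random 3-channel 256x256 chips with per-pixel labels in {0, 1, 2}.
# In the real pipeline these come from the SpaceNet chips and rasterized masks.
chips = torch.randn(20, 3, 256, 256)
masks = torch.randint(0, 3, (20, 256, 256))
loader = DataLoader(TensorDataset(chips, masks), batch_size=10, shuffle=True)  # 10 chips per batch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyUNet().to(device)  # the sketch model from above; swap in the repo's U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)  # other Adam params at defaults
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)  # y holds per-pixel class indices (N, H, W)
        loss.backward()
        optimizer.step()
```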

Another parameter unrelated to the CNN part of the procedure is the minimum polygon area threshold below which blobs of building pixels are discarded. Increasing this threshold from 0 to 300 square pixels causes the false positive count to decrease rapidly as noisy false segments are excluded. The optimum threshold is about 200 square pixels.

The weight given to each of the three classes (background, boundary of building, interior of building) when computing the total loss during training is another parameter to experiment with. We found that giving more weight to the interior-of-building class helps the model detect significantly more small buildings (see the figure below).
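In PyTorch this kind of re-weighting is a one-line change to the loss. The snippet below assumes the class index order (background, building interior, building boundary) and the 1:8:1 ratio discussed here; adjust the order to match how your labels are encoded.

```python
import torch

# Assumed class order: 0 = background, 1 = building interior, 2 = building boundary.
# The 1:8:1 weighting up-weights interior pixels, as described above.
class_weights = torch.tensor([1.0, 8.0, 1.0])
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)
# loss = criterion(logits, labels)  # logits: (N, 3, H, W), labels: (N, H, W)
```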


Each plot in the figure is a histogram of building polygons in the validation set by area, from 300 square pixels to 6000. The count of true positive detections in orange is based on the area of the ground truth polygon to which the proposed polygon was matched. The top histogram is for weights in ratio 1:1:1 in the loss function for background : building interior : building boundary; the bottom histogram is for weights in ratio 1:8:1. We can see that towards the left of the histogram where small buildings are represented, the bars for true positive proposals in orange are much taller in the bottom plot.

Last thoughts


Building footprint information generated this way could be used to document the spatial distribution of settlements, allowing researchers to quantify trends in urbanization and perhaps the developmental impact of climate change such as climate migration. The techniques here can be applied in many different situations and we hope this concrete example serves as a guide to tackling your specific problem.

Another piece of good news for those dealing with geospatial data is that Azure already offers a Geo Artificial Intelligence Data Science Virtual Machine (Geo-DSVM), equipped with ESRI’s ArcGIS Pro Geographic Information System. We also created a tutorial on how to use the Geo-DSVM for training deep learning models and integrating them with ArcGIS Pro to help you get started.

Finally, if your organization is working on solutions to address environmental challenges using data and machine learning, we encourage you to apply for an AI for Earth grant so that you can be better supported in leveraging Azure resources and become a part of this purposeful community.

Thursday, September 13, 2018

Industrial Internet of Things: what, how and why?


There’s a rather new concept floating around and if you had a tough time understanding the Internet, then the Internet of Things and Industrial Internet of Things might pose an even bigger challenge. Also, no final definition of these concepts has seen the light of day yet, even though people have been talking about them for quite some time.
Having access to the world through a device is yesterday’s news, but what if this device could connect to another device and ‘talk’ back and forth? That is, gathering, processing and finally exchanging information even without active human intervention. Does this sound like a good idea for industry and/or everyday life? Connectivity, all-around connectivity that is, seems to be what the future has in store for us.
The final definition of IoT will not be given by what it can do, but by what it can be made to do. Namely, connecting a couple of machines in a network to the internet is no big deal with today’s technology; the goal is to have them achieve something as a result of this interconnectivity. Anything or indeed everything could be sent through this channel, what matters is what we receive back.
From tracking the whereabouts of your cat to automating complex industrial processes, all of it could be done from this platform. As long as your device is connected to the internet and its configuration permits it to gather, send and process data, we could say that it is a part of the IoT.
All of this leads of course to a digital transformation. The field of interest for us is, as always, the industrial one. Bosch and SAS for example describe the IoT as more or less a system based on devices (be they wearable or big production machines) that are fitted with sensors that can gather data and intelligently act on it in order to produce new business models, new knowledge and finally as a result: new services.
Think about the Industrial Internet of Things as something less personal, with larger-scale implications. The sectors it usually covers are manufacturing, transportation, oil and gas, energy/utilities and many others that deal with big business solutions. It all started with a focus on optimization and automation, and the Industrial Internet of Things market as a whole is expected to reach USD 123.89 billion by 2021. Imagine a mix of machines, computers and human operators all working together with data in order to transform their business into something better.
The biggest industry to be impacted by this trend is manufacturing, also the biggest spender on software, hardware, services and connectivity. In second place we have the transportation sector, largely investing in advanced communication and monitoring systems. Last but not least, the energy and utilities sector focuses on oil and gas exploration and the smart grid, which is key to supply and to network transmission and distribution. Apart from the big three, it’s worth noting the application of the Industrial Internet of Things in healthcare, robotics and mining.
Since we’re talking about internet and industry, another concept that hits home is Industry 4.0. Regarded as the 4th major industrial revolution it really is about the aforementioned digital transformation, a decided shift towards cyber-physical systems.
The real goal of this new concept is finally customization, presenting every industry with means to personalize production, servicing and producer/consumer interaction, apart from the already mentioned cost efficiency and innovative services. In this aspect the IoT serves as the binding interface between all these systems: cloud computing, big data, artificial intelligence, data communication, programmable logic controllers and many others.
But is everybody on board with this new era, or, an even better question: why wouldn’t they be? Apparently, the main reason is a lack of skills. Seeing as the digital revolution is already here, maybe it’s time to invest in better training strategies or even look outside the box, as strategic partnerships could provide access to all the necessary know-how. Another concern is security. In a free-for-all network, you would be right to worry about who may have access to your data.

New MIT Robot Can Delicately Handle Objects It’s Never Seen Before



Robots in factories are really good at picking up objects they’ve been pre-programmed to handle, but it’s a different story when new objects are thrown into the mix. To overcome this frustrating inflexibility, a team of researchers from MIT devised a system that essentially teaches robots how to assess unfamiliar objects for themselves.
As it stands, engineers have basically two options when it comes to developing grasping robots: task-specific learning and generalized grasping algorithms. As the name implies, task-specific learning is connected to a particular job (e.g. pick up this bolt and screw it into that part) and isn’t typically generalizable to other tasks. General grasping, on the other hand, allows robots to handle objects of varying shapes and sizes, but at the cost of being unable to perform more complicated and nuanced tasks.
Today’s robotic grasping systems are thus either too specific or too basic. If we’re ever going to develop robots that can clean out a garage or sort through a cluttered kitchen, we’re going to need machines that can teach themselves about the world and all the stuff that’s in it. But in order for robots to acquire these skills, they have to think more like humans. A new system developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) takes us one step closer to that goal.
It’s called Dense Object Nets, or DON. This neural network generates an internal impression, or visual roadmap, of an object following a brief visual inspection (typically around 20 minutes). This allows the robot to acquire a sense of an object’s shape. Armed with this visual roadmap, the robot can then go about the task of picking up the object—despite never having seen it before. The researchers, led by Peter Florence, will present this research next month at the Conference on Robot Learning in Zürich, Switzerland, but for now you can check out their paper on the arXiv preprint server.
During the learning phase, DON views an object from multiple angles. It recognizes specific spots, or points, on the object, and maps all of an object’s points to form an overall coordinate system (i.e. the visual roadmap). By mapping these points together, the robot gets a 3D impression of the object. Importantly, DON isn’t pre-trained with labeled datasets, and it’s able to build its visual roadmap one object at a time without any human help. The researchers refer to this as “self-supervised” learning.
The DON system picking up a brown shoe.
Image: Tom Buehler/CSAIL
Once training is complete, a human operator can point to a specific spot on a computer screen, which tells the robot where it’s supposed to grasp onto the object. In tests, for example, a Kuka IIWA LRB robot arm lifted a shoe by its tongue and a stuffed animal by its ear. DON is also capable of classifying objects by type (e.g. shoes, mugs, hats), and can even discern specific instances within a class of objects (e.g. discerning a brown shoe from a red shoe).
In the future, a more sophisticated version of DON could be used in a variety of settings, such as collecting and sorting objects at warehouses, working in dangerous settings, and performing odd clean-up tasks in homes and offices. Looking ahead, the researchers would like to refine the system such that it’ll know where to grasp onto an object without human intervention.
Researchers have been working on computer vision for the better part of four decades, but this new approach, in which a neural net teaches itself to understand the 3D shape of an object, seems particularly fruitful. Sometimes, the best approach is to make machines think like humans.

A plan to advance AI by exploring the minds of children

Photo of Josh Tenenbaum in front of a busy whiteboard

The next big breakthroughs in artificial intelligence may depend on exploring our own minds.

So says Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and is the head of a major new AI project called the MIT Quest for Intelligence. The project brings computer scientists and engineers together with neuroscientists and cognitive psychologists to explore research that might lead to fundamental progress in artificial intelligence. Tenenbaum outlined the project, and his vision for advancing AI, at EmTech, a conference held at MIT this week by MIT Technology Review.
"Imagine we could build a machine that starts off like a baby and learns like a child," he said. "If we could do this it’d be the basis for artificial intelligence that is actually intelligent, machine learning that could actually learn.”
Some stunning advances have been made in AI in recent years, but these have largely been built upon a handful of key breakthroughs in machine learning, especially large, or deep, neural networks. Deep learning has, for instance, given computers the ability to recognize words in speech and faces in images as accurately as a person can. Deep learning also underpins spectacular progress in game-playing programs, including DeepMind’s AlphaGo, and it has contributed to improvements in self-driving vehicles and robotics. But they are all missing something.
"None of these systems are truly intelligent," he said. "None of them have the flexible, common sense, general intelligence of a two year old, or even a one year old. So what’s missing? What’s the gap?"
Tenenbaum’s research focuses on exploring cognitive science in order to understand human intelligence. His work has, for example, explored how even small children are able to visualize aspects of the world using a kind of innate 3-D model. This gives humans greater instinctive understanding of the physical world than a computer or robot has. "Children’s play is really serious business," he said.  "They’re experiments. And that’s what makes humans the smartest learners in the known universe.”
Tenenbaum has also done groundbreaking work developing computer programs capable of mimicking some of the more elusive aspects of the human mind, often using probabilistic techniques. For instance, in 2015 he and two other researchers created computer programs capable of learning to recognize new handwritten characters, as well as certain objects in images, after seeing just a few examples. This is important because the best machine-learning programs typically require huge quantities of training data. iSee, a self-driving-car company that draws inspiration from this research, was spun out of Tenenbaum’s lab last year.
The Quest for Intelligence, announced in February, also seeks to explore the societal impact of artificial intelligence. This means accounting for the technology’s fundamental limitations or shortcomings, as well as issues such as algorithmic bias and explainability. 
Tenenbaum notes that the original vision for artificial intelligence, a vision that is now more than 50 years old, sought to draw inspiration from human intelligence, but without much scientific grounding. “The fields of cognitive science and neuroscience are now more mature,” he says. “This should make this project special.”

Running quantum algorithms in the cloud just got a lot faster




Quantum computers could one day perform calculations beyond the reach of even the most powerful classical supercomputer, but for now building and maintaining these machines remains immensely expensive and difficult.

So over the past few years, the nascent industry has begun to make some of the relatively few quantum machines in existence available to researchers and businesses via the computing cloud. A startup called Rigetti Computing has just taken the wraps off a new Quantum Cloud Service (QCS) that builds on its existing offering, which includes Forest, a software toolkit for quantum programming in the cloud. There’s a $1 million prize for the first person or team using QCS to demonstrate that a quantum machine is capable of showing what the company calls "quantum advantage".
Rigetti defines this as showing that a quantum machine can come up with a higher quality, faster, or cheaper solution to an important and valuable  problem than a classical one can. (Rigetti says details of the competition will be unveiled on October 30.)
The company also recently unveiled the world’s most powerful quantum processor, a 128-qubit model that tops the previous record holder, Google’s 72-qubit Bristlecone chip (see the MIT Technology Review qubit counter). QCS users will initially be limited to a 16-qubit chip, however. The service will also be limited to certain customers and partners of Rigetti at first, before becoming more widely available later this year.
The reason there’s so much excitement around quantum computing is that unlike traditional machines, which use standard digital bits that can represent either 1 or 0, qubits can be both at the same time. Adding just a few extra qubits to a machine—and linking them via a phenomenon known as “entanglement”—creates exponential leaps in computing power (see “Serious quantum computers are finally here. What are we going to do with them?”).
As the technology develops, quantum computing could lead to significant advances in numerous fields, from chemistry and materials science to nuclear physics and machine learning.

Speed trap

Thomas Papenbrock, a nuclear physicist at the University of Tennessee, has used both IBM’s and Rigetti’s cloud services to compute the binding energy of the deuteron, a particle consisting of a proton and neutron that forms the center of the deuterium (or heavy hydrogen) atom. Although it’s possible to do this with classical computers, Papenbrock says he’s keen to use quantum machines through the cloud in order to learn more about their potential.
To run such experiments, researchers often program their own classical computers with hybrid quantum algorithms that then use application programming interfaces, or APIs, to call on quantum machines in the cloud for specific bits of a calculation. The results are then shipped back to the classical machines.
Rigetti says its own team, and users of its existing cloud service, found that this approach created latency issues, slowing down algorithms’ performance.
QCS tackles the problem with a data center containing both quantum computers and classical ones in a system optimized to run entire hybrid algorithms. The firm says that over the next few months, quantum algorithms will run 20 to 50 times faster on its QCS than on its current cloud setup, and significantly faster beyond that.
The service also comes pre-configured with Forest and other tools to make it easy for researchers to get experiments up and running. “We’re really shortening the learning loop so people can just start testing and programming today,” says Chad Rigetti, the company’s CEO and founder.
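To give a flavor of what programming against Forest looks like, here is a minimal pyQuil sketch that prepares a two-qubit Bell state and samples it against a local quantum virtual machine; on QCS you would name a real QPU lattice instead. This is an illustrative example, not code from Rigetti's announcement, and it assumes the local QVM and compiler services are running.

```python
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT

# Prepare a 2-qubit Bell state: H on qubit 0, then CNOT from 0 to 1
program = Program(H(0), CNOT(0, 1))

# "2q-qvm" targets a local quantum virtual machine; on QCS, substitute a QPU lattice name
qc = get_qc("2q-qvm")

# Sample the circuit 100 times; results are arrays of 0/1 outcomes per qubit
bitstrings = qc.run_and_measure(program, trials=100)
print(bitstrings[0], bitstrings[1])  # the two qubits' outcomes should be correlated
```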
Though QCS will initially give researchers access only to the company’s 16-qubit chip, eventually its latest one will be accessible via the cloud too, the company says. The prospect of more powerful quantum processors excites researchers like Papenbrock. “With access to a 128-qubit chip, we could solve some fantastic problems,” he says.

Cloud moves

The new service will give even more researchers access to relatively advanced quantum computing and keep Rigetti in the front row of a rapidly expanding quantum cloud movement.
IBM already lets members of its business-focused IBM Q Network community access 20-qubit machines via the cloud, and provides free access to 5- and 16-qubit machines through an initiative called the IBM Quantum Experience. Dario Gil, the chief operating officer of IBM Research, says that some 97,000 users have run a total of 5.8 million experiments on the latter service since it launched in 2016.
At a recent conference, Diane Greene, the head of Google’s cloud business, said the company is letting a few customers have access to a cloud-based quantum service, and Asian tech firms like Japan’s Fujitsu and China’s Alibaba have also joined the quantum cloud club.

Pentagon pledges $2 billion for AI research

Getty Images
The US Department of Defense will put up to $2 billion towards artificial intelligence research over the next five years, the Washington Post reports. Steven Walker, director of the Defense Advanced Research Projects Agency (DARPA), announced the plan today at a symposium outside of Washington, DC. He said the agency wants to look into "how machines can acquire human-like communication and reasoning capabilities" and will fund dozens of new research projects going forward.
"This is a massive deal. It's the first indication that the United States is addressing advanced AI technology with the scale and funding and seriousness that the issue demands," Gregory Allen, an adjunct fellow with the think tank the Center for a New American Security, told the Washington Post. "We've seen China willing to devote billions to this issue, and this is the first time the US has done the same."
DARPA said it's looking to fund AI projects tackling a range of issues including security clearance vetting, reducing power needs for military machines and explainable AI, which will allow individuals to better understand the AI they're using. The move follows the recent establishment of the Joint Artificial Intelligence Center, created to help the Department of Defense "pursue AI applications with boldness and alacrity while ensuring a strong commitment to military ethics and AI safety."
While the Pentagon has worked with leading tech companies on its AI efforts, pushback has led some projects to be discontinued. Notably, Google's Project Maven generated outcry inside the company and it ultimately chose not to renew its contract with the Pentagon.

Tuesday, September 11, 2018

Increasing Importance of AI in Customer-Facing Industries Like Banking, Retail, Media, Cosmetics and Healthcare


Emerging technology trends clearly point to a future of screen-less interactions between businesses and consumers, with voice, augmented and virtual reality, wearable devices, and artificial intelligence gradually but definitively removing the traditional graphical user interface (GUI) from the equation. The next decade is expected to be even more disruptive in terms of how customers interact with brands.

A closer look at the consumer landscape reveals irrefutable enthusiasm for artificial intelligence (AI) compared to other upcoming technologies. However, the technology is still in the experimental phase. Even though the majority of enterprise leaders consider AI to be a business advantage, many organizations remain reluctant to trust AI, to the extent of deferring implementation, and hence have yet to benefit from the technology’s promising capabilities. For AI to be fully functional, it needs investment at the enterprise level, and many professionals believe their organizations are not setting aside sufficient budget for this much-needed step-up.

Though adoption of AI has not been universal so far, the technology is already changing the way customers interact with brands, and it is emerging as a critical component to how forward-thinking companies are delivering exceptional customer experiences. Consider the growth in deployment of chatbots and AI-powered virtual assistants, both of which are helping companies learn more about their customers to enhance personalization and the overall experience. In fact, by 2025, roughly 95% of customer interactions are estimated to be aided by AI technology.

It’s clear that AI is going to be at the focal point of most organizations’ future customer experience strategies, but many companies are struggling today to keep up with ever-growing consumer expectations. The banking industry stands out in that it considers AI to be more of a differentiator than the all-industries average. If a bank has, say, 50,000 customers for each of its commercial branches, it’s next to impossible to get to know and/or interact with each one. And yet, with banks closing branches at the fastest pace in decades, can the industry really afford to cut back on customer experience efforts? The clear answer is no.

Among a bank’s millions of customers, there are likely hundreds of loyal, revenue-generating customers, but humans’ limited ability to scan through customer data and interact at scale prevents banks from responding at a pace that keeps up with customer expectations and from properly nurturing those relationships.

Application of Real Customer Data is Vital

For organizations in customer-facing industries in particular, such as banking, retail, media, cosmetics and healthcare, AI is an essential technology that needs to be implemented across all customer experience initiatives. By automating customer experiences with AI, organizations can serve more customers at greater scale and more effectively, while also taking on the manual, time-consuming, and sometimes impossible work described above. Customer interactions can be automated through AI-powered virtual assistants and self-service websites, or AI can simply be used to augment support for human agents at call centres.

HDFC Bank Ltd., for example, is using Eva, an AI-powered banking chatbot, to let customers ask questions about financial matters. The service allows customers to pose questions for the AI assistant to answer, boosting engagement with customers.

Later this year, L’Oreal will roll out a new service, a digital beauty assistant who helps you test products to give you a sense of what would work for your face. Using augmented reality and live streaming technologies, the company is aiming to extend and digitize its relationship with consumers by bringing the personalized makeup counter experience into your home. With a demonstration at the Cannes Lions Festival of Creativity, L’Oreal showcased how it will soon allow consumers of its NYX Professional Makeup brand, the first of the company’s brands to try this new approach, to book live streaming consulting sessions with beauty assistants (likely the ones who work for the brand’s stores already) who would then work with those consumers just as they would at the makeup counter, only digitally.

No matter the application, AI-powered customer interactions that are based on true knowledge and a thorough understanding of a customer’s wants and needs can be a huge competitive advantage.

Maintaining relationships with valuable customers depends on making sure there’s a consistent progression of activities related to each individual customer, and it’s equally important that customers gain convenience and control over their experiences. By having immediate access to perpetually relevant customer data and using that data to guide interactions with virtual assistants, organizations can transform their customers’ experiences and promote a positive brand image by eliminating frustrating wait times, delivering helpful answers to queries and even offering apt new product or service suggestions, all on the fly in real time.

Use of Omnichannel Data is Valuable

One of the most expensive mistakes an organization can make when applying AI to customer experience initiatives is failing to personalize customer interactions in a timely and contextual manner. For example, if virtual assistants aren’t equipped with sufficient customer data, they are forced to fall back on basic, scripted questions. This leads to poor, limited answers in response, leaving customers dissatisfied and organizations with little or no new data to improve customer experiences in the future.

Equally important is the ability to go beyond using historical data and to make use of real-time data to make better, informed business decisions, which is something we still see companies struggling with today. Another critical aspect is taking the time to collect and construct a customer’s preferences from all available data sources. For instance, both real-time and behavioral data need to be harnessed to drive AI actions and make decisions that lead to more connected customer experiences. In fact, feeding omnichannel intelligence into AI applications is what can help organizations better anticipate future needs and deliver the experiences customers are seeking. Companies with extremely strong omnichannel customer engagement retain on average 89% of their customers, compared to 33% for companies with weak omnichannel customer engagement. 77% of strong omnichannel companies store customer data across channels, compared to 48% for weak omnichannel companies.

Furthermore, by applying integrated customer preferences to AI, the technology can gain the ability to become more intelligent over time, pulling data from web interactions to automatically power more relevant, personalized interactions.

Adopt AI Practices to Deliver Enriched Experiences

A critical component of an effective customer experience is anticipating the future needs of each customer. By taking proactive measures based on behavior patterns, market trends and user interactions, organizations can deliver more personalized, unique and memorable experiences across multiple channels. In turn, customers will feel more valued and understood.

Research shows that, so far, successful companies are more than twice as likely as their peers to be incorporating AI for marketing purposes. That’s why we’re witnessing an increasing number of customer-facing organizations adopting AI practices. The technology’s ability to deliver actionable customer experience measures at scale and in real time creates enormous business value. By removing instances of human subjectivity or manual data intervention, AI has the unique capability to power relevant, timely and contextually aware interactions with each and every customer.