If you have seen the movie ‘Iron Man’, you might remember Tony Stark’s home computing system J.A.R.V.I.S.: an artificially intelligent system that took care of everything from the home’s heating and cooling to Stark’s hot rod in the garage. It looked pretty great on screen, and unlike AI technologies today, Stark’s AI assistant:
- Could talk to people in natural language (kind of like people speak to each other)
- Was intelligent enough to use context (to crack jokes, for example)
- Exhibited seamless integration with the objects it could control
Recently, Mark Zuckerberg wrote the code for another virtual assistant (also named Jarvis) to automate his home. While Mark’s tool (controlled via an iOS app) has a long way to go before it can emulate the movie’s J.A.R.V.I.S., it does come close, and in the process it highlights the challenges that app developers face when trying to integrate AI capabilities into their projects.
In this post, we will discuss these challenges, using the development of Mark’s AI-powered digital assistant as context.
1 – Processing Natural Language
Current machine learning software uses natural language processing (NLP) to convert text into data. However, to an artificially intelligent machine, natural language is nothing but patterns of data that it learns from and responds to. We can’t truly say that it ‘comprehends’ phrases or sentences as naturally as people do, at least not yet.
In his blog post detailing the development of Jarvis, Mark Zuckerberg explains that to make the artificial assistant talk with him “like anyone else”, he had to make it understand word-based patterns by:
- Using text messages to communicate
- Giving it the ability to speak
- Having it translate the speech into text that it could ‘read’
Any context that couldn’t be ‘taught’ to Jarvis through this structured code was incomprehensible to it. This is a glaring example of the challenges similar technologies face today. For instance, voice-controlled tools like ‘Ok Google’ on the Google app have to be taught how to decipher the phonetic differences between accents. Even after learning, they still can’t comprehend certain phrases (like ‘Lumos’, which turns on an Android phone’s flashlight) if they are spoken in unfamiliar accents. Until, as Mark opines, a fundamental breakthrough in AI happens, we will have to be content with software that is limited in its ability to learn as we do.
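To make the idea of structured word patterns concrete, here is a minimal, purely illustrative sketch (not Mark’s actual code) of how an assistant might map text commands onto a fixed set of taught intents. The intent names and patterns are hypothetical; the point is that anything outside the taught patterns simply returns nothing.

```python
import re

# Hypothetical intents an assistant has been 'taught' as word patterns.
INTENT_PATTERNS = {
    "lights_on": re.compile(r"\b(turn|switch) on .*lights?\b", re.I),
    "play_music": re.compile(r"\bplay (some )?music\b", re.I),
}

def match_intent(text):
    """Return the first taught intent whose pattern matches, else None."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return intent
    return None  # anything outside the taught patterns is 'incomprehensible'

print(match_intent("Please turn on the lights"))  # lights_on
print(match_intent("Could you crack a joke?"))    # None
```

A command phrased even slightly outside these patterns falls through to `None`, which is exactly the brittleness described above.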
2 – Understanding Context
AI’s current inability to understand natural language in its entirety brings us to another challenge: context. A typical computing system can define and identify input data to solve specific tasks. For AI-enabled applications, this limitation can be a handicap, since they would require more context, beyond the data they have already been fed, to work seamlessly.
Compare artificial intelligence to how a two-year-old is taught to make sense of the world and interact with it. A toddler has biological neural networks that give them the natural ability to speak and eventually to talk cohesively. While artificially intelligent programs do use deep learning to “learn” to communicate with users, their neural networks are artificial, which severely limits their ability to use logic to comprehend and interact with the world around them.
For example, in order for Jarvis to play music in a certain room, Mark had to specify the room in question, otherwise it would get the location wrong. Similarly, Google Assistant will sometimes answer a simple query with a web link.
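One common workaround for this kind of ambiguity is to keep a small amount of dialogue state between commands. The sketch below is hypothetical (the class and room names are invented for illustration); it simply remembers the last room the user mentioned, so a follow-up command without a room falls back to that context instead of guessing.

```python
# Hypothetical sketch: a tiny dialogue state that carries context,
# here the last room mentioned, between commands.
class AssistantContext:
    def __init__(self, default_room="living room"):
        self.last_room = default_room

    def handle(self, command, room=None):
        if room:  # an explicit room updates the remembered context
            self.last_room = room
        # without an explicit room, fall back to the remembered one
        return "{} in the {}".format(command, self.last_room)

ctx = AssistantContext()
print(ctx.handle("play music", room="bedroom"))  # play music in the bedroom
print(ctx.handle("play music"))                  # play music in the bedroom
```

Even this trivial memory only covers one slot (the room); real conversations carry far more implicit context than a lookup table can hold, which is why the problem remains hard.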
3 – A Lack of Common APIs and Standards
In order for an artificially intelligent environment to work, the objects within it must be able to “talk” to one another. Unfortunately, for AI technologies today, that is easier said than done. Systems like Jarvis depend on IoT (Internet of Things) concepts. To bridge the gap between AI and IoT, developers must ensure that the objects in an environment:
- Can connect to the internet
- Run on common APIs and standards
The first point depends heavily on the use case. For example, applying sensor technology to objects that need to operate in remote areas is useless if online connectivity is difficult to provide there. Even when internet access is guaranteed, there need to be common standards and APIs that let connected devices talk to one another. Even while developing Jarvis, the Facebook founder had to write code for each system while keeping in mind the different protocols and languages they run on.
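In the absence of shared standards, developers typically write a thin adapter per device so the rest of the system can speak one interface. The sketch below is a hypothetical illustration of that pattern (the device classes and wire formats are invented); it is the kind of per-system glue code the paragraph above describes.

```python
from abc import ABC, abstractmethod

# One common interface wrapped around devices that speak different protocols.
class SmartDevice(ABC):
    @abstractmethod
    def send(self, command):
        ...

class HttpLight(SmartDevice):
    def send(self, command):
        # in reality this would be an HTTP request to the bulb's API
        return "HTTP POST /light {{'cmd': '{}'}}".format(command)

class SerialThermostat(SmartDevice):
    def send(self, command):
        # in reality this would write bytes over a serial connection
        return "SERIAL> THERMO:{}".format(command.upper())

def broadcast(devices, command):
    """One call, many protocols: each adapter hides its device's wire format."""
    return [d.send(command) for d in devices]

results = broadcast([HttpLight(), SerialThermostat()], "power_on")
```

Each new device means another adapter; common standards would make most of this glue code unnecessary.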
Wrapping Up
In conclusion, Mark Zuckerberg’s Jarvis gives us a deeper look into artificially intelligent environments and the possibilities of further development for the software that runs them.