Highlight

Interesting things about using Viettel's artificial intelligence

Internet users in Vietnam often turn to "Ms. Google" for... entertainment. When "she" reads a text aloud or gives directions to...

Friday, September 23, 2016

Artificial Intelligence has become the next big thing – again


Back in 2012, a team at Google built a state-of-the-art artificial intelligence network and fed it ten million randomly selected images from YouTube. The computer churned through them, and announced that it kept finding these strange things with furry faces. It had, in other words, discovered cats.
Artificial intelligence has, all of a sudden, become the next big thing again. It is not so much sweeping across our world as seeping into it, with a combination of enormous computing power and the latest ‘deep learning’ techniques promising to give us better medical diagnoses, better devices, better recipes and better lives. Soon, it might even be able to give us new Beatles songs.
At the same time, however, we are growing increasingly alarmed about what it can — or might — do. Decades ago, Norbert Wiener, the father of cybernetics, warned:
The world of the future will be an ever more demanding struggle against the limitations of our own intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
As fears grow about the automation of the labour market, many are asking the same question as Bertrand Russell, reviewing one of Wiener’s books back in 1951: ‘Are human beings necessary?’
Our conflicted, co-dependent relationship with our devices really took root, argues Thomas Rid in Rise of the Machines, in the second world war. Faster planes and better bombs meant that it was no longer possible for gunners just to point and shoot. To anticipate the enemy’s path, humans needed mechanical crutches: radar stations that could spot incoming targets, guns that could be automatically pointed at their predicted location, shells containing tiny radars of their own which would explode when they detected metal objects nearby. (These were arguably the most effective and certainly the least known wonder weapon of the war.)
Rid, a professor in the war studies department at King’s College London, is as good on the military stuff as you’d expect: his account of Russian attempts to hack America’s defence systems in the Clinton era is similarly definitive (and terrifying). The problem with Rise of the Machines is that the journey between these two points is more of a meander.
Wiener’s theories on human-machine interaction (he took ‘cybernetics’ from the Greek kybernan, meaning to navigate, steer, or govern) were derived from his wartime work on anti-aircraft fire. Rid attempts to trace his influence to the techno-hippies of the 1970s and 1980s, or to the writings of William Gibson. It’s all interesting stuff, but it’s hard to see what cybernetics, cyberspace and cyberwarfare have in common apart from their nebulously defined prefix. Rid’s introduction semi-acknowledges the problem, but defends itself on the grounds that this is work on myth. Sadly, it is one that is too often, in Rid’s own words, obscuring rather than clarifying.
There is a much crisper focus to Margaret A. Boden’s AI, a brief introduction to artificial intelligence (which also offers a clearer definition of cybernetics in one throwaway paragraph than Rid does in 400 pages). Sadly, those seeking to understand the modern world will probably emerge equally baffled. Boden, as an academic in the field of AI, really knows her stuff, and you get a clear understanding from her book of the various different kinds of AI, and their enduring limitations — in particular regarding the emergence of a self-aware Skynet or HAL 9000 clone that will scour us puny humans from the planet. But once she gets technical, she offers perilously little purchase for the general reader. (There are also brackets. Lots of them.)
In short, if you’re interested in learning more about our robotic soon-to-be overlords, your best bet is Thinking Machines by the British journalist Luke Dormehl. Yes, it has its flaws — a feature of Dormehl’s writing is an inability to explain that the first AI conference was in 1956 without adding that this was the year when
Elvis Presley was scandalising audiences with his hip gyrations, Marilyn Monroe married playwright Arthur Miller, and President Dwight Eisenhower authorised ‘In God we trust’ as the US national motto.
But overall, this is an accessible primer to the state of the digital art — how the field of AI grew and shrank and grew again, what the robots’ ever-increasing strengths are, and where they are still weak. He also teases out, as do Rid and Boden, the ways in which it is impossible to separate machines from their masters, how we bring our own fleshy biases to their design and work.
Forecasts of an AI takeover have been with us from the dawn of the computing age. Back in 1960, the computing pioneer Herbert Simon announced that ‘duplicating the problem-solving and information-handling capabilities of the brain is not far off; it would be surprising if it were not accomplished within the next decade’. In the 1970s, one researcher was chastised by a subordinate for his giddy prophecy about how soon robots would be picking up our socks. ‘Notice all the dates I’ve chosen were after my retirement,’ he retorted.
Today’s forecasts of an AI revolution may be similarly premature — but perhaps not. During our long dance with our artificial partners, humans and robots have moved closer and closer together, become more and more entwined. Surprisingly soon, we may find them starting to take the lead.

Google open sources image captioning model in TensorFlow


Pretty much 100 percent of my generation is obsessed with Instagram. Unfortunately, I left the platform (sorry all) back in 2015. Simple reason: I am way too indecisive about which photos to post and what pithy caption to give them.
[Image: Google TensorFlow captioning example, provided by Google]
Fortunately, with ample spare time, those who share my problem can now use an image captioning model in TensorFlow to caption their photos and put an end to this pesky first-world problem. I can’t wait for the beauty on the right to start rolling in the likes with the ever-creative “A person on a beach flying a kite.”
Jokes aside, the technology developed by research scientists on Google’s Brain Team is actually quite impressive. Google is touting a 93.9 percent accuracy rate for “Show and Tell,” the cute name Google has given the project. Previous versions fell between 89.6 percent and 91.8 percent accuracy. For any form of classification, a small change in accuracy will have a disproportionately large impact on usability.
To get to this point, the team had to train both the vision and language frameworks with captions created by real people. This prevents the system from simply naming objects in a frame. Rather than just noting sand, kite and person in the above image, the system can generate a full descriptive sentence. The key to building an accurate model is taking into account the way objects relate to one another. The man is flying the kite; it’s not just a man with a kite above him.
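The decoding step described above can be sketched in miniature: a Show-and-Tell-style model emits a caption one word at a time, each step conditioned on the words so far. The hand-written transition table below is a purely illustrative stand-in for the trained language model, which in the real system scores the whole vocabulary at every step using features from the image encoder.

```python
def greedy_decode(next_word, max_len=20):
    """Greedily emit the most probable next word until <end> is produced."""
    caption = ["<start>"]
    while len(caption) < max_len:
        word = next_word(tuple(caption))
        if word == "<end>":
            break
        caption.append(word)
    return " ".join(caption[1:])

# Toy stand-in for a trained decoder: maps the prefix so far to the single
# most probable next word (a real model learns these from human captions).
TOY_MODEL = {
    ("<start>",): "a",
    ("<start>", "a"): "person",
    ("<start>", "a", "person"): "on",
    ("<start>", "a", "person", "on"): "a",
    ("<start>", "a", "person", "on", "a"): "beach",
    ("<start>", "a", "person", "on", "a", "beach"): "flying",
    ("<start>", "a", "person", "on", "a", "beach", "flying"): "a",
    ("<start>", "a", "person", "on", "a", "beach", "flying", "a"): "kite",
    ("<start>", "a", "person", "on", "a", "beach", "flying", "a", "kite"): "<end>",
}

caption = greedy_decode(lambda prefix: TOY_MODEL[prefix])
print(caption)  # a person on a beach flying a kite
```

The point of the sketch is the structure, not the table: because each word depends on everything generated so far, the model can express relations ("flying a kite") rather than just listing detected objects.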
[Image: Google TensorFlow image caption examples, provided by Google]
The team also notes that their model is more than just a really complex parrot that spits back entries from its training set of images. From the image on the left, you can see how patterns from a synthesis of images are combined to create original captions in previously unseen images.
Prior versions of the image captioning model took three seconds per training step on an Nvidia G20 GPU, but the version open sourced today can do the same task in a quarter of that time, or just 0.7 seconds. That means that today’s version is even more sophisticated than the version that tied for first in last year’s Microsoft COCO image captioning challenge.
Earlier this year at the Computer Vision and Pattern Recognition conference in Las Vegas, Google discussed a model they had created that could identify objects within an image and build a caption by aggregating disparate features from a training set of images captioned by humans. The key strength of this model is its ability to bridge logical gaps to connect objects with context. This is one of the features that will eventually make this technology useful for scene recognition when a computer vision system needs to differentiate from, let’s say, a person running from police and a bystander fleeing a violent scene.

Thursday, September 22, 2016

Watson, we have a problem

Artificial intelligence: in just a few years, our working world will be a completely different one.
Reading, writing, listening and understanding – intelligent machines can do more and more things that until now only humans could. What does that mean for our jobs? And for us?

ReplyBuy brings an AI concierge to the sports and entertainment market



Whether you’re a high school student or an NFL team owner, everybody texts. ReplyBuy, a finalist in the 1st and Future competition, wants to use the text message to get you tickets for sporting events. 1st and Future is a sports-centric startup competition produced as a joint effort between the NFL, Stanford’s Graduate School of Business and TechCrunch.
The current version of ReplyBuy works like this — the company sends a text message to all San Francisco 49ers fans; whoever replies “Buy Now” the fastest gets the tickets. Today, the company is making the platform immensely more useful with the launch of ReplyBuy.ai.
Indeed, ReplyBuy is introducing artificial intelligence to the sports and entertainment vertical. Dubbed ReplyBuy.ai, the AI is a VIP concierge service that will make it even easier for users to get their hands on tickets to major events.
Instead of just receiving text messages when tickets are available, users will now be able to send a text message with a request to buy tickets for whichever event they want; the chatbot will ask a few follow-up questions, like “how many tickets do you want?” and “what’s your price range?” From there, it will automatically buy tickets for you and deliver them instantly via text.
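The follow-up-question flow described above is classic slot filling: the bot keeps asking until every required detail is captured, then confirms. A minimal sketch, assuming invented slot names and question wording (this is not ReplyBuy's actual API):

```python
# Required details ("slots") and the question that elicits each one.
REQUIRED_SLOTS = {
    "event": "Which event would you like tickets for?",
    "quantity": "How many tickets do you want?",
    "max_price": "What's your price range?",
}

def next_question(slots):
    """Return the next follow-up question, or a confirmation once all slots are filled."""
    for name, question in REQUIRED_SLOTS.items():
        if name not in slots:
            return question
    return (f"Buying {slots['quantity']} tickets to {slots['event']} "
            f"under ${slots['max_price']} -- delivered via text!")

convo = {}
print(next_question(convo))                       # asks for the event
convo["event"] = "49ers vs. Seahawks"
print(next_question(convo))                       # asks for the quantity
convo.update(quantity=2, max_price=150)
print(next_question(convo))                       # confirms the purchase
```

The same state machine works regardless of channel, which is why a bot like this could later be exposed over iMessage or Facebook Messenger as easily as over SMS.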

ReplyBuy’s client list includes several top NFL, NBA, NHL and MLS teams using the service, as well as several major universities like UCLA and the University of Arizona. You can check out the full roster of current clients on the company’s website.
ReplyBuy plans to enhance the ReplyBuy.ai experience so it does more than just buy tickets. CEO Josh Manley also tells TechCrunch that in the future, ReplyBuy.ai will be able to be leveraged not only through SMS, but can also be integrated into apps with chat capability and messaging based services like iMessage and Facebook Messenger, along with IoT devices like Amazon Echo and others.
Since it was founded in 2011, the company has raised $2.65M. It was recently nominated for the “Best in Mobile Fan Experience” award held by the Sports Business Awards, and also for the “Move to Mobile” and “Product Innovation” categories at the Ticketing Technology Awards.
1st & Future event at Stanford University in Palo Alto, CA on February 6, 2016. Photo by Max Morse for TechCrunch
We’re thrilled to see former TechCrunch event alums making splashes in their respective industries, and we can’t wait to see what the next batch of startups have in store for us in the Startup Battlefield at Disrupt London 2016. Applications to participate in the Battlefield are open now through October 5, so as long as your company meets the eligibility criteria, you can apply here.
Disrupt London 2016 takes place December 5-6 at London’s Copper Box Arena. We can’t wait to see all you fabulous innovators, investors and tech enthusiasts at the show.

Wednesday, September 21, 2016

DeepMind wants its healthcare AI to charge by results — but first it needs your data

Mark your Google calendars because from today ‘Don’t be evil’ rides again, via the DeepMind AI division of the Alphabet ad giant, as a Hippocratic assurance to ‘Do no harm’. 
It’s no small irony that DeepMind’s new mantra for its healthcare push, voiced by co-founder Mustafa Suleyman at an outreach event today for patients to hear what the Google-owned company wants to build with U.K. National Health Service data, is uncomfortably close to its old one — i.e. the one that embarrassingly fell out of favor.
Suleyman cited the Hippocratic oath when discussing his takeaways from patient feedback on the company’s plans.
“[Do no harm] has to be a mantra we repeat and becomes an inherent part of our process,” he said towards the end of the three-hour discussion session, which was live streamed on YouTube (with a call for comments via a #DMHpatients Twitter hashtag).
“And [do no harm] should be the first measure of success before any deployment or before we attempt to demonstrate any utility and patient benefit,” he added.
After taking questions and listening to views from the small group of patients, health professionals and members of the public selected by the company to be in the audience, Suleyman flagged other takeaways. One of which was the need to widen access to the patient engagement channel DeepMind has now opened up.
He conceded it was unfortunate the event had been held in Google’s shiny, central London offices.
“As you say this is a fancy, intimidating building and I’m sorry for that, in some ways, it’s a shame that that’s the tone. I really agree with you that we have to find other spaces, community spaces that are more accessible to a more diverse group of people,” he said.
“As we formalize the process [of listening to patients] we want to make sure that there are other people being paid around the table and patients’ contributions should also be paid, and we’ll make sure that that’s the case. Potentially we should be thinking about how to run sessions like these on the weekends or in the evenings, when different stakeholders might have more time to get involved,” he added.
Alphabet’s AI division also said today it is intending to “define” what it dubs a “patient involvement strategy” by 2017.
DeepMind kicked off data-sharing collaborations with the NHS last fall — inking a wide-ranging data-sharing agreement with London’s Royal Free NHS Trust in September 2015 — yet only publicly revealed the DeepMind Health initiative this February, two months after beginning hospital user tests of one of the apps it’s co-developing with the Royal Free. So it’s hard not to see its attitude towards patient engagement and involvement as something of an afterthought up to now.
Controversy and scrutiny 
It also looks like a response to the controversy generated earlier this year by DeepMind’s first publicly announced collaboration with an NHS Trust (the Royal Free) — given that criticism of that project (Streams, an app for identifying acute kidney injury) has focused on how much patient identifiable data the Google-owned company is being given access to in order to power the app, without patient knowledge, let alone consultation or consent. (DeepMind and the Royal Free maintain they do not need patient consent to share the data in that instance, as they say the app is for direct patient care — a point the company now reiterates on its website, in a section labeled ‘Information Governance‘.)
The UK’s data protection watchdog, the ICO, is investigating complaints about the Streams app. The National Data Guardian, which is tasked with ensuring citizens’ health data is safeguarded and used properly, is also taking a closer look at how data is being shared. Streams was also not registered as a medical device prior to being tested in hospitals — but should have been, according to the MHRA regulatory body. So DeepMind Health’s modus operandi has already rocked a fair few boats — even as Suleyman was at pains to stress it’s “very early days” for DeepMind Health in his public comments today.
Tellingly the Google-owned company also now has a section of its Health website labeled ‘For Patients‘, where it describes its intention to create “meaningful patient involvement” and claims it is “incorporating patient and public involvement (PPI) at every stage of our projects”. (Although here, again, it notes another future intention: to create a patient advisory group to “contribute more extensively to our projects” — suggesting it could have done much more to involve patients in its first wave of NHS projects and research partnerships.)
“What we’re really doing today is to try and invite people openly to come and help us design the mechanism of interaction,” said Suleyman, summing up DeepMind’s intention for the outreach event. “Many people in this room have much more expertise and experience than we do and we recognize that we have a lot to learn here, and so today I think is an opportunity for us to learn. We’re really grateful for people’s time. We recognize that it’s valuable and we really think this is potentially an opportunity to do this the right way.”
He did not directly reference the Streams app data-sharing controversy, although the entire session was structured to illustrate (as DeepMind views it) the benefits of sharing health data for patients and health outcomes — and thus create a strong narrative to implicitly defend its actions — with much talk of the economic squeeze on the publicly funded NHS and the need to move towards earlier diagnosis of conditions to save resources as well as lives. Tl;dr: DeepMind’s sales pitch to grease the NHS health data funnel is that AI could automate efficiency savings for a chronically cash-strapped NHS. Ergo: you can’t afford not to give us your data!
And while Google’s podium included speakers who do not work directly for Alphabet, all speakers at the event were selected by the company, so they unsurprisingly aligned with its views. For example, we heard from Graham Silk of health data sharing advocacy group Empower: Data4Health, rather than — say — Phil Booth from health data privacy advocacy group MedConfidential, which has been critical of DeepMind’s handling of NHS data.

Microsoft wants to crack the cancer code using artificial intelligence

Cancer is like a computer virus and can be ‘solved’ by cracking the code, according to Microsoft. The computer software company says its researchers are using artificial intelligence in a new healthcare initiative to target cancerous cells and eliminate the disease.
One of the projects within this new healthcare enterprise involves utilizing machine learning and natural language processing to help lead researchers sift through all the research data available and come up with a treatment plan for individual cancer patients.
IBM is working on something similar using a program called Watson Oncology, which analyzes patient health info against research data.
Other Microsoft healthcare initiatives involve computer vision in radiology to track the progress of tumors over time, and a project Microsoft refers to as its “moonshot”, which aims to program biology like we program computers, using code. The researchers plan to discover how to reprogram our cells to fix what our immune system hasn’t been able to figure out just yet.
Microsoft says its investment in cloud computing is a “natural fit” for this type of project and plans to invest further in ways to provide these types of tools to its customers.
“If the computers of the future are not going to be made just in silicon but might be made in living matter, it behooves us to make sure we understand what it means to program on those computers,” Microsoft exec Jeanette M. Wing said.
Indeed, with all the research data available, the Microsoft project, like many others in the healthcare machine learning space — including in cancer cure discovery — could help speed up medical discovery for this debilitating disease.

Artificial intelligence: preparing lawyers for new technology in practice - speech by Catherine Dixon at the IBA conference


On 19 September the chief executive of the Law Society, Catherine Dixon, delivered a keynote address at the International Bar Association conference in Washington DC.

The work of the Law Society of England and Wales

The Law Society is the professional body that represents more than 170,000 solicitors in England and Wales.
Our vision is to be valued and trusted as a vital partner to represent, promote and support solicitors while upholding the rule of law, legal independence, ethical values and the principle of justice for all.
Last year we launched our new strategy which sets out our three aims:
  • representing solicitors: we represent solicitors by speaking out for justice and on legal issues
  • promoting solicitors: we promote the value of using a solicitor - at home and abroad, and
  • supporting solicitors: we support solicitors to develop their expertise and their businesses, whether they work in firms, in-house or for themselves. This includes working to ensure that the best candidates can join the profession - irrespective of their background and supporting equality, diversity and inclusion in the solicitor profession.
We also fulfil an important public interest function which aims to ensure access to justice, protect human rights and freedoms and uphold the rule of law.

The future of legal services

As part of our work to develop our strategy, we published a piece of research to identify what the future will bring for the legal profession.
The main finding of this research is that solicitors face a future of change. These changes are dynamic and happening at an unprecedented scale and speed. These drivers of change are:
  • economic change and increasing globalisation
  • technological change including Artificial Intelligence (AI)
  • increased competition
  • changes in buyer behaviour
  • wider policy agendas
This session will focus on technology and innovation as a driver of change. For more information see our report The Future of Legal Services.

Technology and innovation

New forms of technology are having an impact on the way we practise law, our firms' workforce, skills and talent, and our business models. Our research showed that technology is:
  • enabling lawyers to become more efficient at some procedural work which can be commoditised
  • reducing costs by using technology - including artificial intelligence systems
  • supporting changes to client purchasing decision-making
I will briefly refer to each of these aspects.

Driving efficiencies in legal practice

In the past decade we have seen an increase in the use of machine learning and artificial intelligence in firms. For example:
  • KIRA is a software tool for identifying relevant information from contracts. It has a pre-built machine learning model for common contract review tasks such as due diligence and general commercial compliance.
  • IBM Watson: many firms have improved the way they conduct legal research by using systems like Watson that uses natural language processing and machine learning to gain insights from large amounts of unstructured data. It provides citations and suggests topical reading from a variety of sources more quickly and comprehensively than ever before leading to better advice and faster problem solving.
  • Luminance software: The firm Slaughter and May announced last week that it has been testing software on merger and acquisition matters. The programme aims to automatically read and understand hundreds of pages of detailed and complex legal documents every minute.
  • Technology-assisted review (TAR) - ThoughtRiver: This mechanism is used in litigation for electronic disclosure (or discovery) in many jurisdictions including the US, Ireland and England and Wales. In the US it has been used since 2012, in Ireland since 2015, but in the UK its use was only accepted by the courts this year. In the case of Pyrrho Investments Ltd and another v MWB Property Ltd, the High Court accepted the use of predictive coding in electronic disclosure for the first time. Master Matthews listed a number of reasons that predictive coding was beneficial and found 'no factors of any weight pointing in the opposite direction'. One of the benefits he highlighted was that 'there will be greater consistency in using the computer to apply the approach of a senior lawyer towards the initial sample (as refined) to the whole document set, than in using dozens, perhaps hundreds, of lower-grade fee-earners, each seeking independently to apply the relevant criteria in relation to individual document.'
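The predictive-coding idea in the last bullet — learning from a senior lawyer's reviewed sample and applying that judgment consistently across the whole document set — can be sketched as a toy classifier. Real TAR systems use far richer features and statistical models; this keyword-count scorer is only an illustration of the workflow:

```python
from collections import Counter

def train(sample):
    """Count word frequencies in relevant vs. irrelevant documents from the reviewed sample."""
    counts = {True: Counter(), False: Counter()}
    for text, relevant in sample:
        counts[relevant].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label a document relevant if its words lean towards the relevant pile."""
    score = sum(counts[True][w] - counts[False][w] for w in text.lower().split())
    return score > 0

# A senior lawyer reviews a small sample (contents invented for illustration)...
reviewed_sample = [
    ("merger agreement with indemnity clause", True),
    ("due diligence findings on the merger", True),
    ("office canteen menu for friday", False),
]
model = train(reviewed_sample)

# ...and the learned model is then applied to the remaining documents.
print(predict(model, "draft merger indemnity schedule"))  # True
print(predict(model, "canteen menu update"))              # False
```

This captures the consistency benefit Master Matthews highlighted: the same learned criteria are applied to every document, rather than dozens of reviewers each interpreting the relevance test independently.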
Therefore, technology can play a facilitative role in helping law firms achieve productivity-driven growth by increasing accuracy, saving time and driving efficiencies. This has been the case in larger firms that provide business to business services.
The use and offer of these tools and services is increasing. Legal technology companies are one of the biggest new groups of players mixing up the dynamics of the market - for example, next month the Law Society is co-hosting an event with Thomson Reuters, Freshfields and Legal Geek aimed at legal technology start-ups to promote innovative solutions for firms and in-house lawyers.
The courts are also embracing technology. In England and Wales they are starting to move away from paper based systems towards online - albeit slowly! The Lord Chief Justice's report to parliament indicated that: 'outdated IT systems severely impede the delivery of justice.' In response, hundreds of millions of pounds have been allocated for the modernisation of IT in courts and tribunals and to modernise their procedures.
As part of this modernisation programme, proposals have been put forward for an online court for claims up to £25,000. Although the Law Society fully supports the need for the judiciary to have fully functional IT systems, there are concerns that will need to be addressed to ensure access to justice is maintained.

Reducing costs - job automation and workforce

Concerns have been raised about job automation in the legal services sector. Research by Deloitte, for example, on the effects of technology on the profession in England and Wales suggests that about 114,000 jobs are likely to become automated in the next 20 years. Leading academics such as Remus and Levy (2015), along with McKinsey (2015), also predict that between 13 per cent and 23 per cent of legal work could be automated - mainly routine and procedural work.
However, projections by the Warwick Institute for Employment Research estimate that 25,000 extra workers will be needed in the legal activities sector between 2015 and 2020. So, the evidence is inconclusive - the picture is much more complex than the media headlines suggest.
Technological change, right from the industrial revolution, has raised fears that jobs would be decimated. And while some jobs were eliminated following the introduction of machinery, new types of jobs were created.
We are experiencing the same phenomenon today and we believe that the profession will continue to learn, to evolve and to reinvent itself - as we have done so far.
It is expected that the push towards automation of routine work will be levelling off by 2020, and instead we might expect to see technology fuelling innovative models of delivery or service solutions.
There will be an impact on strategic workforce planning. Specifically, human resources departments and learning and development teams need to:
  • Identify the skills, knowledge and aptitudes that are needed for new lawyers. For example, junior lawyers are being encouraged by firms and prospective employers to hone skills on social media, marketing, business management and even coding in addition to their technical abilities and knowledge.
  • Support mid-career lawyers to rethink their development and progression in the firm to ensure that they are prepared for the future.
  • Think about the future of legal practice - some academics, such as Professor Richard Susskind have mooted the idea that the future of law firms and legal practice could be in the hands of other professionals or some hybrid species of 'lawyer-software engineer'.

Changes to client purchasing decision-making

Technology is also having an effect on legal consumer buying behaviours. To remain competitive, firms are increasing their online presence enabling them to interact with clients online.
Law firms, advice agencies and non-profit organisations have made great strides in the development and use of web-based delivery models, including websites with interactive resources, smart forms and general information aimed at existing and potential clients.
Some examples are:
  • The Solution Explorer: This is the first step in the British Columbia Civil Resolution Tribunal. The goal is to make expert knowledge available to everyone through the internet using a smart questionnaire interface process aimed at individuals with small claims or condominium disputes. This interactive tool will be available online, 24 hours a day, seven days a week. The tool is in an online beta test as of June 2016.
  • Online platform Rechtwijzer [Reshtwaiser] (Roadmap to Justice): It has been available in the Netherlands since 2007 for couples who are separating or divorcing. It handles about 700 divorces yearly and is expanding to cover landlord-tenant and employment disputes. At first, Dutch lawyers were wary of the platform as they feared a loss of billable hours, but now many view it as an efficient way to process simpler cases, leaving lawyers to focus their expertise on more complicated matters.
  • CourtNav: An online tool developed by the Royal Courts of Justice in England and Wales in partnership with Freshfields. It helps individuals complete and file a divorce petition.
  • DoNotPay - an AI system challenging parking tickets: It was reported that a 'chatbot lawyer' overturned 160,000 parking tickets in London and New York. The programme was created by a 19-year-old student at Stanford University, and works by helping users contest parking tickets in an easy-to-use chat-like interface. The programme identifies whether an appeal is possible through a series of simple questions - such as whether there were clearly visible parking signs - and then guides users through the appeals process.
The majority of these web-based models are being tested or have only just been implemented, so their success has not yet been fully proven. However, the DoNotPay parking programme has taken on 250,000 cases and won 160,000, giving it a success rate of 64 per cent.
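The question-driven triage behind a tool of this kind can be sketched as a simple lookup over possible grounds of appeal. The questions and grounds below are invented for illustration and are not DoNotPay's actual logic:

```python
# Each entry: (yes/no question, the answer that supports an appeal, the ground it establishes).
APPEAL_QUESTIONS = [
    ("Were the parking signs clearly visible?", False, "unclear signage"),
    ("Was the parking bay correctly marked?", False, "defective bay markings"),
    ("Were you loading or unloading at the time?", True, "permitted loading activity"),
]

def find_ground(answers):
    """Return the first ground of appeal the user's answers support, or None."""
    for question, helpful_answer, ground in APPEAL_QUESTIONS:
        if answers.get(question) == helpful_answer:
            return ground
    return None

answers = {"Were the parking signs clearly visible?": False}
print(find_ground(answers))  # unclear signage
```

If a ground is found, the tool can then walk the user through drafting and filing the appeal; if none is found, it can say so before the user wastes time on a hopeless challenge.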

What will the future bring?

Based on this data, the following facts have been identified:
  • The legal market is embracing the use of artificial intelligence in their work. This is the case in firms and in-house.
  • Research published earlier this month shows that 'the major law firms that publicly acknowledge making use of artificial intelligence (AI)-driven systems is now at least 22':
    - nine based in the US
    - nine based in the UK
    - one based in Europe
    - two based in Canada
    - one with international headquarters (Dentons)
This number could be higher as there may be firms that use this type of technology but have not made it public, and others may still be in a testing and pilot phase.
The types of firms taking up AI are mostly large commercial firms but there are also some medium-sized firms such as Dentons, DLA Piper, Reed Smith, Clifford Chance, Macfarlanes and Davies (in Canada). This research suggested that the level of adoption of AI shows that 'we have now moved beyond the 'early adopter' phase and are seeing a broader use of the technology.'
Leading academics and commentators have also said that there are some emerging trends:
  • Machine learning is a top strategic trend for 2016 (Gartner).
  • AI will be a necessary element for data preparation and predictive analysis in businesses moving forward (Ovum).
  • There will be a market for algorithms as businesses learn that they can purchase algorithms rather than programme them from scratch (Forrester).
  • The impact of technology is being felt where firms largely service mass or process-driven needs rather than specialist cases.
  • Technological innovation has led to more standardised solutions for the delivery of legal processes and the ability to commoditise many legal services.
However, some clients remain sceptical about the ability of technology to do a better job at legal service delivery than an actual lawyer. While they accept technology as part of assisted document review, some do not yet fully trust technology to analyse a legal situation or offer legal options. Whilst we expect to see some shift in this attitude by 2020, the human-to-human relationship will still be favoured by many large corporations.
Many clients (including in-house lawyers) still rely on major law firms (Top 200, City, Magic Circle) for their complex and specialised legal issues and are willing to pay the higher premium for these, despite the fact that technological solutions are available to address some of these complex legal issues at a much lower cost and with greater accuracy. We need to separate specialist advice from commoditisation.

Conclusion

To conclude, I want to reiterate that change is driven by:
  • economic change and increasing globalisation
  • technological change, including artificial intelligence (AI)
  • increased competition
  • changes in buyer behaviour
  • wider policy agendas
Gottfried Wilhelm Leibniz said in the 17th century that 'It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used.' Now that such machines are available, in the form of machine learning and artificial intelligence, we should embrace them and build on them to deliver better services.

Tuesday, September 20, 2016

Salesforce Einstein delivers artificial intelligence across the Salesforce platform



Say what you will about Salesforce, the company is always looking ahead. This afternoon, it announced Salesforce Einstein, its artificial intelligence (AI) initiative.
The timing, which comes just ahead of rival Oracle’s Open World keynote address, is probably not a coincidence. Regardless, the larger AI theme is something Salesforce has been working on across various pieces of its platform over the last couple of years. Today’s announcement is about tying it together to show the breadth of this approach.
Every year has its leading technology trends, and clearly this year artificial intelligence and its close cousin, machine learning, are our favorite flavors. When the biggest companies, including Google, Microsoft and AWS, are building AI and machine learning tools, it is more than simply a buzzword.
Salesforce has always tried to stay ahead of the curve. It was, after all, one of the first true cloud offerings (even though we didn’t call it that then). When it announced an Internet of Things cloud last year, when IoT was itself the flavor of the year, it caused some raised eyebrows, but Salesforce recognized that devices and sensors would be giving off signals that its users could collect to understand customers and markets better.
A year later we have Salesforce Einstein, which isn’t so much a product as a technology umbrella under which all of Salesforce’s artificial intelligence pieces live. Being a Salesforce announcement, it is of course broad and includes lots of individual components, but the gist here is that Salesforce wants to use every aspect of its platform to take the complexity out of AI, giving Salesforce CRM an intelligence blast, while exposing the AI APIs to let customers build intelligent apps on the Salesforce platform.
The company pulled together 175 data scientists to help create Salesforce Einstein, while leveraging acquisitions such as MetaMind, PredictionIO and RelateIQ. In fact, MetaMind founder Richard Socher now holds the title of Chief Data Scientist at Salesforce. Salesforce Einstein will eventually touch every one of its products in some way.
Among the AI pieces it is including in the platform are advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery.
Ultimately, it’s not very different from exposing any other parts of the platform to customers, but it’s focused on making a smarter CRM tool, one that surfaces the information that matters. Sometimes this information may seem apparent: signals any reasonably good salesperson would be looking for. Salesforce’s goal here is to put this key data front and center, and it believes even the most skilled sales pros will benefit from this approach.
For inside sales teams making cold phone calls all day long, the system can surface the most likely candidate as the next call automatically. For sales people working territories, it can keep them apprised of key information such as when a competitor’s interest shows up in the news. While you could argue that an astute sales person would be tracking this information, the Salesforce Einstein approach is designed to leave nothing to chance.
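The "next call" recommendation described above is, at its core, predictive lead scoring: rank open leads by an estimated conversion probability and surface the highest-scoring one. The sketch below illustrates that idea with a simple logistic score; the feature names and weights are invented for illustration and are not Salesforce Einstein's actual model.

```python
# A hedged sketch of lead prioritization: rank leads by a logistic
# score over a few behavioral features and surface the top candidate.
# Features, weights and lead names are all illustrative.
import math

# Weights a trained model might have produced (hypothetical values).
WEIGHTS = {"opened_email": 1.2, "visited_pricing": 2.0, "days_idle": -0.1}
BIAS = -1.5

def score(lead):
    """Logistic score in (0, 1) computed from a lead's feature dict."""
    z = BIAS + sum(WEIGHTS[k] * lead[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def next_call(leads):
    """Return the name of the lead with the highest predicted score."""
    return max(leads, key=lambda name: score(leads[name]))

leads = {
    "Acme":   {"opened_email": 1, "visited_pricing": 1, "days_idle": 2},
    "Globex": {"opened_email": 0, "visited_pricing": 0, "days_idle": 30},
}
suggested = next_call(leads)  # the engaged, recently active lead wins
```

An engaged lead that opened email and visited the pricing page outscores one that has been idle for a month, which is exactly the kind of prioritization the article describes.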
One of the issues for companies trying to build smarter applications is having access to a quality data set. Salesforce has access to a large body of information about customers, territories, and so forth, and it can provide aggregated data from customers who choose to share that information (without exposing any competitive details). It made clear it won’t share information if a company opts out, and it can assure that through the concept of data tenancy — that is, that each company has its own place of residency on the platform.
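The opt-out guarantee described above can be made concrete with a small sketch: each tenant keeps its own record, and only tenants that have chosen to share contribute to the aggregate. The field names and figures are hypothetical, not Salesforce's actual data model.

```python
# A sketch of opt-out aggregation across tenants: each tenant's record
# stays separate, and only opted-in tenants feed the shared statistic.
# Tenant names, fields and values are illustrative.

tenants = [
    {"name": "A", "share_data": True,  "win_rate": 0.30},
    {"name": "B", "share_data": False, "win_rate": 0.90},  # opted out
    {"name": "C", "share_data": True,  "win_rate": 0.50},
]

def aggregate_win_rate(tenants):
    """Average win rate over tenants that opted in; None if nobody did."""
    shared = [t["win_rate"] for t in tenants if t["share_data"]]
    return sum(shared) / len(shared) if shared else None
```

Tenant B's figure never enters the aggregate, mirroring the promise that a company's data is excluded once it opts out.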
Pricing and availability will vary: depending on the service, Einstein may involve an additional charge or be included in the cost of the license. Einstein AI tools are built in across the platform, such as Community Cloud with automated community case escalation and recommended experts, files and groups; Analytics Cloud with smart data discovery; and Commerce Cloud with product recommendations. All of these are generally available today, while others will be announced over the coming months. PredictionIO is an open source machine learning tool and is available for free download.
It’s important to note that Salesforce didn’t necessarily invent these AI approaches, but is building a version of many AI and machine learning tools. That said, what Salesforce is attempting to do here is highly ambitious, and this is just the start. Being somewhat early, it could be some time before it really begins to take hold with customers in a broad way, but in typical Salesforce fashion, it wants to be there whenever customers are ready.

IBM and MIT partner up to create AI that understands sight and sound the way we do



When you see or hear something happen, you can instantly describe it: “a girl in a blue shirt caught a ball thrown by a baseball player,” or “a dog runs along the beach.” It’s a simple task for us, but an immensely hard one for computers — fortunately, IBM and MIT are partnering up to see what they can do about making it a little easier.
The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension — we’ll just call it BM3C — is a multi-year collaboration between the two organizations that will be looking specifically at the problem of computer vision and audition.
It’ll be led by Jim DiCarlo, head of MIT’s Department of Brain and Cognitive Sciences; that department and CSAIL will contribute members to the new lab, as will IBM’s Watson team. No money will change hands and no specific product is being pursued; the idea is to engender jolly and hopefully fruitful mutual aid.
The problem of computer vision spans multiple disciplines, so it has to be attacked from multiple directions. Say your camera is good enough to track objects minutely — what good is it if you don’t know how to separate objects from their background? Say you can do that — what good is it if you can’t identify the objects? Then you need to establish relationships between them, intuit physical rules… all stuff our brains are especially good at.
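The decomposition above, separating objects from their background and then identifying them, can be sketched as a toy pipeline. The thresholding "segmentation" and size-based "classifier" below are stand-ins for illustration only, not the methods the lab will actually use.

```python
# A toy sketch of the multi-stage decomposition described above, with
# each stage as its own function over a tiny 2D "image" (a list of
# rows of pixel values). Both stages are deliberately simplistic.

def separate_foreground(image, background=0):
    """Stage 1: collect coordinates of pixels that differ from background."""
    return [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v != background]

def identify(pixels):
    """Stage 2: a stand-in classifier that labels a blob by its size."""
    return "large object" if len(pixels) > 3 else "small object"

image = [
    [0, 0, 0],
    [0, 7, 7],
    [0, 7, 0],
]
pixels = separate_foreground(image)  # segmentation stage
label = identify(pixels)             # recognition stage
```

Real systems replace each stage with learned models, but the point stands: a failure at any stage starves the ones after it, which is why the problem has to be attacked from multiple directions.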
Google too is very interested in this space; these are from a recent research paper on identifying parts of photographs.
Handy, that last part, and also the reason why “brain-inspired” is in the name of the lab. Using virtual neural networks modeled on how our own real-life neural networks operate, researchers have produced all kinds of interesting advances in how computers interpret the world around them.
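To make the "brain-inspired" idea concrete, here is a minimal two-layer network computing XOR, the classic function that no single artificial neuron can represent but a small layered network can. The weights are hand-picked for illustration rather than learned.

```python
# A minimal two-layer neural network with hand-picked weights computing
# XOR. Each "neuron" sums weighted inputs plus a bias and fires if the
# total is positive, a crude analogue of a biological neuron.

def step(x):
    """Threshold activation: fire (1) if the input is positive."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    h1 = neuron([a, b], [1, 1], -0.5)       # fires if a OR b
    h2 = neuron([a, b], [1, 1], -1.5)       # fires if a AND b
    return neuron([h1, h2], [1, -2], -0.5)  # OR but not AND = XOR
```

Trained deep networks stack many such layers and learn the weights from data, but the layered structure that lets them capture non-linear patterns is the same.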
The MIT partnership is one of several IBM has established lately; the company’s VP of Cognitive Computing, Guru Banavar, details the rest in a blog post. Other collaborations are pursuing AI in decision making, cybersecurity, deep learning for language, and so on. IBM is definitely making a huge investment in foundational AI work and it makes sense to cover their bases. All together the group of partnerships comprises what’s called the Cognitive Horizons Network.
And yes, they’re working to make sure the machines don’t come for us all later:
“We are in the process of building a system of best practices that can help guide the safe and ethical management of AI systems,” wrote Banavar, “including alignment with social norms and values.”
Whatever those might be. At the rate social norms and values are changing, it’s as difficult a bet to figure what they’ll be in 10 years as it is to guess what AIs will be getting up to.