Voice Assistants in 2019: What Does the Future Hold?

In 2018 we wrote about the massive adoption of voice assistants in smart home projects. Now, in 2019, home voice control has become an everyday reality, not a theory.

Where is voice assistant integration heading today? What is hindering its wider adoption, and what will drive it forward?

Today’s Context: Factors Influencing Further Voice Assistant Adoption

In the opinion of Zakhar Bessarab, Head of the Web Development Unit at R-Style Lab, the main obstacle to the further expansion of voice assistants is the mutual scepticism of product owners and product users towards their functionality:

Users think that today’s voice assistant systems are not smart enough to perform complex tasks, so they don’t trust them. Add to this security concerns, and you will see why they are in no hurry to buy a new product with integrated voice assistant functionality. Product owners, in their turn, don’t see enough interest from potential customers, so they don’t feel motivated to invest their time and money into further development.

The stats from Microsoft’s 2019 Voice Report back this up: 41% of users are concerned about how their personal information is processed and whether it is stored and used securely.

AI is still not advanced enough to make voice assistants work the way they should.

Voice recognition, speech synthesis, and natural language processing (NLP) are the three pillars of an effectively functioning voice assistant. Major tech companies have made big advances in each of these areas, but not big enough to make a smart voice assistant truly smart.
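For illustration, here is a minimal Python sketch of that three-stage pipeline. The library choices (the open-source SpeechRecognition and pyttsx3 packages) are our own assumptions for a demo, not what any commercial assistant actually runs, and the keyword matcher is a deliberately naive stand-in for a real NLP model.

```python
# Minimal sketch of the three pillars: speech recognition, NLP, speech synthesis.
import speech_recognition as sr   # speech-to-text (pip install SpeechRecognition)
import pyttsx3                    # offline text-to-speech

def recognize_speech() -> str:
    """Pillar 1: convert microphone audio to text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:            # requires PyAudio
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # free web API, demo use only

def parse_intent(utterance: str) -> str:
    """Pillar 2: a toy keyword matcher standing in for a real NLP/NLU model."""
    text = utterance.lower()
    if "alarm" in text:
        return "Setting an alarm."
    if "weather" in text:
        return "Here is today's forecast."
    return "Sorry, I can't handle that yet."

def speak(reply: str) -> None:
    """Pillar 3: synthesize the reply back into speech."""
    engine = pyttsx3.init()
    engine.say(reply)
    engine.runAndWait()

if __name__ == "__main__":
    speak(parse_intent(recognize_speech()))
```

Even in this toy form, the weakest link is the middle stage: recognition and synthesis are largely solved for simple commands, while understanding anything beyond a keyword is where today's assistants still fall short.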

We are still far from the reality shown in the movie ‘Her’, where a voice assistant communicates on the most natural level and even falls in love, or ‘Iron Man’, where the AI-enabled J.A.R.V.I.S. and FRIDAY systems converse freely with human beings and show a sense of humor of their own.

Today’s voice assistants are great at performing voice searches or handling simple functions like setting alarms, but they are far from completing complex daily tasks. To develop further in this direction, voice automation solutions need to rely on a more advanced AI foundation.

In what direction should this development go to simplify voice assistant app integration?

Need for More Context Awareness

As Lin Nie, a UX researcher at AnswerLab, puts it, humans are always context-aware and context-dependent.

A voice assistant imitating a human being will also depend greatly on context: it will identify speakers, analyze their emotions, classify events, and infer speakers’ intentions in order to adjust its behavior to the situation.
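As a toy illustration of what “depending on context” could mean in code, here is a hypothetical Python sketch. The Context fields and the rules inside answer() are invented for the example and are not taken from any real assistant’s API.

```python
# Hypothetical sketch: the same request handled differently depending on context.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    speaker_id: str        # who is talking (voice profile)
    emotion: str           # e.g. "calm" or "stressed", from tone analysis
    local_time: datetime   # time-zone-aware clock, as in the Echo Wall Clock

def answer(request: str, ctx: Context) -> str:
    """Adapt the same 'play something' request to the detected context."""
    if request == "play something":
        if ctx.emotion == "stressed":
            return f"Playing a relaxing playlist for {ctx.speaker_id}."
        if ctx.local_time.hour >= 22:
            return "Playing quiet evening music at low volume."
        return f"Playing {ctx.speaker_id}'s favourite station."
    return "I didn't catch that."

print(answer("play something",
             Context("Anna", "stressed", datetime(2019, 6, 1, 23, 15))))
```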

Amazon’s Echo Wall Clock, capable of adjusting to a user’s time zone, displaying timers and alarms, and handling time changes, was a great step forward in building a context-aware connected device. The device was later pulled from sale due to connectivity issues, a problem many IoT products have stumbled over. Still, it was welcomed by both consumers and technical specialists, which suggests it was heading in the right direction.

Hands-off Experience

According to Gartner, by 2020, 30% of all searches and web browsing sessions will be done without a screen. This will force a huge repositioning in marketing: well-written content and an eye-pleasing interface won’t carry over into a voice assistant-enabled future in the form we know them today.

What Microsoft’s 2019 Voice Report calls ‘the age of touch as the primary user interface’ will soon give way to the VUI, or voice user interface.

Wide Adoption of 5G

Today, people have learned to expect delays of around four seconds when talking to smart assistants. The new speeds provided by 5G will be a complete game changer in terms of customer loyalty and performance quality.

Owen Brown, CTO at Starbutter AI, thinks that ‘the first app to break the 1-second barrier will have a remarkable impact’.

Growing Number of Skills & Actions

Alexa’s CanFulfillIntentRequest interface, launched in 2018, hasn’t shown its full potential yet.

It allows users to send a request to Alexa without saying a specific skill’s invocation name. That is, there is no longer a need to ask ‘Alexa, what does Any.do say about my plans for today?’; a simple ‘Alexa, what are my plans for today?’ will do.

This makes the voice assistant experience much simpler and more user-centric, which is the first quality a technology needs in order to become the next big thing.
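To give an idea of what this looks like on the skill side, here is a rough Python, AWS Lambda-style sketch of a hypothetical to-do skill answering a CanFulfillIntentRequest. The intent name and skill logic are illustrative assumptions, though the response shape follows Alexa’s documented canFulfillIntent format.

```python
# Hypothetical to-do skill answering Alexa's name-free CanFulfillIntentRequest.
# The intent name below is an assumption made for this example.
SUPPORTED_INTENTS = {"GetTodayPlansIntent"}

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "CanFulfillIntentRequest":
        intent_name = request["intent"]["name"]
        verdict = "YES" if intent_name in SUPPORTED_INTENTS else "NO"
        # Alexa uses this answer to decide whether the skill can serve a request
        # made without its invocation name ("Alexa, what are my plans for today?").
        return {
            "version": "1.0",
            "response": {
                "canFulfillIntent": {"canFulfill": verdict},
            },
        }

    # Regular LaunchRequest / IntentRequest handling would go here.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": "You have two meetings today."},
            "shouldEndSession": True,
        },
    }
```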


Voice Assistant: Scope of Application

Auto manufacturing

In February 2019, Mercedes ran a one-minute ad at the Super Bowl. What were they promoting: their new A-Class sedan? Not really. The whole ad centered on the newly integrated ‘Hey Mercedes’ voice system.

The car became a natural home for voice assistants long ago. However, car manufacturers have resisted ceding control over in-car software to tech giants like Google or Amazon.

As a result, today we see three approaches on the market:
  • Integration with third-party voice assistants. Nearly every carmaker has recognized a huge demand among their users for voice commands while driving. BMW, Toyota, Audi: all of them now allow Apple’s and Google’s voice assistant systems to work alongside their own software.
  • Development of built-in assistants. Kia, Hyundai, and now Mercedes are investing in the development of their own on-board voice assistant systems.
  • Standalone in-vehicle platforms. Apple’s CarPlay (using Siri voice control) and Android Auto offer more functionality and work more reliably than Siri or Google Assistant used on their own, but they only work with compatible car models.

Add to this the multiple startups coming to the market with more customized products (like Robin, which searches for nearby parking and delivers local traffic alerts, or the Nuance Communications app, which detects when a driver is tired and sends an alert to the car’s infotainment system), and we can see how much investment is being concentrated in the industry.

vCommerce in its full development

Shopping simply by placing a voice command will change the retail landscape to the same extent as the Internet (eCommerce) and mobile phones (mCommerce) did before.

When will it happen?

Despite growing smart assistant sales, vCommerce hasn’t attracted much interest from the wider public yet. In 2018, it accounted for $2.1 billion, only 0.4% of online sales. Specialists admit that the story of voice commerce hasn’t really started yet: there is still no infrastructure supporting the emerging technology, and consumers can hardly imagine shopping online without seeing what they are buying.

However, the most tech-advanced brands have already launched skills (for Alexa) or actions (for Google Assistant) to accompany consumers through their shopping experience. Tide has a skill that explains how to deal with particular stains; Purina has a skill that teaches users about dog breeds, and so on. These companies promote themselves by giving practical advice to users, offering a value-added service alongside online shopping.
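Such a branded skill can be surprisingly small. Here is a hypothetical Python Lambda handler in the spirit of a stain-advice skill; the intent name, slot name, and advice text are made up for the example and do not reproduce Tide’s actual skill.

```python
# Hypothetical stain-advice skill handler (inspired by, not copied from, Tide's skill).
# Intent and slot names are assumptions for illustration.
STAIN_ADVICE = {
    "coffee": "Blot the stain, rinse with cold water, then apply liquid detergent.",
    "grass": "Soak the garment in cool water with detergent before washing as usual.",
}

def lambda_handler(event, context):
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "GetStainAdviceIntent"):
        stain = request["intent"]["slots"]["StainType"]["value"].lower()
        text = STAIN_ADVICE.get(stain, "Sorry, I don't know that stain yet.")
    else:
        text = "Ask me how to remove a specific stain."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```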

Smart Earbuds

Thanks to the interface revolution driven by voice assistants, smart earbuds now have the potential to take off on the market.

They can process compound requests and deliver contextual understanding and personalization based on the biometric and behavioral data they collect. Their potential is so high that hearables are already being called the future of wearables, and the future of voice assistants.

On-the-fly translation, personalized radio stations, constant monitoring of a user’s emotional and physiological state with personalized recommendations: smart earbuds can be all this and more, so it is no wonder that the major tech companies are actively recruiting top-level specialists for their voice technology departments.

Add to this the shifts in healthcare regulation, which move away from treating hearables as medical devices requiring FDA approval, and we are ready for the coming mass-market adoption of the technology.