Conversational Design User Experiences for SiriKit and iOS

A lot of articles, including many on this site, have focused on helping readers create amazing iOS apps by designing a great mobile user experience (UX).

However, with the emergence of the Apple Watch a few years ago, alongside CarPlay and, this year, the HomePod, we are starting to see many more apps and IoT appliances that use voice commands instead of visual interfaces. The prevalence of IoT devices such as the HomePod and other voice assistants, along with the explosion of voice-enabled third-party apps, has given rise to a whole new category of user experience design methodologies, one focused on Voice User Experiences (VUX), also known as Conversational Design UX.

This is why Apple introduced SiriKit a few years ago, giving third-party developers the ability to extend their apps so that users can converse with them more naturally. As SiriKit opens up further to third-party developers, we are seeing more apps adopt it, including prominent messaging apps like WhatsApp and Skype, as well as payment apps like Venmo and Apple Pay.

SiriKit’s aim is to blur the boundaries between apps by providing a consistent conversational user experience, built on predefined intents and domains, that keeps apps intuitive, functional and engaging. This tutorial will help you apply best practices to create conversational user experiences that work without visual cues.
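
To make the idea of predefined intents concrete, here is a minimal sketch of an Intents extension for a hypothetical messaging app, using the Messages domain's predefined INSendMessageIntent. The class name and the hand-off comment are illustrative; a real app would forward the message to its own service.

    import Intents

    // Minimal sketch: Siri maps the user's utterance to a typed,
    // predefined intent; the extension routes it to a handler and fulfils it.
    class IntentHandler: INExtension, INSendMessageIntentHandling {

        override func handler(for intent: INIntent) -> Any {
            // One extension can serve several domains; route on the intent type.
            return self
        }

        // Messages domain: recipients and content arrive already parsed by Siri.
        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            // A real app would hand off to its own messaging service here.
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }
    }

Because the intent and its parameters are predefined by the domain, every messaging app that adopts SiriKit responds to the same kinds of phrases, which is what keeps the experience consistent across apps.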

Objectives of This Tutorial

This tutorial will teach you to design engaging voice experiences for SiriKit-enabled apps by following VUX best practices. You will learn about:

  • designing for voice interactions
  • applying conversational design UX best practices
  • testing SiriKit-enabled apps

Assumed Knowledge

I'll assume you have worked with SiriKit previously, and have some experience coding with Swift and Xcode. 

Designing for Voice Interactions

Creating engaging apps requires a well-thought-out user experience design (UX design for short). One principle common to all mobile platforms is that design has traditionally been based on a visual user interface. However, when designing for platforms where users engage by voice, you don’t have the advantage of visual cues to help guide users. This brings a completely new set of design challenges.

The absence of a graphical user interface means users have to work out how to communicate with their devices by voice, discovering what they can say as they navigate between states to achieve their goals. The Interaction Design Foundation describes the situation in conversational user experience:

“In voice user interfaces, you cannot create visual affordances. Users will have no clear indications of what the interface can do or what their options are.” 
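
SiriKit's answer to this problem is the resolution phase. Rather than a visual cue, the handler can tell Siri that a parameter is missing, and Siri voices a follow-up question. A minimal sketch, again assuming the Messages domain's INSendMessageIntent:

    import Intents

    // Sketch: with no screen to provide affordances, returning .needsValue()
    // from a resolve method makes Siri ask the user a clarifying question.
    class MessageContentResolver: NSObject, INSendMessageIntentHandling {

        func resolveContent(for intent: INSendMessageIntent,
                            with completion: @escaping (INStringResolutionResult) -> Void) {
            if let text = intent.content, !text.isEmpty {
                completion(.success(with: text))
            } else {
                completion(.needsValue()) // Siri prompts, e.g. "What do you want to say?"
            }
        }

        // Required by the protocol: fulfil the resolved intent.
        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }
    }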

As a designer, you will need to understand how people naturally communicate with technology: the fundamentals of voice interaction. According to recent studies from Stanford, users generally perceive and interact with voice interfaces much as they converse with other people, even though they know they are speaking to a device.

The difficulty of anticipating the many different ways people phrase a request has driven advances in machine learning over the past few years. Natural language processing (NLP) now allows platforms to understand people more naturally by recognizing the intent behind a command and its associated domain. One prominent platform is Apple’s Siri, along with its framework for third-party developers, SiriKit.
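
For example, when the user names a recipient, Siri's NLP hands the app a structured INPerson rather than raw audio, and the app only has to match it against its own data. A hedged sketch, where findContacts(named:) is a hypothetical stand-in for the app's contact search:

    import Intents

    // Sketch: Siri extracts the spoken recipient as an INPerson; the app
    // matches it against its contacts and, if several match, asks Siri to
    // read the candidates back so the user can choose one.
    class RecipientResolver: NSObject, INSendMessageIntentHandling {

        func resolveRecipients(for intent: INSendMessageIntent,
                               with completion: @escaping ([INPersonResolutionResult]) -> Void) {
            guard let recipients = intent.recipients, !recipients.isEmpty else {
                completion([.needsValue()]) // no name heard: Siri asks who to message
                return
            }
            completion(recipients.map { person -> INPersonResolutionResult in
                let matches = findContacts(named: person.displayName)
                switch matches.count {
                case 0:  return .unsupported()
                case 1:  return .success(with: matches[0])
                default: return .disambiguation(with: matches) // Siri lists the options
                }
            })
        }

        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }

        // Hypothetical stand-in for the app's own contact search.
        private func findContacts(named name: String) -> [INPerson] { return [] }
    }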

Read the rest of the article, exclusively on Envato Tuts+.