"Interacting With Intelligent Assistants to Predict Consumer Satisfaction"
With the rise in popularity of intelligent assistants, there is an increasing need to understand and evaluate both the strengths and shortcomings of the technology, in order to identify specific areas for improvement and to understand where these interfaces are best suited. We describe the current state of personal digital assistants and evaluate their performance by testing voice-activated queries in four distinct categories: Translation, Current/Real-Time Events, "How-to" questions, and General Knowledge. Experiments show that Microsoft's Cortana outperformed both competitors with 100% accuracy, followed by Amazon's Alexa with an average accuracy of 74% and Apple's Siri with only 49.8% accuracy. Siri was fastest to respond on the questions it answered correctly, with an average speed of 2.09 seconds, followed by Cortana at an average of 2.35 seconds and Alexa at an average of 2.63 seconds. Cortana had the highest accuracy and overall effectiveness. Analysis of these three assistants illustrates the current ability of intelligent assistants to aid consumers, and this work also demonstrates the tremendous potential of voice-activated interfaces. Evaluating the categories in which each assistant performed best (or worst) can be a strong predictor of user satisfaction, which is essential for the future development of effective intelligent assistants. This research also reinforced concerns about the relatively poor ability of some voice-activated assistants to interpret the accents of non-native English speakers.
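The per-assistant figures above (accuracy as a percentage, plus average response time over correctly answered queries) suggest a simple aggregation over per-query measurements. The following is a minimal sketch of how such results could be summarized; the `QueryResult` record and `summarize` helper are illustrative assumptions, not code from the paper.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class QueryResult:
    """One voice-activated query posed to one assistant (hypothetical schema)."""
    assistant: str          # e.g. "Cortana", "Alexa", "Siri"
    category: str           # e.g. "Translation", "General Knowledge"
    correct: bool           # whether the response was judged correct
    response_seconds: float # time from end of query to start of response

def summarize(results):
    """Return {assistant: (accuracy_percent, mean_latency_on_correct)}.

    Latency is averaged over correctly answered queries only, mirroring
    how the abstract reports speed on questions answered correctly.
    """
    by_assistant = {}
    for r in results:
        by_assistant.setdefault(r.assistant, []).append(r)
    summary = {}
    for name, rs in by_assistant.items():
        accuracy = 100.0 * sum(r.correct for r in rs) / len(rs)
        correct_rs = [r for r in rs if r.correct]
        latency = mean(r.response_seconds for r in correct_rs) if correct_rs else None
        summary[name] = (accuracy, latency)
    return summary
```

For example, two correct Cortana responses at 2.0 s and 3.0 s would yield `(100.0, 2.5)`; an assistant with no correct answers gets a latency of `None` rather than a misleading average.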
G Elera, Rashmi and Grant, D.C., "Interacting With Intelligent Assistants to Predict Consumer Satisfaction" (2018). School of Engineering and Technology Publications. 194.