Pat is an automated bot that guides visitors of the Mater Hospital to the service they need.
The Problem Statement
Many people who go to the Emergency Department don't actually have an emergency. They are there due to a lack of easy access to alternative options, leaving many feeling they have nowhere else to go.
As a consequence, the A&E of the Mater Hospital is overcrowded.
The Project Goals
The Research Method
We interviewed 18 patients at 7 different locations, conducted 15 guerrilla interviews, and spoke with both experts and staff members.
As a team, we analysed the data and created scenarios and personas before identifying the issues we could focus on.
Core User Needs
The User Journey
The Solution: A Rebranding of the Website
I worked on defining the user paths on the website to direct users towards the alternative options.
I prioritised the most important information, making it immediately discoverable and comparable.
On the current website of the Mater Hospital, important information is displayed below the fold. This interface update is intended to guide the user more intuitively and effectively towards the appropriate service.
Testing also showed us that word choice on the website affects decision-making: staff members of the Mater Hospital use the expression “minor injury” to describe injuries that don’t require going to the ED. However, when in pain, people don’t consider their injuries to be “minor”.
As a consequence, users often dismiss the minor-injury options displayed on the website, move on, get lost, and ultimately go to the ED to be on the safe side.
A chatbot called “Ask Pat”: low to no waiting time
One of the main issues faced by the end-users was that, in an emergency situation, human service agents couldn't process requests quickly enough to satisfy them. Hence, a chatbot could improve the patients' experience.
First, the technical capability of the bot needed to be considered. According to a recent article by Google AI, computer science research about chatbots tends to focus on the quality of speech: "Modern conversational agents (chatbots) tend to be highly specialized — they perform well as long as users don’t stray too far from their expected usage. To better handle a wide variety of conversational topics, open-domain dialog research explores a complementary approach attempting to develop a chatbot that is not specialized but can still chat about virtually anything a user wants."
Secondly, reading case studies online enabled us to determine that good user experience with chatbots requires simplicity, intuitive interfaces, and maximal similarity to a natural human conversation.
Our main concern was prototyping our conversation flows and testing multiple responses to see which ones worked best. Not only did we have to consider how users use chatbots, but also how they use messaging applications. The testing phase also needed to account for the context the bot would operate in: should a hospital bot be friendly or serious (its personality)? What are people using it for?
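A conversation flow like the ones we prototyped can be sketched as a small decision tree. The sketch below is purely illustrative: the prompts, answer options, and service names (including the injury clinic) are hypothetical stand-ins, not the real "Ask Pat" content.

```python
# Minimal rule-based conversation flow for triage (illustrative only).
# Each node holds either a prompt with answer options leading to the
# next node, or a leaf with the recommended service.
FLOW = {
    "start": {
        "prompt": "Hi, I'm Pat. Is this a life-threatening emergency?",
        "options": {"yes": "emergency", "no": "injury_type"},
    },
    "injury_type": {
        "prompt": "Is your problem an injury (cut, sprain, burn) or an illness?",
        "options": {"injury": "injury_clinic", "illness": "gp"},
    },
    # Leaf nodes: the service the user is directed to (hypothetical names).
    "emergency": {"service": "Emergency Department"},
    "injury_clinic": {"service": "Local Rapid Injury Clinic"},
    "gp": {"service": "Your GP or an out-of-hours GP service"},
}


def run_flow(answers, flow=FLOW, start="start"):
    """Walk the flow with a scripted list of answers; return the
    transcript of prompts shown and the recommended service."""
    node_id, transcript = start, []
    answers = iter(answers)
    while "service" not in flow[node_id]:
        node = flow[node_id]
        transcript.append(node["prompt"])
        answer = next(answers).strip().lower()
        node_id = node["options"][answer]  # unknown answers raise KeyError
    return transcript, flow[node_id]["service"]
```

Scripting the answers this way lets each candidate wording of a prompt be swapped in and A/B-tested against the same user paths before any real bot platform is involved.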