Using a DialogFlow Chat Agent to Aid Alzheimer’s Patients: Part 1

Progress Report

As part of our work on Alice, we investigated the practicality and usability of two smart assistants: Alexa and Google Assistant. We decided to proceed with Google Assistant because of Android’s global availability and because building our agent with DialogFlow (formerly api.ai, Google’s conversational interface builder) makes it easy to port to other platforms, including Alexa. Based on the professor’s feedback on our project proposal, we have decided to concentrate on a single task given the timeline of this course (2–3 months). Following research on conversational robots that help users complete activities of daily living, we will test Alice on a tea-making task.

We have kept the steps of our task similar to those in [1] and [2] so that our results are comparable. Figure 1 summarizes the flow of the activity: arrows indicate the ideal flow, and the sub-tasks listed in brackets indicate hard dependencies. Figure 2 shows the architecture we expect Alice to have at the end of this project. We implemented two test agents, one using Google’s DialogFlow and one using the Actions SDK. The latter requires us to manage all of the conversational back-and-forth ourselves and does not offer a significant performance benefit. DialogFlow lets us leverage Google’s machine learning for intent matching while still applying our own natural language understanding to the interaction. So far, we have implemented a simple version of the complete flow shown in Figure 1, and the agent can direct a user through making tea from start to finish.
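To give a sense of how the DialogFlow side works, here is a minimal sketch of the kind of fulfillment webhook that could sit behind the tea-making flow. The intent names, subtask wording, and Flask route are illustrative assumptions for this post, not the exact code of our current agent.

```python
# Sketch of a DialogFlow (v2) fulfillment webhook that walks a user through
# the tea-making subtasks. Intent names and subtask text are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Ordered subtasks from the tea-making task graph (Figure 1).
SUBTASKS = [
    "Fill the kettle with water",
    "Turn the kettle on",
    "Put a tea bag in the cup",
    "Pour the boiled water into the cup",
    "Remove the tea bag and add milk or sugar if you like",
]

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    intent = body["queryResult"]["intent"]["displayName"]

    # The current step is carried in an output context so the conversation
    # can resume where it left off on the next webhook call.
    step = 0
    for ctx in body["queryResult"].get("outputContexts", []):
        if ctx["name"].endswith("/contexts/tea-progress"):
            step = int(ctx.get("parameters", {}).get("step", 0))

    if intent == "start.tea":
        step = 0
        reply = f"Let's make tea. First, {SUBTASKS[0].lower()}."
    elif intent == "step.done":
        step += 1
        if step < len(SUBTASKS):
            reply = f"Great. Next, {SUBTASKS[step].lower()}."
        else:
            reply = "That's everything. Enjoy your tea!"
    else:
        reply = "Tell me when you're done with the current step."

    return jsonify({
        "fulfillmentText": reply,
        "outputContexts": [{
            "name": body["session"] + "/contexts/tea-progress",
            "lifespanCount": 10,
            "parameters": {"step": step},
        }],
    })
```

In this setup DialogFlow handles speech recognition and intent matching, and the webhook only has to track progress through the task graph and produce the next prompt.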

Our next steps involve building a complete version of the conversation, with the ability to pause (and possibly engage the user, as the robot in [2] did) while the user carries out a specific sub-task. We will then work on detecting confusion, or a user’s intention to deviate from the ideal flow, and incorporate a “trouble recovery” mechanism. If time permits, we would also like to develop a simple web interface through which a caregiver could run some standard linguistic tests on the collected data and obtain a cognitive score for the user.
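As a rough sketch of the trouble-recovery logic we have in mind, the snippet below escalates from repeating the instruction, to rephrasing it, to suggesting the caregiver step in. The confusion cues, thresholds, and messages are placeholders for illustration; the real agent would rely on DialogFlow fallback intents and richer cues.

```python
# Placeholder cues for detecting trouble in a user's utterance.
CONFUSION_CUES = {"what", "huh", "i don't know", "confused", "help", "again"}

def recovery_prompt(utterance: str, fallback_count: int, current_subtask: str) -> str:
    """Decide how to respond when the user seems stuck on a subtask."""
    confused = any(cue in utterance.lower() for cue in CONFUSION_CUES)

    if not confused and fallback_count == 0:
        # No sign of trouble: simply repeat the current instruction.
        return f"Whenever you're ready: {current_subtask.lower()}."
    if fallback_count < 2:
        # First signs of trouble: slow down and rephrase the step.
        return f"No problem, let's take it slowly. Right now we just need to {current_subtask.lower()}."
    # Repeated trouble: involve the caregiver rather than looping.
    return "That's okay. Let's pause here. I can ask your caregiver to come and help."
```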

Overall, our end goals remain largely the same as in our proposal, but we have scoped down what we are trying to achieve over the next few months. Instead of building an accessible agent from scratch, we want to test the feasibility of using existing smart assistants. If Alice achieves accuracies similar to or better than those in [2], using trouble-indicating behaviours (TIBs) as a metric for confusion and the number of sub-tasks completed without a caregiver’s help as a metric for ease of use, we can make a case for this agent given its wide availability and its ability to hold a conversation without an external teleoperator.
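For concreteness, here is a small sketch of how a session log might be scored against those two metrics. The per-subtask record format is an assumption for illustration, not something we have committed to.

```python
def score_session(log: list[dict]) -> dict:
    """Each record is assumed to look like:
    {"subtask": str, "tibs": int, "caregiver_helped": bool}."""
    total_tibs = sum(entry["tibs"] for entry in log)
    unaided = sum(1 for entry in log if not entry["caregiver_helped"])
    return {
        "trouble_indicating_behaviours": total_tibs,
        "subtasks_completed_unaided": unaided,
        "fraction_unaided": unaided / len(log) if log else 0.0,
    }
```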

Appendix


Figure 1: Tea-making task graph. Arrows show the ideal flow through sub-tasks; “Needs:” indicates a sub-task’s hard dependency on other sub-tasks.


Figure 2: Alice’s Architecture

References

[1] Patrick Olivier, Andrew Monk, Guangyou Xu, and Jesse Hoey. 2009. Ambient kitchen: Designing situated services using a high fidelity prototyping environment. In Proceedings of the 2nd International Conference on Pervasive Technologies Related to Assistive Environments (PETRA ’09). Corfu, Greece.

[2] Frank Rudzicz, Rosalie Wang, Momotaz Begum, and Alex Mihailidis. 2015. Speech interaction with personal assistive robots supporting aging at home for individuals with Alzheimer’s disease. ACM Transactions on Accessible Computing 7(2), 1–22.
