Designing a connectivity metrics dashboard for AWS IoT Core

— PROJECT NAME

Designing a connectivity metrics dashboard for AWS IoT Core


— ROLE

Lead UX/UI Designer

User Research

Strategy


— TEAM

PM: Joseph Choi, Nitin Nair, Andre Sacaguti, Surabhi Talwar

SDM: Bharath Krishnappa, Steve Apel, Geoffrey Worley

FEE: Xiaoyi Tang, Isheeta Chinchankar


— DATE

2023-2024 (9 months)

AWS IoT Core is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. Although IoT Core had an existing Monitor dashboard, customers were still asking for a way to see detailed connectivity and messaging insights. I was responsible for mapping out and creating the end-to-end experience of the dashboard, ensuring that usability and interconnectivity across services were seamless. This project was a major initiative for IoT Core and required the ability to work through ambiguity along with exceptional cross-functional collaboration.


The Problem:


Solutions Architects and go-to-market leads were telling our team that customers were looking for connectivity insights upfront and weren't aware of recent Device Management and Device Defender offerings. Once customers did learn about these offerings, they found them difficult to set up and access. For example, to see the disconnection rate and historical trends, a customer had to navigate through nine different configuration pages within the console. On top of that, these connection metrics required the customer to configure three different services.
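To give a sense of what that multi-service setup looked like before the dashboard, here is a minimal sketch using boto3. It assumes the typical path of enabling fleet indexing, creating a fleet metric, and wiring a CloudWatch alarm; the metric names, thresholds, and the CloudWatch namespace are illustrative assumptions, not the exact configuration our customers used.

```python
import boto3

iot = boto3.client("iot")
cloudwatch = boto3.client("cloudwatch")

# 1. Enable fleet indexing with connectivity status indexing.
iot.update_indexing_configuration(
    thingIndexingConfiguration={
        "thingIndexingMode": "REGISTRY",
        "thingConnectivityIndexingMode": "STATUS",
    }
)

# 2. Create a fleet metric that counts disconnected devices every 5 minutes.
iot.create_fleet_metric(
    metricName="DisconnectedDevices",            # hypothetical name
    queryString="connectivity.connected:false",  # fleet indexing query
    aggregationType={"name": "Statistics", "values": ["count"]},
    aggregationField="registry.version",         # numeric field used for count aggregation
    period=300,
)

# 3. Alarm on the emitted CloudWatch metric.
cloudwatch.put_metric_alarm(
    AlarmName="HighDisconnectionRate",   # hypothetical name
    Namespace="AWS/IoT/FleetMetrics",    # assumption: namespace fleet metrics publish to
    MetricName="DisconnectedDevices",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,                       # illustrative threshold
    ComparisonOperator="GreaterThanThreshold",
)
```

Stitching these pieces together (plus the console pages for each one) is the work the new dashboard takes off the customer's plate.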


The Solution:


The new dashboard removes the need for an in-depth understanding of the underlying features and lets customers obtain relevant insights about connectivity and messaging without having to rely on alternative services (e.g., Splunk + Elasticsearch). Once set up, all dashboards can be accessed and displayed on the IoT Core Monitor page in a single pane of glass.


Scope & Constraints:


This project was unique for a multitude of reasons. First, it was transferred to me from another designer who was still early in the discovery process. I received the project in the new year, and the product manager wanted to launch by September. I also dealt with a lot of ambiguity and technical constraints because I was working with multiple services both inside and outside of my org. I collaborated with over 5 product managers, 3 software development managers, and 4 engineers, and had to onboard a lead product manager halfway through the project.




The Process


Listen


My goal was to first understand who would be using this feature, what the current experience looked like, and what the previous designer worked on. 


I did this by reviewing the current PRFAQ (Press Release and Frequently Asked Questions) for the project, syncing with the previous designer, and learning whether and how it was possible to view these connectivity insights another way.


Once I had all of this information, I facilitated a project kickoff with all of the current stakeholders to make sure we talked through our assumptions, answered any open questions, recorded the ones we couldn't answer, went over timelines, and ensured everyone understood what role they played in the project and how we could help each other.


Define


I created my two main personas to make sure that I designed and made decisions with them in mind. I also created a preliminary set of use cases and reviewed them with my product manager to make sure we were aligned before starting the ideation phase.




Invent

I really enjoy creating user flows. Since this project had multiple entry points, multiple onboarding scenarios, and was very technical, I wanted to map out the experience before designing anything. Doing this helped me understand which technical requirements were needed, which details would need to be added within the UI, which scenarios I might have missed, and which conversations I needed to have next. I reviewed these initial flows with stakeholders, made updates, and started to create mid-fi mocks. I did this to help foster the conversation between PMs and leadership.



I created mid-fi mocks of the onboarding experience. I kept the initial designs mid-fi because I wanted both stakeholders and users to focus on the core functionalities. Reviewing the designs with stakeholders first allowed me to quickly iterate on them while we were early in the development process and gearing up for initial user testing.


When creating the onboarding experience, I had to consider multiple scenarios and how I could best guide the customer to complete an action so they could then add widgets. For example, if a customer visits the dashboard but doesn't have any registered devices, they need to connect devices before they can see any data within the dashboard. For this scenario, I added an alert stating that they would need to connect devices first, along with a link to start that experience.


Below is the experience of a customer who has connected devices and is viewing the connectivity dashboard landing page. I wanted to include a "How To" section along with illustrations to showcase what can be done and what is possible. The mocks below show the experience of adding widgets to the dashboard.


Refine

I continued to have conversations and design reviews with product managers and developers about:


1. Closing on the different dashboard states (What does it look like if they have connected devices but aren't using the Fleet indexing service? How can we introduce pricing to them?)

2. How to best handle the resources that were created upon adding these widgets (think: how will the delete experience look, and what happens to the data once it's removed?)

3. How to integrate alarms within our dashboard (I had to collaborate with a different AWS team to make sure my ideas were feasible)



Test & Iterate

Even though other areas of the experience were still being refined, I wanted to make sure we got preliminary onboarding feedback from customers as soon as possible.


The overall goal of this research was to improve the mocks the team had created thus far by identifying usability issues in viewing the existing Monitor dashboard, onboarding to the connectivity metrics dashboard (widget configuration), and using the new dashboard.



I conducted remote, moderated, 1:1, 45-60 minute usability sessions with 6 participants using Chime Meetings. Four of the participants were AWS Solutions Architects and two were external IoT Core customers. Participants were guided through a series of task-based scenarios while I shared my screen and acted as their mouse.


Below you can see some of the insights from the study that helped influence my designs as I moved to creating high-fidelity mocks. View the full research report here.


High-fidelity designs post user research



Collaborating with Front-end engineers during implementation


Once my work was approved, I handed off my designs to the developer. This included scheduling a meeting to run through all of the flows, answer any questions, and align on the tags I added within the design so that we could track usage trends and behavior once launched.


Once the developer's work was completed, I noticed that when a customer deleted a resource (a fleet metric) connected to a widget, the widget would be removed but would reappear once the dashboard reloaded. The only way to remove the widget permanently was through the modal view in the dashboard. The developer was focused on solutions that took the least amount of time and originally said no in our first discussion. I knew I had to lead by example and use this as an opportunity to showcase why UX is so important in these technical spaces.


I facilitated a working session using FigJam to talk through the current experience and present my solution. The developer agreed it was technically feasible and felt confident in the solution.


Lessons learned


Document everything

Working with multiple stakeholders on such a large project requires a central repository. Having one helped me remember key updates and the reasons behind my design decisions.


Flow charts are so important

When working in a technical environment, flow charts really are the only way to explain the logic of a product and to make sure I can build the designs correctly. I found that PMs like to jump straight to high-fidelity, but that approach is a disservice to the experience. I'm happy that the PMs I work with are finally coming around to understanding the importance of our work, including flow charts.


The person who clears the way shapes the path

The A in AWS stands for ambiguity (just kidding 🤪). But seriously, if you want change and to be truly user focused in this ambiguous space, you have to be open to leading and unafraid to ask questions. When leadership is done well, stakeholders will follow along, and that builds trust both now and down the road.


Next steps


Measure success

I'm currently working with the fleet indexing product manager and developers to understand how many customers have started using this feature since launch.


Continuing to monitor customer feedback