Project Type: Concept App, UX, UX Research, UI, AI


Project Overview

This project set out to assess how emerging technology can be used in the future of work. The ultimate goal of the project was to identify a user problem in current work practices and use emerging technology to create a conceptual solution.

The project resulted in the conceptualization of ContextClue, an AI-based app to help manage electronic communication notifications for remote office employees. To watch a video presentation of this case study, click here.

Project Scope

This was an individual project completed over the course of 5 weeks. The project followed an iterative divergent/convergent thinking process, divided into four development phases: Discover, Define, Ideate, and Test & Deliver.

 
Image: ContextClue Project Timeline
 

 Roles

  • Research

  • Data Synthesis

  • Visual Design

  • Prototyping

  • Usability Testing

Tools

  • Sketch

  • Zoom (Usability Testing)

  • Stormboard

  • Atlas.ti

Phase 1: Discover

Secondary Research

I began the discover phase by focusing on communication in virtual offices as the problem space. To understand this space, I combed through online research on communication practices in virtual offices.

This research showed that both remote employees and managers struggle with working remotely because coworkers are out of sight and out of mind during the workday. This lack of visibility leads to uncertainty and mistrust about employees’ productivity. As a result, remote employees feel they have to consciously put more effort into appearing productive and responsive. This added pressure is referred to as feeling “always on” and has been shown to add stress and have negative health effects.

I used this secondary research to further focus my project research questions to the following:

  1. How do remote office employees navigate and manage availability?

  2. Do remote office employees feel a pressure to be "always on"?

Qualitative Interviews

Following secondary research, I set off to conduct primary research to get deeper insight into the refocused problem space.

For primary research, I conducted four qualitative interviews with remote office employees. Participants all had prior experience working in a physical office. I chose to conduct qualitative interviews because it allowed me to probe remote office employees on their current experiences with feeling “always on”, and how they signal and read coworkers’ availability.

Phase 2: Define

Interview Insights

In the define phase, I synthesized my research data, beginning with the interview findings. To analyze the interviews, I descriptively coded the interview transcripts for common trends. The most salient findings were:

  • All participants set daily working hours for themselves

  • Three out of four participants use the same physical spaces/devices for work and personal matters

  • All participants rely on availability statuses built-into communication platforms

    • No participants set their availability statuses themselves

    • Three out of four participants did not turn off notifications after working hours

    • All participants said they felt satisfied with current systems

  • All participants’ choice of communication platforms is pre-determined by their organization and inflexible

  • All participants admitted to screening notifications throughout the day

    • Motivations for screening notifications included: curiosity, self-control, anxiety

    • Three out of four participants said they filtered and replied to notifications received after hours based on certain screening criteria:

      • Work Roles

      • Notification urgency

      • Time required to answer

      • Current availability

  • All participants stated they felt increased pressure to be “always on” since working from home

 

Affinity Map

Image: Affinity Diagram of Interview Research Themes


After I identified common user behavior trends from the interview insights, I created an affinity diagram to connect trends into larger themes and get a comprehensive look at the use case space.

I highlighted two potential pain points for users that could benefit from a technology solution:

  1. Employees had difficulty separating work and home life

    • This is partly because employees use the same physical spaces and devices for work and personal use.

  2. Employees feel an increased pressure to be “always on”

    • This is partly driven by the constant notifications employees receive

      • The majority of users do not turn off visual and audio notifications

    • It is also driven by employees’ need/want to screen notifications after working hours

      • Notifications lack context, so users must manually screen them to determine context and urgency

Moving forward, I chose to focus on reducing the need for employees to screen notification messages.

Mitigating the need for employees to screen notification messages for context could help employees finally “turn off” and get uninterrupted personal time to themselves. This could, in turn, create a more distinct separation between work and home life. Screening notification messages was also the more salient pain point in the interview data.

“How Might We” Statement

After I zeroed in on the problem, I reframed it in a “How Might We” (HMW) statement brainstorming session. Options focused on leveraging current tech platforms and reducing “always on” anxiety. I chose to focus on three HMW statements:

How Might We…

  1. Reduce the need/want to screen work notifications for remote office employees?

  2. Improve notification screening tools to maintain a separation between work and personal time while not missing important information?

  3. Redesign notification tools to include descriptive context?

 
Image: Brainstorming Notes from How Might We Statement Formation


 

 How might we…

Redesign notification tools to include descriptive context?

 

User Personas

As the last step in the define phase, I created two user personas to ground any solutions in real use case requirements.

I created the personas with different levels of job urgency and flexibility, coworker socialization, living/working environments, office space set-up, and work/life balance.

Image: Primary Persona


Phase 3: Ideate

Proposed Solution Description

After I defined my problem space and users, I began the ideate phase. The goal in this phase was to create solutions for a device that translated context into notifications. I decided the solution should be a context-aware notification filtering app for remote employees.

The app, named ContextClue, would give users more information on the context of notification messages and help prioritize relevant notifications in order to reduce employees’ need to screen notifications throughout the day.

Building Off Current Tech Limitations

As the first steps to create a solution, I looked at how current communication platforms handle notification filtering. For this, I compared the notification filtering options for the two most common platforms participants in my interviews used for work: Slack and GChat. I used these notification filtering features and the gaps they left unaddressed to inform my own solution.

 

Slack Notifications

Allows filter by

  • Direct Messages

  • Mentions @’s

  • Replies

  • New Threads

  • Threads currently followed

  • Keywords (set up by the user)

 

GChat Notifications

Allows filter by

  • Direct Messages

  • Mentions @’s

  • Replies

  • New Threads

  • Threads currently followed

 

 Outlining “Context”

With a foundation in place, I then fleshed out what information, beyond what current communication platforms offer, a notification manager would need in order to be context-aware.

Outlining context-awareness would highlight the additional information my solution’s filtering options needed to incorporate. To do this, I researched existing context-aware AI devices. I found “context-awareness” encompassed the following:

 
  • Location – Where a person is

  • Time – Time zone

  • Identity – Who a person is

    • Individual Activity – What a person is currently doing

    • Individual Behavior – What person does over an extended time

      • Individual Affect – What emotions a person feels

      • Individual Past Behavior – What a person has done

 

I then expanded the outline of “context-awareness” with additional information the solution should use to contextualize notification messages. To create this list, I scanned the interview data for the screening criteria participants already used to manually filter their notification messages. The full list is below:

 
  • Location

  • Time

  • Identity

    • Activity

      • Walking/Resting

      • Screen time

      • Device info

    • Behavior

      • Affect (Facial Expressions)

    • Past Behavior

      • What messages warrant Yes/No answers

      • What messages warrant quick answers

      • What messages warrant an acceptable response time

      • An individual’s work roles

      • Group chat summary

      • Temporary responsibilities

      • Time active in chat

      • Personal working hours schedule

      • Due dates

 

I also considered how the solution would collect the necessary information. Luckily, much of it could be pulled directly from the device; the rest would need to be inferred by AI or collected from user input.
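The screening criteria and their collection sources could be sketched as a small data structure. This is purely illustrative; the field names and `Source` categories are my assumptions, not part of the original design:

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    """The three collection sources described above."""
    DEVICE = "device"          # pulled directly from the device
    AI = "ai"                  # inferred by the AI model
    USER_INPUT = "user_input"  # entered by the user during setup

@dataclass
class ContextSignal:
    name: str
    source: Source
    value: object = None       # filled in at runtime

def default_signals():
    """A hypothetical default signal set mirroring the outline above."""
    return [
        ContextSignal("location", Source.DEVICE),
        ContextSignal("time_zone", Source.DEVICE),
        ContextSignal("activity", Source.DEVICE),
        ContextSignal("work_roles", Source.USER_INPUT),
        ContextSignal("working_hours", Source.USER_INPUT),
        ContextSignal("message_urgency", Source.AI),
        ContextSignal("acceptable_response_time", Source.AI),
    ]
```

Tagging each signal with its source makes it easy to surface in the UI which filters work automatically and which require user setup, a distinction that became important in later testing.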

Image: An Illustration of Data Collection Sources


AI Model Training

I researched machine learning to determine how the solution’s AI model could be trained to collect and interpret this information. I decided my solution would use two rounds of training. I chose this process because it would allow for a modular approach that could easily be scaled up or down, and it seemed conceptually feasible.

Model Training Round 1

The first model training round would consist of unsupervised learning in a lab setting. During this phase, the model would analyze a body of potential users’ data to find patterns in customer behavior.

These patterns would focus on how users filter and prioritize notifications according to screening criteria. For example, User Type A only prioritizes notifications tied to specific work roles and keywords, while User Type B prioritizes notifications based on their working-hours schedule above all else. The patterns the model found would be segmented to create a list of customer archetypes. In this round, the data would need to come from voluntary user testing.
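The case study does not name a specific algorithm for this segmentation step. As a minimal sketch, it could be done with a simple k-means clustering over per-user screening-behavior features; the feature choice (here, just two weights: work-role priority vs. schedule priority) and the number of archetypes are assumptions:

```python
import random
from math import dist

def kmeans(points, k, iters=50, seed=0):
    """Cluster users' screening-behavior feature vectors into k archetypes."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each user to the nearest archetype centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its cluster
        centroids = [
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Each user is a feature vector: (work-role weight, schedule weight)
users = [(0.9, 0.1), (0.85, 0.2), (0.1, 0.95), (0.15, 0.9)]
archetypes, groups = kmeans(users, k=2)
```

With these toy vectors, the clustering separates the "work-role first" users (high first weight) from the "schedule first" users, matching the User Type A / User Type B distinction described above.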

Model Training Round 2

The second round of model training would involve reinforcement learning. During this round, the model would watch real users as they use the app.

As the model began to understand individual user behaviors, it would classify each user as one of the pre-established customer archetypes. User feedback (ignoring a message, responding to a message) would reinforce their classification into one of the customer archetypes. If a user’s feedback showed a high error rate in this phase, the model would refit the user into a new customer archetype.
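The refit-on-high-error loop described above could be sketched as follows. The error threshold, the feedback encoding (a boolean per prediction), and the class names are all illustrative assumptions:

```python
class ArchetypeClassifier:
    """Assigns a user to a customer archetype and refits the assignment
    when observed feedback shows a high error rate."""

    def __init__(self, archetypes, error_threshold=0.3):
        self.archetypes = archetypes      # e.g. centroids found in Round 1
        self.error_threshold = error_threshold
        self.feedback = []                # True = prediction matched user action

    def record_feedback(self, prediction_matched_action):
        """Log whether the user acted as the model predicted
        (e.g. responded to a message it surfaced, ignored one it hid)."""
        self.feedback.append(prediction_matched_action)

    def error_rate(self):
        if not self.feedback:
            return 0.0
        return self.feedback.count(False) / len(self.feedback)

    def maybe_refit(self, user_features, assign):
        """Re-run archetype assignment if the error rate is too high.
        `assign` is any function mapping (features, archetypes) -> archetype."""
        if self.error_rate() > self.error_threshold:
            new_archetype = assign(user_features, self.archetypes)
            self.feedback.clear()         # start fresh under the new fit
            return new_archetype
        return None
```

In practice `assign` could be the nearest-centroid rule from Round 1; clearing the feedback log after a refit prevents stale errors from immediately triggering another reclassification.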

 Lo-Fi Prototype

I wrote a user scenario focusing on the primary persona for the first lo-fi prototype. In the scenario, I describe an app that works on mobile and desktop, works with existing communication platforms, and allows users to select from a list what contextual information they would like their messages to be filtered with.

The scenario helped me decide on how the onboarding process should look (See Sketch 1), what contextual screening criteria filters should be included, and how users would know if the system is working (See Sketch 2).

 

Sketches

Sketch 1: Onboarding Process ; NOTE: This sketch maps out the general layout of the onboarding process. I decided a traditional Wizard process would work best.


Sketch 2: ContextClue Notification Messages


Sketch 3: ContextClue System Visibility Signal


 Mid-Fi Prototype

I created a mid-fi prototype as a digital narrative video to assess the general concept of the app. The video briefly describes the features of the app and follows the onboarding process. Testing this conceptual prototype would help me learn whether ContextClue was useful and understandable to users, and whether they had any reservations about using the app.

 Phase 4: Test & Deliver

I used the mid-fi prototype for two rounds of user testing, each with two participants. In the testing sessions, I observed participants as they watched the digital narrative video and asked them a short list of questions afterwards.

Testing Round 1  

Link to prototype: https://youtu.be/WLvdWWhqRYM

In the first round of user testing both participants thought the product would be useful. However, there was confusion about the list of screening criteria filters.

One user did not know whether the screening criteria filters were preinstalled on the app or whether they needed to activate the filters themselves. Also, when asked if they had any reservations, both participants mentioned privacy concerns. Specifically, both said they were uncomfortable with the app having constant camera privileges on their devices. Participant 1 also mentioned they were uncomfortable with ContextClue constantly collecting location information.

 

“My only reservation is permissions. Would I need to ask other people’s consent if I use the app? It’d be cool for close coworkers, but not bosses.” -Participant 2

 
Image: Screenshot of “Pause Service” Prompt


Round 1 Design Changes

Privacy

  • Eliminate face tracking as a potential screening filter.

    • Face tracking was the main reservation for both users and I debated whether it was necessary to include. In my research, I found face tracking as a means to infer emotional state is not entirely reliable. I also found it could potentially detect physical signs of depression on a user’s face and broadcast that information to coworkers. I thought it was best to leave face tracking out going forward.

  • Add “Pause Services” prompt when users move locations 

    • To mitigate privacy concerns on location tracking, I added an additional feature that asks users if they would like to turn the app’s location tracking off when changing locations. Giving users more control over data collection should help ease concerns over location tracking.

Confusion on Screening Criteria Filters

  • Emphasize screener criteria filters are preinstalled on the app

  • Emphasize screener criteria filters are optional and selected by users

 

Testing Round 2

Link to prototype: https://youtu.be/W7Nh50P6btc

In the second round of user testing, both participants again found the app useful. However, I also found continued confusion around the screening criteria filters.

One participant stated they were overwhelmed by the screening filters because there were too many options listed. The same participant also said the screening filter names were unclear to them. Another participant asked if setting up the screening filters would require extra work on their part. Participants also wanted to know the cost of the app, whether the app seamlessly integrated with multiple devices, how accurate the AI model was, and how much personal information was being collected.

 

“Sometimes too many options are overwhelming. Maybe if there was a “basics” and “advanced” sections. They’re helpful, but it’s a lot of choices” -Participant 4

 

Round 2 Design Changes

Screening Criteria Filter Confusion

  • Make prototype screens larger, more visible and easily legible

    • Both participants mentioned text on the prototype screens was difficult to read because it was too small. Future prototypes would need to be larger.

  • Tier and organize the list of screening filters

    • One participant was overwhelmed looking through the list of screening filters and suggested organizing the filters into categories. I organized the list into three sections (See Below): User Details (information users have to fill in themselves), Basic, and Advanced.

  • “Screening Filters” nomenclature

    • To make the screening filters’ function more obvious to users, I renamed “screening criteria” to “screening filters”. I felt “filter” was a more easily recognized and understood term.

  • Emphasize which are automatic and which require set up by users

System Integration

  • Emphasize the app is set up with user profiles 

Image: Original Screening Filter List Dialogue Box


 
Image: Reorganized Screening Filter List Dialogue Box


 

 Final Prototype

Link to prototype video: https://youtu.be/h3U-2SPHFUk

 

ContextClue Notification Messages (Desktop and Mobile)

 

ContextClue “Active” Signal (Desktop)

 
 
 

ContextClue “Screening” Signal (Desktop)

 
 

Future Steps

Future Iterations

Moving forward, future design iterations should be tested with a more varied sample of users. Future designs should also address:

Cost 

Both participants in the second round of user testing were interested in knowing the cost of the app. Pricing would be determined by the cost to develop and operate the app, so research would need to be done on the cost of AI model training in order to create a feasible business strategy.

Test screening filter names for understanding

In both rounds of user testing there was confusion around screening filters. Further user testing processes like card sorting could help to make screening filter names more understandable to users. This could also help signify the functions of individual screening filters.  

Future Design Considerations

Future designs should also take into account:

Privacy

In both rounds of user testing, participants’ main reservation with the app was privacy. Being transparent about data collection practices and incorporating clear signals when certain information is being collected could help mitigate these concerns.

Screening Filters 

Throughout the user testing sessions, the screening filters were somewhat hard for users to understand. Future designs should consider building in concrete signifiers and incorporating feedback when screening filters are turned on/off. Both of these would help users know what screening filters did before selecting them and know when filters were activated. Adding direct and clear feedback could also help increase users’ perception of accuracy and trust with the app.

Accuracy

To ensure the AI model is accurate, future designers should first acknowledge that AI models are only useful and accurate if they grow alongside real user data. Future designs should continuously work to update the AI model based on user feedback and promote continuous model improvements.
