
The Look

The Look: an app that helps people with visual impairments and blindness with shopping

Role: UX Researcher & Designer   |     Team Size: 3     |     Duration: 13 Weeks
Tools: Qualtrics (for the survey), Sketch & XD (for lo-fi wireframes), InVision (for hi-fi mockups), iMovie (for video editing),
Python (for implementation)
Skills: UX Field Research, Surveys, User Personas and Scenarios, Sketching, Wireframing, Literature Reviews, Interviews,
As-Is and To-Be Scenarios, Empathy Mapping 

Overview

We were initially motivated by the following statements, found on a Quora forum where members of this community discussed their shopping experiences:

 

  • "That's a great question and identifies a problem that technology hasn't been able to resolve. For me, I rely on my wife to help me with clothes." -David Bass

 

  • "Speaking personally, I care a great deal about looking fashionable. For me, I have a marked disadvantage regarding fashion and keeping up with it since I'm totally blind so of course, I cannot observe others, see magazines and other media that show fashion." -Kathy Strahan


The project targets people with blindness or visual impairments, aged 16 to 35, who are familiar with technology. We researched the problem and designed The Look, which aims to help users shop independently. We conducted semi-structured interviews and a survey to gain insights and guide the project.

 

At present, the project has completed the research and design phases, and a high-fidelity interactive prototype highlighting the key features has been developed using XD and InVision.

Process

Project process diagram

Motivation 

Fashion is a language that signals to others who we are and how we want to be perceived.


Users want to blend in: they do not want special attention, such as people feeling sorry for them or treating them differently once they notice their disability.


Fashion is inaccessible because it can be difficult for someone with low vision or complete blindness to see what others are wearing or to know the norms and extremes of fashion.


“Just because you have a vision impairment doesn’t mean you can’t be fashionable.”


Research Objective

To investigate our target audience's shopping habits for fashion apparel.


Research Questions 

  1. How do they find the clothing items they wish to buy?

  2. How do they come up with an outfit to wear every day?

  3. How do they organize their wardrobe?

  4. How do they think about fashion and trends?

Research Methodology

  • Literature Review

  • Comparative Analysis

  • Survey

  • Interviews

  • As-Is and To-Be Scenarios

Key Findings

The data from the surveys and interviews were de-identified and coded using an inductive coding procedure. Our key findings:

  • Participants do not like shopping online and prefer to shop in-store.

  • Their main concern is fit and size.

  • When buying in stores, they need someone they trust (e.g., a parent or friend) to assist them with texture, color, fit, and size.

Design Values

We felt it was important to establish design values to help guide us to make appropriate decisions, whether that meant filtering opportunities, prioritizing features, or ultimately defining success. We also wanted the interface to reflect these intents and our value proposition. These are our values:

 

  1. Accessibility/Inclusivity

  2. Cross-context compatibility

  3. Agency

  4. Non-intrusive/authoritative

Sketches 

To explore design opportunities, we brainstormed ideas and then evolved each into a detailed scenario. Storyboarding allowed us to evaluate how each idea could solve the design problem, and more importantly, to empathize by capturing users' interactions and reactions to pain points and success in the solution context.

Crazy 8's Brainstorming
 
We brainstormed and sketched eight possible solutions.

The Look Crazy 8's Brainstorming - Sketches 1-4
The Look Crazy 8's Brainstorming - Sketches 5-8

After brainstorming, we settled on the following three solutions.

1

Hardware-based solution - Echo frames

The main goal is to use an artifact that many people already wear and make it accessible for our target users: the glasses would help them navigate the store and act as an assistant while they shop. The design would not hinder anyone in the vicinity and would be easy to carry. The problem with this solution is that the technology needed in the glasses requires a considerable amount of space, which would make them look bulky and indirectly signal that the wearer has a vision impairment.


2

Non-tech approach - Braille Map of the store

This approach is easy to implement and cost-effective for the stores. The main idea is to place braille maps of the store, marking the user's current location, at multiple spots throughout the store. The probable issue with this approach is that it requires people to know braille, and it adds mental load: users must locate the map while also thinking about what kind of clothing to buy for a particular occasion.

 

Solution - Braille Store Map 3D Printed

3

Software-based approach - Application that uses an AI assistant

The main idea for this solution is to leverage the user's mobile phone (which they carry with them all the time) and its already present sensors to help them buy new clothes by:
 

  1. Helping them judge color
     

  2. Helping them match items with their wardrobe
     

  3. Leveraging the already present fashion-aware community to help in the selection process.

Solution - Software based approach - Smartphone app

Design Decisions 

  • No online shopping features on the app
     

  • Primary platform for this app would be iOS
     

  • Audio Assistant to help people with complete blindness
     

  • Identified Color = General Color + Object Color
     

  • Positive reinforcement for every task completion


System Concept And Design

The final solution is a mobile application that helps the user select clothing items by identifying colors. The application leverages the user's existing wardrobe data to generate recommendations.

Major Features
 

The following were the most critical design features for our application:
 

  1. Wardrobe: The wardrobe feature has the following functionalities:

    1. It keeps track of the clothing items owned by the user

    2. Allows the user to tag items with custom tags, such as color and description

    3. Matches clothing items with each other to generate outfits (these outfits can be generated using current trends or by applying an event filter)
       
       

  2. Color Matching: The color matching feature is used while the user is shopping in-store to tell them the accurate color of a clothing item.

    This feature not only gives the base color, such as brown, red, or yellow, but also a correlated object word such as espresso (brown), tomato (red), or sunset (yellow). (A minimal sketch of this naming scheme follows this list.)
     
     

  3. Trends and communities: This feature is for users who consider themselves fashion enthusiasts or who want to buy something that is trending. It provides them with resources and connects them with active fashion communities that support people with visual impairments, as well as events that involve our target population and experts.
     
     

  4. Chat-Bot: This feature will often be used alongside the color matching feature. The bot suggests the primary color, for example espresso brown, and then matches that color and clothing item against the items in the user's wardrobe to make an outfit.
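As a rough illustration of the "Identified Color = General Color + Object Color" naming and the chat-bot's wardrobe matching, here is a minimal Python sketch. The color pairs, wardrobe entries, and function names are invented for the example and are not taken from the prototype:

```python
# Illustrative sketch only: the color/object pairs and wardrobe entries below
# are invented examples, not data from the prototype.

# "Identified Color = General Color + Object Color"
OBJECT_WORDS = {
    "brown": "espresso",
    "red": "tomato",
    "yellow": "sunset",
}

def describe_color(base_color):
    """Pair the base color with a familiar object word, e.g. 'espresso (brown)'."""
    object_word = OBJECT_WORDS.get(base_color, base_color)
    return f"{object_word} ({base_color})"

def matching_items(scanned_color, wardrobe):
    """Return wardrobe items whose tags say they pair with the scanned color."""
    return [item["name"] for item in wardrobe if scanned_color in item["pairs_with"]]

wardrobe = [
    {"name": "black skirt", "pairs_with": {"brown", "red"}},
    {"name": "denim jacket", "pairs_with": {"yellow"}},
]

print(describe_color("brown"))            # espresso (brown)
print(matching_items("brown", wardrobe))  # ['black skirt']
```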

User Scenarios 

User Goal: I don’t feel comfortable going shopping by myself. I only go shopping with my mother or a trusted friend. I love shopping with my friends when they’re looking for items, but not for myself. I want to be more confident in being able to choose outfits for myself based on color, size, and fit. I also rely on my mother to tell me the latest trends. Where can I find this information?
 

When a friend tells Keisha about a new fashion app called The Look, which makes it easy to explore fashion trends, identify colors, and upload and organize her wardrobe, she decides to sign up and use it on her next shopping trip.
 

  1. Keisha has decided to shop for clothes by herself for the first time.
     

  2. She has already uploaded images of her wardrobe to the app.
     

  3. She’s looking for a shirt to match a skirt her mother has recently bought her.
     

  4. She would also like to know if it matches with other clothes in her wardrobe.
     

  5. Once she finds a shirt in the right size, she takes an image using the color identifier.
     

  6. The voice assistant tells her the color of the shirt and how many outfits in her wardrobe it matches.

The Look Storyboard

Implementation Plans

Tech stack
 

  • Frontend: HTML & JavaScript

  • Backend: Python
     

We made progress on our implementation, using HTML5 and JavaScript for the front end and Python for the API calls. We used an API called Imagga to determine the color of an uploaded clothing image. For example, this photo of a hoodie (see below), after being sent through a POST request from our Python file, results in a JSON response describing it as "maroon" and "Bordeaux".

The JSON output is exactly what we're looking for in terms of color identification. Not only is it accurate, but it also gives descriptive and unique words for colors; for example, it returned "graphite" and "antique gold" as color descriptors instead of merely "grey" and "yellow". This level of granularity is perfect for our use case, which is to let users upload an image and hear the main color of the object, along with a suggested color palette for pairing accessories or other items of clothing.
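Below is a minimal sketch of the kind of request we made. It assumes Imagga's v2 /colors endpoint with HTTP basic auth and a direct image upload; the response field names (e.g., closest_palette_color) are from our recollection of the schema and should be checked against the current Imagga documentation. The credentials and file name are placeholders:

```python
# Minimal sketch of the color-identification call (placeholder credentials and
# file name; verify response field names against Imagga's documentation).
import requests

IMAGGA_API_KEY = "your_api_key"        # placeholder
IMAGGA_API_SECRET = "your_api_secret"  # placeholder

def identify_colors(image_path):
    """Upload a clothing photo to Imagga and return (descriptive, base) color pairs."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            "https://api.imagga.com/v2/colors",
            auth=(IMAGGA_API_KEY, IMAGGA_API_SECRET),
            files={"image": image_file},
        )
    response.raise_for_status()
    colors = response.json()["result"]["colors"]

    # Each entry pairs a descriptive palette name (e.g., "bordeaux") with a parent
    # color (e.g., "red") -- the General Color + Object Color pairing we speak back.
    return [
        (c["closest_palette_color"], c["closest_palette_color_parent"])
        for c in colors.get("foreground_colors", [])
    ]

if __name__ == "__main__":
    for descriptive, base in identify_colors("hoodie.jpg"):
        print(f"{descriptive} ({base})")
```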

If we were to implement this project fully, we would use the following software engineering tech stack:
 

  • Backend App: Python Flask

  • Database: MongoDB

  • Frontend App: React, JavaScript

  • Web Server: AWS

photo of a hoodie, after being sent through color identifier with the programming code used
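To show how these pieces could fit together, here is a minimal Flask endpoint sketch for the planned backend. The route name, request shape, and stubbed result are assumptions for illustration; in a full build the stub would call the Imagga-based helper above and persist results to MongoDB:

```python
# Minimal Flask sketch of the planned backend (route name and payload shape are
# illustrative assumptions, not the final API design).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/identify-color", methods=["POST"])
def identify_color():
    """Accept an uploaded clothing photo and return descriptive color names."""
    uploaded = request.files.get("image")
    if uploaded is None:
        return jsonify({"error": "no image provided"}), 400

    # In the full build this would call the Imagga-based helper shown above and
    # store the identified colors against the user's wardrobe in MongoDB.
    colors = [{"base": "brown", "descriptive": "espresso"}]  # stubbed result
    return jsonify({"colors": colors})

if __name__ == "__main__":
    app.run(debug=True)
```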

See The Experience 


Reflection 

More in-depth interviews

Live shopping session

Final Design
