The Approach

This study builds on four previous research projects. First, Phillips and Zhao [6] explored factors that predict assistive technology abandonment; we explored whether those factors currently apply to the use and abandonment of object ID tools. Second, our interview questions were based on the Avila et al. [1] study of Be My Eyes. Third, we based this study's contextual inquiry on Sahasrabudhe et al. [7], who observed how people who are blind use smartphones to complete daily tasks. Finally, we focused on social context based on the findings of Shinohara and Tenenberg [8], who argued that related research focuses on what is technologically feasible and tends to overlook the contexts in which people who are blind typically use assistive technologies. This study was conducted by a team of four.


Research Questions

The goal for the project was to answer four contextually based research questions:


  1. What are the contexts in which people who are blind use object identification tools?

         A. How does the domestic context affect perception, use, and adoption?
         B. How does the public context affect perception, use, and adoption?

  2. What are the major challenges to continued use and adoption of object ID tools?
  3. Do research trends bridge the gap for users who are blind in ways that are actionable, relevant, and context-sensitive?
  4. Are there existing, external components that object ID tools can leverage to provide richer object descriptions to users?



And The Results

Interviews & Observations

Four participants were recruited through the Chicago Lighthouse organization. All participants were blind or visually impaired, between the ages of 25 and 70, Chicago residents, and current users of an object identification tool on their mobile devices.

We conducted one-hour, in-person interviews either in the participant's home or at the DePaul University College of Computing and Digital Media. Participants were given a $25 gift card for their involvement.

Each session began with an introduction to the project. We acquired verbal consent for participation and audio recording. Sessions consisted of five stages: (1) general inquiry, to gauge familiarity with and current perceptions of object ID tools, (2) focusing questions on the context and details of object ID tool use, (3) observation of the participant using their preferred object ID tools on two novels provided by the researchers and on objects around the session room, (4) retrospective questions to identify unmet needs in the participants’ current experience with object ID tools, and (5) a follow-up discussion.

Narrative Concepts

During the follow-up discussion, participants were asked to give feedback on the following narrative concepts: Verbal Catalogue, Crowd See, and Wear & Tell. We derived these concepts from recent studies (Ivanov [2], Zhong et al. [9], and Mekhalfi et al. [4]) of proposed object ID tool features that may or may not be commercially available. With the Verbal Catalogue, users record and attach audio descriptions of objects to RFID tags, then use their phone to activate the audio description.
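
To make the flow concrete, here is a minimal sketch in Python, assuming a simple tag-to-audio mapping stored on the phone. The function names and storage format are illustrative stand-ins, not Ivanov's actual implementation [2].

    # Minimal sketch of a Verbal Catalogue-style flow (illustrative only).
    # A user records a description once, binds it to an RFID tag ID, and
    # later replays it by scanning the same tag with their phone.
    import json
    from pathlib import Path

    CATALOGUE = Path("catalogue.json")  # maps tag ID -> audio file path

    def load_catalogue() -> dict:
        return json.loads(CATALOGUE.read_text()) if CATALOGUE.exists() else {}

    def attach_description(tag_id: str, audio_path: str) -> None:
        """Bind a recorded audio description to a physical tag."""
        catalogue = load_catalogue()
        catalogue[tag_id] = audio_path
        CATALOGUE.write_text(json.dumps(catalogue))

    def on_tag_scanned(tag_id: str) -> str | None:
        """Called when the phone reads a tag; returns the audio to play."""
        return load_catalogue().get(tag_id)

The sketch deliberately leaves out the hard parts: physically applying tags to belongings and reliably reading them with a phone.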

Using Crowd See, the user takes a series of photographs that are stitched together and sent to a remote, human-powered service, which generates crowdsourced labels of items in the image. The labeled image is sent back to the user to aid navigation of an unfamiliar environment.
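
A rough sketch of that pipeline follows. OpenCV's stitcher is a real API, but the labeling endpoint and its response shape are hypothetical placeholders for the kind of human-powered service Zhong et al. describe [9].

    # Sketch of a Crowd See-style pipeline. cv2.Stitcher is real OpenCV;
    # the service URL and JSON response shape are hypothetical.
    import cv2
    import requests

    def stitch_photos(paths):
        """Combine a series of overlapping photos into one panorama."""
        images = [cv2.imread(p) for p in paths]
        status, panorama = cv2.Stitcher_create().stitch(images)
        if status != cv2.Stitcher_OK:
            raise RuntimeError("Stitching failed; retake the photos")
        return panorama

    def request_crowd_labels(panorama):
        """Send the stitched image to a human-powered labeling service
        and return item labels with positions (hypothetical endpoint)."""
        ok, jpeg = cv2.imencode(".jpg", panorama)
        resp = requests.post("https://example.com/crowd-labels",
                             files={"image": jpeg.tobytes()})
        resp.raise_for_status()
        return resp.json()["labels"]  # e.g. [{"label": "exit", "x": 0.7, "y": 0.4}]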

Lastly, Wear & Tell proposes using a voice-activated wearable to take pictures of the environment. Those pictures are interpreted using a database, with audio feedback given to the user through an earpiece. The wearable is also supported by a laser sensor, which provides feedback about environment depth.
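
Stripped down, the loop amounts to matching image features against a reference database and folding in the depth reading. Here is a toy sketch, assuming features are already extracted and the laser sensor reports meters; this is not Mekhalfi et al.'s actual pipeline [4].

    # Toy sketch of a Wear & Tell-style description step. The wearable
    # camera, feature extraction, and laser sensor are assumed inputs;
    # only the matching and spoken-phrase step is shown.
    import math

    def nearest_label(features, database):
        """Return the database entry whose feature vector is closest."""
        return min(database, key=lambda name: math.dist(features, database[name]))

    def describe_scene(features, database, depth_m):
        """Compose the phrase the earpiece would speak."""
        return f"{nearest_label(features, database)} ahead, about {depth_m:.1f} meters"

    # Usage with made-up feature vectors and a depth reading:
    db = {"doorway": [0.9, 0.1], "table": [0.2, 0.8]}
    print(describe_scene([0.85, 0.15], db, 2.3))  # doorway ahead, about 2.3 meters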



Our Findings

  1. RQ1 - Contexts of Object ID Tool Use: Conversations with participants revealed at least two broader social contexts (domestic and public) to consider when designing object ID tools.

         A. At home, participants described using tools to take in and organize belongings (e.g., sorting mail, identifying packages, matching home decor), manage finances (e.g., identifying currency, reviewing bank information), and interface with other technologies (e.g., using the microwave, adjusting a speaker system, viewing a computer screen, changing a printer cartridge).
         B. In public, participants described using Aira to aid with sightseeing on vacation, looking for an address on the street, and identifying points of entry for buildings. Participants still prioritized precision, but discretion was also critical to managing their safety in public. Seeing AI and other camera-based apps were considered impractical for public use. Participant attitudes towards safety and discretion are explored under RQ3.


  2. RQ2 - Major Challenges: Kacorri et al. [3] identified several challenges to the continued use and adoption of object ID tools, including privacy concerns, cost, lighting and photography skills, a required Internet connection, and crowd availability (particularly for tools and services that provide remote human assistance).

  3. RQ3 - Relevance of Research Trends: The research trends reflected in our narrative concepts do not effectively aid in creating actionable, relevant, or context-sensitive tools. While certain aspects received positive feedback, the overall consensus was negative.

  4. RQ4 - Leveraging External Components: During the demonstration, participants had the most success identifying Harry Potter and The Da Vinci Code using the “Product” mode in Seeing AI, which scans barcodes and produces a description associated with the product. Our participants were unsure where the descriptions were stored or how Seeing AI accesses them, but they found those descriptions more useful than the ones provided by the short text or document modes. The barcode-lookup pattern at work here is sketched below.
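
For illustration, the pattern looks roughly like this. The pyzbar decoder and OpenCV are real libraries; the product table is a hypothetical stand-in for whatever backend Seeing AI actually queries, which, as noted, our participants could not see.

    # Sketch of barcode-based product lookup (illustrative only).
    import cv2
    from pyzbar.pyzbar import decode

    PRODUCTS = {  # hypothetical lookup table: UPC/EAN -> rich description
        "0123456789012": "Example paperback novel, 2019 edition, 384 pages",
    }

    def describe_product(image_path: str) -> str:
        image = cv2.imread(image_path)
        for code in decode(image):  # finds and decodes any visible barcodes
            description = PRODUCTS.get(code.data.decode("ascii"))
            if description:
                return description
        return "No recognized barcode in frame"

Because the barcode resolves to a curated record rather than to on-device text recognition, the returned description can be far richer than what a camera can read off the cover.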

Discussion

This study proposes design guidelines that better align with how people who are blind use object identification tools in both domestic and public contexts. We found that these tools excel in certain situations and fail to aid users in others. We also found that some current research trends do not effectively contribute to developing useful or relevant tools for users who are blind. The following recommendations aim to improve object ID tool effectiveness and usability.


Recommendations

High Priority

  • Provide real-time feedback in natural, non-technical language to aid in positioning cameras (a minimal sketch appears at the end of this section).
  • Provide robust lighting and focusing support for object ID tools that require use of a camera.
  • Provide documentation of features to support user adoption and continued use of the tool.

Medium Priority

  • Include feedback systems that do not rely on visual input or audio output, such as haptic feedback.
  • Design wearable assistive technology (such as glasses) to be discreet.

Low Priority

  • Raise, lower, or texturize barcodes on products so they are easier to locate and leverage with object ID tools.
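
To ground the first high-priority recommendation, here is a minimal sketch of real-time positioning feedback. It assumes a detector already reports where the target object sits in the frame (normalized 0-1 coordinates); the thresholds and phrasing are illustrative.

    # Sketch: translate a detector's bounding-box geometry into plain,
    # non-technical guidance. The detector itself is an assumed input.
    def camera_guidance(center_x: float, center_y: float, area: float) -> str:
        """center_x/center_y locate the object in the frame (0-1);
        area is the fraction of the frame the object fills."""
        if area < 0.05:
            return "Move the phone closer"
        if center_x < 0.35:
            return "Move the phone a little to the left"
        if center_x > 0.65:
            return "Move the phone a little to the right"
        if center_y < 0.35:
            return "Tilt the phone up slightly"
        if center_y > 0.65:
            return "Tilt the phone down slightly"
        return "Hold still"

    print(camera_guidance(0.2, 0.5, 0.12))  # Move the phone a little to the left

Note that the phrasing avoids technical terms like "bounding box" or "exposure", in line with participants' preference for natural language.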


Limitations

Our data is limited by a small sample size (n=4) and is not meant to be representative of the perspectives and attitudes of the blind community. Our narrative concepts did not explore the full range of object ID tool features, nor did our session protocol address object ID tools that were not used by our participants, such as Voice Dream Scanner. However, the design of this study may be used as a model of blended contextual inquiry and interview methods, aimed at generating design guidelines that are grounded in user experiences.



Future Work

Future research can leverage contextual inquiry methods to explore the perspectives of people who use tools that were not discussed or demonstrated by our participants. Additionally, a longitudinal study may identify additional factors that influence the use and adoption of object ID tools. Replicating methods with a broader range of users will strengthen the design recommendations for this class of assistive technologies. Research could also be done on the efficacy of using additional devices (such as glasses and smart watches) to augment object identification tools.


References

  1. Mauro Avila, Katrin Wolf, Anke Brock, and Niels Henze. 2016. Remote Assistance for Blind Users in Daily Life: A Survey about Be My Eyes. In Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA '16). ACM, New York, NY, USA, Article 85, 2 pages. DOI: https://doi.org/10.1145/2910674.2935839
  2. Rosen Ivanov. 2014. Blind-environment interaction through voice augmented objects. Journal on Multimodal User Interfaces 8, 4 (January 2014), 345–365. DOI: https://doi.org/10.1007/s12193-014-0166-z
  3. Hernisa Kacorri, Kris M. Kitani, Jeffrey P. Bigham, and Chieko Asakawa. 2017. People with Visual Impairment Training Personal Object Recognizers. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). DOI: https://doi.org/10.1145/3025453.3025899
  4. Mohamed L. Mekhalfi, Farid Melgani, Yakoub Bazi, and Naif Alajlan. 2017. Fast indoor scene description for blind people with multiresolution random projections. Journal of Visual Communication and Image Representation 44 (April 2017), 95–105. DOI: https://doi.org/10.1016/j.jvcir.2017.01.025
  5. Mohamed L. Mekhalfi, Farid Melgani, Yakoub Bazi, and Naif Alajlan. 2015. Toward an assisted indoor scene perception for blind people with image multilabeling strategies. Expert Systems with Applications 42, 6 (April 2015), 2907–2918. DOI: https://doi.org/10.1016/j.eswa.2014.11.017
  6. Betsy Phillips and Hongxin Zhao. 1993. Predictors of Assistive Technology Abandonment. Assistive Technology 5, 1 (1993), 36–45. DOI: https://doi.org/10.1080/10400435.1993.10132205
  7. Shrirang Sahasrabudhe, Rahul Singh, and Don Heath. 2016. Innovative Affordances for Blind Smartphone Users: A Qualitative Study. Journal on Technology & Persons with Disabilities 4, 22 (April 2016), 145–155.
  8. Kristen Shinohara and Josh Tenenberg. 2009. A Blind Person's Interactions with Technology. Communications of the ACM 52, 8 (August 2009), 58–66. DOI: https://doi.org/10.1145/1536616.1536636
  9. Yu Zhong, Walter S. Lasecki, Erin Brady, and Jeffrey P. Bigham. 2015. RegionSpeak: Quick Comprehensive Spatial Descriptions of Complex Images for Blind Users. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 2353–2362. DOI: https://doi.org/10.1145/2702123.2702437