How to Employ Artificial Intelligence to Create Empathetic Designs

Josie Morris
6 min read · Sep 2, 2020

Solving complex problems to create an experience where users collaborate with AI technology to transcribe records from around the world.

Overview

FamilySearch is a nonprofit organization and website offering genealogical records, education, and software. They have been collecting images of records (birth, marriage, and death certificates, censuses, and more) for almost three decades. In that time, they have obtained over 4.1 billion images of records in hundreds of languages. Our volunteer indexers can’t keep up with the number of records FamilySearch is ingesting daily.

A team of engineers has been training a computer to read, recognize, and transcribe the content on an image, but the computer makes mistakes that humans can easily recognize. I worked closely with this team from early conception through design to create an experience that lets our users work with the information the computer transcribes while correcting its mistakes.

Bite-sized summary

  • Researched indexing tools while asking questions early to better understand the complex issues and limitations of the computer.
  • Found edge cases and designed for the most complex images to create an experience that would work with all records.
  • Worked with the product manager to define requirements for our product to be successful, and frequently updated him on the project's progress.
  • Created realistic Figma prototypes and set up multiple Usertesting.com tests to find our target audience and get real user data.
  • Found consistent patterns in the feedback and iterated on the flow and design of the prototype to address them.

My Role

My role included the early conception of the project, researching, storyboarding, ideating, prototyping, and performing user tests. Our team consisted of myself, another UX Designer, multiple product managers, a group of engineers working with AI technology, and the developers building the product.

1. The Challenge

FamilySearch is ingesting millions of new images a week, and the volunteer indexers can’t keep up. We have images in multiple languages and don’t have enough multilingual volunteers to index them.

We have over 4.1 billion unindexed images.

Our volunteers have spent hours transcribing the information found on images, and although we have hundreds of people indexing, the number of unindexed images keeps growing as photographers around the world continue capturing new images daily.

A team of engineers has been training a computer to read images and transcribe the information, but it makes errors grouping the data to a specific person. These errors are easy for humans to spot, but we need to teach people how to work with the computer to correct the mistakes.

2. Research

I researched existing indexing tools across the industry to find out how they use artificial intelligence to help people index. Many existing products have been successful at creating full-text transcriptions (having the computer transcribe the image word for word), but there aren’t any products where the computer can interpret whether a word is a name, date, place, sex, occupation, or something else.
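To make that distinction concrete, here is a minimal sketch in Python (not FamilySearch’s actual pipeline) contrasting plain full-text output with the kind of labeled output we needed. The field names and confidence values are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

# Raw full-text output: just the words, in reading order.
full_text = "Maria Gonzalez nacida 12 marzo 1898 Valencia"

@dataclass
class LabeledValue:
    text: str          # what the computer transcribed
    field: str         # what it thinks the value is: name, date, place, ...
    confidence: float  # how sure the model is, so humans can review low scores

# Interpreted output: the same content, but searchable by vital information.
structured: List[LabeledValue] = [
    LabeledValue("Maria Gonzalez", "name", 0.97),
    LabeledValue("12 marzo 1898", "birth_date", 0.82),
    LabeledValue("Valencia", "birth_place", 0.64),
]

# Values the model is unsure about are the ones we surface for human review.
needs_review = [v.text for v in structured if v.confidence < 0.80]
print(needs_review)  # -> ['Valencia']
```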

Helping people find their ancestors would only work if we gave them the ability to search for their ancestors by this vital information.

The computer can only read so many records at a time, and it is going to take years before it can begin to make a dent in all of our unindexed images. This meant that we had to design a tool where users could select the image they wanted to index and have the computer help them.

The computer is learning language by language, and the engineers decided to teach it Spanish first, since so many of our indexers are English speakers and have already indexed a large portion of the English records.

I speak Spanish and began finding various types of Spanish records to run through the computer: marriage records, birth certificates, and census records. As we ran images through the system, I was able to find where the computer would often make mistakes and confirmed my findings with the engineers developing the tool.

3. Design Goals & Process

Our design goal was to create an experience where users could either correct images the computer had already read or find any image they wanted to index and run it through the computer. If we were successful, indexers could correct 20 or more records in the time it previously took to index 1 or 2.

One of the most difficult issues we were facing was that every record was unique. From typed records to handwritten ones, from tables and grids to paragraph styles, the computer needed to correctly group related information.

Whether a record had 1 person or hundreds of people on it, grouping the information to the correct person was vital to the success of transcribing. Properly grouping related information allows people to search for and find their ancestors.
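As an illustration, a hypothetical data structure for grouped output might look like the sketch below: every value the computer reads is attached to exactly one person, and a human correction is often just moving a value the computer attached to the wrong person. The record shape, sample values, and helper are assumptions for the example, not the production model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Person:
    # e.g. {"name": "Juan Perez", "role": "groom", "age": "24"}
    fields: Dict[str, str] = field(default_factory=dict)

@dataclass
class RecordImage:
    image_id: str
    persons: List[Person] = field(default_factory=list)

# A marriage record groups values into two (or more) people; a census page
# might hold dozens. The values below are made up for the example.
record = RecordImage(
    image_id="marriage-0001",
    persons=[
        Person({"name": "Juan Perez", "role": "groom"}),
        Person({"name": "Ana Lopez", "role": "bride", "age": "24"}),
    ],
)

def move_value(record: RecordImage, src: int, dst: int, key: str) -> None:
    """Reassign a mis-grouped value from one person to another."""
    record.persons[dst].fields[key] = record.persons[src].fields.pop(key)

# If the computer attached the groom's age to the bride, one correction fixes it:
move_value(record, src=1, dst=0, key="age")
```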

Working with another designer, I created personas and scenarios, identified common use cases as well as edge cases to design for, built a journey map accounting for the various ways users could enter our experience, and eventually created a storyboard for this experience.

Journey map for the users.
Storyboard for the computer-assisted indexing flow.

Aware of how complex this process could become, we started with a mobile-first design approach to simplify the experience. This meant breaking the complex steps into smaller, simpler pieces, which made it much easier for users to complete a successful correction.

4. User Testing & Feedback

We created a Figma prototype of the mobile designs and began testing with users to see if they could follow the flow and correct the computer’s transcriptions. We noticed that where we started users in the flow was confusing, since they were expecting to manually index the record. Although they were pleasantly surprised to have the computer do a lot of the work, they quickly became frustrated with the size of the screen they were using.

Because this is such a complex flow, we discussed not allowing users on a mobile device at all. To test this, we recreated the flow as a desktop design and linked up a Figma prototype to do more testing.

We set up an A/B test on usertesting.com and had users try both the mobile and desktop versions (in different orders). Without fail, the users preferred the desktop experience. We knew that we had to make the mobile experience better, especially since, internationally, far more people own a mobile device than a computer.
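The point of “in different orders” is counterbalancing: half the participants see mobile first and half see desktop first, so a bias toward whichever version comes second can’t explain the preference. A tiny illustrative sketch of that kind of assignment (the participant IDs are hypothetical, not our actual panel):

```python
import random

# Hypothetical participant IDs; in practice these came from the test panel.
participants = [f"p{i:02d}" for i in range(1, 13)]
random.shuffle(participants)

# Alternate which version each participant sees first.
assignments = {
    pid: "mobile first, then desktop" if i % 2 == 0 else "desktop first, then mobile"
    for i, pid in enumerate(participants)
}

for pid, order in assignments.items():
    print(pid, "->", order)
```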

User testing was very important to this project. I had been working on it for months and had become so familiar with the terms, ideas, and steps we created that I almost forgot how complex this process really was.

5. Design Iterations

We began to make iteration after iteration: creating an onboarding tutorial, making a video, and even having users complete a test to unlock the real experience. These tutorials and guides did help, but once again, users were confused about what we were asking them to do. We decided to combine a few of the steps and land users on the step they had been expecting all along.

When we gave them better context and started to teach them while they were in the real product, working with actual information, the tips and tools made sense, and they were finally confident in their corrections and eager to do more.

6. Project Learnings

Keeping it Simple
This flow started off pretty complex. We tried forcing steps upon users without giving them context, and they became confused and frustrated with the process. Once we simplified their task to only correcting the errors, instead of taking them through a long verification process, users were far more successful.

Personalized Onboarding
We started off trying to explain the context, the purpose, and word for word how to use the tool. Time and time again we saw users skipping the boring paragraphs, trying to get to their task as fast as possible. We switched our focus and began giving them tips and help when they needed it instead of constantly bombarding them with instructions or explanations.
