iPhone photo tagger

Re-designing the captioning feature of the iPhone to be usable, visible, and easily accessible

The captioning feature has existed on the iPhone for several years, but a majority of its users are not even aware of its existence. I decided to re-design this feature to make it usable and accessible.

Role

Research, user interviews, persona, site mapping, task/user flows, sketching, wireframing, prototyping, user testing

Duration

2 weeks

Tools

Figma

See prototype

Project overview

Scope

This project aims to make the Captioning feature of the iPhone usable, visible, and easily accessible. However, to be honest, that was not my initial idea. I was planning to design a captioning tool for iOS or Android, and I wasn't quite sure which direction to follow in terms of brand.

Background &
problem statement

My initial problem statement was “I took the photo of that surfing dog last summer, how do I find it now?”

We all enjoy taking photos of everyday things or of things we find interesting. Somehow, all of the existing features in mobile photo galleries are built for people who know where they saved which picture. However, there are plenty of users who take a picture of an inspiring moment, remember what the subject was, but forget where they saved it. Maps and Dates are there, but one simply forgets when a photo was taken or can no longer remember the date after a certain time.

Even if a picture is marked as a Favorite, one still has to browse through all the pictures in Favorites to find what they are looking for. So the initial project focus was adding a captioning feature to the photo gallery of one of these brands, so that photos can be found again later effortlessly.

Discover

Market Research and Competitor Analysis

I looked into current popular trends in image-search applications and websites, which helped me locate competitors and similar products. First, I checked the image-tagging features of certain popular apps and websites and listed their strengths and weaknesses to identify opportunities that could be applied to a native image-tagging feature. I also looked at user comments about these websites and apps to gain insight into the people who use such image-search platforms. Based on this, I created the provisional personas.

Interviews

I conducted 1:1 online interviews with 5 people matching the estimated potential target group profile. The questions were open-ended, encouraging participants to talk about their motivations, pain points, and goals. The interviews not only provided valuable information about these topics, but also helped me gain insight into people's behavioral patterns and purchasing decisions:

Initial interviews and secondary research showed that users need such a feature. Half of the interview participants were iOS (iPhone) users and the other half were Android users. Everything was going as planned until I accidentally discovered during secondary research that Captioning has existed as an iPhone feature for several years, yet a majority of users are not even aware of it. At least, none of the interview participants knew Captioning existed.

Then, I expanded my research and reached out to more iPhone users. The result was the same. The users I reached out to were genuinely surprised and happy to learn that this feature exists. One of them described the feature as “a great helping tool to sort his photos in his way”.

After this discovery, I decided to re-design the Captioning feature of the iPhone to make it usable and accessible. Here are some other interesting key takeaways from the interviews that shaped the project:

  • 100% of the users take instant pictures and screenshots to remember an object, a moment, or something that inspires them, so they can look at them later.

  • ..of the users prefer using the photo gallery of their mobile phone to sort, organize, and create albums.

    Only 20% invest in creating personal albums on their mobile phones per topic: Holiday, Events, Birthday, etc.
  • 80% of the participants have difficulty remembering such pictures and finding them later by scrolling through the photo gallery app.

  • ...of the participants prefer using the albums created automatically by the phone itself.

Problem re-statement

In light of the information derived from the interviews, I re-stated the problem:
"The iPhone Captioning feature is not visible or accessible to its users. How can this be improved?"

At the moment it is hard to realize the tool exists, and this lack of visibility makes the feature hard to use. I took the screenshots of the existing Captioning feature's problems from a user's article under the provided link. Right now, the user has to tap the empty text area below the photo marked “Add a caption”. Users don't know this area is tappable:

Another obstacle is finding the captioned images. It is possible to search for them under the Search menu, but users have to remember which word they used in the caption:

Users could access all the captioned images easily if they were visible in one place.

Persona

Then, I created my persona, Amy, a London-based cook who likes travelling and inspirational pictures:

Define

Site map

First of all, I checked the structure of the iPhone Photos app to figure out where it makes the most sense to integrate the Captioning feature, so that it is visible and accessible to users:

Then, I checked what actions the user can take right after shooting a photo and where Captioning should be located. If the Captioning feature is visible and accessible, users will know it exists and will be more likely to use it. For instance, to add a photo to Favorites, users tap the Heart icon. Similarly, the Captioning feature can be made accessible via an icon that users can easily see and tap:

User flow

Then, I sketched the user flows for captioning a photo right after taking it and for opening the library later to find that captioned photo:

Develop

Development

I watched a design lecture called Essential Design Principles by Mike Stern, manager of the Apple Design Evangelism team, to gain insight into Apple's design philosophy before moving on with the development process.

In his talk, Stern outlines the key design principles followed at Apple. I applied these principles, together with the iOS Human Interface Guidelines, throughout the design and development process to solve three main problems of the existing feature:

#1 Add & remove a caption
#2 Captioning with keywords
#3 Search for the captioned image

The design principles used for re-designing this interaction are wayfinding, visibility, proximity, mapping, consistency, and providing feedback. Throughout the case study, I will point out these principles where relevant.

Sketching
low-fidelity wireframes

Based on the site map, the user and task flows, the key takeaways from the interviews, and my persona, I started sketching low-fidelity wireframes before digitizing the screens:

#1 Add & remove
a caption
  • What is the problem?
  • First of all, I had been asking myself why the brand kept such a useful feature so hidden. After watching Stern's presentation, I think the reason might be progressive disclosure, which Stern describes as “a technique for managing complexity and simplifying decision making”. In other words, it means hiding irrelevant information until it is required. Maybe this is why Captioning stays hidden until the user taps the empty text area below the photo. However, it is hidden to the point where it blocks accessibility and functionality instead of simply managing complexity.
  • What is the solution?
  • Visibility & Proximity:
    To increase the usability of captioning, a captioning symbol was introduced. Users can easily tap the symbol and access the Captioning menu. Once an image is captioned, it is marked with a symbol representing the captioning activity. The proximity of the Favorites symbol and the captioning symbol is arranged in a way that signals to the user that these are two different features:
  • iOS already uses this symbol under Options. I used a stylized version of it to keep the interface aligned with the corporate guidelines:
  • Mapping & Consistency:
    iOS uses mapping to show that certain actions have been completed. For example, when the Favorites icon is tapped, the image is marked as a favorite: the heart icon changes color and becomes blue. The same principle was applied to the Captioning symbol. In addition to mapping, I used the consistency principle to represent the Captioning feature in a similar way, so that the interface is perceived as a whole.
  • Providing Feedback:
    It is possible to remove a photo's caption with a single tap. To prevent errors and frustration, a warning pop-up message appears when the user attempts to un-caption the picture (see the sketch after this list):
  • Why this solution helps?
  • If users know the feature exists, they will use it more often. After the re-design, the captioning tool is visible and accessible to users.
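
To make the intended interaction more concrete, here is a minimal SwiftUI sketch of the captioning button described above. It is an illustration under my own assumptions, not Apple's implementation: the CaptionButton view, the text.bubble symbol choice, and the local caption state are all hypothetical stand-ins.

import SwiftUI

// A minimal sketch of the proposed interaction, assuming a simple local
// `caption` state instead of the real Photos data model.
struct CaptionButton: View {
    @State private var caption: String = ""          // empty = not captioned
    @State private var showRemoveWarning = false     // feedback before un-captioning
    @State private var showCaptionSheet = false      // hypothetical Captioning menu

    private var isCaptioned: Bool { !caption.isEmpty }

    var body: some View {
        Button {
            if isCaptioned {
                // Providing feedback: warn before removing an existing caption.
                showRemoveWarning = true
            } else {
                // Visibility: a dedicated, tappable symbol opens the Captioning menu.
                showCaptionSheet = true
            }
        } label: {
            // Mapping & consistency: the symbol fills once the photo is captioned,
            // mirroring how the Favorites heart changes state.
            Image(systemName: isCaptioned ? "text.bubble.fill" : "text.bubble")
        }
        .alert("Remove caption?", isPresented: $showRemoveWarning) {
            Button("Remove", role: .destructive) { caption = "" }
            Button("Cancel", role: .cancel) { }
        } message: {
            Text("The caption on this photo will be deleted.")
        }
        .sheet(isPresented: $showCaptionSheet) {
            // Placeholder for the Captioning menu shown in the wireframes.
            TextField("Add a caption", text: $caption)
                .padding()
        }
    }
}

A single button like this covers all three principles at once: visibility (a dedicated icon), mapping and consistency (the filled symbol mirrors the Favorites heart), and feedback (the confirmation alert before un-captioning).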
#2 Captioning with keywords
  • What is the problem?
  • According to the interview results, 100% of the users take instant pictures and have screenshots to remember an object, a moment, or something that inspires them, so they can look at them later. 80% of the participants have difficulty remembering and finding those photos later by scrolling through the photo gallery app. There is potential to improve the captioning feature in a way that helps them remember these moments.
  • What is the solution?
  • It was assumed that if captioning is not easily accessible, users would likely not use it. To increase user engagement, captioning with keywords was introduced:
  • On iOS, highlighted words can be turned into emoticons. Following the principle of consistency, the keywords highlighted in orange are visually differentiated from the free-written text. When selected, these words turn into emoticons (see the sketch after this list):
  • Why does this solution help?
  • Users will be able to caption their pictures with the least effort possible, which can potentially increase usage of the feature.
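
As a rough illustration of how keyword-to-emoticon substitution could behave, the snippet below maps a few hardcoded keywords to emoji. The keyword list, the emoji choices, and the function name are my own assumptions for this sketch and are not part of any real iOS API.

// A hypothetical keyword-to-emoticon lookup for the captioning flow.
let keywordEmoticons: [String: String] = [
    "dog": "🐶",
    "beach": "🏖️",
    "birthday": "🎂",
    "food": "🍽️",
    "travel": "✈️"
]

/// Splits a caption into words and replaces recognized keywords with
/// their emoticon, leaving free-written text untouched.
func applyKeywordEmoticons(to caption: String) -> String {
    caption
        .split(separator: " ")
        .map { word -> String in
            keywordEmoticons[word.lowercased()] ?? String(word)
        }
        .joined(separator: " ")
}

// Example: "surfing dog at the beach" -> "surfing 🐶 at the 🏖️"
print(applyKeywordEmoticons(to: "surfing dog at the beach"))

In the actual design, the recognized keywords are the ones highlighted in orange, and the substitution only happens when the user selects them.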
#3 Search for the
captioned image
  • What is the problem?
  • Currently, the captioned images are not easily accessible. To find a captioned photo or video, the user has to go to the Search tab and tap the Search bar at the top. Then, the user has to enter a word or a phrase from one of the captions to find what they are looking for:
  • What is the solution?
  • The re-designed version of the captioning feature lets users access captioned images under:
    Photo gallery > Albums > Captions:
  • Consistency & Grouping:
    A Captions section is added under Albums, just like Favorites, so that users can access captioned images easily. The captioned images are marked with the same captioning symbol in grey. When an image is both marked as a favorite and captioned, the two symbols are grouped to help users identify them more easily (a simplified filtering sketch follows this list):
  • Why does this solution help?
  • Providing the captioned photos under Albums > Captions will help users find what they are looking for faster and more easily.
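
To show how the Captions album could be populated, here is a simplified Swift sketch that collects every photo with a non-empty caption. The Photo struct is a hypothetical stand-in; the real Photos library model is considerably more complex.

import Foundation

// A simplified, assumed photo model for illustration only.
struct Photo {
    let id: UUID
    let caption: String?
    let isFavorite: Bool
}

/// Gathers every captioned photo into a single "Captions" album,
/// mirroring how Favorites gathers favorited photos under Albums.
func captionsAlbum(from library: [Photo]) -> [Photo] {
    library.filter { photo in
        guard let caption = photo.caption else { return false }
        return !caption.trimmingCharacters(in: .whitespaces).isEmpty
    }
}

// Example usage with dummy data.
let library = [
    Photo(id: UUID(), caption: "surfing dog", isFavorite: true),
    Photo(id: UUID(), caption: nil, isFavorite: false),
    Photo(id: UUID(), caption: "birthday cake", isFavorite: false)
]
print(captionsAlbum(from: library).count)   // 2 captioned photos

Grouping works the same way: a photo whose isFavorite flag is true and whose caption is non-empty would show both symbols on its thumbnail.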
View prototype

Validate & Iterate

Usability tests

The usability test was done with 4 participants from different demographic backgrounds. They tested the prototype on their own mobile phones. Half of the participants had been iOS users for an average of 10 years, while the other half had used Android-based phones for over 10 years.

The participants recorded their screens and sent the recordings to me. Then, I had short follow-up interviews with them to discuss their comments. The key takeaways from these interviews are listed below:

  • ..of the participants easily completed the given tasks without problems.

  • ..of participants mentioned they would prefer suggested keywords based on the picture they took instead of writing a caption themselves.

  • ..of participants mentioned that keywords turning into emoticons is surprising and delightful.

  • ...of the users commented that suggested keywords for captioning should be based on the previously taken picture.

Next Steps

What else could be improved in the future?