Re-designing the iPhone captioning feature to be usable, visible, and easily accessible
The captioning feature has existed on the iPhone for several years, but the majority of users are not even aware of its existence. I decided to re-design this feature to make it usable and accessible.
Research, user interviews, persona, site mapping, task/user flows, sketching, wireframing, prototyping, user testing
2 weeks
Figma
See prototype

This project aims to make the Captioning feature of the iPhone usable, visible, and easily accessible. To be honest, though, that was not my initial idea. I was planning to design a captioning tool for iOS or Android; I wasn't quite sure which direction to follow in terms of brand.
My initial problem statement was “I took the photo of that surfing dog last summer, how do I find it now?”
We all enjoy taking photos of everyday things or whatever we find interesting. Yet the existing features in mobile photo galleries are built for people who remember where they saved which picture. There are plenty of users who photograph an inspiring moment and remember what the subject was, but forget where they saved it. Maps and Dates exist, but after a while one simply no longer remembers when a photo was taken.
Even if a picture is marked as a Favorite, one still has to browse through all the pictures in Favorites to find what they are looking for. So the initial project focus was adding a captioning feature to the photo gallery of one of these brands, so that photos could be accessed later effortlessly.
I looked into current trends in image search applications and websites, which was useful for locating competitors and similar products. First of all, I checked the image tagging features of several popular apps and websites and listed their strengths and weaknesses to identify opportunities that could be applied to an Android image tagging feature. I also read the comments about these websites and apps to gain insight into the people who use such image search platforms. This is how I created the provisional personas.
I conducted 1:1 online interviews with 5 people matching the estimated potential target group profile. The questions were open-ended, encouraging participants to talk about their motivations, pains, and goals. The interviews not only provided valuable information about these topics, but also helped me gain insight into people's behavioral patterns and purchasing decisions:
Initial interviews and secondary research showed that users need such a feature. I chose half of the interview participants from Android users and the other half from iPhone users. Everything was going as planned until I accidentally discovered during secondary research that Captioning has existed as an iPhone feature for several years, but the majority of users are not even aware of its existence. At least none of the interview participants knew Captioning existed.
Then I expanded my research and reached out to more iPhone users. The result was the same: the users I contacted were genuinely surprised and happy to learn that this feature exists. One of them described the feature as “a great helping tool to sort his photos in his way”.
After this discovery, I decided to re-design the iPhone Captioning feature to make it usable and accessible. Here are some other interesting key takeaways from the interviews that shaped the project:
In light of the information derived from the interviews, I re-stated the problem:
"The iPhone Captioning feature is not visible or accessible to its users. How can this be improved?"
At the moment it is hard to tell that the tool exists, and this lack of visibility makes the feature hard to use. I took the screenshots of the existing Captioning feature's problems from a user's article under the provided link. Currently, the user has to tap the empty text area below the photo marked “Add a caption”. Users don't know this area is tappable:
Another obstacle is finding the captioned images. It is possible to search for them under the Search menu, but users have to remember exactly which word they used in the caption:
Users could access all the captioned images easily if they were more visible in one place.
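As a quick illustration of that idea, here is a minimal SwiftUI sketch of a dedicated “Captions” album that gathers every captioned photo in one place. The view name and sample data are hypothetical, purely for illustration; this is not how the Photos app is actually implemented:

```swift
import SwiftUI

// Hypothetical "Captions" album: one visible place that gathers
// every captioned photo, instead of hiding them behind text search.
struct CaptionsAlbumView: View {
    // Sample captions standing in for the user's captioned photos.
    let captions = ["Surfing dog", "Pasta recipe", "Lisbon sunset"]

    var body: some View {
        List(captions, id: \.self) { caption in
            Label(caption, systemImage: "photo")
        }
        .navigationTitle("Captions")
    }
}
```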
Then I created my persona, Amy, a London-based cook who likes travelling and inspirational pictures:
First of all, I examined the structure of the iPhone Photos app to figure out where it makes the most sense to integrate the Captioning feature so that it is visible and accessible to users:
Then I checked which actions the user can take right after shooting a photo and where Captioning should be located. If the Captioning feature is visible and accessible, users will know it exists and will be more likely to use it. For instance, to add a photo to Favorites, users tap the Heart icon. Similarly, the Captioning feature can be made accessible via an icon that users can easily see and access:
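To make the proposed interaction concrete, here is a minimal SwiftUI sketch of such an action row. The names (PhotoActionBar, showCaptionSheet) and the text.bubble icon choice are my own assumptions for illustration, not Apple's implementation:

```swift
import SwiftUI

// Hypothetical action row: the Caption icon sits next to the familiar
// Heart (Favorite) icon, so both actions get the same visibility.
struct PhotoActionBar: View {
    @State private var isFavorite = false
    @State private var showCaptionSheet = false
    @State private var caption = ""

    var body: some View {
        HStack(spacing: 32) {
            // Existing, well-known action: mark the photo as a Favorite.
            Button {
                isFavorite.toggle()
            } label: {
                Image(systemName: isFavorite ? "heart.fill" : "heart")
            }

            // Proposed action: caption the photo via an equally visible icon.
            Button {
                showCaptionSheet = true
            } label: {
                Image(systemName: "text.bubble")
            }
        }
        .sheet(isPresented: $showCaptionSheet) {
            // A plain, clearly labeled field replaces the hidden
            // "Add a caption" text area.
            TextField("Add a caption", text: $caption)
                .textFieldStyle(.roundedBorder)
                .padding()
        }
    }
}
```

Placing the two buttons side by side borrows the visibility of an interaction users already know, which is exactly the proximity and consistency reasoning discussed below.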
Then I started sketching the user flows for captioning a photo right after taking it, and for opening the library after some time to look for the captioned photo:
Before moving on with the design process, I watched a design lecture called Essential Design Principles by Mike Stern, manager of the Apple Design Evangelism team, to gain insight into Apple's design philosophy.
In his talk, Stern underlines the key design principles followed at Apple. I followed these principles, together with the iOS Human Interface Guidelines, during the design and development process to solve three main problems of the existing feature:
#1 Add & remove a caption
#2 Captioning with keywords
#3 Search for the captioned image
The design principles used for re-designing this interaction are wayfinding, visibility, proximity, mapping, consistency, and providing feedback. Throughout the case study, I will point out these principles where relevant.
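Problems #2 and #3 are two sides of the same question: how a caption is stored and how it is found again. A small Swift sketch of that idea, with a hypothetical CaptionedPhoto type and search function of my own (not Apple's actual implementation), could look like this:

```swift
import Foundation

// Hypothetical model: captions are stored as free text and broken
// into keywords, so search does not require recalling the exact phrase.
struct CaptionedPhoto {
    let id = UUID()
    var caption: String

    // Lowercased keywords derived from the caption text.
    var keywords: [String] {
        caption.lowercased()
            .split { !$0.isLetter && !$0.isNumber }
            .map(String.init)
    }
}

// Returns every photo whose caption contains the query as a keyword,
// so searching "dog" finds a photo captioned "Surfing dog, summer 2021".
func search(_ query: String, in photos: [CaptionedPhoto]) -> [CaptionedPhoto] {
    let term = query.lowercased()
    return photos.filter { $0.keywords.contains(term) }
}
```

Keyword matching like this would ease problem #3: users would need to remember only one word from their caption, not the whole phrase.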
Based on the site map, the user and task flows, the key takeaways from the interviews, and my persona, I started sketching mid-fidelity wireframes before digitizing the screens:
The usability test was conducted with 4 participants from different demographic backgrounds, who tested the prototype on their own mobile phones. Half of the participants had been iOS users for an average of 10 years, while the other half had used Android-based phones for over 10 years.
The participants recorded their screens and sent the recordings to me. Then I held short follow-up interviews with them to discuss their comments. The key takeaways from these interviews are listed below: