I’m happy with the final branding of the prototype, and I tested the text field input in combination with it to see how it works in animations:
As the branding is taking its final shape, I have run into other problems while assembling the final prototype, caused by the limitations of ProtoPie. An example of a smaller issue I managed to overcome was layering multiple large reference images over each other. Changing the opacity value created issues with button hit areas, so I built a paging scroll that travels to the assigned image instead. The transition is not as smooth, but the interaction was the priority. The following video demonstrates the difference:
Later I ran into a bigger issue: the Android phone I’m using to demonstrate the prototype has a very high resolution, so the images in the prototype need to be very high quality to prevent heavy pixelation on the device screen. In the end the prototype had too many images and became too heavy to run smoothly. Since the prototype includes multiple screens and images, the file size grew too large and caused problems when saving, crashing both the prototyping tool and the prototype on the phone multiple times. To solve the problem I tried changing the resolution of the prototype as well as the rendering density, but because ProtoPie converts vector images into bitmaps, the images pixelated heavily on the device screen. I went back and forth with ProtoPie customer support, but in the end I simply needed to reduce the quality and fidelity of the application to keep the file size small. This meant splitting the prototype into multiple files to ensure all interactions could be run, as one large prototype required too much processing power. It is unfortunate that I cannot exhibit all the features and routes I created for the prototype, but I will display videos of them in this blog and leave only the core user journey for the live prototype demonstration. Even then I need to compromise on image quality and lower the resolution to ensure the prototype’s interactions run as intended. Similarly, not all gallery images will open in the live demo, as mapping the full prototype would have demanded too much from the prototyping tool and the ProtoPie engine.
After Mark 2 I gathered more feedback on the prototype. People described the visual style as friendly and inspiring, and I’m happy that it meets my goals there. I have shifted the colour palette slightly towards a more uniform, colder one and smoothed out the organic shapes, as people preferred them that way. I also changed the drop-down menus to less cluttered designs, while still keeping the paint splash elements in most button press options.
Iterating the final visual design
The prompt path structure got a positive response, and now the focus is on smoothing out the interactions with some in-between animations and iterating the graphic design to create a better information hierarchy.
The word cloud on the home page was too distracting and messy, so instead I created a small statistics page that shows “what’s in” using a few key words. The page shows the user’s current top tags for their own work, as well as the top tags others have given it. The page also has an option to change the user’s goals within the application, which changes the recommendation preferences for the prompts. I have also added more in-between animations and welcome screens.
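Under the hood, the “top tags” idea on the statistics page boils down to a simple frequency count. A minimal sketch in Python (the function name and data shape are my own assumptions, not something from the prototype):

```python
from collections import Counter

def top_tags(tag_history, n=3):
    """Return the n most frequent tag labels, e.g. for a "what's in"
    statistics page. `tag_history` is a flat list of tag labels
    collected from the user's own (or others') tagging."""
    return [label for label, _ in Counter(tag_history).most_common(n)]
```

The same function could be run once over the user’s own tags and once over tags received from others, to show the two lists side by side.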
Additionally, when the user browses someone else’s work in the community gallery, they can now see the difficulty of the prompt the drawing was based on, some key words describing the prompt, and an option to take the same prompt themselves if they are interested and inspired.
I have also thought about aligning the images for the gif and come to the conclusion that this is easy with digitally imported images. With photographed drawings the application could use corner alignment or image recognition to smooth the animation. Then again, even if the photographed images were skewed or taken at an angle, as long as users can see their own work they can tag the parts they like in it.
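Corner alignment of photographed drawings could be done with a standard perspective (homography) transform: the user, or a corner detector, marks the paper’s four corners in each photo, and every frame is warped so those corners land on the same rectangle. A sketch of the underlying maths with NumPy; all names are hypothetical, and a real app would likely use an image library such as OpenCV for the actual pixel warp:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve for the 3x3 perspective transform mapping the four marked
    src corners onto the four dst corners (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The homography is the null vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so the bottom-right entry is 1

def warp_point(H, x, y):
    """Apply the homography to a single point."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

With the four photographed corners as `src` and the target rectangle (say, 800×1000 pixels) as `dst`, every frame warps into the same frame of reference, which is what keeps the gif from jittering.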
I also thought about how the tags could be displayed without clutter once the user receives multiple tags from other users. As the point of the tags is to give an overall visual cue of what others like in the work, one option would be a fading animation of different tag clusters at each stage, which the user could then tap open. However, due to the limitations of ProtoPie and the time it takes to assemble the interaction for each tagging circle, I am not able to demonstrate a full changing animation of tag clusters in this prototype.
Another limitation is remembering the user’s tags at each stage – as ProtoPie does not support any kind of temporary memory system, the prototype cannot remember what the user tagged live and then display the same tag in the gallery or in the process timeline. I will create an example of user-added tags and comments for the progress timeline and community, and still keep the free tagging interaction after each stage to demonstrate how the interaction feels.
Here are some screen iterations for the final design:
This is my Mark 2 prototype, which includes a complete prompt path and a demonstration of the community and the user’s own progress timeline. It also has interactive tagging and commenting features. When users browse their own art, either through their progress timeline or through the community side, they can toggle between a view of their own tags and the tags and comments other people have added to the work. However, I will probably merge those two into one cluster of tags, colour coded by whether a tag was created by the artist or by someone else commenting, as this would make the application run more smoothly.
Instead of tagging only at the end of each prompt path, the user can add tags and comments between each drawing stage. This promotes appreciation for the process stages, not just the finished pieces, and guides the user to think positively about each stage.
I also included a small feature for WIP (work-in-progress) images. If the user wants to quit the prompt path in the middle yet continue the same drawing next time, the drawing is saved as a work-in-progress image. The user can browse their WIP gallery and choose an image to continue, and Luova will then provide the remaining prompt path. A detail the interviewed people mentioned was that they have difficulty deciding which WIPs to continue, jokingly saying they need to roll a dice to decide for them. Thus I added a button that randomly rolls a WIP for the user to continue if they have trouble making the decision.
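The “roll a WIP” button is essentially a uniform random pick from the saved works in progress. A tiny sketch, assuming the gallery is simply a list of saved WIP entries (the names are my own, not from the prototype):

```python
import random

def roll_wip(wip_gallery, rng=random):
    """'Roll the dice': pick one work in progress for the user to
    continue, so they don't have to decide themselves."""
    if not wip_gallery:
        raise ValueError("no works in progress to roll")
    return rng.choice(wip_gallery)
```

Passing `rng` in makes the roll easy to seed in tests while staying random in use.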
In the supporting materials, when the user opens a reference image page, they see a gallery of images relevant to the prompt that provide inspiration and reference. The user can also tap them open as larger images to view the details.
These are some iterations of the visual design of the screens:
This week I have been focusing on learning ProtoPie as a tool, prototyping different interactions, and testing the limits of the tool. I have created a working tagging interaction – getting the variables right took a while, but I’m happy with the results. Due to the time it takes to assemble the interaction, the user can currently tag two circles per stage in the prototype, but ideally the user could add as many tags as they want. Unfortunately ProtoPie doesn’t support the use of a device’s native camera, so I won’t be able to use a live camera interaction and have to fake it instead. Additionally, I’m not able to record the user’s tagging choices in system memory, as this is only a prototype and not a real developed application. Luckily ProtoPie does support text input and a native keyboard, so I also created an interactive comment field.
I’m also thinking of including a detail in the process slider that displays which prompt stage the photo was taken in. For example, if the image was taken during a sketching stage, it would display “sketching” as a status. I’m also adding an option to import from digital drawing software in addition to taking a photo, as digital drawing is a popular medium within my target group.
As my prototype features many different screens, I created a sheet to make it easier to manage and track the design and prototyping stage of each screen. It has helped greatly with project management.
Here are some new iterations of the branding and the visual side of the application. The previous angular design did not complement the UI elements and made the application harder to navigate, so I changed the visual direction to a more organic one with round lines. With only flat colours the palette was a bit too overwhelming, but it works with gradients and in a slightly toned-down version. As I want the design to reflect the theme of the app, I’m testing how paint splashes work as graphic design elements. However, the palette and design are not quite there yet, so I will continue iterating on them. I’m also thinking of details such as showing people “what’s in” on the home screen, consisting of key words on what the user has been drawing and enjoying the most lately.
I have also decided on a final name for the application: “Luova”. Luova is a Finnish word that is used to describe a creative person. Luova is short and snappy, and I think it fits the theme of the app.
I have also asked for feedback on the social side of the application to determine which direction to take. It seems to be a popular idea, so I thought of applying the tagging interaction to the community along with the process animations/gifs. On the community side, people can add what they liked about others’ work as quick tags. Tagging is a more effortless way to leave a comment than writing. These tags would then be displayed to the user when they browse their own old published work, and perhaps also as visualised statistics. Some people mentioned that they would like to see “meaningless statistics” as long as they stay positive and don’t become too pressuring.
People often comment on each other’s work in similar ways, such as “nice face” or “great line art”, which is easy to replicate with tags. It also provides a nice set of visual data when looking at the image and its tag clusters at different stages of the illustration.
Some further feedback I got was that, instead of just tagging, maintaining the open written field was something the user testers found important. Whereas tags are time efficient and an easy way to communicate something positive, written comments add a personal touch and let people explain more complex nuances.
I did consider a direct critique tag, where users could mark what they didn’t like and would need more help with in their drawings. I discarded the idea after asking for opinions, as people from the target group explained that they already notice the negatives in their work, and if the app allowed them to tag negative things, they would have an even harder time thinking positively about their work. A direct critique tag could be useful if the user is really focused on improvement, but I’ve come to the decision that it wouldn’t fit the tone of the application or the needs of the target group. They prefer the indirect approach of focusing on the positive, as they struggle to find the joy of the drawing process and the strong parts of their art.
I also considered a system where the application would notice which tags the user doesn’t receive as often, and change the prompt recommendations to provide more prompts featuring the subjects or media the user needs to improve on. However, within the time and prototype limitations it would be difficult to test the results of that, and in this context it seemed irrelevant, as the focus of the application is not on teaching improvement. Additionally, the goals already provide an opportunity to customise the prompt suggestions, and I believe maintaining the user’s own agency in setting goals is more important than hidden changes. The interviewed people mentioned that they would notice if they got a lot of “line art” tags but few “colour palette” tags, for example, and that having the option to steer their goals in the application towards prompts that practise colour palettes is enough if they want to improve on that. However, it should not be forced, as that doesn’t suit the premise or tone of Luova.
I iterated the community side a bit and added a featured “daily prompt” slideshow in the gallery, along with a link to that prompt path. The user can do the daily prompt and have their work featured in the slideshow, which provides a sense of community while staying light-hearted. As the point of the application is to avoid collecting followers and likes, I have chosen not to include a following system in the community side, but to showcase artwork based on different categories. In addition to “recent”, the community has a filtering system based on prompt types such as “relax” or “challenge”, as well as prompt tags – drawings with many “line art” tags, for example, would be displayed when the user filters by the “line art” tag.
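The filtering described above is straightforward to express in code. A rough sketch, assuming each drawing is a record with a prompt type and a flat list of tag labels collected from viewers (the data shape and threshold are my assumptions):

```python
from collections import Counter

def filter_gallery(drawings, tag=None, prompt_type=None, min_tags=1):
    """Filter community drawings by prompt type ('relax', 'challenge', ...)
    and/or by tag: a drawing qualifies for a tag filter when viewers
    have applied that tag to it at least `min_tags` times."""
    results = []
    for d in drawings:
        if prompt_type and d["prompt_type"] != prompt_type:
            continue
        if tag and Counter(d["tags"])[tag] < min_tags:
            continue
        results.append(d)
    # Most-tagged first, so heavily "line art"-tagged work surfaces on top.
    if tag:
        results.sort(key=lambda d: Counter(d["tags"])[tag], reverse=True)
    return results
```

Sorting by tag count keeps the filtered view curated by the community itself rather than by follower counts, which matches the no-follow premise.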
The community side will include the artist’s name so that people can find them on other social media, but it will not have a follow system. This ensures that the user is exposed to many different artworks, not only those of the few artists they look up to, thus expanding their view of how different each person’s drawing process is.
As a small detail, I removed “your work” from the community side. It was unnecessary since the user can browse their work through their own progress timeline, and I will add highlights to display the published images in the user’s own progress timeline, as well as the user’s own images in the community timeline.
So far I have been collecting feedback on the prototype, but also looking for a suitable prototyping tool for the final demonstration. I created the previous versions using Adobe XD, but XD doesn’t quite have the interactive features the app needs, so I have been searching for something with higher fidelity.
I have made multiple online comparisons, and the top two that would provide both the interactivity and the animations I require are Proto.io and the newer ProtoPie. In the end I preferred ProtoPie for its high fidelity, its use of variables, and its pay-once model instead of a subscription. I have been exploring and learning the new tool, the animations it provides and its variable-based trigger-response method, and I have been happy with the results.
To make the application take advantage of the opportunities a mobile platform provides, I came up with a “tagging” method that lets people visually document what they like in their own work. Instead of writing, the user can hold their finger on a photo to visually tag the areas of the artwork they like, and then choose a suitable “tag” title for it. For example, if the user is happy with how a character’s facial expression turned out, they can mark the facial area in the artwork and then choose “expression” as the tag.
The user would tag their work after each drawing stage and at the end of each prompt path. The tags would be recorded on the user’s own timeline, where they can browse their previous work and see what they enjoyed and liked in each drawing, thus maintaining positive memories of the drawings. Next I will create a prototype to test the tagging interaction.
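As a note for a future real implementation, the data behind a visual tag is small: a point (or region) on the stage photo, a label, and an author, recorded per drawing stage. A minimal sketch, with all names being my own assumptions rather than anything in the prototype:

```python
from dataclasses import dataclass, field

@dataclass
class Tag:
    x: float      # tag centre, normalised 0..1 across the photo width
    y: float      # normalised 0..1 across the photo height
    label: str    # e.g. "expression" or "line art"
    author: str   # the artist themselves, or another community member

@dataclass
class Stage:
    name: str     # e.g. "sketching" or "colouring"
    photo: str    # path or URL of the stage photo
    tags: list = field(default_factory=list)

def add_tag(stage, x, y, label, author):
    """Record a held-finger tag at a point on the stage photo."""
    stage.tags.append(Tag(x, y, label, author))
```

Colour coding the artist’s own tags versus others’ then reduces to comparing `tag.author` against the artwork’s owner, which is the merged-cluster idea mentioned for Mark 2.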