Eye tracking is the process of measuring either the point of gaze or the motion of an eye relative to the head (Wikipedia contributors, 2018).
Eye tracking enables researchers to detect how long a test subject looks at something, the path their eyes follow, and when they look at it.
In UX design, eye tracking enables designers to understand how users view a website, and gives an understanding of how they interpret its visual hierarchy.
The earliest attempts at eye tracking began in the 1800s; these attempts involved placing a plaster of Paris covering over the eye, with sticks attached pointing outward.
One of the first eye-tracking usability studies was in 1947, when Paul Fitts and his research team used cameras to study the movements of pilots’ eyes while using cockpit controls to land airplanes.
The late 1990s brought about the modern-day eye tracker. Most modern eye trackers use a method called Pupil Center Corneal Reflection (PCCR) to detect and track the location of the eye as it moves. Near-infrared light is directed toward the pupils, causing visible reflections in the cornea, which are tracked by a camera (figure 1) (“What is Eye Tracking”, 2016).
Eye tracking has become increasingly popular since, with published research rising from 310 articles between 1970–74 to 15,000 articles between 2005–09 (“Exponential Growth in Academic Eye Tracking Papers”, 2011).
Benefits of eye tracking in usability testing include:
Learning which areas of the screen attract attention
Understanding if link labels and instruction text are read
Creating visuals to convince stakeholders, and defend design decisions
Providing a triangulation of data with eye tracking visuals, user verbalizations, and overt observed behavior (Romano Bergstrom & Schall, 2014)
Drawbacks include the additional time and resources required at the beginning and end of studies, and the investment in software and hardware.
Methodology
The device used for testing was the Tobii Pro X3-120, and Tobii Pro Studio was used to design tasks and analyse results. Six participants were tested.
There were some minor hardware problems initially: the location of the eye-tracker device blocked the Wi-Fi signal, and the device slipped on occasion, causing the loss of calibration.
Participants had two goal-directed tasks, both on the Parkrun.ie homepage:
Task 1 – Find and click the Register link
Task 2 – Find and click the Login button
The aim of the study was to discover if the redesigned page improved the discoverability of these components.
The output from tests focused on analysing the UI heatmap, the gaze plots, clusters, time to first fixation, and time to first mouse click.
Two variants of the homepage were tested – version one (figure 2), the current homepage, and version two (figure 3), the redesigned page.
Results & Analysis
Gaze plots are a numbered, step-by-step visual representation of fixations and saccades (Romano Bergstrom & Schall, 2014). They are useful for understanding how the subject perceives the visual hierarchy of the scene. Clustering of fixations can indicate that the user deliberately viewed something, making it more likely their brain processed that object; it can also indicate confusion (Poole & Ball, 2005). A path of erratic fixations can indicate a lack of a clear visual hierarchy, and regressive saccades can indicate confusion, or a lack of salience in content.
In the scanning phase the user entry point is close to the centre of the page; the test subject moved to the page header, scanning the global navigation, before identifying the location with a cluster of fixations.
The cluster appears to be above the Register link, suggesting the calibration may have been incorrect, or that the subject adjusted their seating position.
In the redesigned page the test subject’s entry point is almost identical, and the second fixation begins to follow the path of the previous test; however, the subject notices and fixates on the new Register button after only three more fixations.
In this test, the erratic gaze plot and regressive saccades indicate confusion (Mitzner et al., 2010; Olmsted-Hawala, Romano Bergstrom & Rogers, 2013); the Login icon, a small tree, is not salient and is not a metaphor associated with logging in. All but one of the test subjects failed to identify it.
In the redesigned version, the more familiar door metaphor is used, along with a Login label. The test subject quickly identifies it and fixates. In this version there is almost no regression and the login is identified in 10 fixations.
Heatmaps indicate the number of fixations participants make, or how long they fixate, on areas of a page. Fixations are registered using foveal vision for periods of between 100 and 600 milliseconds (Romano Bergstrom & Schall, 2014).
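The fixation data behind a heatmap is typically derived from raw gaze samples by a dispersion-based filter. As a rough illustration only – the function, thresholds, and the simple I-DT variant below are my own assumptions, not Tobii Pro Studio’s actual fixation filter – consecutive gaze samples are grouped into a fixation when they stay within a small spatial window for at least the minimum fixation duration:

```python
# Minimal sketch of dispersion-threshold (I-DT) fixation detection.
# Gaze samples are (x, y) screen coordinates recorded at a fixed rate.
# All parameter values here are illustrative.

def _dispersion(window):
    """Spatial dispersion of a window: x-range plus y-range, in pixels."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, hz=120, max_dispersion=35.0, min_duration_ms=100):
    """Group consecutive gaze samples into fixations.

    A window counts as a fixation when its dispersion stays under
    max_dispersion and it lasts at least min_duration_ms.
    """
    min_len = int(hz * min_duration_ms / 1000)  # samples in the minimum duration
    fixations, i = [], 0
    while i <= len(samples) - min_len:
        if _dispersion(samples[i:i + min_len]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold
            j = i + min_len
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            fixations.append({
                "x": sum(xs) / len(xs),  # fixation centroid
                "y": sum(ys) / len(ys),
                "duration_ms": (j - i) * 1000 / hz,
            })
            i = j
        else:
            i += 1
    return fixations
```

Each detected fixation then contributes its duration (or simply its count) to the heatmap at its centroid.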
The heatmap for all participants for Task 1 indicates that participants scanned headings and subheadings. Participants also scanned the global navigation, and the Login link was viewed for the longest duration.
In the redesigned page, participants again scanned the heading, but, unlike in the first test, the global navigation got little attention, and the bigger Register button got longer fixations.
The heatmap indicates longer fixations on headings, and on the global navigation, where, based on established design patterns, a user may expect to find a Login. However, almost all of the page receives fixations, including periods of gazing on the Help button on the bottom right-hand side. This erratic pattern indicates subjects were scanning the entire page, trying to complete the task.
On the redesigned version of the page we see a smaller area of prolonged fixations. Participants, with little fixating on other areas, identify the correct location in the top right-hand corner. The only other area receiving noteworthy attention is the call-to-action (CTA) button, perhaps due to its visual weight.
Clusters are regions of a page where there are three or more gaze points at the same time.
In the current version of the page, for Task 1, we see three clusters, viewed by 100% of participants. All three clusters are over, or bordering, content.
In the redesigned page there are only two clusters, indicating less scanning by participants. Cluster 2, the area of the Register button, was viewed by 100% of participants, compared to 67% who viewed Cluster 1. The reduced number of clusters, together with the reduced percentage of participants viewing Cluster 1, indicates the redesign of the Register button increased findability.
In this task, to click the Login button, there are five clusters, all viewed by 100% of participants, indicating participants scanned the entire page looking for the Login button.
In contrast to the current version of the homepage, which had five clusters viewed by 100% of participants, the redesigned version has only one cluster, in the region of the Login area, viewed by 100% of participants.
Time to First Fixation Mean
The time to first fixation measures the time it takes a participant to look at a specific area of interest (AOI) from the beginning of the test.
In the current version of the page the mean time to first fixation on the Register button is 3.98 seconds, compared to 0.48 seconds in the redesigned version, an 87.94% reduction in the time taken to find the target.
For Task 2, participants were required to find the Login button. Mean time to first fixation on the current homepage was over 13 seconds (in fact only one participant correctly identified the button), compared to 1.85 seconds in the redesigned version, an 85.83% reduction in the time taken to find the target.
Time to First Mouse Click
In Task 1, the mean time to first mouse click was 6.03 seconds on the current design; on the redesigned page it was reduced to 2.57 seconds, a 57.38% decrease in the time taken to complete the task.
In Task 2, mean time to first mouse click was 22.03 seconds on the current design; on the redesigned version it was reduced to 2.92 seconds, an 86.75% reduction in the time taken to complete the task.
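The percentage reductions quoted in these sections are simple relative differences between the two design variants. A small sketch (the function name is my own) makes the calculation explicit:

```python
def percent_reduction(before: float, after: float) -> float:
    """Relative reduction, as a percentage, from a baseline measurement
    to the corresponding measurement on the redesigned page."""
    return (before - after) / before * 100

# Task 1, mean time to first fixation: 3.98 s (current) vs 0.48 s (redesign)
print(round(percent_reduction(3.98, 0.48), 2))  # 87.94

# Task 1, mean time to first mouse click: 6.03 s vs 2.57 s
print(round(percent_reduction(6.03, 2.57), 2))  # 57.38
```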
It is not recommended to use eye tracking alone to analyse the entire user experience, as on occasion the subject’s eyes can fixate unintentionally; these orphan fixations make it impossible to infer that the participant actually saw the target, or that it registered cognitively.
Used with other non-invasive physiological response data, e.g. pupil dilation, facial recognition, skin conductance, and neuroimaging, researchers can also measure emotional and cognitive response. However, the quantitative data obtained from eye tracking (e.g., number of mouse clicks, fixation duration) adds an additional measure of the user experience that is not easily obtained from usability sessions (Romano Bergstrom & Schall, 2014).
The aim of the test was to ascertain whether the redesign of the Register and Login buttons increased their findability. Analysis of the gaze plots, heatmaps, cluster maps and times to first fixation and first mouse click, on the redesigned homepage indicated improvements over the current version.
Besides some initial problems with hardware setup, there were no further issues.
Eye tracking is a useful tool to evaluate how test participants see and interact with the page; however, it is not going to tell us how the user feels about any other aspect of the user experience. For that reason, if I were to use eye tracking again, it would be in conjunction with some other form of analysis, or retrospective think-aloud (RTA) feedback.
Mitzner, T.L., Touron, D.R., Rogers, W.A., Hertzog, C., (2010) Checking it twice: age-related differences in double checking during visual search. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 54. pp. 1326–1330 (18).
Poole, A., Ball, L.J. (2005) Eye tracking in human-computer interaction and usability research: current status and future prospects. In: Ghaoui, C. (Ed.), Encyclopedia of Human Computer Interaction. Idea Group, Hershey, PA, pp. 211–219.
This post maps the redesign of two user journeys for an existing app. The journeys that have been redesigned are the Logging In, and Sending Cash processes.
User research has not been included, and the app has been anonymised.
App Login Screen
A more visual login screen, with inspiring imagery which is aligned with brand personality.
The Sign In and Sign Up buttons have been made equal size, with extra visual weight applied to the Sign In button.
Login Screen – Fingerprint Login Option
Include biometrics-based security, including fingerprint authentication.
Send Cash, Step 1
This is the app homescreen, where the user lands after logging in. It is also the first step of the cash transfer process, as this has been identified as the action users carry out most frequently.
This step allows the user to input the amount of cash they will be sending. Previous transaction amounts will be available via the dropdown, along with preset amounts, e.g. $5, $10, $20, $50, $100. The user will also be able to enter the amount manually in the text field.
Send Cash – Step 2
In Step 2 the user enters the recipient details. The recipient’s number can be added manually, and recent recipients are available via the dropdown.
The user can also access their contacts via the Contacts icon.
Applying the Gestalt Principle of Continuation to imply process stages, the Send Cash process is styled in a marquee-type manner. The previous and next step in the process appear behind the current step, clipped.
The previous step is clickable, should the user wish to amend the transaction details.
A clickable progress indicator is used to highlight the user’s current position in the process, and again the previous step is clickable, should the user wish to amend the transaction details.
The amount entered in the previous card is highlighted in the current card – in this case “Send $30 to”, to reassure users they entered the correct amount in the previous step.
Send Cash – Step 3
In the final step, Step 3, the user has the option to add a message to the transaction.
Also, as in the previous screen, details entered in earlier screens are highlighted in the current card – in this case “You are about to send $30 to 555-123-1234”, to reassure users they have entered the correct data.
Should they wish to do so, the user can tap the previous step’s card to access it, and they can also access the steps via the progress indicator. They can also access previous steps via the highlighted, linked text, in the transaction summary.
The Send Now button has been styled with a blue background to differentiate it from the previous card buttons, as they only took the user to the next step in the process, whereas this button completes the process, and sends the cash.
While the transfer is in progress an animation is shown, to reassure the user the transaction is being processed.
If the transfer is instantaneous, it is recommended to show the animation for three to five seconds regardless. This is done to reassure users, during a UX stress point, that their device and app are hard at work for them. This process is known as Artificial Waiting.
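Artificial Waiting amounts to enforcing a minimum display time on the progress animation. A minimal sketch, assuming a synchronous transfer call and a helper name of my own invention:

```python
import time

def run_with_minimum_wait(operation, min_seconds=3.0):
    """Run an operation, but keep the in-progress state visible for at
    least min_seconds, padding with artificial waiting if it finishes early.

    `operation` is any zero-argument callable, e.g. the cash transfer
    request; the three-second floor mirrors the recommendation above
    and is illustrative, not a hard rule.
    """
    start = time.monotonic()
    result = operation()                     # the real work
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        time.sleep(min_seconds - elapsed)    # pad so the animation is seen
    return result
```

The animation would be started before the call and dismissed (replaced by the check mark) only after this function returns.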
In this screen, we see the animation from the previous screen has been replaced by a check mark to indicate the process has been completed. The recipient device has been highlighted in blue.
Leveraging the Peak-End Rule, a psychological heuristic, an animated smiley has been added; this micro-interaction gives the app personality, and gives the user a moment of relief at the end of a potentially stressful process.
The user has the option to either view the transaction details, or close the transaction and return to the app homescreen (Send Cash – Step 1).
The empathy map indicates that the main concern of the primary user is security – opening the app in public and revealing sensitive information being one such concern.
For this reason, the balance, card number, and CVV are masked by default. Tapping the eye icon reveals those hidden details; tapping it again returns them to their masked state. The default state, masked or unmasked, can be set in the app settings.
Beside the credit card number is a Copy icon; selecting this copies the credit card number to the device clipboard, allowing users to paste the number when making online purchases. Copying is confirmed by a toast notification.
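The masked default state can be produced by a simple formatting step that hides all but the last few digits while preserving the card number’s grouping. A sketch (the function name and the four-visible-digit choice are illustrative assumptions):

```python
def mask_card_number(number: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits of a card number,
    preserving any grouping separators (spaces, dashes)."""
    total = sum(ch.isdigit() for ch in number)
    digits_seen = 0
    out = []
    for ch in number:
        if ch.isdigit():
            digits_seen += 1
            # Show only the trailing `visible` digits; mask the rest
            out.append(ch if digits_seen > total - visible else "•")
        else:
            out.append(ch)  # keep separators so the grouping reads naturally
    return "".join(out)

print(mask_card_number("4111 1111 1111 1234"))  # •••• •••• •••• 1234
```

Tapping the eye icon would simply switch the displayed string between the masked and unmasked versions.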
The > to the right of the balance takes the user to their recent transactions; this is an established design pattern, employed by PayPal and others, and so will be familiar to primary users.
The rise of the internet and mobile devices has altered the process of software development. With the rapid evolution and deployment of applications across the internet, software developers now need to react quickly to changes in technology, customer requirements and feedback, and rapidly-evolving competition in their sector. These changes in requirements and competition have led to organisations gaining a competitive advantage by adopting user centered design (UCD) processes, and agile development.
Gestalt Psychology originated from a movement which became popular in Berlin in the 1920s. Gestalt seeks to make sense of how our minds perceive things in whole forms, rather than their individual elements (Busche, n.d.). The most prominent founders of Gestalt theory were Max Wertheimer, Wolfgang Köhler, and Kurt Koffka.
The theory of Gestalt is that, in order to simplify the many different signals encountered in day-to-day life, the brain attempts to reduce its cognitive burden by categorising these signals into groups wherever possible.
Gestalt is a set of principles, or laws. Kurt Koffka summarised them as: “The whole is other than the sum of parts” (“Gestalt psychology,” n.d.), meaning the perception of an entity as a whole is different to that of the individual parts it is composed of.
The word Gestalt is used in modern German to describe how something has been “placed,” or “put together” (Encyclopædia Britannica, 1998). Gestalt can also be referred to as the “Law of Prägnanz” or the “Law of Simplicity.”
The Gestalt principles, when applied to user interface design, are valuable as they tap into fundamentals of cognition that are true across all people (Dain, 2014), and, when applied correctly, they help the interface designer guide users towards their goals.
The Gestalt Principles include:
Principle of Similarity
The Principle of Similarity states that visually similar components within an object will be perceived as groups. These similarities can originate from qualities including the components’ shapes, size, colour and shading.
When similarity amongst components occurs and grouping is perceived, an element can be highlighted if it is dissimilar to the others. When this occurs it is known as an anomaly.
In user interface design the Principle of Similarity can be leveraged to increase the learnability of a product, to guide users’ expectations, or, where an anomaly occurs, it can help focus the user on an important element, such as a Call to Action (CTA) button.
Principle of Continuation
The Principle of Continuation applies to humans’ inclination to interpret and project direction and movement.
People choose to visually interpret and continue the direction of movement of an element as being the one with the least visual friction, whether this direction of movement is implied or initiated by the sweep of the element.
Another application of this principle is that when faced with multiple elements, humans are more likely to group the elements with similar directional changes together.
Principle of Closure
When looking at a complex arrangement of individual elements, humans tend to first look for a single, recognisable pattern. (Rutledge A, 2009)
The Principle of Closure occurs when elements hint at a shape rather than actually completing it, yet the shape is still perceptible. Human minds complete the shape even though it doesn’t fully exist; people mentally combine positive and negative space to complete the image.
In interface design, this innate desire to complete objects and shapes is leveraged on mobile devices, where space is limited. Horizontal lists are intentionally clipped to encourage users to scroll horizontally to view more content.
Principle of Proximity
The Principle of Proximity considers that groupings of objects, aligned in what could be perceived as a pattern, are perceived as being part of a unit, and, as part of a unit, their relationship and meaning could be considered comparable.
Conversely, if items are spread far enough apart, without pattern or any other obvious relationship, they are perceived as separate items. For the user, parsing and understanding these elements is more difficult, as each element needs to be considered individually.
In user interface design, the Principle of Proximity is applied where the designer wishes to demonstrate a relationship between elements of similar meaning, relationship or value in relation to the user goal.
Misuse of this principle, whether intentional or not, can occur when objects with no real relationship are grouped by proximity or alignment, mistakenly creating a relationship between the objects.
Principle of Figure and Ground
Figure and Ground has come about as a direct translation of people’s three-dimensional view of the real world. With Figure and Ground that view is transferred to two-dimensional space, with its expectation of depth – people consider elements as being either figure, the foreground object, or ground, the background.
Figure and ground has been used in modern instances, to lessen the impact of device orientation on mobile devices and other emerging form factors. Switching between landscape and portrait mode has become less of a cognitive burden on the user, and easier to facilitate for the designer by implementing the popular card metaphor in their designs. The card itself becomes figure, and the remaining space becomes ground.
Principle of Symmetry
People innately enjoy recognition and pattern, and reject dissonance and asymmetry. (Dain M. 2014)
The principle of symmetry states that the mind perceives objects as being symmetrical and forming around a centre point. It is perceptually pleasing to divide objects into an even number of symmetrical parts. (“Gestalt psychology,” n.d.) Therefore, when our minds encounter a group of symmetrical elements it will perceive them as a coherent shape, or unified group.
For example, the image below shows four curled and two square brackets. However when the image is viewed, we tend to perceive three pairs of symmetrical brackets rather than six individual brackets.
Site Review – ManUtd.com
The site chosen for review was http://www.manutd.com, website of English Premiership football team Manchester United.
The Manutd.com landing page appears to be primarily focused on news. The current style appears old and cluttered, with little thought given to user experience and user interface. The site is responsive; however, the mobile version seems to be little more than a basic translation of the user interface from desktop to mobile, with little thought given to the challenges and opportunities provided by each form factor.
The Gestalt Principles being applied to it are the Principles of Similarity, Figure and Ground, Continuation, Closure and Proximity.
Principle of Similarity
The dissimilarity, in terms of size, of the main feature story at the top of the page makes it an anomaly. An anomaly such as this is used to attract the users’ attention, and to highlight the element.
Principle of Figure and Ground
Each news article has been presented in card format, focusing the user’s attention on the figure (foreground element). Using figure and ground in this way lessens the design challenges presented by changes in device orientation, as the user focuses on the individual elements – the cards – which remain unchanged regardless of orientation.
Principles of Continuation and Closure
The horizontal layout of the news stories leads users through the articles, and the intentional clipping of the third article in each category acts as a signifier of horizontal scrolling to the user.
Principle of Proximity
Articles from each of the different news categories have been moved closer together; this grouping of objects reinforces the relationship between the articles and the category under which they appear.
Dain, M. (2014, October 16). Gestalt theory for interface designers – 3 similarity. Retrieved 26 March 2018, from http://michaeldain.com/2014/10/gestalt-theory-for-interface-designers-3-similarity/
Encyclopaedia Britannica (n.d.). “Gestalt Psychology.”, Retrieved 28 March 2018, from www.britannica.com/science/Gestalt-psychology.
Busche, L. (n.d.). Simplicity, Symmetry and More: Gestalt Theory And The Design Principles It Gave Birth To. Retrieved 26 March 2018, from https://www.canva.com/learn/gestalt-theory/
Gestalt psychology. (n.d.). Retrieved 26 March 2018, from https://en.wikipedia.org/wiki/Gestalt_psychology
Rutledge, A. (2015, August 25). Gestalt Principles of Perception – 5. Closure. Retrieved 1 March 2018, from http://www.andyrutledge.com/closure.html
The project requires you, working individually, to analyse and test the information architecture and content of an existing site. While you will not have knowledge of the entire content strategy at work, the effectiveness of this strategy will be reflected in the quality of the content on the site (text content, language used on labels, tone of voice, consistency of writing style, etc.).
There were a number of iterations of the UI design over the final stages of the project.
In the medium-fidelity prototype stage, on the app homescreen, the design showed more elements – category headings had icons, and each story tile had a Like and Download button. As the design moved into the high-fidelity stage, and real photographs were used instead of placeholders, it was clear these elements were adding clutter unnecessarily. As a result, the Like and Download buttons were moved into an overflow menu, accessed on the top right-hand side of each tile.
The persona created, Jenny Miller, was based on the user survey – a tech-savvy female, aged 33 and a frequent user of social media. Jenny travels regularly and is an avid amateur photographer. Jenny would be both a contributor to, and consumer of, app content.