
Critically Evaluating Data for Design Projects

Written by Rheinwerk Computing | Mar 1, 2024 2:00:00 PM

How should you handle all the data you have collected, and how can you make the best possible use of it?


An essential element is critically questioning the data you have. Depending on how you present or view data, you can reach completely different conclusions from the same data. Thus, you must be aware of the typical pitfalls of working with data.


Especially if you work with quantitative data, there are many additional pitfalls that cannot all be listed here. If this topic interests you, many textbooks in the field of market research go into much more depth (e.g., Agresti, Franklin, & Klingenberg, 2020; Patten & Newhart, 2017; Hague, 2021).


Where Does the Data Come From?

Data is valuable for designing UIs and products. However, the value of that data is closely related to whether you have collected it from the right people. If admins install an internal B2B application individually on users’ computers, the data will almost inevitably come from the right people.


But when you create an online questionnaire, you usually don’t know whether the people answering it belong to the user group from which you want to collect data. Whenever you don’t “see” the people and only see the submitted data, we recommend integrating questions that allow you to filter out responses from people outside your target group downstream. Such questions could cover, for example, age, gender, occupation, or willingness to use products in a particular area. Exactly which questions these are depends, of course, on the user group you want to reach.


Tip: Open Questions as a Filter: Note that participation in studies, tests, surveys, and so on is often associated with an expense allowance. In other words, participants receive money, vouchers, or products for taking part. As a result, people may pretend to belong to the user group even if they don’t, cheating their way into your survey.


We prevent this issue by making the questions used to include or exclude participants very open-ended. That way, the “correct” answer is not apparent to the respondent.


Many online tools can provide people with one-time links or terminate a survey early if the participation conditions are not met. Whether this filtering is necessary is up to you to decide. You can also use the flexibility and risk matrix as a guide: the riskier or less flexible your solution is, the better the data quality you need.
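
To make this concrete, here is a minimal sketch of such a downstream filter in Python with pandas. The file name (survey_export.csv) and the screening columns (age, occupation) are hypothetical placeholders; replace the criteria with the screening questions from your own questionnaire.

```python
import pandas as pd

# Load the raw survey export (hypothetical file and column names).
responses = pd.read_csv("survey_export.csv")

# Example screening criteria for the target user group; these values
# are placeholders and depend on the user group you want to reach.
in_target_group = (
    responses["age"].between(25, 45)
    & responses["occupation"].isin(["nurse", "paramedic"])
)

filtered = responses[in_target_group]
print(f"Kept {len(filtered)} of {len(responses)} responses.")
```

One advantage of filtering after collection rather than during it: you can still loosen or tighten the criteria later without rerunning the survey.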


Objective Data or Interpretation

Especially with qualitative data, a common pitfall is for data not to be recorded objectively but instead to be interpreted as early as the documentation stage. For example, an interview with a user might not be recorded with a camera (objective). Instead, the interviewer’s interpretation of what was said might be recorded in bullet points (somewhat subjective).


We advise against recording data subjectively at first. Try to make sure that you collect the data objectively, perhaps live by taking notes or with recordings. Then, once you have completed the data collection and want to take stock, you can look for patterns in your objective data. Recording data objectively does not necessarily mean transcribing everything that was said verbatim. We want to emphasize that you should note the essence of an observation and the essence of what was said, not your interpretation of it.


One great advantage is that you can view and interpret the objective data several times. If necessary, you can also re-analyze it later with more knowledge. If you did not record the objective data in the first place, but instead interpreted it directly, you lack this option. Because the interpretation depends on the person doing the interpreting, you lose information: a different person could have assessed the same event differently. Thus, if several people are involved in the data collection, they may interpret the same event differently, causing you to miss patterns that would have been obvious in the objective data.


This point is less critical if you’re the only person collecting data and working with it. But even then, collecting objective data and only interpreting it later is helpful. This approach also makes it easier to defend your conclusions if necessary.


Is the Frame of Reference Correct?

People’s evaluation of a system, product, or service depends on their frame of reference. People will still make comparisons if you don’t set a frame of reference; you just won’t know what they are comparing against. Don’t believe that? Let’s explain.


At this point, we’d like to share a personal story that happened to Michaela (one of the co-authors of the book from which this blog post was adapted) during her diploma thesis (equivalent to a master’s thesis). During her studies, Michaela specialized in usability and UX, and she investigated whether an object’s aesthetics influences its perceived and actual usability. (Spoiler alert: No. At least not as far as the experiment could show.) But do you know what the problem is? We still don’t know, even now, whether aesthetics matters. And now we come to the issue of the frame of reference.


A Personal Example: The Can Opener Study: In Michaela’s thesis, the usability of several can openers was compared. For this purpose, the same can opener was purchased several times and redesigned. The redesigns did not change the item’s usability, but the visual changes were severe: some results were hideous, and some were quite fancy. To avoid basing this classification on Michaela’s assessment alone, a pre-test was conducted in which the aesthetics of these can openers were compared and evaluated. For the experiment, we could then select three can openers perceived completely differently: one neutral, one very beautiful, and one very ugly.


In the actual study, the participants were then divided into three groups. One group was given the neutral can opener; one group, the beautiful can opener; and one group, the ugly can opener. The subjects were then asked to open several cans quickly and accurately and assess each can opener on usability and aesthetics. It turned out that neither the objective usability (i.e., speed and accuracy in this case) nor the subjective usability differed between the three groups.


So far, so good. Unfortunately, however, it also turned out that the assessments of the can openers’ aesthetics did not differ between the groups: all three groups rated their can openers as equally aesthetic. The problem: since each group only saw its own can opener (and not the others’), each group set its own frame of reference. The people in the group with the beautiful can opener assumed that even more beautiful can openers must exist; the people with the ugly one probably figured that even uglier can openers exist. To this day, we do not know whether beauty plays a role in the perceived usability of can openers. Unfortunately, it was impossible to find out with this experimental setup. Well, you don’t make the same mistake twice.


You can see from this can opener story: if you do not provide the frame of reference, for example by offering comparative products, alternative solutions, previous versions, or variants, then your participants will set their own frames of reference, and many subjective assessments become less valuable as a result.


Have You Had Value Judgments or Statements Explained?

In interview situations, a statement is sometimes interpreted entirely differently by the interviewer than it was meant by the interviewee. For this reason, as an interviewer, you should ideally ask follow-up questions about value judgments so that you record the correct interpretation.


A Personal Example: The Car Mechanic’s Car: In an interview on the topic of “vehicle use,” Benjamin (the other author of the book from which this post was taken) received a statement from an interviewee that his vehicle was often defective. The interviewee said literally, “Oh, my car is always broken. There’s always something to be done or something to be replaced!”


How would you interpret this statement? If you were like Benjamin, you would think to yourself at that moment that this must be terrible. After all, a car is for driving, and no one needs a defective vehicle.


Fortunately, Benjamin followed up and asked, “And what do you think of that?” because Benjamin’s interpretation would have been entirely wrong. It turned out the interviewee loved that the vehicle was permanently broken. He was an enthusiastic hobby mechanic who rode his bicycle to work and kept the car solely to tinker with it. The interviewee, therefore, had no need for a functioning, defect-free vehicle.


Perhaps you can recall this story the next time you hear a statement or evaluation as an interviewer. Ask specific questions!


Is the Data Complete and Meaningful?

When you work with quantitative data (e.g., questionnaire data), the question of data quality is crucial. People rarely want to fill out lengthy questionnaires. The longer and more cumbersome a survey is, the more likely you’ll lose people along the way. Sometimes, long questionnaires are so demotivating that people keep putting in checkmarks, but their answers no longer make sense.


The first check you should make is whether all the questions have been answered. You may have 150 people answering the first question but only 75 getting to the end. This self-filtering of respondents is not bad if you know that dropping out of the survey happens “by chance,” and not because a certain category of people drops out. Otherwise, you may end up with only a specific type of person in the survey, who will give different answers. Depending on how important this aspect is to you, you can exclude people who did not complete the survey from the evaluation.
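
As a minimal sketch of such a completeness check, assume a hypothetical export in which each question is a column named q1, q2, and so on, and unanswered questions are simply empty:

```python
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export

# Count how many question columns each respondent actually answered;
# question columns are assumed to be named q1, q2, and so on.
question_cols = [c for c in responses.columns if c.startswith("q")]
answered = responses[question_cols].notna().sum(axis=1)

started = len(responses)
finished = int((answered == len(question_cols)).sum())
print(f"{started} people started, {finished} answered every question.")

# Optionally keep only the complete responses for the evaluation.
complete = responses[answered == len(question_cols)]
```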


The second thing you should check is whether any strange response patterns appear, which is harder to do manually in the digital age. In psychology studies with paper questionnaires, you check whether the forms were filled out according to a specific pattern (e.g., in an X-shape, always option 1, or always option 5). Digitally, such patterns are less obvious because you don’t see the completed questionnaire in front of you; however, you can recognize many of them from the numbers. As a rule, anyone who answers every question with the same answer option should be excluded.
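
One simple digital check for such straight-lining is whether a respondent used more than one answer option at all. A sketch, again assuming numeric answer columns named q1, q2, and so on:

```python
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export
question_cols = [c for c in responses.columns if c.startswith("q")]

# A respondent who used exactly one answer option across all questions
# has produced the classic straight-lining pattern described above.
straight_liners = responses[question_cols].nunique(axis=1) == 1
print(f"Flagged {int(straight_liners.sum())} respondents for review.")

cleaned = responses[~straight_liners]
```

This catches “always option 1” and “always option 5”; a pattern like the X-shape would additionally require looking at the order of the answers, not just their variety.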


Now, you might say, “Hey, what if the person just loved everything?” Then we would tell you, “You have a problem with your questionnaire design.” Usually, you design questionnaires so that, even if someone thinks everything is great, the questions require different values to be checked off. You achieve this by reversing some questions. (Instead of “I found it very easy to use,” the item reads, “I found it very hard to use.”) In this way, a checkmark in the same place becomes a contradictory statement. Additionally, for essential aspects, very similar questions are sometimes asked more than once; later, during the evaluation, you can verify whether those answers match.
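
To illustrate, here is a sketch of reverse-scoring and a simple consistency check, assuming a 1-to-5 scale; which items are negatively worded and which pairs ask (almost) the same thing depends entirely on your questionnaire, so the item names below are hypothetical:

```python
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # hypothetical export

# Negatively worded items ("I found it very hard to use") are flipped
# so that a high value always means a positive judgment.
reversed_items = ["q3", "q7"]  # hypothetical; depends on your questionnaire
scale_min, scale_max = 1, 5
responses[reversed_items] = (scale_min + scale_max) - responses[reversed_items]

# Pairs of near-identical questions should now roughly agree; a large
# gap hints at careless or contradictory answering.
inconsistent = (responses["q2"] - responses["q9"]).abs() >= 3  # hypothetical pair
print(f"{int(inconsistent.sum())} respondents gave contradictory answers.")
```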


Clean questionnaire construction is an art in itself. To learn more about this topic, we recommend literature on market research or test and questionnaire construction. But quite frankly, in most cases, you’ll be okay with a less complex survey. The most important thing is to ensure the questions are straightforward and that they measure what you want.


Editor’s note: This post has been adapted from a section of the book Usability and User Experience Design: The Comprehensive Guide to Data-Driven UX Design by Benjamin Franz and Michaela Kauer-Franz.