What is Operationalization?

That’s a difficult word to say, and most text editors won’t even recognize it as a word. Here’s a good scientific definition: “An operationalization is the translation of a theoretical variable into procedures designed to give information about its levels” (https://www.sciencedirect.com/topics/social-sciences/operationalization).


Although the name might be confusing, this practice is done very often in research, including UX research. Take for instance the common measure of “Daily Active Users”. Analytics tools will usually give you this as a number, and you can derive other measures from it, like stickiness (Daily Active Users / Monthly Active Users, as a percentage). But a core question is: what exactly constitutes an active user? Google Analytics defines this as “the number of unique users who initiated sessions on your site or app“. In other words, a single uniquely identifiable user (probably a person, but we can’t be sure) has started at least one session. If you ran a physical shop somewhere, that would mean the number of individual people who entered your store. Some people may enter twice or more, but the repeat visits don’t add to the count; we don’t quite care how long they stayed or what they did while they were there, only whether they made it through the door.
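To make that definition concrete, here is a minimal sketch (with made-up session data; the names and thresholds are illustrative, not from any real analytics tool) of how this operationalization turns into something you can actually compute:

```python
# Hypothetical session log: user_id -> set of dates on which that user
# started at least one session. "Active" is operationalized the Google
# Analytics way: at least one session initiated, repeats don't add.
from datetime import date

sessions = {
    "user_a": {date(2020, 6, 1), date(2020, 6, 2)},
    "user_b": {date(2020, 6, 1)},
    "user_c": {date(2020, 6, 15)},
}

def daily_active_users(day):
    # A user counts once per day, no matter how many sessions they started.
    return sum(1 for days in sessions.values() if day in days)

def monthly_active_users(year, month):
    return sum(1 for days in sessions.values()
               if any(d.year == year and d.month == month for d in days))

dau = daily_active_users(date(2020, 6, 1))  # 2
mau = monthly_active_users(2020, 6)         # 3
stickiness = dau / mau                      # about 0.67
```

Notice that every judgment call (what counts as a session, whether repeats matter) is made explicit in code; that is the whole point of operationalizing.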


What Google Analytics is doing here is operationalizing a concept into individual steps that we can measure. This is a critical part of any analytics process, but I rarely see it discussed, because either the knowledge is assumed or analytics products make these decisions for you. In fact it’s very important to know how a concept is operationalized; in other words, how it is split into individual processes that you can put a number on through direct observation.

For instance, one major concept in web analytics is “engagement”. However, there is no single engagement-related data point on the web. It’s not a thing. Even if you made a button called “engage” and counted how many people clicked it, you would end up with information about something (I guess), but it would not be a measure of engagement. Instead, you need to spell out specifically what you are going to count as engagement and how you are going to define it. Will you make it an either/or, or levels of engagement? If levels, where will one level end and another begin? And more importantly, which individual observable elements will you consider to constitute engagement? Is one more important than another?
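One possible (entirely hypothetical) way to spell this out is to write the definition down as explicit rules over observable events, with the level boundaries chosen in advance:

```python
# A sketch of one hypothetical operationalization of "engagement":
# the observable inputs and the thresholds between levels are choices
# you must make and justify, not facts about the world.
def engagement_level(scrolled_past_half, seconds_on_page, clicks):
    """Return 'none', 'low', or 'high' using explicitly chosen rules."""
    if clicks >= 3 and seconds_on_page >= 60:
        return "high"
    if scrolled_past_half or seconds_on_page >= 30:
        return "low"
    return "none"

engagement_level(True, 20, 0)   # 'low'
engagement_level(False, 90, 4)  # 'high'
```

Whether scrolling matters more than time on page is exactly the kind of question the prose above is asking; the code just forces you to answer it.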

In qualitative research this kind of work is done very often, but it takes a long time, is resource intensive, and still involves some guesswork. Surveys and instruments designed to measure a concept are a good example. Let’s say you want to measure how much “anxiety” a person has at a given time. You would first read the literature (research from the recent past) that has successfully associated certain metrics with anxiety (i.e., people who we know have high anxiety also disproportionately have factor X). Then you write a question that asks people “how much of factor X do you think you have right now?”. You can put it on a Likert scale, or if it’s already quantifiable you can ask for the number directly (for instance, hours per week). Then you put together a few more items, give the survey to a large group of people, and run statistical tests for validity and reliability (assuming other things were done properly, like giving the survey to a representative sample). So you can see why PhDs take six years.
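One of those reliability tests is Cronbach’s alpha, which checks whether the items on a scale move together. A small standard-library sketch (the sample answers are made up) shows the idea; real survey work would use a proper stats package:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
# rows = respondents, columns = survey items (e.g. Likert 1-5 answers).
def cronbach_alpha(rows):
    k = len(rows[0])   # number of items
    def var(xs):       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in rows]) for i in range(k)]
    total_var = var([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Two items that move in perfect lockstep give alpha = 1.0.
cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Values near 1 suggest the items measure the same underlying construct; values near 0 suggest they don’t.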


This kind of rigorous research is not worthwhile for most UX research. However, some components are the same in both. One is that, just like reviewing prior research, you need a sense from prior UX work of what others have tried and what the industry standard is (that doesn’t mean the industry is always right, but you need to check). The second is that there needs to be a conceptually meaningful relationship between your operationalized definition and your construct. This can only be known with deep knowledge of the concept and the process. For instance, if you know that in most cases your users are forced to open a session in your app (maybe they have to go through it to reach somewhere else), then you know that opening a session is not an indication of being active.


Where it gets a bit complicated is when multiple factors affect the outcome. For instance: is someone using your new feature? You may need to count things like “how many times did they visit the page that has the feature?”, “did they interact with any inputs in the feature?”, “how long did they stay on that page?”, “how often did they hit a critical action like pressing the Apply button?”, and so on. One of the best things to do in this case is to use a rubric.

A rubric is a matrix of individual items and the numeric thresholds that determine whether each item counts toward the final concept. A threshold can be yes/no or have multiple levels. An example below.
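As an illustration, a rubric for the “is someone using the new feature?” question above might be sketched like this (every item, threshold, weight, and cutoff here is a hypothetical choice you would make in advance, not a recommendation):

```python
# A hypothetical rubric for "is this user actually using the new feature?".
# Each row: (description, metric key, threshold to count, points if counted).
RUBRIC = [
    ("visited the feature page",  "page_visits",   1, 1),
    ("interacted with an input",  "input_events",  1, 2),
    ("stayed at least 30 seconds", "seconds",      30, 1),
    ("pressed the Apply button",  "apply_clicks",  1, 3),
]

def rubric_score(user_metrics):
    return sum(points for _, key, threshold, points in RUBRIC
               if user_metrics.get(key, 0) >= threshold)

def is_using_feature(user_metrics, cutoff=4):
    # The cutoff is itself an operationalization decision, made up front.
    return rubric_score(user_metrics) >= cutoff

is_using_feature({"page_visits": 3, "input_events": 2,
                  "seconds": 45, "apply_clicks": 1})  # True
```

Weighting Apply clicks higher than page visits encodes the judgment that one item matters more than another, which is exactly the question a rubric forces you to answer explicitly.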


But where do you get the numbers for this exercise? This is the part that is more art than science. One source is industry standards or benchmarks for a concept and environment similar to yours (the concept being something like “active”, the environment being the kind of app yours is, e.g. a SaaS application for finance). Another good source is your past analytics numbers: take a look at what you consider examples of the concept and see what numbers they reached (more on this topic later).
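One simple way to mine your past numbers, sketched below with made-up data: take users you already consider clear examples of the concept and use a low percentile of their measurements as a conservative threshold (the data, the 25th-percentile choice, and the nearest-rank method are all assumptions for illustration):

```python
# Derive a threshold from users you already consider "active":
# the 25th percentile of their monthly session counts, nearest-rank style.
def percentile(values, pct):
    xs = sorted(values)
    idx = int(round(pct / 100 * (len(xs) - 1)))
    return xs[idx]

known_active_sessions = [4, 6, 7, 9, 12, 15, 20]  # sessions per month
threshold = percentile(known_active_sessions, 25)  # 7
```

Anyone at or above the threshold would then count as “active” in your rubric; the point is that the number came from observed examples, not from a hunch.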


Whatever you do, be honest with yourself and don’t fudge the numbers once results are becoming clear and you are embarrassed to show them to others. You need to operationalize BEFORE you start collecting or looking at the data. If, after your first round, it looks like your rubric was not well aligned or weighted, you can revise it and test it with ANOTHER set of data.


If you follow this practice of operationalizing in your UX research, you will be far ahead of many existing practices. Remember, while drawing conclusions from this data, that it’s not proof; it’s a suggestion. How strong a suggestion depends on how strikingly different your findings are. But it will give you far more visibility than guessing.



© 2020 by Caner Uguz.