November 3, 2022 | 48 min

Episode 3: Introduction to Behavior Science & Nudges

In this new episode, we go back to exploring behavior change from another angle. Our guest? Nurit Nobel! We first learned about Nurit's work when we stumbled upon her course, Designing Nudges, which she created together with her colleague Christina Gravert. In this episode she walks us through what nudges are and the BOOST framework, a process they use to design nudges that support behavior change in different settings.


Episode Podsheet

Here are the main things we got out of this episode, and some things you should keep in mind as well.

Episode Transcript

Lavinia: Hi, I'm Lavinia and this is Offbeat On Air. We are on a mission to break our bubble and go beyond L&D borders. We want to connect to the outer world and seek inspiration from different people. People trying to achieve similar goals as ours, but in other circumstances, with different skills, tools, and mindsets.

Mili: Offbeat On Air is here to inspire you. We will learn how scientists solve problems, how professional athletes think of performance, and how surgeons approach the learning process. In a nutshell, in each episode, we will connect to great minds in order to infuse new perspectives into our lives as learning professionals.

Lavinia: Welcome to another episode of Offbeat On Air. In this one, we'll go on exploring behavior change, and this time we have with us Nurit Nobel. We first learned about Nurit's work when we stumbled upon her course, Designing Nudges, created together with her colleague Christina Gravert. She walked us through what nudges are and the BOOST framework, a process they use to design nudges that support behavior change in different settings.

Lavinia: Nurit has a master's in social psychology from the London School of Economics and is currently pursuing a Ph.D. at the Stockholm School of Economics. She has tons of experience applying behavioral insights to increase demand for brands and services, and in recent years she has translated that work to product design, project planning, recruiting, and group work, among others.

Lavinia: We are pleased to have you here, and welcome to Offbeat On Air. How are you doing?

Nurit: First of all, thank you so much for having me. It's great to be here. I'm doing great. I just came back from a long summer vacation, so I'm feeling good.

Lavinia: Nice. Before jumping into the main topic of the episode, which is nudges, we wanted to take a step back, because we do know that nudges are a tool from a bigger toolkit, which is the behavioral scientist's toolkit.

Lavinia: So we wanted to start with this. Can you tell us what behavioral science is actually?

Nurit: Yes, definitely. Great question. And I love the fact that you use behavioral science, because that's really the inclusive term. Behavioral science is really an umbrella for several disciplines whose shared focus, as scientists and researchers, is the attempt to understand human behavior as part of a larger context.

Nurit: And underneath this umbrella can be disciplines such as psychology, which looks at the inner workings of the mind and the psyche, how people relate to each other, and the cognitive aspects of how we work, how we think and feel and react towards other people; that's psychology. Then we have, of course, economics, which looks at how people's actions in the real world affect all kinds of outcomes, from macroeconomics to politics, labor markets, et cetera. And we have other disciplines such as sociology, political science, et cetera. So I would say that any discipline that really focuses on human behavior, or the inner workings of people, can be part of this idea of behavioral science.

Nurit: And I think that, specifically as the birthplace of nudging, people use behavioral science in this context: it's the place where we try to understand the quirky ways that people work, ways that don't always abide by any textbook law of maximizing benefits but sometimes digress from that formula. And how can we influence people to behave in ways that are better for them?

Lavinia: That's really, really nice, because I actually wanted to ask if there are other frameworks or tools out there that we can rely on, beyond nudging obviously, to affect or support behavior change. Is there anything else you would recommend?

Nurit: Well, so the first part of your question: are there other frameworks? Yes. The second part, is there anything else I would recommend? Less so, because I believe in this, but of course there are other ways to look at a problem. The way one approaches a problem depends very much on your home discipline. A pure cognitive psychologist will probably talk about cognitive behavioral therapy as a way to change behavior, which has an enormous amount of evidence supporting it for specific behaviors. A traditional economist would maybe talk about rational choice theory, where indeed we look at humans as utility maximizing.

Nurit: So if you want to shape their behavior, what you would do is something that has to do with incentives: tax people if you want them to avoid a behavior, or provide subsidies or pay people if you want to encourage a behavior. These are all other models, and I would say they absolutely have a role in different areas. But something that I think behavioral science and nudging add to the picture is approaching areas where these solutions have not been effective and proposing something else we could look at, to see if we can move the needle a little further. It doesn't mean that nudging is superior or inferior to these; all of them make sense in different circumstances and for different issues.

Nurit: We also tend to think about nudging as a solution to problems where we specifically see a gap between what people say they want to do and what they actually end up doing. We call these problems where there is an intention-to-action gap: a gap between our good intentions and the actions we don't end up taking.

Nurit: So, for example, we really want to lose weight, but when I am faced with the choice of whether or not I should have an extra piece of cake, I choose to take the extra one. I really want to work out, and that's really important to me, so I set the alarm for tomorrow at six to run before work. But when that time comes, staying in bed is just more tempting. So there are a lot of issues in life, a lot of problems, where we have this gap, and we tend to talk about nudging as a good solution to those kinds of problems.

Lavinia: I'm way too familiar with those problems myself. Okay, so now tell us, what are nudges? Is there a definition of nudges?

Nurit: Yes, there is a definition. Nudging comes from a book published in 2008 by the economist Richard Thaler and the legal scholar Cass Sunstein, called Nudge: Improving Decisions About Health, Wealth, and Happiness. In that book, they defined nudging as an act with the purpose of changing the behavior of individuals in a way that doesn't rely on the traditional ways of changing behavior, such as banning something altogether or providing classic incentives, the carrot-and-stick type of thing. Nudges are cases where we don't use that; instead, we use what we know from psychology about the way people's minds work when they make decisions, and we use this knowledge to design an environment that, in a subtle way, nudges them toward the right action. Nudging uses a tool called choice architecture, which is basically the act of designing an environment to promote the right behavior. So say, for example, it's really important for us to do this thing I mentioned about working out in the morning.

Nurit: How do we design an environment, in this case for ourselves, that promotes that? For example, by laying out all of our clothes, outfit, equipment and everything the night before, or by choosing a podcast or a playlist that really energizes us so that we look forward to it. And when the morning comes, even though the bed is really appealing to stay in, we have enough motivation to get up. So it's this idea of removing the barriers that stand in the way and using the environment as a tool to nudge us towards the right behavior.

Milica: I really like how experimental this is. It's about running a lot of experiments and understanding, as you say, what the environment is in which this habit can actually become a habit, or second nature. For us as L&D professionals, a lot of what we're asked to do at the end of the day comes down to this: we try to provide as much as possible for our learners or employees to grow, and somehow that doesn't always happen. It's the same when I put the book next to me; it doesn't always work. So I wonder, from your experience: first, are there different categories of nudges? And second, if we know what types there are, how do we know which one can work in different situations? I know that's a very complex question and depends on a lot, but how do you go about understanding how to play around? What is the variable you will change? Or do you just run as many experiments as possible?

Nurit: That's a really great question. So first of all, yes, there are different types of nudges, and there are different classifications that exist out there. At my company Impactually, where we work on these things both from a research perspective and with clients, we designed a framework that has, I would say, three broad categories of nudges.

Nurit: We call it the Refine matrix, and it starts with reframe; that's the first category. The second category is encourage, and then finally we have facilitate. So reframe, encourage, and facilitate. Broadly, what these categories mean is that in reframe we try to do exactly what I mentioned with this choice architecture idea: we try to subtly influence the environment, so we design the environment in a way that nudges someone here or there. And that can be something very, very subtle. It can be, for example, the structure of a form, or how many options we provide if we are, say, a restaurant designing menus.

Nurit: You know, do we put the meat option first or do we put the vegetarian option first? That is going to have implications on how many people choose the meat dish versus the vegetarian. So this kind of subtle changes that can be about the structure, the placing, and so on, that we put under the category of refraim.

Nurit: Then we have the category of encourage, which is slightly more active, I would say. If reframing is passive, the client of the restaurant won't even notice; unless they go to this restaurant every day, and let's just say it's a one-time client, they won't know we played with the menu, right? All they see is a menu, so it's very subtle. But encourage is a less subtle category; it's a bit more active from the point of view of the nudger. Encourage can be, for example: say we are doctors and we are really interested in our patients taking their medication on time.

Nurit: And again, I think this is a classic issue of the intention-to-action gap, because most patients also want to take their medication as instructed, but they can forget, have a lot of things on their mind, or not have it accessible exactly when they do remember. So having reminders from the doctors, sent at specific times that have proven to be effective with these patients, can be a really great way to nudge them towards an action. Or, for example, messages: let's say people are shopping in their grocery store and they see a message that says most of the customers in this grocery store buy at least four vegetables for their shopping cart, and they think, ah, that's a good idea. So both of these, reminders and social proof messages, are from the category of encourage.

Nurit: And then finally we have the category of facilitate, and that's really the essence of nudging, this idea that if you want people to do something, make it easy. In this category I would, for example, think about putting out my outfit the night before if I want to run in the morning. Some would even say they sleep in their running clothes, so that in the morning it's so easy to just go out and run that they don't even need to take any action. So these ideas of removing all the barriers and making it as easy as possible, that's under facilitate.

Nurit: So this is to answer your first question, which is are there different types? So yes, there are absolutely different types. 
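
To make the three categories a little more concrete, here is a small, purely illustrative sketch in Python of how one might tag a backlog of nudge ideas with the reframe / encourage / facilitate labels Nurit describes. The class, the example ideas, and their tags are ours, not Impactually's actual tooling.

```python
from enum import Enum

class NudgeCategory(Enum):
    REFRAME = "reframe"        # subtle changes to the choice environment (menu order, form structure)
    ENCOURAGE = "encourage"    # active prompts such as reminders or social-proof messages
    FACILITATE = "facilitate"  # removing friction so the desired action becomes the easy one

# A hypothetical backlog of nudge ideas, tagged with the category they belong to.
ideas = [
    ("List the vegetarian dish first on the menu", NudgeCategory.REFRAME),
    ("Send an SMS reminder when a prescription is due", NudgeCategory.ENCOURAGE),
    ("Lay out the running clothes the night before", NudgeCategory.FACILITATE),
]

for description, category in ideas:
    print(f"[{category.value}] {description}")
```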

Nurit: So how do we then match the type of nudge to what is relevant in this case? And here I would say that this is a process and this is something one needs to work on. So there is no, you know, magic solution. There is no one nudge to fit all problems. We first need to understand who has the problem. Is it again, is it like one person? Is it Nurit that wants to run tomorrow morning but can't? Or is it, you know, the employees of Company X that we're gonna give a training to and we really want them to retain the knowledge? Um, or is it, you know, teenagers who we try to encourage to save more money or, you know, we need to understand like who is our target group?

Nurit: What is actually standing in their way? What are the barriers? Because if nudging is about removing barriers, we need to understand what these barriers are. Then, after we diagnose the problem, we match the solution, and the solution would be the type of nudge. And after that, we would experiment.

Nurit: So we would maybe choose the most promising ideas and then run a test to see which one of them actually works. And this whole process I've just briefly described is also a process we have at Impactually, where we work both from a research perspective and with our clients. It's called the BOOST model.

Nurit: And BOOST stands for Behavior, Obstacle, Outline interventions, Study, and Tailor. So that's basically the whole process: from identifying the behavior and the target group, to understanding the barriers and what's standing in the way, to outlining the potential interventions, running the tests to understand what works and what doesn't, and then implementing and tailoring it.
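
As a rough illustration of the five phases Nurit lists, here is a minimal sketch in Python that captures one project's notes in the shape of the BOOST model. The field names and the example content are our own paraphrase of the episode, not Impactually's templates.

```python
from dataclasses import dataclass

@dataclass
class BoostPlan:
    """Working notes for one nudge project, organised along the five BOOST phases."""
    behavior: str                      # B: the target behavior and target group
    obstacles: list[str]               # O: barriers surfaced by the research
    outlined_interventions: list[str]  # O: candidate nudges matched to those barriers
    study: str                         # S: how the most promising ideas will be tested
    tailor: str                        # T: how a winning intervention gets rolled out

plan = BoostPlan(
    behavior="Employees book long-haul trips by train instead of by plane",
    obstacles=["Flights are the pre-selected option", "Train connections are harder to compare"],
    outlined_interventions=["Make the train the default in the booking tool", "Show CO2 per option"],
    study="Randomized trial: half of the employees see the new default, half see the old tool",
    tailor="Roll the default out in the booking system if the trial shows an effect",
)
print(plan)
```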

Milica: Oh, this is so fascinating and so applicable, I think, to everything we do. Because it is exactly our task to really go back to the barriers and see how we can remove them. But there are a lot of different target audiences wherever we work, and it's always hard to identify what the biggest pain actually is that we're trying to address. Maybe I'll go back to that research process, where we spend quite a lot of time understanding very well who it is and what is preventing them. Can you tell us a little bit from your experience, and if it helps, the people who are going to listen to this podcast are people trying to help others develop, either internally or working one on one. So from your experience, if we want to understand the audience, what are some research methods that have worked very well, or that are easy to implement even if you don't have all the resources and time in the world? Where would you start?

Nurit: Yeah, that's really, really great. I think there are so many different methods that we can have in our toolkit, and then we choose the right one based on the context we're working in. If we're working in a huge company, I would probably choose something else than if we're working with a team of five people. But to name a few methods that I'm sure your listeners are also familiar with, and that I've worked with in the past:

Nurit: Of course we have surveys, right? It's very easy, if you work in a huge company, to send out a survey to understand what is standing in people's way. And then again, there are ways to phrase these questions so that we're not leading people to choose a particular option, so definitely work with people who know their way around surveys and how to phrase them, because that's a whole science in itself that companies are sometimes not aware of; when things leave academia, they have a tendency to lose a few dimensions along the way.

Nurit: So there are definitely surveys, but I would say that something I think is really great for the context of understanding barriers is qualitative research. That's really sitting down and talking to people, whether in the form of interviews or focus groups, to first of all understand your target group and then understand the barriers they're facing.

Nurit: So I would say there is no replacement for sitting down and talking to people. If one can, one should do that. And it can be an informal chat at the coffee machine; it doesn't have to be a big thing. Or, you know, I work with students at the university, so if I want to understand something that's on their mind, I could put up an ad: free pizza if you come and talk to us for two hours about this and that.

Nurit: So it's about finding the informal ways to get this kind of information. I would say that one other method that is relevant when we talk about behavior is observing, actually. We have worked, for example, a lot on projects that have to do with environmental behavior, whether it's food waste or recycling. Here you can imagine that just standing and observing can really give you a lot of information. For example, standing and observing a recycling station: how do people come, how much are they carrying with them, where do they go directly, where do they not go?

Nurit: Are they confused? What is the confusion? Where do they go, when they should be going somewhere else, et cetera? Um, so, so observation is, is also a, a, something great to have. And I would say that probably ideally you would be combining a few of these methods because, you know, uh, when it comes to the qualitative research and talking to people, that can be really helpful.

Nurit: But we need to remember what we talked about in the beginning, which is this intention-to-action gap. A lot of the time people will tell you: yes, we're super interested in this, we really, really want it. And then, as you know from your profession, they don't show up, right?

Nurit: We know it from looking at sustainable behavior. Yes, we're super interested in these ecological products, we really, really care about this. And then they don't actually buy them, for example. So we need to dig a little deeper and understand what's going on there. I think a combination of all of these, or picking and choosing what's right for the context, can go a long way.

Milica: Yeah, that's wonderful. And I think we should always have this mindset that we get inputs all the time. When we work, as you say, even if we just have a chat, that is information we get, as is what people complain about. It must be really interesting in your mind, what you hear and see on a daily basis that inspires you: oh, this could be a good experiment to run. It must be a really different lens to live with this knowledge. I find it really cool.

Nurit: Yeah, it is really fun. A lot of people say that research is actually "mesearch", that you go and find the problems you also have in your own life. And I've definitely been inspired, by my research and others' research, in my own life, by talking to students and to other researchers. You never know where inspiration can come from.

Milica: Yeah, so nice. I think we understood a little bit where to start and what we can use to explore. And let's say we ran this experiment. For us, it's also interesting to learn from you what happens next. Once you gather all this data, how do you go about actually designing these interventions, as you call them? That's another term we heard from experiencing your course: behavior change intervention. How would you go about it? Maybe walk us through your approach a little bit.

Nurit: Yeah, that's a great question. So basically that would be the final step in our model, right? I talked about the BOOST model, and what you have after the test is the tailor phase, where you basically tailor the intervention to the specific context.

Nurit: And then I would say it would be very dependent on what we actually came up with. Let's say we were a company that wanted to encourage its employees to book long-haul travel by train instead of by air. They want to be more sustainable, so they want people to travel by train; that is their goal. We did the research, we understood the barriers, and one of the solutions we tested was a default in the travel booking system: it suggests a train by default, and only if you don't want that do you need to override it, and then you can book a flight instead.

Nurit: So, a default, for those who are maybe not sure about the use of that word: a default in the nudge world is basically the option that someone chose for you, that you didn't need to choose. The default can be the setting of a printer, single-sided or double-sided; it can be the settings of your iPhone, or whatever. It always comes with something, and then you can override it into something else. Defaults are extremely powerful because we know from psychology, and from all kinds of research about how people make decisions, that people in general don't like to make decisions. Decisions take time and energy, and we're trying to survive and not spend our energy on a million things. So if someone makes a decision for us, we think that's nice, and there's an overwhelming tendency for people to go with the default. So let's say that after a test we've decided to implement a default option, because in that test we saw that the option works.
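
Here is a tiny, hypothetical sketch of what a default looks like in practice, assuming a booking function where the mode of travel is pre-selected; the function name and options are made up for illustration.

```python
def book_trip(destination: str, mode: str = "train") -> str:
    """Book a long-haul trip. 'train' is the pre-selected (default) mode,
    so the traveller has to actively override it to fly."""
    return f"Booked a {mode} trip to {destination}"

# Most people go with whatever is pre-selected:
print(book_trip("Berlin"))                # Booked a train trip to Berlin
# Overriding the default takes an explicit extra step:
print(book_trip("Berlin", mode="plane"))  # Booked a plane trip to Berlin
```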

Nurit: So now we need to implement it at a larger scale. We need to look at what system the company has. If it is indeed a computer system, how do we build on that system and actually make the default part of it? If we're not so lucky, can we do it in other ways? Is there a newsletter we can make it part of, or something else? Basically, you need to look at the tools you have, and those tools can be different: a system, a means of communication like a newsletter, or an internal employee website.

Nurit: It can be a physical place, like the grocery store I was mentioning, where we decided to nudge by, for example, having floor stickers that lead people towards the more sustainable choices, and so on. So there really can be different types of settings, and you need to look at what you have that's accessible to you; that is going to determine the kind of nudges that you can even test and then implement.

Milica: Yeah, that's really good, because I think it's also about thinking how you can use what you already have, and I think that's always underrated. We always want the next tool, the next system, but we're not looking at whether the default in the things we already have is actually set up right. So that's a really good one; I will have to think about what that means now. Wonderful. And finally, you'll hear us talking about measuring impact all the time, and I'm sure that in your work you are also asked how we know that the behavior has changed, and whether it has changed in the direction we wanted. How do you go about that? What exactly do you do to measure the impact of your work?

Nurit: That is super important, and it is actually part of our model; it's part of the testing phase. You can't test if you don't have a desired result that you're testing for, right? So when we define the challenge, according to that challenge we also decide what the objective is: to raise what, to cause what. If we go with the case I just mentioned, employees in a company that you want to book train trips instead of flights.

Nurit: Then your measurement is maybe the percentage of train trips booked, or the number of train trips booked; whatever it is, we'll measure that. If we're talking about a grocery store and we want people to buy more organic bananas than regular bananas, then our measure will be the number of organic bananas. So it's really about looking closely at the behavior you want to encourage and how we measure that. And that's something that, as an academic, as a researcher, is just really ingrained in how we work, because as a researcher I am an experimentalist. This whole discipline of nudging comes from very empiricist sciences, both psychology and economics, where what we really do is run experiments. And in experiments you have to look at something, you have to count something, and then, in a statistical way, decide: is the number coming from this group more or less than the one coming from that group? So that, I would say, is a decision that is very intuitive for us researchers, but something we often need to help companies with, because it's maybe not so intuitive for them.
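
To illustrate the point about defining the outcome metric up front, here is a minimal sketch using the banana example; the data structure and every number in it are invented for illustration, not taken from a real study.

```python
# Illustrative till receipts from one week; every number here is invented.
receipts = [
    {"store": "A", "organic_bananas": 3, "regular_bananas": 1},
    {"store": "A", "organic_bananas": 0, "regular_bananas": 2},
    {"store": "B", "organic_bananas": 1, "regular_bananas": 4},
]

def organic_share(store: str) -> float:
    """Outcome metric: organic bananas as a share of all bananas sold in a store."""
    organic = sum(r["organic_bananas"] for r in receipts if r["store"] == store)
    total = sum(r["organic_bananas"] + r["regular_bananas"] for r in receipts if r["store"] == store)
    return organic / total if total else 0.0

print(f"Store A (nudged):  {organic_share('A'):.0%} organic")  # the store with the nudge
print(f"Store B (control): {organic_share('B'):.0%} organic")  # the comparison store
```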

Lavinia: Yeah, sorry to jump ahead, but you mentioned something really interesting: having two groups to test your ideas on. Can you walk us through that a bit?

Nurit: Yes, of course. So that is the core of the testing phase of the BOOST model, and that is also the core of the academic way of understanding the world: this idea of running an experiment, something we often refer to as a randomized controlled trial. Maybe that name is a bit familiar to people who have looked into the world of medicine or drugs; this is also how drugs get validated and then approved for the market. You have to test them, and the way you test them is in a randomized controlled trial.

Nurit: And what a randomized controlled trial basically means is that it has two key principles. One of them is randomization, which means that we allocate people into different groups in a random way. We won't say: Millie will probably respond to the default, so let's put her in the default group, whereas Sarah, I don't think she will respond to that, so let's put her in the control group; nothing is going to work with her, you know?

Nurit: As opposed to doing that, what we do is allocate people randomly. The other principle we keep is that we have a control group, right? So it's a randomized controlled trial. That means we have two groups: one of them is the group that gets the nudge, this default for example, and the other is a group that doesn't get anything, and we compare them to each other. And about the reason why we have this control group: many companies come to us and ask, why do we need this? Can't we just test if this works or not? Can't we just implement the default and see if it works?

Nurit: And what we try to explain to them is that if we don't have a control group, then we don't know if the behavior change we observe is really thanks to the default or thanks to something else. For example, let's say we don't have a control group, but we see that the default group really did increase the booking of trains, and we're like, it worked.

Nurit: You know, we're super happy. But actually what we don't know is that the train company just happened to have had a giant price rebate that same week that we tested, and you know, they lowered all of the tickets by 50%. So actually it wasn't really the default that drove people to book Mo trains. It was the price reduction, but we wouldn't know that.

Nurit: That's why it's really, really important to maintain these two principles of random allocation into groups and always having a control group. And that's basically how we test that. 

Lavinia: We often talk about this, and you probably know it, as A/B testing, in less researchy words. But it's really interesting, because I always wonder: how similar should the two populations be?

Nurit: Yeah, so an A/B test is actually the name for a randomized controlled trial performed more in the context of a startup, or online, and so on, where it is very easy, especially with the technology that exists today, to randomize visitors to a webpage: some visitors get one version of the webpage and other visitors get another version.

Nurit: We can see it very clearly with some of the popular services we consume today, such as Netflix and Spotify. They have hundreds of RCTs, randomized controlled trials or A/B tests, that they roll out every week. So basically, if we pull out our phones and look at our Spotify accounts (which I have, and here in Sweden they're very big, I don't know about you guys), we will see different versions of Spotify; we would not all have the same version. In terms of the question, how different should the two groups be, I would say that ideally, let's remember our first key principle: we allocate them in a random way.

Nurit: So ideally they're all part of one big population, and that population can be Spotify listeners, Netflix watchers, visitors to this company's website, or whatever. And out of this giant group, an algorithm has randomly allocated them into A or B. It's very, very important, and the technology today makes it very feasible and easy, that this is done in a random way. But I would say even for those of us dealing with a context that's more low-tech, like, for example, a grocery store: we can't do an A/B test because we can't just randomize the visitors, but we can do it in other ways. For example, if the grocery store is part of a chain, we can try to choose a similar store, or ideally a similar number of stores, and randomize them. Or we can do it in the same store and randomize the days. But again, ideally we truly randomize, and we also take into account that a Friday is not the same as a Monday, right? Maybe we can randomize within weekdays. So we need to take all of these things into consideration, and that's a bit more tricky in a real-life situation.

Nurit: But, uh, these things get pretty easy when we're talking about the internet and, and A/B testing. 
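
For the analysis step of such an A/B test, one common approach (not something prescribed in the episode) is a two-proportion z-test comparing the conversion rates of the two versions; here is a self-contained sketch with invented numbers.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented numbers: version A converted 120 of 1000 visitors, version B converted 150 of 1000.
z, p = two_proportion_ztest(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p-value suggests the difference is unlikely to be chance
```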

Milica: Yeah. Now that I'm reflecting on what we usually do in terms of testing, we run these pilots. But listening to you, a pilot tells you whether it's working or not, yet we rarely look at what happens with those who have not participated, if you know what I mean. We say, okay, this is successful, but what kind of difference does it actually make? So that's a very good point: when we do a pilot, we should look again at who we are piloting it for, and also at what metrics we will look at for those who don't have access to whatever we want to introduce, which I believe is quite often missing.

Nurit: This is a great point, and one that I would love your listeners to come out of this talk with; I think that would be great. One of the mistakes we often make when we just pilot is, again, this idea, as you said, that we didn't really check another group that didn't go through the training. So we say: we implemented the training, we see results, it's great. But maybe the group that we chose, especially if we advertised and asked who would like to go to this training, is a group that, even if you don't do anything with them, would score higher than others, or would improve on whatever the measure is.

Nurit: So that's another thing; that's self-selection, which is again a nonrandom allocation to a group. We basically invited them, and the people who volunteer and show up to things have different characteristics than other people. Another problem you could have, often called demand characteristics, is that they want to be supportive: they have understood that we want to do this training, we want to implement it, so when they get the questionnaire afterwards they say, yes, this was great, because they want to help out. They have figured out our purpose, and they want to be good citizens and help. And in general, the fact that we have nothing to compare it to means that maybe, if we just took this group of people and sat them in an empty room for eight hours and, I don't know, had a party or danced, maybe that would also work. We don't know, right? We don't know if it was the actual content of the training or something else. So that's why, if we can, when we decide to do this, we should choose another group that either goes through something else that is similar (we take them into a room, but instead of the new fancy training we give them something else), or, if we can't do that, we just take a group that we monitor but don't do anything with, and then we at least compare these two things. That, I think, is something that, if implemented, can really help a lot.

Milica: Yeah, that's a really, really good point. I'm just thinking about our leadership program. We kind of assume that success means that those who go through this program will be the managers who also stay longer with the company, right? That's kind of a success, but when we actually measured or looked, they were not really randomized in the first place, and we also don't know why others are staying or not staying.

Milica: So this is a very good point: how to set up our measure of success so that we know whether the impact of the leadership program is that people stay with us or not. So yeah, thank you for that. Oh, so many insights; I'm also taking notes. Really, really nice. And what we like to do when we talk to people is to highlight a certain project that resonates with us quite a lot. We know that you have a large portfolio of projects, and one that got our attention is the project on lowering bias in the hiring process. It was not fully an L&D project, but it is a proxy for what we usually do. Could you walk us through it and tell us more about this project?

Nurit: Yes, of course. For this project, we were approached by a tech startup that was heading towards a period of intense growth. They were faced with the need to recruit many, many people fast. I think they were in the single digits, maybe five to seven people, and they were aiming to be close to 30 at the end of the round. And what they wanted was to really grow as a diverse organization. I think they did a really great job by coming to us at the beginning, because it's very hard to change these things when you are already 30 or 50 or 200 people; it's easier when you're seven growing to 30.

Nurit: So it was important for them that the people they recruited would represent a diverse range of backgrounds, and so on, and they talked to us about that. If we walk through our BOOST model, the behavior they wanted to promote was this unbiased hiring, and we also knew what we would measure: the numbers, specifically. Gender diversity was very important, so we would measure the number of women they actually managed to recruit. Then we looked at the barriers, and here, of course, we did interviews with the people already in the company, but we also did an extensive amount of research about the tech sector in general and all of the problems there. There are tons and tons of material out there about the barriers that underrepresented minorities, and specifically women, face in getting into these companies, thriving in them, and advancing in them, and we took everything and distilled it into a list of actions the company needed to take.

Nurit: These actions were also very much based on behavioral science, because one of the roots of behavioral science is this idea of the reliability of choices, and that when we want to increase the reliability of judgment, what we need to do is delay people's intuition. They came from a world of startup culture where it was very common to interview with these kinds of quirky questions: if you could be any animal, what animal would you be? What's your favorite sci-fi novel? It seemed to me at the time, not coming from tech, that people in tech look at it as a badge of honor: how weird are your interview questions? I think it came from Jeff Bezos or someone like that. And we said, no, let's stop this, because if you interview people by asking them what their favorite sci-fi novel is, first of all, that has zero correlation with, or predictive value for, how good they will actually be at the job they are recruited to do, whether that job is product manager, engineer, developer, et cetera. So zero predictive value, but also, what is the message that sends?

Nurit: The message is: we only like sci-fi novels in this company. Meaning, if you're a person who doesn't like them, maybe you won't thrive here, et cetera. So that's just one example, but we basically looked at the entire hiring process: who we are targeting, what channels we use to reach them, and the language we have in the ads.

Nurit: But also really, really thorough work on the actual recruiting process, which came down to getting rid of these unstructured interviews and having structured interviews that stem from a definition the company made of what qualities and experience we actually want to see in a candidate. What are the things we are actually looking for? Once you translate that into desired skills, and you translate the desired skills into questions, you are actually left at the end with five or six questions, instead of 20 meandering questions about what you would bring if you were on a desert island.

Nurit: Yeah, completely irrelevant. So, taking this, and also adding things like an objective task that people are rated on, we made the whole process even less subjective, and we continued with that throughout the process.
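
As a hypothetical illustration of what a structured interview can look like in practice, here is a small sketch of a scorecard where every candidate is asked the same predefined questions and rated on the same scale; the questions, candidates, and scores are invented, not from this project.

```python
# Every candidate gets the same questions, each tied to a skill defined up front.
questions = [
    "Walk us through a product decision you made with incomplete data.",  # judgement
    "Describe a time you changed your design after user feedback.",       # user focus
    "How would you prioritise three competing feature requests?",         # prioritisation
]

def average_scores(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the per-question ratings (1-5) into one comparable score per candidate."""
    return {name: sum(scores) / len(scores) for name, scores in ratings.items()}

ratings = {
    "Candidate 1": [4, 5, 3],  # one rating per question, same scale for everyone
    "Candidate 2": [3, 4, 4],
}
print(average_scores(ratings))  # e.g. {'Candidate 1': 4.0, 'Candidate 2': 3.67}
```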

Nurit: We implemented that with the help of the company, and then of course measured the results. I can't remember the numbers exactly, but basically 50% of the positions recruited in this round ended up going to women, and that is at a company that had zero women when they started. So that was a really great achievement that they were very happy with, and many of the people recruited then are still with the company, so I think they're also happy with the recruitment as a whole. This was also something that was very important for us: the goal was to increase the diversity of the company, but the primary goal of recruitment is, and always should be, to recruit the right person for the job, right? So it was also important that this remained a quality recruitment and also that it was diverse, and what we're saying is that these things don't need to contradict each other. First and foremost, it needs to be about recruiting the best talent for the job, which it did, and it was more diverse than before. So overall, they were very happy with it.

Lavinia: Awesome. Thank you so much. One of the most interesting concepts I learned about in your course, and you already mentioned it at the beginning, was this idea of the intention-action gap. Basically, we intend to do something, like I always intend to order healthy food, and most of the time I end up ordering junk food. So nudges come in as a way to architect the environment so that people aren't forced to make a certain decision but are nudged into making the right decision for them. So I heard a lot about that. I heard a lot about the research process, how important it is to look at quantitative and qualitative data, about observing, and about running surveys in a way that's actually right, because we do run surveys in L&D, but we don't always have the experience of running proper surveys and asking questions in a way that's not leading. I heard a lot about the research process and about testing our interventions, whatever they are. And there's a really interesting piece of information you give in your course, which is the difference between an intervention and a solution: not thinking about an intervention as a solution until we know that it actually works. That's something I took with me. I was also really happy to learn about measurement, how you measure the impact of your interventions. So much knowledge to be taken out of today's episode, Nurit. Thank you so, so much for taking the time to meet us and for sharing some of the things you've learned in your profession.

Nurit: Thanks a lot for today. 

Lavinia & Milica: Thank you. Thank you so much. 



Everything we do is out of love for learning & each and every learning & development professional.

Copyright @Offbeat 2022