I thought long and hard about who I was writing this for.
Someone who’s thinking of transitioning to L&D (me over 6 years ago)?
Someone who’s new and hasn’t seen or experienced L&D beyond their first Learning team yet (also me 4 years ago)?
Someone who thinks they have it figured out (me again, at the start of the 2020 pandemic, and also today), but is about to uncover more layers?
In the end, I decided this is for the curious, looking to get a glimpse into a nerdy brain in hopes of finding something new, inspiring, challenging or validating.
I reference some concepts in this article that may be new to you. Not explaining them is intentional, in the interests of keeping this short. I’ll leave exploring them up to your curiosity (and Google).
So here we go, 10 lessons I’ve learned from my time in L&D.
1-6 are the Big Lessons. 7-9 are the Smaller Lessons. And 10 is The Final One. Enjoy!
#1 At its core, corporate L&D is about more than just Learning
There are many reasons L&D exists in the workplace, none of which is for people to learn for learning’s sake.
Most of the time, when someone wants to learn something at work, there’s a “Why” behind it.
They’re curious because they care about this thing, they want to get better at their job, get ready for their next role, meet the expectations of a new role, etc.
It’s not because the company or their manager said so.
Those who seek us out to design, buy, curate and deliver learning solutions for their people also have a Why. And more often than not, learning is not it.
No learning is an island, entire of itself. In the workplace, learning is mostly a means to an end.
For the business, this end is about growth: think more sales, new products. Or it’s about efficiency and cost reduction.
Who can deliver this? People who are more engaged, collaborate better, optimise their performance, eliminate wasteful activities (such as admin that could be automated, like doing your expenses), and communicate and manage projects better.
This lesson was an eye opener for me. And led me to #2.
#2 So ask “Why?”
Why do we need this? What’s the challenge? What are we hoping to achieve? What does success look like (tip: “an engaging learning” isn’t it!)?
I learned about root cause analysis as a continuous improvement champion in my audit days: the 5 Whys, the Ishikawa or fishbone diagram, etc.
I had no idea it would be relevant even now.
When I’m particularly frustrated with L&D, I imagine a parallel world. One where Design Thinking and Lean Six Sigma had a baby. A baby that went on to revolutionise L&D.
In that world, we don’t have an impact measurement and effectiveness crisis.
We might not even have a separate L&D function anymore. It’s all Human Centered Design for Continuous Improvement (HCD-CI - not very catchy, oops!). It brings together Organisation Development and Change, Comms and Strategy, Finance and IT - everyone working together to address the organisation’s challenges, systemically.
I haven’t yet worked out the details of that fantasy or come across anyone approaching it like this in this universe. If you are, I’d love to chat about how you’re doing it and what you’re learning from it.
And while I don’t have access to that parallel world right now, I think of that little question, “Why?”, as a footbridge into that universe.
Lesson 3 is one that fuels that vision.
#3 The effect of most learning created by L&D is unclear
Realising this some time ago triggered a bit of an identity crisis for me. “What’s the point of what I’m doing if it doesn’t really make a difference?”
While I don’t think what I did made no difference, there were no measurable improvements I could point to. Knowing how many courses people completed and how much they liked them was useless.
A link from someone’s LinkedIn post made me think about this again recently. How do we know our audiences wouldn’t put our programmes on this list of unnecessary inventions, if they could?
I sometimes wonder if L&D is afraid to measure because it will find that its solutions make no difference. I’ve definitely been worried about that. Fear of failure can be such a huge blocker.
This is why I loved Ogilvy’s recent annual report. It presented case studies where their solution was ineffective.
They didn’t sweep it under the rug. They said it didn’t work and tried to find reasons why.
I felt empowered reading the case studies, and I try to push that thinking into my work as well. It’s worth knowing something didn’t work because I can do something about it, I can learn from it, I can take action to fix it.
I made it my mission to bring outcomes and their measurement to the front and centre of what I create.
I try to look at a combination of hard and soft metrics, using measures the organisation cares about.
Whether it’s things like turnover and employee engagement, specific points of failure like number of data leaks from phishing emails, behaviour data, performance evaluation data, attitudes or something else. I want to know what it is I’m trying to change.
That’s not to say I’m doing it perfectly. More often than not I get less than what I hoped for. Especially as a consultant who doesn’t have direct access to data (though it wasn’t easy when I was internal either).
But this is a journey. I’m learning something from every new project and I’m definitely better equipped to deal with it today than two years ago.
That’s what matters: progress.
And pushing for more data.
It sucks to see something you’ve created fail to work.
But not knowing means… nothing. It does nothing. It gets us nothing.
If there’s one thing you want to take from this whole article, take this. Get hungry to find out the difference you’re making.
Because nothing ain’t good enough.
Speaking of enough...
#4 Learning is not enough
... to boil the ocean that is enabling performance and building capability in an organisation. As much as I want to tell myself “I am enough” as an L&D pro.
Because performance and capability aren’t just about knowledge. All roads lead to behaviour - what people actually do.
Diving into behavioural science and what drives behaviour shifted my thinking around what it takes to change behaviour. Unfortunately, an eLearning course isn’t it.
I’ll reference one of my favourite frameworks here: COM-B. It may not always hold the answers, but it definitely helps ask the right questions.
Dealing with behaviour at scale gets you thinking about systems. And when I think about systems, I also think Lean Six Sigma: a systemic approach to continuous improvement (that again).
Throw Design Thinking / Human Centered Design, stuff from Product Design and Management, Marketing and Data Analytics into the mix, and you’ve got yourself the ingredients for a magic potion capable of unlocking performance and building capability.
But how do you mix these to get there? See lesson 5.
#5 There is no magic potion
The unfortunate reality is that people are complex, our organisations are complex, our environments are ever-changing.
What works for one organisation is unlikely to work for another. Things that worked at one time for the same organisation might fail at another time.
The real magic for unlocking performance and capability lies in knowing that it will look different. For every organisation. Almost every time.
But if you approach it as an experiment (see lesson #6), you can work out what works, for each challenge.
Short as it may be, I wanted to have this as a separate lesson because it’s too easy to replicate someone else’s solution, hoping it will do the trick. You can learn from others (see Lesson #9) but the system that is your organisation, made up of your people, will need a tailored approach.