I've been reading The Skill Code by Matt Beane. Maybe it's too soon to review it since I'm only two-thirds through, but I think I’ve got a good sense of where it's going and feel ready to share my thoughts. If the book throws a curveball later, I'll let you know.
The Formula: Challenge, Complexity, and Connection
The book starts off strong, discussing research on how we learn best. It's not groundbreaking, as these are familiar concepts, but Matt's research spans various jobs and industries to validate these ideas.
He introduces a framework called the "3 Cs" – Challenge, Complexity, and Connection. Basically, to learn or work well, the task needs:
to be challenging but within your abilities,
complex enough to be engaging,
and take place in a social environment where connections are fostered, particularly between mentors and mentees.
He uses storytelling to show how people who have the 3 Cs thrive, while others don’t. I doubt all the stories are real, but they’re entertaining, even if they feel a bit cherry-picked. When a book relies heavily on storytelling, it’s often appealing more to your emotions than to your logic.
At this point in the book, you feel super motivated, like you've unlocked the secrets to learning, and you're ready to quit your remote desk job to go work in a warehouse, building meaningful connections and ensuring timely deliveries (okay, I might be getting a bit sarcastic).
The Phantom Menace
Then, the book shifts. It starts talking about how new intelligent technologies are going to disrupt all of this, potentially leading us into a grim future. Again, I haven't finished the book, so I’m not sure if Matt will criticize this outlook or offer solutions.
Matt argues that intelligent technologies will break the mentor-apprentice relationship, preventing future generations from learning the skills they need. He gives examples, mostly focused on virtuosos like surgeons, who feel demotivated because their job has gone from showcasing their expertise to just validating that machines are doing things right.
On the flip side, Matt also talks about people who are adapting to this new world and finding opportunities in it, so props to him for showing both sides.
Here’s my take
First, the mentor-apprentice model has been breaking down for decades. While it’s great for learning, it's only present in a few industries and specific moments. Outside of fields like craftsmanship or hospitals, the norm is more like: the senior does the work, and the junior figures things out on their own (and becomes senior through blind effort). This happens both with and without intelligent technologies.
Second, there will always be new things to learn, and jobs will change. Maybe in the future, we’ll have AI mentors that challenge us just the right amount. That might sound a bit techno-optimistic, but I don't see the downfall of human learning just because some experts haven't adapted to new ways of working, or because some juniors aren't capable of, or interested in, mastering the skills.
In my (non-expert) opinion, even though Matt highlights the importance of learning from seniors, he seems to miss the point that in most jobs, people are thrown into the deep end with just enough training to avoid injuring themselves and suing the company.
We’ve never lived in a perfect world where Socrates guides us through life’s mysteries. Likewise, it’s just as improbable that we’ll all end up stuck in cubicles, monitoring algorithms that mimic human speech.
Yes, the mentor-apprentice model is great for learning, but it’s been largely lost, and those who still have it don’t realize how valuable it is.
Exponential AI, Human Pace
Could this new wave of technology disrupt more jobs than previous revolutions? Sure. But we’re smart, and even though we tend to resist change, we’ll adapt as we always have.
A year ago, I was fascinated by the arrival of ChatGPT.
How has it changed my work?
Completely.
How much has it changed my environment?
Not at all.
And I’m talking about people who work with technology every day, not those forced to adapt. These are tech creators. The large language models back then weren’t what they are today, and they still haven’t lived up to their promises. But that wasn’t why we failed to adopt them quickly; the real reason was resistance to change.
I have no doubt that work environments will transform and that there will be plenty of collateral damage. I can't even predict where I'll be in three years, though I hope it’ll be the same place, since it doesn’t seem like an AI will replace me anytime soon (but we’ll see if I eat my words in a few years).
Furthermore, I even think that the slow and cautious approach to AI adoption, which we often criticize in Europe, might actually be a good thing. This tech is going to take over, if it hasn't already, in subtle ways. So maybe it’s not a bad idea to tread carefully, especially on a societal level. I'm more concerned about the U.S. making trillions of dollars while four big tech companies dominate, leaving people without jobs or support. If Europe’s cautious approach helps us reskill our workforce and avoid a massive social crisis, I’m okay with that.
[humor] Meanwhile, I curse OpenAI for not releasing the new voice mode for ChatGPT in Europe, and Meta for doing the same with LLaMA 3.2 multimodal. I guess we’ll all switch to Qwen and Europe will become a new communist utopia, haha. [/humor].
In conclusion, even though I’m being tough on Matt’s perspective, I agree with the value of the 3 Cs he presents in his book—they're a real asset and advantage for professionals. At the same time, we’ll need to roll up our sleeves, become dedicated self-learners, and develop the ability to adapt to change.
And you, do you have a mentor, or are you mentoring someone?