Hey Zach, I enjoyed this latest post!
I appreciate how you're grappling with emerging AI tools and applying your views to your ongoing work in explaining software.
Your first three paragraphs are especially compelling to me. For anyone who has worked around a developer team that includes juniors, this is relatable and engaging. And the way you quickly bring us up to speed on LLM-oriented software tools is great.
Since the third paragraph grounds the reader in a specific scene, your three intro paragraphs are doing a lot of efficient, smart work in hooking the reader.
Your structure for training an "intern simulator" comes across as well thought out and high-potential.
At the top level, though, I do wonder whether your current approach will be obsoleted too soon.
The "intern simulator" is one thing. Discovering in the year 2028 that we no longer need to build software in any traditional sense of the term... is another thing!
Looking at your move from "If there are senior developer agents, how will we spend our time?" to "We will, in other words, all become lead developers": I wonder if you're begging the question. If not, then I wonder how long your described reality will really last as a window of time -- will it last long enough to be relevant for career planning and such?
Post-it notes:
* I've always found the topic of training junior engineers to be super compelling. I have fond memories of successes in this area -- and poignant regrets around failures. But just because this is compelling for humans doesn't mean it's the proper metaphor for machine tooling.
* The classic (possibly Henry Ford) quote about "faster horses". Kind of like, "If I had asked engineers what they wanted, they would have asked for sleeker tools (or an updated tooling methodology)".
* If The Machine is getting so much smarter so quickly, then my primary wish is not to make it easier for engineers to build things that other humans use to do things. Rather, I just want The Machine to do things. This is some kind of riff on the Law of Demeter (see the sketch below).
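To make that Law of Demeter riff concrete, here's a minimal Python sketch. The `Machine` class and its `do` method are hypothetical, purely to illustrate the analogy: the caller names an outcome instead of reaching through a chain of intermediaries (engineer, tooling, app) to get there.

```python
# Law of Demeter, roughly: talk only to your immediate collaborator;
# don't reach through it to navigate a chain yourself.
#
# Chained style -- the caller threads through every layer:
#   user.get_engineer().get_tooling().build_app().run_report()
#
# Demeter-ish style -- the caller names the outcome and stops there:
#   machine.do("generate the quarterly report")

class Machine:
    """Hypothetical stand-in for 'The Machine' in the note above."""

    def do(self, outcome: str) -> str:
        # Internally this might build tools, write code, and run it --
        # but the caller never sees (or depends on) those layers.
        return f"done: {outcome}"

if __name__ == "__main__":
    machine = Machine()
    print(machine.do("generate the quarterly report"))
```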
If I'm being too optimistic about "the year 2028", then that's a concrete mistake I'm making, and I'm interested to see how I'm wrong. For example, I recognize that one valid objection is the limits of LLMs versus the much more ambitious dream of AGI.
But anyway, the bottom line for me is that some of the writing in here is your best and most compelling, at least for this N=1 reader sample.
Thanks for sharing!