This month’s main book:
- $100M Offers: How To Make Offers So Good People Feel Stupid Saying No - Alex Hormozi
This one bubbled to the top of the list from a confluence of references in other materials I was studying. To a first approximation, this is a book by one rich guy who became rich by selling his course on “how to become rich” to wannabe rich people. There are thousands of those around these days. There were two things I found intriguing, however, that put me in the mood to pay attention. The first is that Alex H is objectively rich and did not get there by growing up with rich parents. So something valuable happened and I wanted to learn what it was. The other thing, which I had picked up from interviews with him, is that he has things to say about human psychology in sales, and that is something I am currently studying.
My concerns about value creation notwithstanding, this was a seriously enjoyable read. It was not just a page-turner; I was taking notes at every chapter, with tangible and actionable ideas for my own projects. While his “method” remains mostly inapplicable to my work, I need frequent reminders of how atypical my relationship to influence is, and of why and how sales tactics objectively work even when I am not personally sensitive to them, and this book delivers those reminders very well. There are chapters I already plan to re-read in a few months.
❦❦❦
There was a significant amount of interstitial reading as well, but sadly I could not find the time to write up everything I found noteworthy.
Let’s focus first on the main “AI science” results that changed how I think.
On a practical level, a consortium of universities (including MIT, Stanford, and others) has done a broad-scope review of the various human activities involved in software engineering, and of what would be required for “AI” solutions to really take them over. Their findings have been published as Challenges and Paths Towards AI for Software Engineering (Alex Gu, et al.), and reviewed by Rina Diane Caballar for IEEE Spectrum. My takeaway was that there is a lot of work left to do, and that truly replacing the humans throughout would require technology advances that do not yet seem plausible. But I also liked reading this for another reason: it is a good overview of the various processes involved in building technology! That is interesting to learn about, both for people who build organizations and for people who study processes (or work on solving process problems).
Meanwhile, here are findings on some social aspects that, IMHO, we should keep firmly in mind.
In AI-induced dehumanization, Hye-young Kim and Ann L. McGill ran real experiments demonstrating that when people use AI agents a lot, and treat them like machines, then after a while they start to treat real people like machines too. The takeaway for me is that as we introduce more and more of these technologies into our lives, we should treat them politely and respectfully, not because they deserve it (they don’t!), but because it helps us remain polite and respectful the rest of the time.
In Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs, Jan Betley, et al. dive into a recent major discovery: when an LLM is finetuned on flawed solutions to a narrow problem (in the paper, deliberately insecure code), it starts to propose morally problematic solutions (albeit possibly correct ones) when prompted to work on new, unrelated problems. It is not exactly clear what the underlying mechanism is, but this looks like a major finding, especially for efforts to use LLMs to drive policy or governance.
❦❦❦
Looking at a more “macro” level, the following two things were intriguing.
Are OpenAI and Anthropic Really Losing Money on Inference? (Martin Alderson). The answer seems to be “no,” as long as most users ask for short answers. The cost of processing input tokens is negligible; the compute costs really come from output. Since most output tokens are currently produced by a minority of use cases, the coast is (still) clear. Things might change as more and more people integrate inference APIs into applications that are not just chat bots. This article also helped me understand why Cursor tends to choose lower-quality models in interactions where I ask it to produce more code (which had otherwise felt illogical to me).
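To make that asymmetry concrete, here is a minimal back-of-envelope sketch. The cost numbers and the input/output ratio are my own illustrative assumptions, not figures from Alderson’s article; the point is only that when output tokens are much more expensive to generate than input tokens are to process, short-answer chat stays cheap while code-generation workloads dominate the bill.

```python
# Back-of-envelope sketch with made-up unit costs (my assumptions, not the
# article's numbers): prefill (input) is highly parallel and cheap per token,
# decode (output) is sequential and expensive per token.

COST_PER_INPUT_TOKEN = 1.0    # arbitrary unit
COST_PER_OUTPUT_TOKEN = 30.0  # hypothetical ratio

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough compute cost of one request under the assumptions above."""
    return (input_tokens * COST_PER_INPUT_TOKEN
            + output_tokens * COST_PER_OUTPUT_TOKEN)

# A typical chat turn: large prompt/context, short answer.
chat = request_cost(input_tokens=2_000, output_tokens=150)

# An agentic coding turn: similar prompt, but thousands of generated tokens.
coding = request_cost(input_tokens=2_000, output_tokens=4_000)

print(f"chat turn:   {chat:>10.0f} units")
print(f"coding turn: {coding:>10.0f} units ({coding / chat:.1f}x the chat turn)")
```

Under these toy numbers the coding turn costs roughly twenty times the chat turn, which is why an assistant asked to produce lots of code has an incentive to reach for a cheaper model.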
Meanwhile, TJ Jefferson tells us about The startup bubble that no one is talking about: the theory is that the recent glut of VC funding was not due to the AI hype; it was merely a ZIRP aftereffect. This would imply that funding is likely to drop soon even if the AI hype is not over yet (and even if it is not hype at all). It also means that we could see major economic shake-ups and a shift towards profitability even before the eventual AI bubble pops. I think that would be a good thing, as it might make the industry more resilient ahead of the pop. (But don’t quote me on that.) It might make the job market even more difficult, though.
❦❦❦
On a different level entirely, I loved reading A Crack in the Cosmos by Colin Wells. On the surface, this tells the story of how the Greek philosopher Anaxagoras worked out natural explanations for the heavens, and was later exiled by people who found his work “heretical”. The article makes a bigger point: in any age where science grows stronger, a backlash against it is likely to follow. Salient quote:
The stronger the bonds of nature are perceived to be, the stronger must be the ‘divine force’ that bends or breaks them; the more concrete the boundary, the bigger the thrill of transgression.
I’d say this is a good read both to learn more about famous ancient Greek philosophers and their context (their lives as well as the politics around them), and also to learn more about the macro geopolitical changes our world is currently going through.
❦❦❦
References:
- Alex Hormozi - $100M Offers: How To Make Offers So Good People Feel Stupid Saying No
- Alex Gu, et al. - Challenges and Paths Towards AI for Software Engineering
- Rina Diane Caballar (IEEE Spectrum) - Why AI Isn’t Ready to Be a Real Coder: AI’s coding evolution hinges on collaboration and trust
- Hye-young Kim, Ann L. McGill - AI-induced dehumanization
- Jan Betley, et al. - Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
- Martin Alderson - Are OpenAI and Anthropic Really Losing Money on Inference?
- TJ Jefferson - The startup bubble that no one is talking about
- Colin Wells - A Crack in the Cosmos