Here are the important short-form (and not-so-short-form) reads for this month.

In Intentionally Making Close Friends, Neel Nanda shares his best practices and routines for growing friendships. Some of this overlaps with my own experience; some I find distasteful; and some I am likely to learn from. I would say this write-up is a very good starting point for anyone wondering why it is so hard to make friends nowadays and what to do about it.

In Hyperlegibility — Trading Secrets for Attention, Packy McCormick lays out a complex, nuanced model of modern reality. It would be hard, and imprudent, for me to try to compress it into just a few words; you should read it and form your own opinion. Nevertheless, I found it an eye-opener: the cacophony of signals from individual (i.e. not corporate) voices is being processed thoroughly by the new generation, without our realizing it is happening. And it might turn into good things. This write-up also gave me a word for talking about one of my projects (more on this below).

❦❦❦

On a more practical note, Manuel Kießling shared his experience of introducing LLM programming assistants to his team in Senior Developer Skills in the AI Age: Leveraging Experience for Better Results. As others have written before, Manuel found that the quality of the output increases in proportion to that of the input (which in turn explains why senior staff remain pretty much a requirement to operate the new tech effectively). The thought I found new and intriguing is that he also builds some scaffolding before handing work to the machine, so that the machine’s response “grows” onto his scaffolding and follows his preferred structure. Maybe that is the solution we need for unwanted metaphorical vines growing onto our metaphorical economic walls? Judicious gardening, with both guides and adequate trimming.
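To make the scaffolding idea concrete, here is a minimal sketch of what such a skeleton could look like (my own illustration, not Manuel's actual example; all names, types and contracts here are hypothetical). The human fixes the module layout, the signatures and the error-handling rules; the assistant is then asked only to fill in the function bodies:

```python
# scaffold.py -- a hand-written skeleton handed to the LLM assistant.
# The human decides the module layout, names, types and error strategy;
# the machine is asked only to fill in the function bodies, so that its
# output "grows" onto this structure instead of inventing its own.

from dataclasses import dataclass


@dataclass(frozen=True)
class Invoice:
    """One invoice line as parsed from an upstream CSV export."""
    customer_id: str
    amount_cents: int  # integer cents, to avoid float rounding on money


def parse_invoice_line(line: str) -> Invoice:
    """Parse a single 'customer_id;amount_cents' line.

    Must raise ValueError on bad input; never return a partial Invoice.
    """
    raise NotImplementedError  # <- ask the assistant to implement this


def total_per_customer(invoices: list[Invoice]) -> dict[str, int]:
    """Sum the amounts per customer."""
    raise NotImplementedError  # <- and this
```

The gardening, then, consists of reviewing the generated bodies against the contracts spelled out in the docstrings, and trimming whatever strays outside them.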

On the flip side, we should remain vigilant about what we can legitimately expect from the new technology. In What can LLMs never do, Rohit Krishnan draws the boundaries of the class of problems transformer-based LLMs could ever solve. In The Command of Language, David Cole reminds us that it is our incomplete understanding of what “language” is that confuses us into believing that LLMs generate real experience and meaning. (I will gladly admit that I too would have fallen into this pitfall had I not been inoculated beforehand by my study of Ludwig Wittgenstein’s work.)

Perhaps more importantly, Edward Zitron also shakes his fist at the clouds in Reality Check, reminding us that we are sacrificing a non-trivial amount of real-world resources to what might prove, in the end, to be a pipe dream.

❦❦❦

On a not entirely unrelated note, Albert Lloreta wrote about some ideas I had been musing on for the last few years: in A Strange Stain in the Sky: How Silicon Valley Is Preparing A Coup Against Democracy, the author identifies how some unscrupulous people with money are organizing online communities whose avowed goal is to undermine real-world, geography-based government.

This connects to Adam Becker’s recent book, “More Everything Forever”, reviewed by Jennifer Szalai, through whom I learned of it: Go to Mars, Never Die and Other Big Tech Pipe Dreams (archive). In it, Becker posits that Silicon Valley’s venture capital investors are currently engaged in a sort of metaphysical flight forward. My own, more limited theory is that they are betting the proverbial house on the hope that “AI” will solve some real and serious problems, without plausible evidence that this will happen; and that they are hedging that bet by building power structures (such as those discussed above) to keep money flowing in their preferred direction even if, or when, the bet does not work out.

Even with historical hindsight, it is still unclear whether the inordinate amounts of money and resources spent on the space race in the 1960s (including sending people to the Moon!) resulted in a net benefit or a net loss for humanity. I believe it will take us at least 50 more years to understand the true impact of what we are investing in today. Most of us will also be dead before then.

This raises a fundamental question: how do we guide our choices responsibly, if we will not be there to bear the weight of the consequences?

