We’d like to welcome a new voice here at Econlib, Sam Enright. Sam works on innovation policy at Progress Ireland, an independent policy think tank in Dublin, and runs a publication called The Fitzwilliam. Most relevant to us, on his personal blog, he writes a popular link roundup, in which he gives short commentary on the most interesting things he read, watched, and listened to in the previous month. His ‘linksposts’ are sometimes lovingly mocked for their astonishing length; what follows is an abridged version of his Links for October.
Blogs and short links
1. Ava Huang on the friendship theory of everything. (I subscribe to this theory.)
2. You don’t have to choose between the environment and economic growth.
3. Free market economics is working surprisingly well. As Noah Smith points out in this piece, the benefits that the Argentine economy has seen so far under Milei are probably mostly attributable to orthodox macroeconomic stabilisation policy. It’s too early to say whether the other reforms will be successful. Is an alternative title “We All Owe the IMF an Apology”?
4. The only countries that tax non-resident citizens on worldwide income are the United States and… Eritrea. Here is a wiki about the other financial and legal restrictions that American citizens face after emigrating, which include not being allowed to invest in the greatest tax instrument in Britain, the individual savings account. That is from Bogleheads, a website of people who… really like John Bogle.
5. Eventually, we will all come to love congestion pricing.
6. Sebastian Garren’s whirlwind tour of Chilean economic history. You’ll be hearing more about this soon:
“Thank you to Sam Enright and the Fitzwilliam for setting me on this quest.”
Music and podcasts
7. Chakravarthi Rangarajan on what’s happened to Indian monetary policy since the 1991 liberalisation. I was unaware of how much of a problem fiscal dominance was in India before the 1990s (or even really what it is).
8. Dmitri Shostakovich, Symphony No. 8. And the associated Sticky Notes episode. This is darker and more complicated than the triumphal Symphony No. 7, which would have been a better place to start. I think you can hear the cautious optimism about the Red Army’s advance, and in general, I find it a lot easier to get into composers through the specific historical episodes they are associated with (#8 premiered in 1943, #7 in 1942).
9. Tabla Beat Science, Tala Matrix. Another one of Zakir Hussain’s bands. If you still haven’t read Shruti Rajagopalan’s obituary for Zakir, it is the best thing I’ve found written about Indian music.
10. Richard Sutton, the father of reinforcement learning, on why he thinks LLMs are hitting a dead end. When will I learn my own “bitter lesson” that I’m not smart enough to follow these podcasts over audio, and I need to switch to reading the transcripts?
Papers
11. P.W. Anderson, More is Different: Broken Symmetry and the Nature of the Hierarchical Structure of Science. I’ve heard the title of this paper countless times before, but I never got around to reading it. The author makes an argument for anti-reductionist pluralism, which is (I think?) similar to what Daniel Dennett is saying in Real Patterns. It’s been a while since I thought about these issues, but from what I recall, I was sympathetic to the claim that “chemistry is just applied physics” is philosophically confused. I also read a 50-year retrospective by Steven Strogatz et al. Sociologically, it is quite fascinating that a non-philosopher managed to write such a widely discussed paper in philosophy in only four pages.
12. Richard Sutton, The Bitter Lesson. I figured if I’m reading Sutton, I may as well get around to this famous essay. Here is the lesson in question:
“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin . . . We have to learn the bitter lesson that building in how we think we think does not work in the long run.”
One thing I learned from Sutton is that the more general methods of building AI – that scale up compute, and eschew the symbolic representations of GOFAI – used to be called “weak methods”. People were really convinced that scaling wouldn’t work, and honestly, who can blame them?
13. David Silver, Richard Sutton, Welcome to the Era of Experience. I read this accessible essay as part of a machine learning reading group with the nice folks at the coworking space Mox. They have a cool group they call the 90/30 Club, in which week-by-week, they are reading through Ilya Sutskever’s list of the 30 AI papers for which “If you really learn all of these, you’ll know 90% of what matters today.” At some point, they seem to have finished that list and moved on to other papers. I assumed that I wouldn’t be able to follow a conversation with the legendarily “cracked” (am I using this term correctly?) San Francisco engineers, but thankfully, I was able to listen to Sutton on the Dwarkesh podcast in preparation.
To be honest, I find the intense interestingness of the Bay Area to be overstimulating, and this contributed to low mood and distractibility while I was visiting. Something I like about Dublin is that it feels like you can know pretty much everyone with a certain set of interests. Small ponds are underrated.
In any case, the basic argument of Silver and Sutton’s paper is that AI is now reaching a limit of what it can learn from human-generated data, and going forward, AI will be learning mostly from experience, trial and error, and so on. In this view, reaching superintelligence will require the fabled “paradigm shift”, and will rely heavily on reinforcement learning. This is the key graph, from page 6:
Figure 1: A sketch chronology of dominant AI paradigms. The y-axis suggests the proportion of the field’s total effort and computation that is focused on reinforcement learning. From Silver and Sutton, “Welcome to the Era of Experience.”
They have a more detailed picture in which the most advanced AI will be steered by human desires and feedback, which I didn’t quite follow. This paper came out in April and will (eventually) be published in a book called Designing an Intelligence, so I will pre-order it once there is a release date.
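For readers who, like me, find “learning from experience” abstract: the simplest version of the idea is an agent that improves purely through trial and error, with no human-generated training data at all. Here is a minimal sketch — not from the Silver and Sutton paper; the two-armed bandit, its payoff probabilities, and all parameters are invented for illustration:

```python
import random

def run_bandit(steps=5000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: learns arm values from experience alone."""
    rng = random.Random(seed)
    true_reward = [0.3, 0.7]  # hidden payoff probability of each arm (made up)
    estimates = [0.0, 0.0]    # the agent's running value estimate per arm
    counts = [0, 0]           # how many times each arm has been pulled
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)             # explore: try a random arm
        else:
            arm = estimates.index(max(estimates))  # exploit: best guess so far
        reward = 1.0 if rng.random() < true_reward[arm] else 0.0
        counts[arm] += 1
        # incremental average: nudge the estimate toward observed rewards
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

print(run_bandit())  # estimates drift toward the hidden payoffs [0.3, 0.7]
```

The agent is never told which arm is better; it discovers it by acting and observing rewards. Scale that loop up enormously and you have the flavour of the paradigm Silver and Sutton are describing.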
This is all pretty heavy stuff, and my head hurts, so I will conclude this section with recent wisdom from my mate David:
“They should call the opposite of an AI doomer a sloptomist.”
You can read the full version of this post here.
[1] Reading up on this has reminded me of a Marginal Revolution comment from 2023 about how John Bogle should receive the (hypothetical) Nobel Prize for the practice of economics.
[2] The name David Silver didn’t ring a bell, but I now realise I saw him in that incredible documentary about AlphaGo.