Reading with LLMs

Oct 13 2025
TL;DR

This is a demo of using LLMs to summarize books to the desired reading length.

Like everyone else with a keyboard, I spend a lot of my time these days using Large Language Models. It is exciting, in that this stuff is new and the applications seem endless and unexplored, but it is also frustrating, in that most real uses of AI have received lukewarm receptions from users. Some studies even suggest that consumers are averse to buying products whose descriptions mention AI. It seems fair to say that we're still figuring out what this stuff is good for.

This post is me sharing one of my own explorations into what AI might be good for. The demonstration is below, jump ahead if you'd like, but I'll try to keep the background short. In the last few years, I have had very little time for reading. In those rare moments when I expect to have some extended quiet time for reading a book, I struggle to pick the book. Since it will likely be the only book I read for a while, the stakes seem higher, and so I spend far too much time researching. There was also this dark, but memorable, Books Before You Die Calculator that still haunts me. I've come to realize that there are books I would like to read, but there are far more books that I'd like to understand as if I had read them, without the time commitment. Perhaps that latter category of "reading" is something appropriate for AI?

Tangentially, I also tend to buy audiobooks for long road trips, and I've had some success getting the gist of a book by asking ChatGPT or Gemini for a directed summary. For instance, I recently asked this of Gemini, and the results were enough to convince me to pick the book for an upcoming road trip.

I am considering reading Burning Down the House: How Libertarian Philosophy Was Corrupted by Delusion and Greed by Andrew Koppelman. Can you please list the chapters and tell me what I'm likely to take away from each chapter?

It only hallucinated one of the chapters, and even for that chapter, the content it summarized was actually in the book. For technologists who spend a lot of time with AI, summarizing text doesn't seem very interesting, but it is a task that LLMs do quite well. The demo below came about as I wondered whether an LLM could give me a way to "read" a book without the time commitment of actually reading it. You start with the original text, and if that's too much of a commitment, you ask the LLM to reduce it to a more manageable size.

Before you start clicking, let me give you some caveats. This is not a live demo; I cannot afford to run a production model on my personal site, so I generated the summaries in advance at a few target lengths. Hopefully you can imagine how this might be improved if a live model were available. Given that the summaries are pre-computed, you might wonder why there are incremental text transformations between the summarization levels. The answer is that I've become quite fond of the aesthetic of LLMs streaming in tokens, and those transformations are my attempt at preserving that aesthetic when a block of text is being mutated, rather than produced.
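To give a sense of how transitions like these can be driven, here is a minimal sketch of one plausible approach: compute a word-level diff between two pre-computed summaries, then feed the resulting operations to an animation loop. This is purely illustrative; the function name and structure are my own invention, not the demo's actual code.

```python
import difflib


def summary_transitions(old_text: str, new_text: str):
    """Compute word-level edit operations between two pre-computed
    summaries. A UI could replay these one at a time to animate the
    change, rather than swapping the whole block at once.

    Returns a list of (tag, old_words, new_words) tuples, where tag
    is one of "equal", "replace", "delete", or "insert".
    """
    old_words = old_text.split()
    new_words = new_text.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    return [
        (tag, old_words[i1:i2], new_words[j1:j2])
        for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    ]


# Example: shrinking one sentence into another reuses the shared words
# and only animates the spans that actually changed.
ops = summary_transitions(
    "the quick brown fox jumps over the lazy dog",
    "the brown fox jumps",
)
for tag, old, new in ops:
    print(tag, old, new)
```

In a browser, each "replace" or "insert" span could be revealed word by word on a timer, which approximates the look of tokens streaming in even though the target text is fully known in advance.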

With the background and caveats out of the way, here's the demo.
