The Shortcomings of AI-Driven Book Recommendations

Large Language Models (LLMs) have been touted as a solution for many tasks, including personalized book recommendations. However, many users have come away disappointed with the suggestions these models produce. Even when given a fair amount of input data, the models often stray far from the requested parameters or hallucinate book titles and descriptions that do not exist.

One possible explanation for this discrepancy is the training data used by LLMs, which may be biased towards popular books. This bias could lead the models to suggest well-known titles rather than lesser-known works that better fit the user's criteria. As an informal test of this hypothesis, one user described 8-10 features they were looking for in a book, including prehistoric settings, coming-of-age themes, and competence porn, yet none of the LLMs suggested the series they had in mind: the Bonesetter books by Laurence Dahners.
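To make the popularity-bias hypothesis concrete, here is a toy scorer. Everything in it is invented for illustration (the tags, popularity numbers, and blending weight are assumptions, not actual model internals): once a popularity prior is blended in, a famous partial match can outrank an obscure book that satisfies every stated criterion.

```python
# Toy sketch (hypothetical data, not real model internals): a recommender
# that blends criteria match with a popularity prior can rank a famous
# partial match above an obscure book that matches every criterion.

books = {
    "Clan of the Cave Bear": {"tags": {"prehistoric", "coming-of-age"},
                              "popularity": 0.90},
    "The Martian":           {"tags": {"competence-porn"},
                              "popularity": 0.95},
    "Bonesetter (Dahners)":  {"tags": {"prehistoric", "coming-of-age",
                                       "competence-porn"},
                              "popularity": 0.05},
}

query = {"prehistoric", "coming-of-age", "competence-porn"}

def score(info, pop_weight=0.5):
    """Blend the fraction of matched criteria with a popularity prior."""
    match = len(info["tags"] & query) / len(query)
    return (1 - pop_weight) * match + pop_weight * info["popularity"]

ranked = sorted(books, key=lambda title: score(books[title]), reverse=True)
print(ranked)  # the only full match lands last once popularity is weighted in
```

With `pop_weight=0`, the full-match title ranks first; at `0.5`, the two famous books displace it. The point is not that LLMs literally compute this formula, only that any popularity-weighted objective behaves this way.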

Instead, the models offered a mix of partially relevant and unrelated titles, such as Clan of the Cave Bear, Dungeon Crawler Carl, and The Martian. This raises questions about whether LLMs are suited to tasks that require nuanced understanding of a reader's preferences. Are these models simply not designed for this type of task, or are users not prompting them effectively?

Photo by Valentin Ivantsov on Pexels