AI Configuration

There are two possible ways of configuring AI to help you tag and summarise books.

Models

Go to /ai/model to configure AI models. You can configure as many models as you want. The app currently supports Ollama-compatible and OpenAI-compatible models.

  • Please refer to OpenAI’s documentation to get your API key and model for ChatGPT.
    • Example: use https://api.openai.com/v1/ as the URL and gpt-4o-mini as the model.
  • Please refer to the perplexity.ai documentation to get your API key and models for Perplexity.
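
If you want to check that an endpoint is reachable before adding it to the app, you can query it directly from a terminal. This is a minimal sketch: the Ollama URL assumes a default local install on port 11434, and the OpenAI call assumes your API key is exported as OPENAI_API_KEY.

curl http://localhost:11434/api/tags
curl https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"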

You can test each model and preview its results directly from the UI.

You can enable or disable the addition of context per model, and you can also configure a system prompt for each model.
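
For example, a system prompt could look like this (purely illustrative):

You are a librarian with broad knowledge of world literature. Answer factually and concisely, and do not invent details about books you do not know.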

Below the model list, you can configure which model will be used for different actions.

Configure your prompts

Once your favourite LLM is configured, fill in the prompts that will be used to generate summaries and tags.

The application will replace the {book} placeholder with the book’s title, author, and series, and the {language} placeholder with the book’s language or the user’s language.
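
For example, a prompt such as Summarise the book {book} in language {language} would be sent to the model as something like Summarise the book “Dune” by Frank Herbert in language English (the exact substitution format shown here is illustrative).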

For tags

For tags, the following will always be appended to your prompt:

The output must be only valid JSON format. It must be an object with one key named "genres" containing an array of genres and tags for this book in strings. Do not add anything else than json. Do not add any other text or comment.
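
A valid response therefore looks like this (the genre values are illustrative):

{"genres": ["science fiction", "space opera", "for adults", "politics", "ecology"]}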

Here are some example prompts:

Can you give 5 classifying keywords about the book {book} in a list without explanation in language {language}
Give me a list of genres or tags for {book}. The first one must be about target audience (example: for teenagers), the second one must be about the location of the main story (example: Switzerland), the next ones are topical and must help the user find similar books.

For summaries

Here is an example prompt:

Can you make a factual summary of the book {book} in around 150 words in language {language}.

For search

The search model can be used to convert natural-language user searches into Typesense filters (see “How to use” below for an example).

How to use

Run it for one book

On a book page, you can click on the “generate summary” or “generate tags” buttons to display suggestions that you can accept or reject.

Run it on your whole library

You can run the following command:

docker compose exec biblioteca bin/console books:ai <tags|summary|both>

It will use the default prompts to tag all of your books that currently don’t have tags.
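
For example, to generate tags only for every untagged book:

docker compose exec biblioteca bin/console books:ai tags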

If you want to use a user’s configured prompts:

docker compose exec biblioteca bin/console books:ai <tags|summary|both> <userid>

If you want to use it on a specific book:

docker compose exec biblioteca bin/console books:ai <tags|summary|both> -b <book-id>
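
For example, to generate both tags and a summary for a single book (the book id 42 is hypothetical; use a real id from your library):

docker compose exec biblioteca bin/console books:ai both -b 42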

Use it for search

Enter a natural-language search query, for example Mangas by eiichiro oda, and click on the magic wand. Your query will be converted into filters such as authors:=Eiichiro Oda && extension:=[cbr,cbz,pdf]

You can try queries such as Thrillers in italy or cook vegetarian meals, and it will try to build the best matching query.

Add context

There are currently three ways to add context for better results. They can all be enabled at the same time.

  • Wikipedia context: requires a personal Wikipedia API token in the configuration; the app will search Wikipedia for information about the book.
  • EPUB context: if the book is an EPUB, its entire content is added to the context for summarisation.
  • Amazon context: the app will try to find and scrape the book’s Amazon product page.

Example workflow

I use two different LLMs in my workflow:

  • I have a local Ollama instance with the qwen2.5 model.
  • I use the perplexity.ai API with the large model.

How I proceed:

  • Search queries are built by Qwen.
  • Summaries are built by perplexity.ai, as it is the model most likely to know about books out of the box.
  • Tags are built by Qwen after the summary has been written, because the summary is part of the context.