There are two ways to configure AI to help you tag and summarise books.
ChatGPT
To use OpenAI’s ChatGPT features, set your OpenAI API key and the desired model in the .env.local file:
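For example (OPENAI_API_KEY and OPENAI_MODEL are assumed variable names here; check the project’s .env file for the exact keys):

```dotenv
# .env.local — assumed variable names, verify against the project's .env
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
```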
Please refer to OpenAI’s documentation to obtain an API key and choose a model.
If no key is set, ChatGPT will not be used in the application.
Ollama
To use a local instance of Ollama, set the URL and model of your server in the .env.local file:
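For example (assumed variable names; Ollama listens on port 11434 by default):

```dotenv
# .env.local — assumed variable names
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```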
If no URL is provided, Ollama will not be used.
When your favourite LLM is configured, fill in the prompts that will be used to generate the summaries and tags.
The application will replace the {book} placeholder with the book’s title, author and series.
The application also provides a base system prompt and adds some formatting instructions to your prompt.
The base prompt is defined in the code and is currently:
For tags, the following will always be appended to your prompt:
Here are some example prompts:
For summaries
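An illustrative summary prompt (not the one shipped with the application) could look like:

```text
Write a short, spoiler-free summary of {book}.
Focus on the premise and tone, in three sentences at most.
```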
Run it for one book
On a book page, you can click the “generate summary” or “generate tags” buttons to display suggestions, which you can then accept or reject.
Run it through your whole library
You can run the following command:
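As a sketch, assuming a Symfony-style console entry point and a hypothetical command name (the real name may differ; run `php bin/console list` to find it):

```bash
# Hypothetical command name, for illustration only
php bin/console books:ai:tag
```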
It will tag all your books that currently don’t have tags.
If you want to use a user’s configured prompts:
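A hypothetical variant, assuming the command accepts a user option:

```bash
# Assumed option name, for illustration only
php bin/console books:ai:tag --user=1
```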
If you want to use it on a specific book:
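Again hypothetical, assuming the command accepts a book option:

```bash
# Assumed option name, for illustration only
php bin/console books:ai:tag --book=42
```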
Add context
There are currently two ways to add context for better results. Both can be enabled at the same time.
From Wikipedia
Set your WIKIPEDIA_API_TOKEN in your .env.local file. You must register with the Wikimedia API for a personal API token.
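For example:

```dotenv
# .env.local — token obtained from the Wikimedia API portal
WIKIPEDIA_API_TOKEN=your-personal-api-token
```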
From Amazon
Set AI_CONTEXT_AMAZON_ENABLED to 1 in your .env.local file to scrape results from Amazon. This should be used sparingly.
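For example:

```dotenv
# .env.local
AI_CONTEXT_AMAZON_ENABLED=1
```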