Hi Mark,
Using the Metadata Assistant as an example, they don't go into technical details, but they do give somewhat more in-depth answers in the FAQ:
"Q: What LLM are you working with?
A: Models are subject to change as models and capabilities evolve. As of the February 2025 release, the AI Metadata Assistant was working with the gpt-4o-2024-08-06 LLM model."
Best,
Erin Nettifee
Duke University Libraries
From: Kluzek, Mark
Sent: Thursday, April 10, 2025 10:18 AM
To: ai-sig@exlibrisusers.org
Subject: [Ai-sig] Ex Libris & AI - technical documentation
Hi all,
I've been wondering whether there are any technical documents that outline, in some depth, how AI works with Ex Libris products.
I'm thinking along the lines of the documentation you see in the Developer Network. Specifically, I was hoping it might address questions such as:
- What specific AI models are used for the different AI-related functionalities in Alma? Architectural diagrams outlining the flow of data would be useful.
- Is there a usage threshold in the event of excessive use?
- What risks are there of AI "hallucinations"? How do the models ensure the greatest possible accuracy of information?
- In terms of privacy, is the data we input retained or used in any way?
I've engaged a little with Ex Libris, as well as with other members of the Ex Libris user community. I don't believe anything like this currently exists, but I do think there would be value in greater transparency around how AI is being used.
I'd be interested to hear if anyone else has similar thoughts or insights on the matter.
Best regards,
Mark