No need to apologize; hypotheticals are all we have for analyzing the black boxes Clarivate has provided us. (For the record, I haven't yet dug into any of their products.)
As for a possible method, I spent some time yesterday reading through
this investigation of Claude 3.5 Haiku, which traces the pathways that reflect how the AI is
actually reasoning. Composing poetry is an excellent example: researchers assumed the model would improvise each line as it went, whereas it actually generates candidate rhyming words first (thinking ahead) and then composes the line, informed by the content of the previous line. The
jailbreak example is also interesting because it demonstrates that the LLM has to generate a "stop" token, usually a period, before it can issue a warning about the content it's providing. Regardless, there are loads of interactive diagrams, and it's nice
to see a methodology that works from the outside in.
Dejah Rubel
Metadata and Electronic Resources Management Librarian
Ferris Library for Information, Technology and Education
231-591-3544
From: Engwall, Keith <KENGWALL@depaul.edu>
Sent: Tuesday, April 1, 2025 11:08 AM
To: Pate, Davin <djp130330@utdallas.edu>; ai-sig@exlibrisusers.org
Subject: *EXT* [Ai-sig] Re: [EXT] Topic Discussion (Ex Libris/ProQuest/Clarivate AI Products)
Our library has created an AI working group, and we are working toward setting up formal evaluations of several tools (we're still putting that list together). But first, we wanted to create a values statement and
a rubric (and perhaps a test guide?) to inform our evaluations and help us articulate our observations and how they relate to any decisions about whether to enable generative AI features in products.
We've asked for these features to remain disabled until we're ready to evaluate them, which we hope to do over the coming summer.
I would imagine that this conversation could be very helpful in giving us various perspectives to consider when we're putting everything together, so thank you!
One of my concerns is the perception that AI summaries and/or rankings are somehow more authoritative than if the user had done the work themselves, which might curtail further investigation into a search. Even if the AI does a good job of relevance ranking and summarizing,
it isn't exactly performing a reference interview, so does the user know what it does and doesn't know about what they are
actually looking for? These tools are designed to craft a response that meets the expectations of the user based on the question asked. Whereas the abstracts themselves are query-neutral and can give the user a sense of how close a particular
result is to what they want, the crafted summary will be wordsmithed to satisfy the question asked and may mask any disconnect between the query and the selected results. This problem actually gets worse as the hallucination problem gets better: the more
reasonable the response, the easier it is to trust that it is correct.
Sorry, I don't mean to derail the conversation with hypotheticals. I look forward to hearing everyone's observations.
Keith
Keith Engwall, MSLIS (he/him/his)
Systems Librarian
DePaul University Library
2350 N. Kenmore, 215F | Chicago, IL 60614
Tel: (773) 325-2670
From: Pate, Davin <djp130330@utdallas.edu>
Sent: Monday, March 31, 2025 4:30 PM
To: ai-sig@exlibrisusers.org <ai-sig@exlibrisusers.org>
Subject: [EXT] [Ai-sig] Topic Discussion (Ex Libris/ProQuest/Clarivate AI Products)
Hi Everyone,
I wanted to begin our initial Topic Discussion series with the following:
What Clarivate/ProQuest/Ex Libris AI products/tools are you or your university currently investigating? What have you liked about them, and have you seen any initial problems or concerns?
If you have seen a significant concern, have you submitted a case, and if so, what is the case number associated with the issue?
I hope to track this information in a spreadsheet and provide it to the related working groups and steering committees.
We have a number of Ex Libris developers on this list, so this will also help with pointing out issues as well as sharing what people are finding helpful.
Thanks again for your contributions,
Davin Pate, M.L.S.
Assistant Director for Scholarly Communications and Collections
(972) 883-2908 | davin.pate@utdallas.edu
http://www.utdallas.edu/library/
The University of Texas at Dallas