Topic Discussion (Ex Libris/ProQuest/Clarivate AI Products)

Hi Everyone, I wanted to begin our initial Topic Discussion series with the following: What Clarivate/ProQuest/Ex Libris AI products/tools are you or your university currently investigating? What have you liked about the products, and have you seen any initial problems or concerns with them? If you have seen a significant concern, what case, if any, have you submitted, and what is the case number associated with the issue? I hope to track the information in a spreadsheet and provide it to the related working groups/steering. We have a number of Ex Libris developers on this list, so this will also help with pointing out issues as well as sharing what people are finding helpful. Thanks again for your contributions,

Davin Pate, M.L.S.
Assistant Director for Scholarly Communications and Collections
(972) 883-2908 | davin.pate@utdallas.edu
http://www.utdallas.edu/library/
The University of Texas at Dallas

Hello Davin, We enabled these features and invited faculty members to provide feedback. However, faculty have expressed concerns that the feature is not helpful and may undermine students' ability to develop independent critical thinking and research skills. As a liberal arts college, our education emphasizes cultivating students' critical thinking, independent inquiry, and analytical skills. Therefore, we will be turning off this feature in Primo. However, since ProQuest does not provide an option to disable it, we are unable to do so there. Thanks,

Ping Fu (He/Him/His)
Director of Library & College Librarian
Penrose Library, Whitman College
345 Boyer Ave., Walla Walla, WA 99362
(509) 527-5193 | fup@whitman.edu

Thanks, Davin, for the discussion prompt! I am a community college librarian. We began using the Primo VE Research Assistant in late December and have found it helpful to our students. However, the ProQuest Central AI Research Assistant was too problematic for us. Ping - you can submit a helpdesk ticket to ProQuest and ask them to remove it from your instance. The additional search terms generated in ProQuest Central are often nonsensical. Instead of generating new search terms based on all of the words used in the original search, the AI generates any word related to any one of the search words. For example, when a student was searching on whether Kurt Cobain's suicide was a murder, a suggested AI search term was Agatha Christie. DOH! So, we had to remove it. Ann
--
ANN ROSELLE
PHOENIX COLLEGE, Library Faculty
1202 W. Thomas Road
Tel: 602-285-7549 | Fax: 602-285-7368
ann.roselle@phoenixcollege.edu | www.phoenixcollege.edu/
A Maricopa Community College

Hi Ann, I will let our librarians know. Thank you.

Ping Fu (He/Him/His)
Director of Library & College Librarian
Penrose Library, Whitman College
(509) 527-5193 | fup@whitman.edu

Thank you for your input, Ann. I think your point about the search terms providing incorrect information is precisely why there should be a way for search queries to be replicated and shared to check for hallucinations and bad data returns. It would help in understanding the cause of the faulty results.

Davin Pate, M.L.S.
Assistant Director for Scholarly Communications and Collections
(972) 883-2908 | davin.pate@utdallas.edu
http://www.utdallas.edu/library/
The University of Texas at Dallas

Here are the Ex Libris products/tools we are currently investigating and/or utilizing at UT Dallas:

* Primo Research Assistant
* Metadata AI Assistant
* Specto

Current issues I have seen with Primo Research Assistant:

Issue One: When you search with the Research Assistant, the article is referenced, but where the data is pulled from within the article is not. Most of the results seem to pull from the abstract of an article, not the article itself. This can make it difficult to see if the data has any hallucinations. This is especially true if unowned content is referenced, which leads to point two.

Issue Two: Articles will be pulled from anywhere, not just owned content, as long as they are part of your catalog and have an abstract. There is no clear or intuitive way to have only items owned by the library included in the returned results. This may need to be refined further in the future.

Issue Three: Five articles, while okay in beta, may be a bit limiting for real-world applications. Giving a synopsis on a specific topic based on just five results could lead to false conclusions. If a synopsis is based on more than just the five, then an easy way to access the underlying data for the synopsis should be available to the user.

Issue Four: Source refinement. I might want to limit the results to only specific information sources. There is no clear way to direct the Research Assistant to do this.

What I do like: The layout of the tool is good. I think that, given some work, it could be a valuable way for students/faculty to get a top-level overview of a topical matter.

Requests: I would want some sort of embedded staff tool for students to request assistance if they had follow-ups, and for the query the students used to be shareable with staff along with the results that were provided. This would allow staff to better understand what the student did and the results the Research Assistant provided, and to compare those results with what they see if they run the same query.

Davin Pate, M.L.S.
Assistant Director for Scholarly Communications and Collections
Interim Chair, Artificial Intelligence Special Interest Group (ELUNA)
(972) 883-2908 | davin.pate@utdallas.edu
http://www.utdallas.edu/library/
The University of Texas at Dallas

Our library has created an AI working group, and we are working toward setting up formal evaluations of several tools (we're still putting that list together). But first, we wanted to create a values statement and a rubric (and perhaps a test guide?) to inform our evaluations and help us articulate our observations and how they relate to any decisions about whether or not to enable generative AI features in products. We've been asking for these features to be disabled until we are ready to perform evaluations. We're trying to be ready to evaluate the various AI tools over the coming summer. I would imagine that this conversation could be very helpful in giving us various perspectives to consider when we're putting everything together, so thank you!

One of my concerns is the perception that AI summaries and/or ranking are somehow more authoritative than if the user did the work themselves, and that they might curtail further investigation into a search. Even if the AI does a good job of relevance ranking and summarizing, it's not exactly performing a reference interview, so does the user know what it does and doesn't know about what the user is actually looking for? These tools are designed to craft a response that meets the expectations of the user based on the question asked. So whereas the abstracts themselves are query-neutral and can give the user a sense of how close a particular result is to what they want, the crafted summary will be word-smithed to satisfy the question asked and may mask any disconnect between the query and the selected results. This problem actually gets worse as the hallucination problem gets better: the more reasonable the response, the easier it is to trust that the response is correct. Sorry, I don't mean to derail the conversation with hypotheticals. I look forward to hearing everyone's observations. Keith

Keith Engwall, MSLIS (he/him/his)
Systems Librarian
DePaul University Library
2350 N. Kenmore, 215F | Chicago, IL 60614
Tel: (773) 325-2670 | kengwall@depaul.edu
https://library.depaul.edu

No need to apologize; hypotheticals are what we have for analyzing the black boxes Clarivate has provided us. (For the record, I haven't yet dug into any of their products.) For a possible method, I spent some time yesterday reading through this investigation of Claude 3.5 Haiku <https://transformer-circuits.pub/2025/attribution-graphs/biology.html>, which traces various pathways that reflect how the AI is actually reasoning. Composing poetry is an excellent example, because the researchers assumed the model would improvise each next line, whereas it actually generates rhyming words first (thinking ahead) and then creates a line based on the content of the previous line. The jailbreak example is also interesting, because it demonstrates that the LLM has to generate a "stop" token, usually a period, before it can execute a warning against the content it's providing. Regardless, there are loads of interactive diagrams, and it's nice to see a methodology that works from the outside in.

Dejah Rubel
Metadata and Electronic Resources Management Librarian
Ferris Library for Information, Technology and Education
231-591-3544

Hello, all: Thank you, Davin, for initiating this conversation! We at Oregon State University are investigating several AI tools, including Primo Research Assistant, Web of Science RA, and tools from other vendors (Elicit, Scite).

For Primo RA, we have:
* Enabled it in a test view
* Offered a public webinar on AI tools for libraries

Initial feedback we have received:
* What are the use cases for Primo RA? To patrons, it is unclear what the benefits and advantages are of using it over other AI tools or over Primo itself.
* IZ and NZ records are unavailable in Primo RA.
* What patrons like is the multilingual feature. For example, you can search in Chinese, get English articles in the results, and the summary is in Chinese. That feature is missing in Primo, but is it available in NDE?

I hope to track the information on a spreadsheet and provide it to the related working groups/steering. Maybe we can discuss the strategy as a group. ELUNA? I am planning to attend, but only on the 19th and 20th. Thanks -- Hui

We are currently trying to develop an evaluation/assessment tool for use with AI/GenAI research products. I suspect many others have similar interests and may be working on related projects. Our goal is primarily to assess which tool(s) might be worth pursuing a paid subscription for, relative to the others. If anyone has already developed something that might be useful for that purpose, would you be willing to share it? If you know of rubrics or assessment tools that might be useful, that would be great as well. Thanks in advance for any information and suggestions.

Glenn Bunton
Data Visualization Librarian
Digital Research Services, University Libraries
University of South Carolina
1322 Greene Street, Columbia, SC 29208
803-777-2903 | buntonga@mailbox.sc.edu

My colleagues created a LibGuide that might be helpful: https://guides.library.oregonstate.edu/ailitreviewtools Best, -- Hui

Duke is also asking these same questions and would be interested in examples - we're currently evaluating the new Syllabus Assistant tool to see if we want to turn it on in production. (So just chiming in to say - yes, sharing resources would be welcome!)

Best,
Erin Nettifee
Duke University Libraries

________________________________
From: Bunton, Glenn <BUNTONGA@mailbox.sc.edu>
Sent: Tuesday, April 1, 2025 3:25 PM
To: ai-sig@exlibrisusers.org
Subject: [Ai-sig] Re: Topic Discussion (Ex Libris/ProQuest/Clarivate AI Products)

We are currently trying to develop an evaluation/assessment tool for use with AI/GenAI research products. I suspect many others have similar interests and may be working on related projects. Our primary goal is to assess which tools might be worth a paid subscription relative to others. If anyone has already developed something that might be useful for that purpose, would you be willing to share it? Pointers to existing rubrics or assessment tools would be great as well. Thanks in advance for any information and suggestions.

Glenn Bunton
Data Visualization Librarian
803-777-2903 | buntonga@mailbox.sc.edu
Digital Research Services, University Libraries
University of South Carolina
1322 Greene Street, Columbia, SC 29208

________________________________
From: Zhang, Hui <Hui.Zhang@oregonstate.edu>
Sent: Tuesday, April 1, 2025 3:10 PM
To: Pate, Davin <djp130330@utdallas.edu>; ai-sig@exlibrisusers.org
Subject: [Ai-sig] Re: Topic Discussion (Ex Libris/ProQuest/Clarivate AI Products)

Hello, all:

Thank you Davin for initiating this conversation! We at Oregon State University are investigating several AI tools, including Primo Research Assistant, Web of Science RA, and tools from other vendors (Elicit, Scite).

For Primo RA, we have:
* Enabled it in a test view
* Offered a public webinar on AI tools for libraries

Initial feedback we have received:
* What are the use cases for Primo RA? To patrons, it is unclear what the benefits and advantages are of using it over other AI tools or over Primo itself.
* IZ and NZ records are unavailable in Primo RA.
* What patrons like is the multilingual feature. For example, you can search in Chinese, get English articles in the results, and the summary is in Chinese. That feature is missing in Primo, but is it available in NDE?

"I hope to track the information on a spreadsheet and provide it to the related working groups/steering." - Maybe we can discuss the strategy as a group at ELUNA? I am planning to attend, but only on the 19th and 20th.

Thanks,
Hui

Having an assessment tool is an interesting idea. I have read/seen some articles on utilizing AI in library assessment but have not seen clear examples of assessing the AI itself from a library perspective. I am also curious whether someone has already developed an AI evaluation tool they would like to share. We currently rely on our standard resource workflows - staff/faculty reviews and feedback - to gauge the usefulness of a specific AI tool and whether we would like to consider it for adoption. We don't utilize a formulaic approach, at least not now.

Davin Pate, M.L.S.
Assistant Director for Scholarly Communications and Collections
The University of Texas at Dallas

Hi Everyone,

Credit primarily goes to my colleague Joseph Deodato for leading the development of an AI Research Assistant rubric here at Rutgers. We've used this for ScopusAI and Primo Research Assistant. It is for librarians to use when evaluating a trial AI Research Assistant product. You'll note that it doesn't include numerical scores; it's more of a way to make sure we're asking the right questions when we evaluate an AI product and don't overlook anything important! One drawback is that if we get a lot of people to fill out the rubrics (and we do want participation!), it's then a lot of work for one person to go through everyone's rubrics and summarize them.

Sincerely,
Elizabeth

Elizabeth York
Electronic Resources Librarian
Rutgers University Libraries

Hi,

IEEE examines AI across four ethical ontologies: accountability, algorithmic bias, privacy, and transparency. Each ontology is a separate product certification. The certification process is proprietary, so I can't go into details, but you'll see in the attached ontologies that we examine products from the point of view of various duty holders. There are only five duty holders (developer, integrator, operator, maintainer, and regulator), whereas the stakeholder category is much broader. Regardless, IEEE has created some excellent rubrics in the Annexes of these documents that may be a good starting point for other institutions. Speaking of which, I believe there are multiple presentations on this topic at the upcoming Annual Conference.

Dejah
participants (9)

- Ann Roselle
- Bunton, Glenn
- Dejah T Rubel
- Elizabeth York
- Engwall, Keith
- Erin Nettifee (she/her/hers)
- Pate, Davin
- Ping Fu
- Zhang, Hui