Hi,
IEEE examines AI across four ethical ontologies: accountability, algorithmic bias, privacy, and transparency. Each ontology is a separate product certification. The certification process is proprietary,
so I can't go into details, but you'll see in the attached ontologies that we examine products from the point of view of various duty holders. There are only five duty holders: developer, integrator, operator, maintainer, and regulator, whereas the stakeholder category is much broader. Regardless, IEEE has created some excellent rubrics in the Annexes of these documents that may be a good starting point for other institutions.
Speaking of which, I believe there are multiple presentations at the upcoming Annual Conference on this topic.
Dejah
From: Zhang, Hui <Hui.Zhang@oregonstate.edu>
Sent: Tuesday, April 1, 2025 3:10 PM
To: Pate, Davin <djp130330@utdallas.edu>; ai-sig@exlibrisusers.org
Subject: *EXT* [Ai-sig] Re: Topic Discussion (Ex Libris/ProQuest/Clarivate AI Products)
Hello, all:
Thank you, Davin, for initiating this conversation! We at Oregon State University are investigating several AI tools, including Primo Research Assistant, Web of Science Research Assistant, and other vendors' tools (Elicit, Scite).
For Primo RA, we have:
Initial feedback we have received:
I hope to track the information in a spreadsheet and provide it to the related working groups/steering committees.
Maybe we can discuss the strategy as a group, perhaps at ELUNA? I am planning to attend, but only on the 19th and 20th.
Thanks
--
Hui