Becking, M. (2024, October 22). Fraudulent Anishinaabemowin resources a serious concern.
Anishinabek News. https://anishinabeknews.ca/2024/10/22/fraudulent-anishinaabemowin-resources-a-serious-concern/
Burke, G., & Schellmann, H. (2024, October 26). Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said. AP News. https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
Canadian Centre for Cyber Security. (2024).
How to identify misinformation, disinformation, and malinformation (ITSAP.00.300).
https://www.cyber.gc.ca/en/guidance/how-identify-misinformation-disinformation-and-malinformation-itsap00300#misinformation
Gal, U. (2024, September 19). OpenAI's data hunger raises privacy concerns.
The Conversation. https://theconversation.com/openais-data-hunger-raises-privacy-concerns-237448
Haim, A., Salinas, A., & Nyarko, J. (2024). What's in a name? Auditing large language models for race and gender bias. arXiv. https://arxiv.org/abs/2402.14875
IBM. (2023, October 12).
Understanding the different types of artificial intelligence. https://www.ibm.com/think/topics/artificial-intelligence-types
IBM. (2024a, August 9).
What is artificial intelligence? https://www.ibm.com/think/topics/artificial-intelligence
IBM. (2024b, March 22).
What is generative AI? https://www.ibm.com/think/topics/generative-ai
Jonker, A., & Rogers, J. (2024, September 20).
What is algorithmic bias? IBM.
https://www.ibm.com/think/topics/algorithmic-bias
Lo, L. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering.
The Journal of Academic Librarianship, 49(4), 102720.
https://doi.org/10.1016/j.acalib.2023.102720
Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J., Shoham, Y., Wald, R., & Clark, J. (2024). Artificial intelligence index report 2024. Stanford University Human-Centered Artificial Intelligence.
https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf
Monteith, S., Glenn, T., Geddes, J. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2024). Artificial intelligence and increasing misinformation.
The British Journal of Psychiatry,
224(2), 33–35.
https://doi.org/10.1192/bjp.2023.136
Museum of Science. (2022, March 29). What is AI? [Video]. YouTube.
https://www.youtube.com/watch?v=NbEbs6I3eLw
Piers, G. (2024, February 7). Even ChatGPT says ChatGPT is racially biased.
Scientific American.
https://www.scientificamerican.com/article/even-chatgpt-says-chatgpt-is-racially-biased/
Privette, A. P. (2024, October 11).
AI's challenging waters. University of Illinois Urbana-Champaign.
https://cee.illinois.edu/news/AIs-Challenging-Waters
Ryan-Mosley, T. (2023, October 4). How generative AI is boosting the spread of disinformation and propaganda.
MIT Technology Review.
https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/
Susarla, A. (2024, March 22). Generative AI could leave users holding the bag for copyright violations.
The Conversation.
https://theconversation.com/generative-ai-could-leave-users-holding-the-bag-for-copyright-violations-225760
UC Berkeley School of Information. (2020, June 26). What is machine learning (ML)? https://ischoolonline.berkeley.edu/blog/what-is-machine-learning/
United Nations Environment Programme. (2024, September 21). AI has an environmental problem. Here's what the world can do about that. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about
Whitney, L. (2024, September 5). That's not right: How to tell ChatGPT when it's wrong. PCMag. https://www.pcmag.com/how-to/thats-not-right-how-to-tell-chatgpt-when-its-wrong
Worrell, T. (2024, October 10). AI affects everyone - including Indigenous people. It's time we have a say in how it's built. The Conversation. https://theconversation.com/ai-affects-everyone-including-indigenous-people-its-time-we-have-a-say-in-how-its-built-239605