
Artificial Intelligence

An introductory guide about Artificial Intelligence, its potential applications, gaps, and considerations for ethical use and Academic Integrity.

When is it okay to use AI for academic purposes?

Artificial Intelligence has several issues, but it can also have benefits if used appropriately. When using AI, consider the following advice (adapted from TRU Library's guide "Artificial Intelligence: A Guide for Students"):

  • Your instructor allowed it and you used AI exactly as approved: for example, if your instructor allows the use of ChatGPT to brainstorm ideas but prohibits using an AI tool to write the essay itself, using ChatGPT to write your essay would be considered academic misconduct. You should also include an acknowledgement of how AI was used, including your prompts and the generated outputs (check the Citing AI tab).
  • As a study aid: you can use AI to generate questions and quizzes to help you study, but you cannot upload content created by your instructor or other people without their consent (otherwise you could be infringing copyright). Similarly, you can ask generative AI to explain concepts and theories in different ways if you are having trouble understanding something. However, the content must be checked for accuracy, and you cannot use the output in your assignment unless you have the instructor's permission to do so.
  • As an example for critical discussion: with your instructor's permission, you may use the output of an AI tool for the purposes of discussion and critique. You should still acknowledge how you used the tool and cite it correctly. If your outputs include sound, files, code, or images, check whether the AI tool mentions use or copyright restrictions regarding the created work.

Evaluate an AI Tool: The ROBOT Test


Based on the work by Hervieux & Wheatley (2020), the ROBOT test is an evaluation tool you can use to help you consider the legitimacy of AI technologies.

Reliability
  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
  • Is information only partially available due to trade secrets?
  • How biased is the information that they produce?
Objective
  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
  • To inform?
  • To convince?
  • To find financial support?
Bias
  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
  • By the source of information?
  • By the party responsible for the AI?
  • By its users?
Ownership
  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
  • Is it a private company?
  • The government?
  • A think tank or research group?
  • Who has access to it?
  • Who can use it?
Type
  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

The "ROBOT Test" by S. Hervieux & A. Wheatley, via The LibrAIry, is licensed under CC BY-NC-SA 4.0.

How to create a good prompt

The CLEAR Framework


Created by Lo (2023), this framework sets out five core principles for writing a good prompt for generative AI. Prompts should be Concise, Logical, Explicit, Adaptive, and Reflective.

  • Concise: use clear, brief, and specific sentences when creating prompts.
  • Logical: maintain a flow and order of ideas within your prompt.
  • Explicit: provide precise instructions for the desired output (e.g. specify format, scope, and other constraints for how you want your answer to be).
  • Adaptive: be flexible; try different prompts and compare their outputs.
  • Reflective: adjust and improve prompts based on your assessment of the AI tool's answers.
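As a hypothetical illustration (the prompt wording below is ours, not an example from Lo's paper), the CLEAR principles can be seen by comparing a vague prompt with a revised one:

```python
# Illustrative sketch: revising a prompt using the CLEAR principles.
# Both prompt texts are invented for this example.

vague_prompt = "Tell me about climate change."

clear_prompt = (
    "Summarize three major causes of climate change "   # Concise & Explicit: narrow topic
    "in plain language for a first-year student. "      # Explicit: audience
    "List them from most to least significant, "        # Logical: ordering of ideas
    "in at most 150 words."                             # Explicit: scope constraint
)
# Adaptive & Reflective: try both prompts, compare the outputs,
# and keep adjusting the wording based on the answers you get.

print(clear_prompt)
```

The revised prompt is longer, but every added phrase constrains the output in a way you can check against the AI tool's answer.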

Prompt Engineering


"Prompt Engineering is the process of designing and refining text inputs (prompts) to achieve specific application objectives with AI models" (Bambroo, 2024).

A good prompt requires careful thinking about the components of a prompt, and constant evaluation and refinement of outputs. When creating your prompts, consider the following:

  • Goal: what response do you want from the AI tool?
  • Instructions: give clear and concise instructions on what the AI tool should do.
  • Context & examples: provide background information to help the AI tool understand what kind of information is needed and why. Consider giving examples of desired outputs.
  • Expectations: state how the AI tool should respond (e.g. tone, reading level, etc.).
  • Source: what information or samples should be used in the prompt?

Screenshot: how to write a prompt with prompt engineering (Goal, Context, Source, Expectations), shown in Microsoft Copilot. Microsoft (n.d.)
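The components above can be assembled into a single prompt. The sketch below is a hypothetical illustration; the function name, field labels, and sample text are our own assumptions, not part of any AI tool's interface:

```python
# Hypothetical sketch: assembling a prompt from the components described above.
# All names and sample text here are illustrative assumptions.

def build_prompt(goal, instructions, context, expectations, source=""):
    """Join the prompt components into one text block, skipping empty parts."""
    parts = [
        f"Goal: {goal}",
        f"Instructions: {instructions}",
        f"Context: {context}",
        f"Expectations: {expectations}",
    ]
    if source:
        parts.append(f"Source material: {source}")
    return "\n".join(parts)

prompt = build_prompt(
    goal="A study summary of photosynthesis.",
    instructions="Summarize the key stages in five bullet points.",
    context="I am an undergraduate biology student reviewing for an exam.",
    expectations="Use plain language at a first-year reading level.",
)
print(prompt)
```

Writing the components out separately like this makes it easier to revise one part (e.g. the expectations) without rewriting the whole prompt.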

How to assess AI generated content

General Guidelines (any content)


Accuracy & Comprehensiveness

a) Compare and verify: can you find alternative sources that corroborate the information provided by the AI? Who (person or organization) is providing this information? Can you find the original source from which the information was extracted?

b) Check AI-generated citations: does the cited source truly exist? Was it cited correctly? Checking the reference in other sources such as the YukonU Library Discovery Search, Google, or WorldCat is recommended to ensure that the AI is not "hallucinating" citations. If the citation does exist, check the original source to confirm the information was correctly summarized.

c) Check to see if there is anything missing from the output. Most free AI tools (Microsoft Copilot, ChatGPT and others) retrieve information available from the open web but are not able to retrieve data from sources that exist behind a paywall, such as subscription-based journals and library databases. This means that AI-generated outputs may lack the depth and breadth of more reliable sources of information.

Bias

AI tools may provide biased answers as a result of inherent algorithmic biases. Depending on the question asked, consider which groups or perspectives may be absent or misrepresented in the output provided by the AI tool.

Copyright

Copyright legislation regarding AI-generated works is still evolving. Since AI tools are often trained on copyrighted works and may end up using content from those works without the appropriate credit, it is worthwhile to do some lateral searching to investigate what the original source(s) might be and credit them accordingly.
Currency

Outputs from AI tools are not always based on the most current information. For example, as of January 2025 ChatGPT-4's training data included information up to April 2023, and Microsoft Copilot up to October 2023.

While some AI tools have the ability to search the general web for more up-to-date information, it is better to consult other sources if your question is related to current events or to areas that are constantly changing.

Assessing Images & Videos