
AI Literacy

Writing Prompts for Generative AI Tools

A prompt is what you type into the chat box.

Your prompts can make a huge difference in the relevance and quality of information you receive. 

There are three best practices for getting better output:

  1. Provide context or assign a role.
  2. Detail how you would like the output formatted.
  3. Keep engaging with the GenAI tool so it can revise its output until it matches what you're asking for (see the example after this list).
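
These same practices carry over if you reach a GenAI model through code rather than a chat box. Below is a minimal sketch using OpenAI's Python client (v1+); the model name, role description, and prompt wording are illustrative assumptions, not a recommended setup:

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

    # Practices 1 and 2: the system message assigns a role and context;
    # the user message spells out the desired output format.
    messages = [
        {"role": "system",
         "content": "You are a research librarian helping a student plan a literature search."},
        {"role": "user",
         "content": "Suggest a search strategy for recent research on sea-level rise in "
                    "coastal cities. Format your answer as a numbered list of steps."},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)

    # Practice 3: keep engaging -- append the response and ask for a revision.
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": "Revise the strategy to use peer-reviewed sources only."})
    revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(revised.choices[0].message.content)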

Hints & Tips:

To get the best results from your prompts:

  • Be specific and clear in your instructions; avoid vague or broad questions.
  • Don't hesitate to iterate and experiment with different wording if the initial response isn't quite right.
  • Provide examples or scenarios to give the AI a concrete reference point.
  • Set boundaries or constraints to shape the output you need.
  • Ask follow-up questions to clarify points and guide the conversation, and use feedback from initial responses to refine your queries.
  • Remember that GenAI uses natural language, so you can talk to it as you would another person. Staying conversational and interactive will help you get the most accurate and relevant information.

Bad Prompt:

"Tell me about climate change."

Better Prompt:

"I'm writing an article about the impact of climate change on coastal cities. Can you provide a search strategy for finding the latest research on rising sea levels, focusing on three specific cities: Miami, New York, and Amsterdam? I want to find research focused on projected impact and potential mitigation strategies. Can you also suggest three scholarly databases to search?"



Evaluating AI-generated Content

Although generative AI can produce accurate information, it can also produce inaccurate information. These inaccuracies can arise from the data the model was trained on, the limitations of the particular GenAI tool, and the fact that GenAI is not designed to distinguish between what is true and what is not. This can be dangerous if we assume all information provided by AI chatbots is true or valid.

When a GenAI tool produces content that isn't rooted in fact, this is called a hallucination. It makes something up because that is its best guess at answering your prompt. This is one of the many reasons why it is essential to fact-check the information provided by AI tools. 

  • One of the most common hallucinations we have seen occurs when students ask an AI tool to generate a journal article citation with specific criteria. The tool will often produce a citation with real authors, a real journal, and a realistic article title, but the article does not actually exist. You can double-check that a citation is real by searching for the title or DOI in OneSearch (or verify the DOI programmatically; see the sketch after this list).
  • In the legal case Mata v. Avianca, a New York attorney relied on ChatGPT to conduct their legal research. The judge overseeing the suit noted that the filing cited judicial opinions and quotes that did not exist (Forbes).
  • Cross-reference AI-generated content with reliable, vetted sources, such as those you can find through the Library's resources.
  • Be aware that AI tools can reflect biases in their training data or misinterpret complex topics. Always review AI-generated content for potential biases or misrepresentations.
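
For DOIs, you can also check programmatically: Crossref's public REST API returns HTTP 200 for registered DOIs and 404 for unknown ones. A minimal sketch in Python, assuming the requests package is installed; the DOI shown is a placeholder, and since some legitimate DOIs are registered with agencies other than Crossref, a miss here means "check further," not "definitely fake":

    import requests

    def doi_exists_in_crossref(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI.

        Crossref's REST API answers 200 for registered DOIs and 404
        for unknown ones, making it a quick sanity check on an
        AI-generated citation.
        """
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # Placeholder DOI copied from a suspect citation -- not a real reference.
    if doi_exists_in_crossref("10.1234/placeholder.doi"):
        print("DOI is registered; the article likely exists.")
    else:
        print("DOI not found in Crossref; verify the citation another way.")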