ITS AI Services FAQ

How does U-M GPT work?

U-M hosts several Large Language Models (LLMs) that are built by processing vast amounts of text data. They utilize an artificial intelligence technique called "transformer neural networks."

These models are initially trained on a diverse range of internet text. They generate responses by predicting the next word in a sentence, with their effectiveness determined by how coherently they can predict and generate these sequences.

Despite the advanced capabilities of these AI systems, it's important to note that they don't possess understanding or consciousness, but simply analyze and generate text based on the input and training they have received.
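As a toy illustration of the "predict the next word" idea described above, the sketch below counts which word follows which in a tiny invented corpus, then generates text by repeatedly picking the most likely next word. This is a conceptual sketch only: the hosted models use transformer networks trained on vastly more text, and the corpus here is made up.

```python
# Toy next-word predictor: count word-follows-word (bigram) frequencies in a
# tiny corpus, then generate text greedily by predicting the next word.
# Real LLMs perform the same next-token step, but with a transformer network.

corpus = "the model predicts the next word and the next word follows the model".split()

# Count how often each word follows each other word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, {})
    follows[prev][nxt] = follows[prev].get(nxt, 0) + 1

def predict_next(word):
    """Return the most frequent successor of `word` seen in the corpus."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Generate a short sequence greedily from a starting word.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The coherence of the generated sequence depends entirely on the statistics captured during training, which is why a model can produce fluent text without any understanding of it.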

Does ITS train U-M GPT?

No, the University of Michigan does not engage in training U-M GPT, nor do we share any user-specific data to improve AI models.

How was U-M GPT trained?

The University of Michigan did not train the AI models used in U-M GPT. These models (Llama 2, GPT-3.5, GPT-4) were trained by the service providers and are informed by three key data sources: publicly accessible information found on the internet, licensed data from third-party providers, and inputs from users or human trainers.

Can I trust U-M GPT is telling me the truth?

U-M GPT does its best to generate accurate information based on the diverse range of text it has been trained on. However, it does not verify or fact-check the information it provides. Moreover, it does not have access to real-time data or updates, nor does it understand the information it provides in the way humans do.

Therefore, while it is a valuable tool for general information, brainstorming, and idea generation, all output it generates should be reviewed and verified with reliable sources. For important matters, it is always prudent to consult knowledgeable individuals or fact-check against reliable resources.

What is a prompt?

In the context of generative AI, a prompt is any form of text, question, query, or information that communicates to AI what response you’re looking for. The way you phrase a prompt can influence the way AI responds to it. So, by adjusting and refining your prompt, you can lead the AI towards providing the type of response or answer you’re looking for.

The art of crafting an effective prompt is sometimes referred to as prompt literacy or prompt engineering. See prompt literacy and improving U-M GPT prompts for more information about crafting effective prompts.

Why did U-M GPT give me an answer that wasn’t accurate?
  • Limitations in Training Data: The model is trained on vast amounts of data but doesn't necessarily know every fact or the most up-to-date information. If something has changed or wasn't prevalent in the data the model was trained on, the answer might be off.
  • Ambiguous Questions: If a question is vague or not specific enough, the model may guess at the intended meaning, leading to answers that aren't precisely aligned with the user's intent.
  • Inherent Model Biases: The model has biases resulting from its training data. While efforts have been made to reduce egregious biases, creating a completely neutral model is impossible, given that the data it's trained on can be biased in various ways.
  • No Real Understanding: GPT, like its predecessors, doesn't truly "understand" information in the way humans do. It identifies patterns in the input data and responds based on those patterns. This can sometimes lead to inaccuracies or overly literal interpretations.
  • Complex or Niche Topics: For very specialized or niche subjects, the model might not have enough data to provide a detailed or accurate response.
  • Model's Confidence: Sometimes, the model will produce an answer even if it's not very confident in its accuracy, given the input. It doesn't have the capability to say "I don't know" unless it's been specifically programmed to do so under certain conditions.
  • Heuristics and Shortcuts: The model might take shortcuts in its responses based on patterns it has seen during training. For example, if many texts in its training data say "A is generally true," it might lean towards saying A is true even in contexts where A might not be.
  • Model's Objective: The model's primary objective during training was to predict the next word in a sequence, not necessarily to provide factual accuracy. This can sometimes lead to discrepancies in its output.

If you notice inaccuracies, it's always a good idea to consult other sources or ask for clarification.

Who can view my conversation?

The use of personal information collected by this service is safeguarded through several mechanisms, including Information Assurance and U-M policies, such as Privacy and the Need to Monitor and Access Records (SPG 601.11) and Institutional Data Resource Management Policy (SPG 601.12).

We may also share your personal information when required by law, or when we believe sharing will help to protect the safety, property, or rights of U-M, members of the U-M community, and U-M guests. Please read the privacy notice for ITS AI Services for additional information. 

Will my conversation be used to train the model?

No. U-M does not engage in the training of these models, nor do we share any user-specific data for the purpose of improving these models. The data in our U-M AI platform is ours and is not shared with anyone.

What happens when I delete a chat?

Your chat will be removed from the screen and cannot be retrieved. However, it remains in the logging information that is collected when you use the product.

What AI models are available in U-M GPT?

GPT-3.5, GPT-4 (U-M GPT only), and Llama 2.

I’ve never heard of Llama 2; why is that an option in U-M GPT?

Llama 2 is an open-source language model that anyone can use, experiment with, and build tools on. ITS is interested in hosting your LLM and delivering it as a hosted service to the entire community. This is a capability of our U-M GPT Toolkit; hosted services come at a cost.

Why does U-M GPT offer multiple Large Language Models (LLM)? Which LLM should I choose?

Refer to U-M GPT In-Depth for more information on Large Language Models. 

How can I use U-M's GPT models for my applications?

U-M GPT Toolkit provides tools for advanced users to connect their application environments to ITS AI services. Refer to Getting Started for more information on U-M GPT Toolkit.

Why doesn’t U-M Maizey show my MCommunity group?

You must be the owner of an MCommunity Group to create a project in U-M Maizey. If there are no MCommunity Groups available in the drop-down, it means you are not currently an owner of an MCommunity Group. Refer to Creating, Renewing, and Deleting MCommunity Groups for more information. 

If you created a new MCommunity group while logged in to U-M Maizey, you will need to log out and log back in for U-M Maizey to present your new group as an option.

How long does it take for a new MCommunity group to show up in U-M Maizey?

It should only take about 15 minutes for a new MCommunity group to appear in U-M Maizey. Users will need to log out of U-M Maizey and log back in to access new MCommunity groups. 

Can I create a project in U-M Maizey without a Shortcode?

No. While the use of U-M Maizey is free of charge until December 31, 2023, costs will apply after that date. ITS will notify you of your potential costs by mid-December, should you continue to use U-M Maizey beyond December 31, 2023.

What are the costs associated with a Maizey project?

Maizey project costs vary based on the amount of data being indexed and how much use the tool gets (prompts and replies).

See the Pricing page for more details.

U-M Maizey isn’t giving me the right answers, does that mean it doesn’t work?

No. Refer to U-M Maizey In-Depth for guidelines on using U-M Maizey.

I have a terrific idea and would like to provide my U-M Maizey environment to U-M, can I do that?

Yes. Contact us to show off what you’ve built. We’d love to learn from you and see if there is an opportunity to share your work with all of U-M.

Can I use U-M GPT, U-M Maizey and U-M GPT Toolkit with U-M information?

Yes. U-M GPT is approved for use with moderately sensitive data. Refer to the Sensitive Data Guide ITS AI Services page to learn more about what data is appropriate for U-M GPT, U-M Maizey, and U-M GPT Toolkit.

What types of sensitive data can I use with ITS AI Services?

Moderate. Refer to the Sensitive Data Guide ITS AI Services page for more information. 

How does the Canvas Connector work in U-M Maizey?

The connector, given a canvas site ID, uses the Canvas API to read a Canvas site’s: 

  • Modules
  • Pages
  • Announcements
  • Assignments and Files

It is authorized using an access token for an ITS-created user; anything that user can access, the connector can index. For the module to work, an instructor must add that user as a student to their Canvas site (with any higher role, the connector would index content that may be hidden from students).

Refer to Getting Started with U-M Maizey for step-by-step instructions.

The following file types are supported: md, htm, html, docx, xls, xlsx, pptx, pdf, rtf, txt
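To illustrate the pattern described above, a connector along these lines could read those resources through the public Canvas REST API with a bearer token. This is only a sketch, not Maizey's actual implementation: the host name, course ID, and token below are placeholder assumptions, and announcements are fetched through a separate endpoint in the Canvas API.

```python
# Sketch of reading a Canvas site's content via the Canvas REST API.
# Assumptions: CANVAS_BASE, COURSE_ID, and TOKEN are placeholders you would
# replace with your Canvas host, the Canvas site ID, and the ITS user's token.
import urllib.request

CANVAS_BASE = "https://canvas.example.edu"   # placeholder Canvas host
COURSE_ID = 12345                            # placeholder Canvas site ID
TOKEN = "<ITS-user access token>"            # placeholder access token

def endpoint(resource):
    """Build the Canvas API URL for one resource of the course."""
    return f"{CANVAS_BASE}/api/v1/courses/{COURSE_ID}/{resource}"

def fetch(url):
    """GET a Canvas API URL with bearer-token auth; returns raw JSON text.

    Only content visible to the token's user is returned, which is why the
    ITS-created user is enrolled as a student on the site.
    """
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# The course-scoped resources the connector reads. (Announcements use the
# separate /api/v1/announcements endpoint with a course context code.)
for resource in ("modules", "pages", "assignments", "files"):
    url = endpoint(resource)
    # print(fetch(url))  # uncomment with a real host, site ID, and token
```

Because the connector sees only what its user can see, granting that user the Student role scopes indexing to student-visible content.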

Can U-M Maizey index lecture recordings posted in Canvas?

Yes. Instructors who use CAEN Lecture Capture can enable the Lecture Recordings tool in Canvas. U-M Maizey will then index the recording transcripts. Learn more about Using Maizey with Canvas and CAEN Lecture Capture.

Why can’t I request that my data is deleted?

At this time, data is collected in logs to inform billing processes and to provide support, and it cannot be deleted.

Is there a way to receive notifications on updates and changes to the ITS AI Service?

Yes. Join ITS-AI-Services-Notify to receive email updates on ITS AI Services.