Information Security Guidelines for the Use of Generative Artificial Intelligence

Artificial Intelligence (AI) is an emerging field encompassing many different technologies. This guideline focuses specifically on the use of generative AI tools, such as Microsoft Copilot and ChatGPT. These applications produce human-like responses to user-provided prompts, drawing on the historical data they have been trained on, with the aim of generating entirely new content.

The purpose of this document is to provide guidelines for the responsible use of generative AI tools by University staff and faculty in a way that protects the confidentiality, integrity, and availability of University information assets and complies with applicable policies, laws, and regulations.


While artificial intelligence has the potential to enhance effectiveness and improve productivity, the use of these tools must be balanced against the information security risks involved and the need for responsible data management. This guideline is intended to assist staff and faculty in using generative AI technologies as securely as possible.

See the full Information Security Guidelines for the Use of Generative Artificial Intelligence (PDF), published on June 19th, 2024.
