Artificial Intelligence (AI) refers to machines trained to perform tasks that would normally require human thinking. At the moment, these machines can do tasks like recognising images and faces, understanding speech, making decisions, and translating languages. AI is growing quickly and beginning to change the way we live.
Generative AI is a type of AI that can create new things, such as text or images, based on what it has learned. Some examples of Generative AI tools are ChatGPT, DALL-E 2, Copilot, Google Gemini, Midjourney, Claude, and Perplexity. These tools can write text that sounds human and create pictures from descriptions.
Generative AI tools are trained on large amounts of information, and they can imitate different styles and produce artefacts that seem real. But sometimes they make things up. This is called "hallucinating." They can also make mistakes or show bias (unfairness). That's why it's important to check any information they give you by doing your own research, so you can get the full picture. Don't rely on AI to do all of this for you.
You should use your own judgement when deciding whether or not to use AI tools. They don’t always make things easier or better. A simple rule is: if you would ask a friend or teacher for help, then it’s okay to ask AI for help too. But if you wouldn’t ask a person for help, then don’t ask AI either.
The university has created clear guidance on how to use Generative AI (Gen AI), like ChatGPT, and will keep improving this as AI changes. These principles apply to a wide range of AI tools and will guide the academic support we offer.
We’re committed to including AI in our teaching and learning in various ways, such as in lectures and seminars, tests, and study support.
AI tools are powerful, but they have some important limitations:

- They can "hallucinate", confidently presenting made-up information as fact.
- They can make mistakes or show bias (unfairness) in their answers.
- They don't always make a task easier or better, and they can't replace your own judgement.
These limitations show why it’s important to use AI as a helpful tool, but not to rely on it completely. We still need human thinking and judgement.
There are also some ethical (moral) questions to think about as AI continues to develop, such as how to handle bias and unfairness in AI outputs, and how to use these tools honestly in your academic work without relying on them to do your thinking for you.