Like lots of universities, the University of Cumbria is exploring the opportunities and challenges of using generative AI. The guidelines presented here are for general use, and students need to discuss the ethical use of generative AI in their subject area with their tutors. If tutors permit the use of AI for any aspect of work, this needs to be credited and referenced. Work submitted for assessment that has been created by AI without permission or acknowledgement will be classed as malpractice, and the academic regulations will apply.

Also be aware that AI tools often keep copies of materials and might even use those materials for further training. Sensitive information or confidential data should not be shared with such tools. 

The situation with generative AI is constantly evolving and guidelines will therefore continue to develop and adapt accordingly.

  • What do we mean by generative Artificial Intelligence?

    Artificial Intelligence (AI) in our daily lives

    Lots of us have been interacting with AI in various ways for years. For example, all of the following involve AI:

    • when we search for items online or interact with social media, algorithms decide how to present results to us;
    • depending on what we've been searching for or engaging with, we might start to see adverts for similar products pop up on our devices;
    • when we're word processing, autocorrect and predictive tools make suggestions or even change our writing;
    • when we use voice controlled virtual assistants (like Alexa, Siri, and others);
    • when we ask questions of chatbots when shopping/banking/searching online.

    The new kid on the block: generative AI

    These guidelines focus on generative AI technologies that generate text in response to prompts from the user.

    This means technologies such as ChatGPT (powered by GPT-3.5 and GPT-4), Microsoft Bing Chat, and Google Bard, all of which generate text.

    There are other generative technologies which generate images, like DALL-E, Midjourney and Bing’s Image Creator.

    You might also have come across other technologies that generate other media such as video, audio, music, 3D models and more.

  • How does generative AI work? Is it reliable?

    How do these tools work?

    Essentially, these technologies are a type of Large Language Model (LLM) that has been trained on huge data sets. When given a written prompt, they analyse it and combine patterns from what they have learned to generate a new response that reads as a plausible answer to the original prompt.

    At this stage, the technologies aren't really thinking; they are algorithms responding to the language we use in our prompts. They link the language in our prompts to patterns in their training data and then generate a response that addresses the prompt.
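
    To make the idea of "patterns rather than thinking" concrete, here is a deliberately tiny sketch in Python. It is not how any real LLM is built (real models use neural networks trained on billions of documents); it simply counts which word tends to follow which in a small sample text and then "generates" a continuation of a prompt word by repeatedly picking the most common next word. The sample text and word choices are invented purely for illustration.

        # Toy sketch: "predict the next word from patterns seen in training text".
        # Real LLMs use neural networks over vast data sets; this only illustrates
        # the idea of pattern continuation rather than thinking.
        from collections import Counter, defaultdict

        training_text = (
            "generative ai generates text from prompts . "
            "generative ai generates images from prompts . "
            "students check ai generated text against quality sources ."
        )

        # Count how often each word follows each other word.
        next_word_counts = defaultdict(Counter)
        words = training_text.split()
        for current_word, following_word in zip(words, words[1:]):
            next_word_counts[current_word][following_word] += 1

        def generate(prompt_word, length=6):
            """Continue a prompt by repeatedly choosing the most common next word."""
            output = [prompt_word]
            for _ in range(length):
                followers = next_word_counts.get(output[-1])
                if not followers:
                    break
                output.append(followers.most_common(1)[0][0])
            return " ".join(output)

        print(generate("generative"))  # prints: generative ai generates text from prompts .

    The output looks fluent because it follows the statistics of the sample text, not because the program understands the words; the same caution applies, at a vastly larger scale, to generative AI tools.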

    The creators of and enthusiasts for generative AI say that the key to using it well lies in the phrasing of the "prompts" and in engaging with the tools as if we were in a conversation with them. The more specific the prompt, the more helpful the response, and it may take a number of turns (like a conversation) to reach a response that works for us. This process is also known as "prompt engineering".
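
    For those curious about what the "turns" of such a conversation look like behind a chat window, below is a minimal sketch using the OpenAI Python library (version 1.0 or later). The model name, the prompts and the OPENAI_API_KEY environment variable are assumptions for illustration only; typing the same sequence of prompts into a chat interface works in exactly the same way.

        # Minimal sketch of a multi-turn conversation with a chat model via the
        # OpenAI Python library (openai >= 1.0). Model name and API key are
        # illustrative assumptions, not a recommendation.
        from openai import OpenAI

        client = OpenAI()  # reads the OPENAI_API_KEY environment variable

        messages = [{"role": "user", "content": "Explain prompt engineering."}]
        first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        messages.append({"role": "assistant", "content": first.choices[0].message.content})

        # A more specific follow-up turn usually produces a more useful response.
        messages.append({
            "role": "user",
            "content": "Now explain it in three bullet points for a first-year "
                       "student, with one example prompt about summarising lecture notes.",
        })
        second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        print(second.choices[0].message.content)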

    Where is the information coming from and is it reliable? 

    If we're using ChatGPT with GPT-3.5 (the free version of OpenAI’s language model), it won’t necessarily tell us where the information is coming from unless we specifically ask it for references. Even when asked for references, it sometimes makes these and other bits of information up, producing what have become known as "hallucinations". Microsoft’s Bing (powered by GPT-4) links the information it generates to the sources it presents. However, all of these technologies come with warnings that information should be cross-checked rather than taken at face value. Information and references provided can be wrong or made up, with different sources reporting accuracy rates of between 40% and 60%.

    Generative AI tools should be used with caution, as there are currently serious limitations to some of the content they generate, such as drawing on biased or inaccurate sources and producing hallucinations. We need quality academic sources for academic assignments, and we have access to these through OneSearch. If you need help navigating OneSearch and finding quality sources, check out our Finding and evaluating information resources and/or contact skills@cumbria.ac.uk.

  • Is generative AI ethical?

    Some ethical questions about generative AI

    In addition to considering the validity of the content generated by AI, there are also ethical considerations about ways in which generative AI is trained. 

    Some people are saying generative AI needs to be regulated due to fears of how AI might develop in the future. Others are more concerned with what they see as problems within the current generative AI models.

    Bias: the dominant generative AI models tend to come from the global North, and are therefore trained on, and generate, information that has a global North bias. This can mean that both the training data and the generated responses include social, cultural, religious, gender and racial biases, which can limit research outcomes, promote the spread of disinformation, and perpetuate discriminatory practices (e.g. consider the concerns surrounding facial recognition technologies).

    Copyright and plagiarism: when we create work within the academic community of a university and draw on other people's research and ideas, we acknowledge this through citing and referencing. Some generative AI companies are not sharing information about the data being used to train their models, and there are ongoing debates about the originality of generative AI outputs. Writers, artists, musicians, academics, researchers, and others are challenging the use of their work as part of training data sets without permission or credit. This raises questions and concerns regarding copyright. The UK government is currently working with stakeholders to develop a code of practice on copyright and AI.

    Data protection: we need to consider data protection if we are interacting with generative AI. Sensitive information or confidential data should not be shared with such tools. AI tools often keep copies of materials and might even use those materials for further training.

    Unethical work practices: concerns are also being raised about the exploitation of workers employed to tag information in the training data and make it "safe" for others to view. Not only are such workers often low paid, but they are also at risk of psychological harm from the information they are viewing.

    Environmental impact: for example, “The MIT Technology Review reported that training just one AI model can emit more than 626,000 pounds of carbon dioxide equivalent – which is nearly five times the lifetime emissions of an average American car” (Marr, 2023).

  • Academic regulations & Referencing generative AI

    Health warning: academic regulations and AI

    There may be legitimate uses of generative AI in your subject area that you can discuss with your tutors. Using AI to generate work that is submitted for assessment without any acknowledgement, and against any directives from tutors, is a form of malpractice. Where it is suspected that AI has been used when it shouldn’t have been, or has not been credited through appropriate referencing, the standard University academic malpractice procedures will apply. See the malpractice and academic regulations pages for more details.

    Citing and referencing AI

    Guidance on how to cite and reference legitimate uses of generative AI is now available via Cite them Right online.

    At present, Cite them Right treats generative AI outputs from tools such as ChatGPT as a form of "personal communication". This means that citing and referencing would appear as below:

    Reference list

    OpenAI ChatGPT (2023) ChatGPT response to Sandie Donnelly, 10 March.

    In-text citation

    The initial prompt asking ChatGPT 'what is the future self theory and how does this link to procrastination' (OpenAI ChatGPT, 2023) yielded a descriptive paragraph with no citations.

  • Keep a human in the loop: criticality & generative AI

    The speed at which ChatGPT generates responses can feel impressive. However, whatever responses may be generated, it’s down to us to use our critical thinking and analytical skills to evaluate the relevance, validity, and appropriateness of all information, including any content generated through AI.

    We need to be critically reflective researchers and independent thinkers who can verify information from valid sources.

    We need to be the AI literate human in the loop. 

    Being AI literate

    Just as tutors and librarians encourage us to apply critical thinking to evaluate the sources we find and use in our assignments, we need to:

    • Use critical judgment about when to use and when not to use generative AI 
    • Regard AI with a healthy scepticism by questioning its validity, relevance, and application 
    • Recognise the potential of AI but also question those who might have a vested interest in hyping AI 
    • Be aware of the ethical issues and concerns regarding the training and use of generative AI technologies
    • Become more familiar with AI so that we can better understand it and are better able to detect AI-generated content that might be harmful

    Adapted from and inspired by Marc Watkins

    Being AI literate requires a high level of criticality 

    Academic assignments and professional practice require critical thinking, reading, writing and practice skills. Whilst AI tools might assist us in some areas of our lives, they can't write critically engaged assignments for us as they don't do critical thinking.

    We may well find we are working with AI in useful ways in our workplaces, but our human critical thinking skills are highly regarded by employers.

  • Academic Integrity and generative AI

    Sometimes students struggle for various reasons and start to believe that the only way out is to turn to something like an essay mill or a generative AI tool.

    If you are finding assignments challenging, feeling overwhelmed by deadlines, or something is happening for you that might make you vulnerable to using AI for the wrong reasons, seek support. You're not on your own. Talk to your tutors, your friends and family, or contact student support via the Student Enquiry Point. We're all here to support you and we want you to graduate with a sense of achievement, pride, and with your academic integrity intact.

    AI is clearly transforming lives in all sorts of ways. But real transformative power lies in you graduating as a critical reflective practitioner in your own right, with the power, values, skills and capabilities to transform not only your own life, but also those of others in our human communities. AI is no replacement for this. 

  • Ways of interacting with generative AI for study purposes

    Before interacting with generative AI for academic study, discuss with your tutors whether using generative AI is permitted and/or recommended in your subject area. 

    Depending on what is agreed with your tutors regarding generative AI, below are some activities that some people have found useful for academic study.

    Beneath each activity is an example of a basic prompt. As stated elsewhere, the more specific the prompts, the better the outputs. It's also likely that you will use a series of prompts, more like a conversation than a single question or suggestion followed by a single response.

    Also, remember data protection and don't share sensitive or confidential information with the tools. 

    Summarising essays and longer pieces to read quickly

    Please note that publishers' terms may not permit the uploading of content (such as journal articles) into third-party applications such as large language models and generative AI tools.

    Prompt example: Summarise [insert title of piece being inserted] in 300 words

    Generate ideas for stories and creative pieces: 

    Prompt example: Provide interesting prompts for writing

    Managing and writing applications: 

    Prompt example: How to write a good CV? Or, How to write a formal letter?

    Summarising class notes: 

    Prompt example: Summarise the main points of [session notes] in bullets

    Coming up with titles and abstracts for projects and research proposals: 

    Prompt example: Provide me with a formal research title for [Enter research paper] in [enter word limit]

    Practising tests and exams: 

    Prompt example: Generate 4 [enter type of question] on [enter topic]

    Organising and managing time: 

    Prompt example: Give me a study schedule for [enter time period] for [enter subject name] or How to manage time before exams?

    Use a spelling and grammar checker: 

    Prompt example: Run a grammar check on [completed work] or suggest some changes.
