Guidance to Staff on the use of Artificial Intelligence

FAQs: Staff use of AI


These guidelines have been developed to support staff undertaking teaching, research or administrative tasks on behalf of the University of Leeds [developed cognisant of ICO guidance on the use of AI]. This guidance extends to external lecturers, assessors and PGRs who have a temporary or part-time staff role.

Artificial intelligence can be used effectively and legitimately across all types of teaching, research and administration. The ethical use of such tools can provide significant benefits and will become more commonplace in the next few years. However, there are clear risks and security concerns in doing so, and all staff should familiarise themselves with them.

These guidelines are intended to provide colleagues with an overview of the approach adopted at the University of Leeds. More detailed guidance for specific areas of activity (e.g., research and publication, teaching and learning, PGR supervision) is in development and will be made available as soon as possible.

If you have questions relating to the use of AI that are not covered in this document, please email us at AI@leeds.ac.uk.

Guidance

There are a number of basic requirements that need to be considered by all staff engaged in teaching, research, and administrative tasks:

A. Your Responsibilities. It is your responsibility to ensure that your use of AI is done ethically and within the regulations.

B. Your Responsibilities to Others. If you have supervisory responsibilities (either academic or managerial), you should ensure those working with you are aware of the University's guidance on the use of AI.

C. Academic Integrity. All work, either presented for assessment at the University of Leeds or representing the results of research, must comply with University of Leeds regulations for Academic Integrity [Attention is drawn to “Clause 8.7 Grievous Academic Misconduct. Students shall not intentionally or otherwise submit work in whole or in part that is stolen, obtained by deceit or fraud, obtained from a commercial or non-commercial source, including online, requested or commissioned from a third party, or where the content has been manipulated to avoid detection.” This category may include the use of artificial intelligence to generate work. For more details, see the Generative AI guidance for taught students. Generative AI guidance for PGRs is currently being produced].

D. Transparency. All material that is wholly or substantially generated using an AI tool should be declared clearly in the document in which it occurs, whether the document is for internal or external use.

E. Teaching and Learning. Staff engaged in the development or use of AI in any type of learning activity should ensure that the AI-based learning activity is ethical and conforms to University of Leeds regulations.

F. Research. Staff engaged in the development or use of AI in any type of research should ensure that the AI usage is ethical and conforms to the University of Leeds regulations, including where appropriate and necessary by obtaining authorisation from a faculty research ethics committee.

G. Communication. The presentation of work (e.g., for conferences, publications, presentations) that has used AI should be carried out in accordance with the guidelines and regulations on the use of AI supplied by the research funder, conference organiser or publisher, as well as those of the University of Leeds.

H. Values. The use of AI tools should always remain consistent with the University of Leeds values of collaboration, compassion, inclusivity, and integrity.

Data Rules

Most artificial intelligence tools produce results in response to prompts and/or other data provided by the human researcher. Certain types of information must never be provided to an AI tool hosted outside the University [Some of the data rules may be waived if the research is being carried out using AI tools hosted entirely on campus. Such in-house research should be required to have a clear Data Security and AI Use plan to ensure the security and govern the use of input data and output products.], including:

  1. Passwords and usernames.
  2. Personally identifiable information (PII) or other sensitive or confidential material, irrespective of any perceived lawful basis, including explicit consent. PII is any information that can be used to confirm or corroborate a person’s identity.
  3. Any data that has not been properly anonymised to ensure it is non-identifiable.
  4. Any data that is not fully consistent with the University’s policies on Data Protection, Data Processing, GDPR/DPA2018, Academic Integrity, Attribution and Ethics.
  5. Any data related to University Intellectual Property.
  6. Any data that is protected by Copyright, unless it aligns with principles like fair use, educational exceptions, or if explicit permission has been granted.
  7. Any prompts or data whose responses might result in reputational damage to the University of Leeds.
  8. Any non-PII data from third parties where the individual has not explicitly consented for their data to be used with AI, with the exception of data that is clearly already in the public domain.

Personal Security

Users of AI should be aware that the data and prompts they provide are visible to, and can be recorded by, the commercial provider of the AI tool and, depending on the provider's security, by other parties.

Truth, Trust and Validation

Most AI models are created to produce a plausible output based on their prompts and training. Their outputs are designed to appear convincing even where there is no factual basis for them. Consequently, ‘facts’ provided by these tools may appear trustworthy when they are not. As a result, ALL outputs from all AI tools should be independently verified for truthfulness.

AI and Academic Misconduct

Our policies and detection methods to counter plagiarism are technology-neutral and cover all forms of academic misconduct, including the misuse of AI platforms. Penalties for academic misconduct in any form by staff or students, including the misuse of AI technologies, can be severe. For staff, this includes formal disciplinary proceedings and possible impact on professional standing. For students, it can include the full loss of marks on an assignment and/or exclusion from the University, affecting the qualifications a student might hope to achieve.

AI Detection Tools

If you suspect that material presented to you, for any purpose, has been wholly or partially generated using AI, you should not use an online AI detection tool. This is because: (i) these tools are currently insufficiently accurate for the University to use them as evidence; (ii) submitting the work to such tools may represent a data security breach; and (iii) the document being checked may become available as training data for other AI tools. The University will keep this approach under continual review to account for new developments in technology.

If You Suspect a Breach of University Regulations

The University of Leeds has formal processes for considering possible breaches of academic integrity. These should be used where misuse of AI is suspected in material presented to you for any purpose.