FAQs: Staff use of AI

What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is “The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this.” (OED, 3rd edn., Dec 2021). This includes a range of techniques commonly used in research to find patterns in complex data, including machine learning and generative AI.

What is Generative Artificial Intelligence (GAI)?

Generative artificial intelligence (generative AI, or GAI) is a type of artificial intelligence (AI) system capable of generating text, images, or other media in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data with similar characteristics.

What is ethical research?

Ethical research is characterised by a set of principles and practices that prioritise the well-being and rights of all individuals involved in the research process, as well as the broader society. These principles help to ensure that research is conducted in a responsible, transparent, and respectful manner. Here are some key characteristics of ethical research:

  • Respect for Participants' Autonomy: Ethical research requires obtaining informed consent from participants.
  • Benefit and Risk: Researchers must aim to maximise benefits and minimise potential harms to participants and other stakeholders.
  • Justice: Ethical research involves treating all participants fairly and ensuring that the benefits and burdens of research are distributed equitably among different groups.
  • Privacy and Confidentiality: Researchers must protect the privacy of participants by ensuring that their personal information and responses are kept confidential.
  • Transparency and Honesty: Ethical research requires transparent reporting of methods, procedures, and findings, without falsification and with disclosure of any conflicts of interest.
  • Inclusion and Diversity: Ethical research strives to include diverse populations, ensuring that the results can be generalised to a broader range of individuals.
  • Social Responsibility: Researchers have a responsibility to consider the broader implications of their work on society, the environment, and various stakeholders.
  • Openness to Feedback and Correction: Ethical researchers are open to feedback and correction from peers and the wider community.
  • Continual Monitoring and Assessment: Ethical research involves ongoing monitoring of the research process to ensure that ethical standards are maintained throughout the study.

In summary, ethical research is characterised by a commitment to the well-being and rights of participants, a transparent and responsible approach, and adherence to established ethical principles and guidelines.

For fuller information, please see the University's Research Ethics Policy.

How has the University of Leeds developed its approach to the use of AI?

The University of Leeds established two working groups on AI in early to mid-2023: the Working Group on AI Technology in Research (WAITR) and the Working Group on AI in Student Education (WAISE). Together they comprise 40 academics and other University staff with a range of expertise and with links to external organisations considering the implications of AI.

Together, the working groups have developed Interim Guidance on AI for Staff, as well as forthcoming guidance for students (WAISE) and a document outlining the University of Leeds Principles on the Use of AI in Research (WAITR).

The guidance that has been published so far is consistent with and is informed by the Russell Group Principles on the Use of Generative AI Tools in Education.

Do the guidance and regulations of the University of Leeds apply to all AI or just GAI?

All guidance and regulations of the University of Leeds apply to AI in its widest sense, including GAI as a subset.

Is there guidance available on the use of AI by University staff members?

Yes, please see the Interim Guidance to Staff on the use of AI.

Is there guidance available for the use of Generative AI by University staff members?

Yes. The Interim Guidance to Staff on the use of AI covers generative AI as well as AI more broadly.

Does this guidance apply to my work?

Yes, if you use or plan to use any type of AI in teaching, research or administrative tasks on behalf of the University of Leeds.

To whom does the guidance apply?

All University staff, whether full-time or part-time, as well as PGRs who have temporary teaching roles.

Does the guidance apply to external/visiting lecturers and assessors?

Yes, it applies to all visiting academics, guest lecturers and external examiners.

Is there guidance available on the use of AI by students?

Yes, please go to the University's webpages on Generative AI, and ensure your students know where to find the information.

Is there a source of further information on the use of AI?

Yes, please email your questions to AI@leeds.ac.uk.

Who has responsibility for my AI applications?

It is your responsibility to ensure that all your activities using AI are carried out ethically and within the regulations.

Who has the responsibility for the AI applications carried out by students (PGRs, PGTs and UGs)?

Students have full responsibility for their use of AI in their learning and research work. However, supervisors have responsibility for ensuring that their students are fully informed of University policies and regulations pertaining to the ethical and responsible use of AI in their learning and research work. Specific guidance for students on their responsibilities is forthcoming.

Which documents are covered by University guidance and regulation on AI?

All work, either presented for assessment at the University of Leeds or representing the results of research, must comply with University of Leeds policies including the principles and regulations on Academic Integrity.

Should documents that have used some form of AI in their production be declared?

Yes, all material that is wholly or partially generated, modified or proof-read using an AI tool should be declared clearly in the document in which it occurs, whether the document is for internal or external use.

What is a prompt text?

A prompt text is the instruction (or instructions) provided by the user to the AI tool asking it to do something. For example, “Write a page of text about snowboarding in the style of Goethe”, or “Does the iterated function f_c(z) = z^2 + c diverge to infinity if z is complex and the initial value of z is 0?”
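
For illustration, here is a minimal sketch of how a prompt might be submitted to a generative AI service programmatically. The endpoint URL, API key and response format are hypothetical placeholders, not a reference to any particular provider or University-approved tool:

    import requests  # third-party HTTP library

    # Hypothetical endpoint and key -- placeholders only, not a real service.
    API_URL = "https://ai-provider.example/v1/generate"
    API_KEY = "replace-with-your-key"

    # The prompt text: the instruction the user sends to the AI tool.
    prompt = "Write a page of text about snowboarding in the style of Goethe."

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())  # the AI tool's generated output

Note that whether a prompt is typed into a chat window or sent programmatically as above, it counts as input data and is subject to the data restrictions described below.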

Should documents contain the prompt texts that were used?

They may, but it is not compulsory. The prompt texts represent a critical part of the methodology involved in creating the response, so the reader will better understand how the response was arrived at if the prompt texts are available. This is similar to a researcher explaining their methodology clearly.

What form should the declaration take?

At the moment there is no standard method. However, some declarations have included the user's name, the date of AI use, the name and version of the AI software, and whether the output has been validated.
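
By way of illustration only (this is not a University template), a declaration along those lines might read:

    AI use declaration: Sections [X] of this document were drafted with the
    assistance of [AI tool name and version] by [user name] on [date]. The
    output has been checked by the author for factual accuracy and bias.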

Are there any rules governing the use of AI in teaching, learning and assessment?

Yes, staff engaged in the development or use of AI in any type of learning activity should ensure that the AI-based learning activity is ethical and conforms to University of Leeds regulations. Guidance can be found on the University's webpages on Generative AI.

Are there any rules or procedures governing the use of AI in research?

Yes, staff engaged in the development or use of AI in any type of research should ensure that the AI usage is ethical and conforms to the University of Leeds regulations, by obtaining authorisation from a faculty research ethics committee (FREC). See the Interim Guidance to Staff on the use of AI.

To what extent do regulations imposed by bodies outside the University need to be taken account of?

The presentation of work (e.g. for conferences, publications, presentations) that has used AI should be carried out in accordance with the guidelines and regulations on the use of AI supplied by the research funder, conference organiser or publisher, as well as those of the University of Leeds. In the case of any incompatibility between the internal and external regulations, please contact AI@leeds.ac.uk.

To what extent do regulations imposed by research funders need to be taken account of?

Research funders are developing robust procedures to ensure that research funded by them is carried out ethically and responsibly. Some of these requirements take the form of information to be provided when a grant is applied for; others are regulations that apply once the grant is awarded. All funders' regulations must be followed, as well as the policies and regulations of the University of Leeds. In the case of any incompatibility between the internal and external regulations, please contact AI@leeds.ac.uk.


Is the data that I put into AI software restricted?

Yes, certain types of data must never be put into any AI software. These include:

  • passwords and usernames
  • personally identifiable information (PII) or other sensitive or confidential material
  • any data that is not fully consistent with the University's policies on Data Protection, Data Processing, GDPR/Data Protection Act 2018, Academic Integrity, Attribution and Ethics
  • any data related to University Intellectual Property
  • any data that is protected by Copyright, unless explicit permission for its use with AI tools has been obtained
  • any data whose responses might result in reputational damage to the University of Leeds
  • any non-PII data from third parties where the individual has not explicitly consented for their data to be used with AI, with the exception of data that is clearly already in the public domain
  • any non-PII data from third parties where the explicit use of the data with AI has not been authorised by a University Faculty Research Ethics Committee application, irrespective of whether the data is in the public domain


What is personally identifiable information (PII)?

PII is any information that can be used to confirm or corroborate a person’s identity.

Am I restricted in the prompts I can provide to AI software?

Yes, a prompt counts as input data. Consequently, a prompt must never contain any of the data types listed in the interim guidance document.
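
As an illustration only, a simple pre-submission check might scan a prompt for obviously identifying patterns before it is sent. This is a minimal sketch: the patterns are illustrative, not exhaustive, and such a check is no substitute for following the data rules above:

    import re

    # Illustrative patterns only -- pattern matching alone cannot catch all
    # restricted data; the University data rules must still be applied in full.
    PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "UK phone number": re.compile(r"(?:\+44|0)\d{9,10}"),
    }

    def flag_possible_pii(prompt: str) -> list[str]:
        """Return the names of any illustrative PII patterns found in the prompt."""
        return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

    prompt = "Summarise the feedback sent by jane.doe@example.com last week."
    found = flag_possible_pii(prompt)
    if found:
        print("Do not submit this prompt; possible PII detected:", found)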

Who is responsible for checking the outputs from AI tools?

Primarily, the responsibility is that of the person who has used the AI tool to generate the response. However, in the case of responses that already exist (say, from a previous study), it is the responsibility of the person who wishes to use or publicise the response, whether internally or externally. Any response re-used in this way must be cited back to its origin.

Am I restricted in my use of the outputs from an AI system?

Yes, in several ways:

  1. You should not publicise AI outputs that have not been checked for biases in the AI process or arising from the input data or prompts.
  2. You should not publicise AI outputs that are sensitive, especially with regard to the formally protected characteristics, without internal review by the relevant FREC.
  3. You should not publicise AI outputs without clearly describing their AI origin, together with the relevant data management, bias and validation methodologies.

Who is responsible for the factual accuracy and/or bias in responses?

The person using the AI tool is responsible. All input data, prompts and outputs should be checked (validated) for factual accuracy and bias. It is not always possible to remove all traces of bias from input/output data, but any document (internal or external) reporting the use of AI tools will be expected to consider at reasonable length the effects of factual error and bias and what has been done to minimise them.

What are protected characteristics?

Protected characteristics, as defined in the Equality Act 2010, are:

  • age
  • disability
  • gender reassignment
  • marriage and civil partnership
  • pregnancy and maternity
  • race
  • religion or belief
  • sex
  • sexual orientation

It is against the law to discriminate against someone because of a protected characteristic.

Can I use AI-detection tools?

No, not under any circumstances. This is because (i) submission of the work to such tools may represent a data security breach, (ii) the document being checked becomes available as training data for other AI tools, and (iii) these tools are significantly inaccurate.

What should I do if I suspect a student or PGR has submitted work that is generated by AI without making the appropriate declaration?

Gather the evidence, set out in writing your reasons for suspecting the student or PGR, and submit both to your appropriate Academic Integrity Lead.

When using AI, will my prompts and input data be secure?

No. Data, prompts and outputs will all be recorded by the AI provider and considered to be their intellectual property. The material may be used to improve the AI tool, may be passed to national crime and security services, and may be sold to third parties mining data.

Are there any circumstances where the data rules can be relaxed?

Only exceptionally, and under extremely controlled circumstances, where all the AI processes take place on University of Leeds-hosted systems and the research methodology has been authorised by an appropriate FREC application.

Can the results of AI prompts be trusted?

Most AI models are designed to provide a likely output based on their prompts and training data. Their outputs are designed to appear convincing even when there is no factual basis for them. Consequently, ‘facts’ provided by these tools may appear to be trustworthy, but that appearance can be false. Both input data and prompts may introduce bias into the output. As a result, ALL outputs from all AI tools must be independently verified for truthfulness.