Guidance on the Appropriate Use of Generative Artificial Intelligence in Graduate Theses

Overview

In response to the rapidly evolving landscape of generative artificial intelligence (AI)[1] use in academic and educational settings, this preliminary guidance has been produced to address frequently asked questions (FAQs) in the context of graduate thesis work at the University of Toronto. More detailed guidance on this topic, as well as new or updated policies, may be issued in the future, in which case this preliminary guidance will also be updated. The FAQs below outline important considerations for graduate students, supervisors, supervisory committees, and graduate units on the use of generative AI tools (such as ChatGPT) in graduate student research and thesis writing, while upholding the core principles of academic quality, research integrity, and transparency. The FAQs cover requirements both for approval and for documentation of the use of generative AI tools in graduate thesis research and writing, as well as the risks and other considerations involved in such use.

Innovative and creative uses of generative AI may support scholarly activities and help facilitate high quality research, particularly in certain disciplines. Graduate students and faculty supervisors are expected to strive for the highest standards of academic quality and research integrity in all scholarly activities, and therefore the use of generative AI tools in the process of graduate thesis research and writing must always take place with full transparency. This includes transparency between students and their supervisors, who must agree in advance on how any generative AI tools will be used, as well as transparency between graduate students and the audiences of their work, who must be provided with a clear and complete description and citation of any use of generative AI tools in creating the scholarly work.

Students who plan to use generative AI tools in researching or writing their graduate thesis must always seek, and document in writing, unambiguous approval for the planned uses in advance from their supervisor(s) and supervisory committee. Unauthorized use of generative AI tools for scholarly work at the University of Toronto may be considered an offence under the Code of Behaviour on Academic Matters, and research misconduct as defined in the Policy on Ethical Conduct in Research and the Framework to Address Allegations of Research Misconduct. Furthermore, careful attention must be paid in the thesis to appropriately citing and describing any use of generative AI tools that took place in the research or writing process, in line with disciplinary norms. This includes, for example, using generative AI tools in searching, designing, outlining, drafting, writing, or editing the thesis, or in producing audio or visual content for the thesis, and may include other uses of generative AI. Even when engaging in authorized generative AI use, faculty and graduate students must be aware of the risks in using such tools, some of which are discussed below.

Faculties and graduate units may have specific requirements or restrictions regarding the use of generative AI in some or all phases of the graduate research lifecycle. Individual graduate units may therefore issue additional guidance outlining field-specific appropriate uses of generative AI tools in researching and writing a doctoral thesis. This could include, for example, guidance on use in writing text, conducting analytical work, reporting results (e.g., tables or figures), or writing computer code. Graduate units issuing additional guidance should take into account the issues discussed in the FAQs below. Additional relevant guidance and further reading can be found in the FAQs and guidance on syllabi and assignments (PDF) issued by the Office of the Vice-Provost, Innovations in Undergraduate Education, and in the guidance on generative AI in the classroom from the Centre for Teaching Support & Innovation.


[1] In referring to generative AI in this document, we include tools that use predictive technology to produce new text, charts, images, audio, or video. For examples of such uses and more detail, please see the FAQs and guidance on syllabi and assignments issued by the Office of the Vice-Provost, Innovations in Undergraduate Education, and the guidance on generative AI in the classroom from the Centre for Teaching Support & Innovation.

Frequently Asked Questions (FAQ)

Can students use generative AI tools to research or write a doctoral thesis?

Last Updated: May 2, 2024

The School of Graduate Studies (SGS) Doctoral Thesis Guidelines state that students must produce a thesis that demonstrates academic rigour and makes a distinct contribution to the knowledge in the student’s field. The University expects that a thesis submitted to satisfy degree requirements at the doctoral level is the work of the student, carried out under the guidance of the supervisor and committee. The SGS Guidelines specify Key Criteria of the Doctoral Thesis that students must meet, in addition to the Ontario Council of Academic Vice-Presidents’ Doctoral Degree Expectations for Doctoral Students in Ontario. The Key Criteria of the Doctoral Thesis include presenting the results and analysis of original research, and demonstrating that the thesis makes an original contribution to advancing knowledge. These originality requirements may not be met by work produced using generative AI tools, which rely on existing sources to generate content based on probabilistic or other predictive functions, and which may therefore not produce sufficiently original content to meet the criteria.

If a student plans to use generative AI tools in any aspect of researching or writing their thesis, this must be done with the prior approval of the supervisor(s) and supervisory committee. This is consistent with how other decisions about the thesis, including structure and format, are made, as detailed in the SGS guidance. (See also the Guideline for Graduate Student Supervision & Mentorship for more detail on the supervisor’s and committee’s roles in guiding students to produce research of high quality and integrity.) Careful attention must be paid in the thesis to appropriately citing and describing any use of generative AI tools in the research process. It must be clear to the reader which generative AI tools were used, as well as how and why they were used. In the same way that analytical tools and specific analytical approaches are identified and described in the thesis, generative AI tools and interactions with them must be described in equivalent detail.

When supervisors and committees approve student use of generative AI in any aspect of producing the doctoral thesis, it must be clear how the student’s contributions will be distinguished from the AI tool’s, and it must be possible for the student to provide sufficient evidence that they themselves have met the Key Criteria of the Doctoral Thesis and demonstrated the doctoral-level degree expectations. It must be clear to the student what evidence they need to provide to clarify their own contributions and how they made use of any AI tools, and how their work will be assessed by the supervisor and committee at each supervisory committee meeting. (Consult the Guidelines for Departmental Monitoring of the Progress of Doctoral Students and the Guideline for Graduate Student Supervision & Mentorship for more detail on responsibilities in student evaluation and monitoring of doctoral student progress.) Students are responsible for any content generated by AI that they include in their thesis. Note also that at the University of Toronto, the outcome of the final oral examination is based not only on the submitted written thesis, but also on the student’s performance in the oral examination. Students must be able to describe and defend any use of generative AI, as well as the contents of the thesis, during their final oral examination.

Last Updated: May 2, 2024

Learning the practices of disciplinary scholarly writing is a key aspect of graduate education. The use of generative AI could hamper the development of graduate writing skills because writing capacity is highly dependent on practice. Novice scholarly writers need to write frequently, with in-depth feedback from members of their disciplinary community. Using AI to lessen the burdens of writing could undermine the development of these invaluable skills. This diminished writing capacity could have serious consequences for graduate students, who need to be able to use writing as a vital element of their overall research process. The act of drafting and revising scholarly work often entails an essential deepening of the writer’s engagement with their research. Most writers learn about their own thinking through the iterative act of writing; if AI is doing some part of that writing, writers may be missing a crucial opportunity to cement their own scholarly expertise.

Last Updated: July 4, 2023

The same principles that apply to the use of generative AI tools to produce or edit text also apply to the use of these tools to produce or edit figures, images, graphs, sound files, videos, or other audio or visual content. It should be noted, however, that some publication policies permitting the use of AI-generated text in certain contexts apply more stringent criteria to image content, in some cases completely prohibiting such content, for example, see the editorial policy on the use of AI-generated images at Nature.

Last Updated: July 4, 2023

SGS does not regulate master’s-level theses, Major Research Papers, or qualifying or comprehensive exams. However, SGS recommends that graduate units make use of the principles outlined for doctoral-level work in articulating requirements for any master’s-level research-based works. The Code of Behaviour on Academic Matters, Policy on Ethical Conduct in Research, and Framework to Address Allegations of Research Misconduct also apply to master’s research, theses, Major Research Papers, and other research-based works, including qualifying and comprehensive exams. Faculties and graduate units may issue specific guidance on the use of generative AI in such works. The FAQs and guidance on syllabi and assignments issued by the Office of the Vice-Provost, Innovations in Undergraduate Education may be more relevant in the context of some papers or projects, and also apply to the undergraduate context.

Last Updated: July 4, 2023

Different disciplinary norms are likely to emerge around the appropriate use of generative AI in research, even in fields in which the focus of the research is not specifically the development and implementation of AI. If use of generative AI is permitted by a graduate unit in the research process, it must be clear to faculty and students which methods (if any) are acceptable and which (if any) are not. Supervisors should seek clarification from their graduate unit if uncertain about a particular use of generative AI in doctoral research and thesis writing.

Last Updated: July 4, 2023

Privacy concerns have been raised in relation to the data processing undertaken to train generative AI tools, as well as the (mis)information that such tools provide about individuals or groups. Investigations have been initiated in Canada and in the EU regarding the privacy implications of ChatGPT, for example. For graduate student researchers working with certain kinds of data, using third-party generative AI tools to process the data may come with additional privacy and security risks. For example, students working with data from human research participants must not submit to third-party generative AI tools any personal or identifying participant information, nor any information that could be used to reidentify an individual or group of participants, as these data may then become available to others, constituting a major breach of research participant privacy. Similarly, students working with other types of confidential information, such as information disclosed as part of an industry partnership, must not submit these data to third-party generative AI tools, as doing so could breach non-disclosure terms in an agreement. Students wishing to use generative AI tools to process such data must have documented appropriate permissions to do so, for example, explicit approval from a Research Ethics Board or industry partner.

Researchers are advised to seek help assessing the risk prior to engaging in any data or information processing with third-party AI tools. Information Security and Enterprise Architecture has additional guidance on information risk management, including a risk assessment questionnaire. Your Divisional IT team or Library may be able to provide help assessing the risk attached to a particular use case. The Centre for Teaching Support & Innovation also has a checklist designed for teaching tools that may be a helpful starting point in assessing particular tools for use in academic contexts.

Last Updated: July 4, 2023

If a graduate unit permits the use of generative AI in research, the graduate unit should ensure that discipline-specific norms regarding the description of methods of use and appropriate referencing are clear. For example, is it adequate to include the prompts provided to a tool along with excerpts of responses? Should students save or include the full text of their interactions with AI tools in an appendix? Different citation style guides are starting to include specific information on how to cite generative AI tools; for example, see the American Psychological Association Style Blog. Links to major style guides can be found on the University of Toronto Library citation webpages.
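For illustration only, the American Psychological Association Style Blog has suggested a reference format for generative AI tools along the following lines (the tool name, version, and URL below are examples and should reflect what was actually used):

    OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Under that approach, the prompt and relevant portions of the response are described in the text itself, with full transcripts of longer interactions placed in an appendix or supplementary materials; other style guides may differ, so students should confirm the conventions of the guide used in their discipline.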

Last Updated: July 4, 2023

Graduate units and individual supervisors who embrace the use of generative AI tools in research methods may still wish to restrict the use of such tools in other aspects of writing or editing papers or theses. There must be clear guidance for graduate students on what degree of engagement with generative AI in writing is acceptable (if any). If specific tools are authorized or prohibited, these should be identified. Graduate students must seek and document approval from their supervisors and committees for the use of generative AI in writing even if they already have approval to use generative AI tools in their research.

Last Updated: July 4, 2023

Most major journals and scholarly publishers now have policies regarding the use of generative AI in publication. These policies vary widely, and researchers must ensure they are adhering to the specific policies of the pre-print server, journal, or publisher to which they are submitting. For example, some publishers allow use of generative AI in the research process, with appropriate description, references, and supplementary material to show the interaction with the AI tool, but do not allow the inclusion of AI-generated text. Others allow the inclusion of AI-generated text, but not images.

The emerging consensus is that generative AI tools do not meet the criteria for authorship of scholarly works, because these tools cannot take responsibility or be held accountable for submitted work. These issues are discussed in more detail in the statements on authorship from the Committee on Publication Ethics and the World Association of Medical Editors, for example.

Graduate units, supervisors, and students must be familiar with and adhere to the requirements in their field regarding authorship and use of AI in works submitted to pre-print servers or for publication.

Last Updated: July 4, 2023

Generative AI may produce content that is wholly inaccurate or biased. AI tools can reproduce biases that already exist in the content they are trained on, can include outdated information, and can present untrue statements as facts. Students remain responsible for the content of their thesis, no matter what sources are used (see also “Who is responsible for AI-generated content used in research or other scholarly work?”). Generative AI tools have also been shown to reference scholarly works that do not exist, and to generate offensive content. Therefore, AI-generated content may not meet the academic or research integrity standards expected at the University of Toronto. Generative AI tools are also predictive, and may not generate the type of novel content expected of graduate students, nor arrange existing knowledge in a way that reveals the need for the novel contribution made by the research underlying a graduate student thesis. (See also the information on originality in “Can students use generative AI tools to research or write a doctoral thesis?”)

Last Updated: October 12, 2023

The legal landscape with respect to intellectual property and copyright in the context of generative AI is uneven across jurisdictions and rapidly evolving, and the full implications are not yet clear. Researchers, including graduate students, must exercise caution in using generative AI tools, because some uses may infringe on copyright or other intellectual property protections. Similarly, providing data to an AI tool may complicate future attempts to enforce intellectual property protections. Generative AI may also produce content that plagiarizes others’ work, failing to cite sources or make appropriate attribution. Graduate students who include AI-generated content in their own academic writing therefore risk incorporating plagiarized material or someone else’s intellectual property. Since students are responsible for the content of their academic work, including AI-generated content may result in a violation of the Code of Behaviour on Academic Matters or other University of Toronto policies.

Who is responsible for AI-generated content used in research or other scholarly work?

Last Updated: July 4, 2023

Graduate students who make use of AI tools and include the output in their research and written work are ultimately responsible for that content. This applies to work submitted as part of degree requirements, as well as to work submitted for scholarly publishing or posted to pre-print servers. Graduate students and their co-authors must understand the terms and conditions both of any venue to which they submit their work and of any tools they use, as these often hold the user responsible for the content. This means graduate students may find themselves in a position where they face allegations of perpetuating false or misleading information, infringement of intellectual property rights, violation of the conditions of research ethics approval, other research misconduct, infringement of privacy rights, or other issues that carry academic, civil, or criminal penalties.

Relevant Policies and Further Reading