University of Melbourne AI principles

The emergence of generative AI tools and their ongoing evolution have major implications across the economy and society, including for Australia’s universities. These tools generate opportunities to innovate and enhance core aspects of university activities. Equally, they entail risks for academic and research integrity, intellectual property, and data privacy.

The University of Melbourne is adopting the following ten principles to articulate its position regarding these challenges, and to help guide actions around the adoption and use of AI tools and systems. The principles are intentionally broad: the aim is not to prescribe specific initiatives or actions, but to support decision-making across the University, and to ensure the principles can be adapted to developments in the technology. The intention is to periodically review the principles, given the ongoing evolution of AI tools.

Our AI principles

1. AI literacy

The University of Melbourne will support students, academic and professional staff to become AI literate, and will seek to ensure that students who graduate from the University are proficient in the responsible use of AI tools.

2. Academic and research integrity

The University of Melbourne will build awareness among students and staff of their responsibilities around the use of AI tools in the preparation of work, and will manage integrity-related risks posed by AI tools.

3. Research and innovation

The University of Melbourne will be a leader in research into AI and the implications of new technologies, driving innovation through research and engagement with industry and other external partners.

4. Workforce and operations

The University of Melbourne will harness the potential of AI tools to improve our operations and the way staff work, supporting staff to realise the benefits these tools offer and to manage the risks they pose.

5. Accessibility

The University of Melbourne will provide broad access to relevant AI tools for our students. We will seek to ensure that financial disadvantage, disability, and other factors affecting specific cohorts do not become barriers to students accessing the AI tools necessary for their study.

6. Fairness

The University of Melbourne will seek to ensure that enterprise AI tools and systems we make available for use are evaluated carefully before deployment and do not unfairly discriminate against individuals or groups.

7. Privacy and security

The University of Melbourne will undertake risk assessments and implement approaches to ensure that our use of AI systems and tools does not compromise data security or intellectual property rights, and that privacy is maintained.

8. Positive impact

The University of Melbourne will seek to ensure that the AI systems and tools it uses generate the intended benefits for those who interact with them, and will seek to mitigate foreseeable adverse impacts of these systems and tools.

9. Responsibility and human oversight

The University of Melbourne will maintain human oversight of and responsibility for the AI tools and systems used by staff, offer appropriate avenues for redress in case of errors or misuse, and commit to procedures by which those who are impacted can access an explanation for relevant decisions.

10. Collaboration and ongoing review

The University of Melbourne will consult widely on the use of AI tools and systems, including with groups that are at a high risk of being harmed or treated unfairly. The University will regularly review our use of AI tools to ensure that they are delivering the intended benefits, and regularly review and update our policies and practices to ensure appropriate safeguards are in place.