By Avantika Tomar. The author is Partner and Learning & Development Head, Education, at EY-Parthenon India.

Artificial intelligence (AI) is rapidly reshaping higher education, influencing how students learn, how faculty teach, and how institutions operate. As AI adoption expands across India and globally, universities are grappling with how to embed responsible AI into everyday academic practice.

While AI offers opportunities for personalised and efficient learning, it also raises concerns around academic integrity, AI-enabled learning risks, and data protection. For universities, the challenge is not simply revising assessment rules, but creating institutional models of AI governance that can support inclusive adoption at scale.

Key risks of AI use

The growing presence of AI tools has begun to alter how knowledge is produced and evaluated. Recent surveys, including the Digital Education Council’s Global AI Faculty Survey 2025 and FICCI-EY-Parthenon’s AI Adoption in Higher Education Survey 2025, reflect concerns not just about misuse, but also about students losing essential critical-thinking skills in AI-mediated learning environments. These risks underline the need for clearer institutional AI policies and stronger governance systems in higher education.

One area of concern is the use of AI classroom tools for grading and feedback. Automated scoring tools, especially those trained on standardised writing patterns, can disadvantage students whose linguistic backgrounds or writing styles differ from dominant norms, making student equity in AI-based grading and evaluation a critical institutional responsibility. At the same time, uneven faculty familiarity with AI tools leads to inconsistent classroom implementation, unclear campus guidelines, and varied interpretations of ethical AI use, leaving students confused about what is permitted.

Blanket prohibitions on AI, however, are neither realistic nor helpful in preparing learners for AI-enabled professional environments. As a result, universities are rethinking the design of assessments themselves, restructuring them around real-time, process-driven tasks rather than polished outputs that AI can easily generate.

Academic integrity

Institutions are adopting three broad categories of assessments: AI-free, AI-assisted, and AI-integrated.
– AI-free assessments rely on formats such as supervised exams, in-class writing tasks, and oral examinations to test unaided student thinking.
– AI-assisted assessments allow controlled use of tools, requiring students to document how AI contributed to their work.
– AI-integrated assessments embed AI tools directly into learning activities, treating them as partners for ideation and feedback, while evaluating students’ reasoning, judgement, and conceptual mastery.

The NYU Stern School of Business offers an example: faculty piloted an AI-enabled oral exam for students in a product management course. An AI agent posed questions, analysed responses, and adjusted follow-ups to probe deeper understanding. The recorded interactions were later evaluated with AI-supported grading to improve consistency. The pilot illustrates how assessment frameworks can evolve responsibly without lowering academic standards.

Rather than attempting to restrict AI use in written work, faculty are redesigning assessments to work with AI realities, favouring formats such as oral exams that test real-time understanding over polished outputs. This shift signals a broader move towards responsible AI design within universities.

Governance frameworks

Effective AI governance in universities extends far beyond classrooms and assessments. Because AI influences teaching, research, administration, admissions, and student-facing services, universities must adopt governance models that cut across functions.
– Faculty uphold academic standards and develop discipline-specific guidelines.
– Students, as primary AI users, need consistency, transparency, and protection from opaque algorithmic decisions.
– IT and compliance teams ensure cybersecurity, system integrity, and strong student data privacy protocols.
– Institutional leadership oversees strategy, accountability, and regulatory compliance.

These responsibilities underscore why responsible AI governance in universities must be collaborative and multilayered. Many institutions are forming central AI councils to develop principles and review emerging risks. Others are adopting federated models, setting shared guardrails while allowing flexibility across disciplines. Whatever the structure, governance must include mechanisms for monitoring, revision, and adaptation as the technology evolves.

Data protection

In India, responsible AI adoption must comply with the Digital Personal Data Protection Act, 2023 (DPDP Act), whose implementing rules were notified in November 2025. Under the Act, educational institutions are categorised as data fiduciaries, which imposes specific obligations around lawful data collection, processing, retention, and sharing. All processing must be purpose-specific and supported by informed consent.

These requirements influence how universities can deploy AI systems. Legacy datasets – originally created for administrative or instructional purposes – cannot automatically be repurposed for new analytics or AI training. If such secondary uses were not disclosed earlier, universities may need to obtain fresh consent from data principals. Cross-border data transfers remain possible, but may be restricted or subject to localisation requirements. With compliance monitored by the Data Protection Board, institutions must treat data governance as a central pillar of their AI strategy.

Because AI systems rely on continuous data flows, alignment with the DPDP Act will shape both the pace and the scope of adoption. Strong data protection practices ensure that innovation does not come at the cost of privacy, trust, or institutional credibility.

Conclusion

Responsible AI integration in higher education requires thoughtful design, inclusive governance, and robust privacy safeguards. Institutions that recognise AI as a long-term element of academic life – and respond with clear assessment redesign, strong oversight, and compliance mechanisms – will be best placed to preserve academic rigour and public trust. Embedding a responsible AI framework across teaching, evaluation, and data management is essential for building universities prepared for an AI-enabled future.