
4.3. Case Study – Submissions

Case Study 1: AI Implementation in Design Education: A Case Study from the School of Architecture, Planning and Landscape at the University of Calgary

Authors: Sandra Abegglen & Fabian Neuhaus

Introduction and Context of AI Use

Artificial intelligence (AI), and particularly generative AI (GenAI), is transforming educational practices across disciplines, presenting both significant opportunities and urgent challenges. Within the context of the School of Architecture, Planning and Landscape (SAPL) at the University of Calgary, a collaborative inquiry into the integration of GenAI tools was undertaken via the project “GenAI and Design Curriculum: Enhancing Learning for Social Impact.” This initiative, supported by the 2024–25 Teaching Academy Educational Leadership Grant, set out to interrogate the capacities and limitations of an AI-driven design curriculum at both undergraduate and graduate levels. The primary focus was on how GenAI can augment student engagement, learning outcomes, and design approaches in response to complex, sensitive issues, most notably homelessness, by promoting critical thinking and empathy in design processes. Conceived as both a research project and a practical intervention, this initiative aimed to model an inclusive, ethical, and critically engaged approach to digital design pedagogy.

Description of AI Technology

Central to the project was the purposeful deployment of a range of AI platforms, each serving distinct functions throughout the research and teaching lifecycle. GenAI tools were employed, for example, to generate thought-provoking prompts (“provocations”) for a symposium that convened international experts on education, design, and homelessness. These AI-generated provocations acted as catalysts for discussion, bringing unforeseen perspectives into dialogue and challenging participants to reflect on human-AI collaboration in tackling social inequities. In addition to guiding live interactions, AI tools were applied to synthesize complex project findings into an open-access publication, facilitating broader dissemination of knowledge and fostering continued community engagement.
Graduate Assistant Researchers played a pivotal role in this technological landscape, critically comparing and analyzing different AI tools for their usability, functionality, and inherent biases, ensuring that the team’s engagement with technology was both critical and reflective. This integrated, reflective use of AI was fundamental to the project’s ethos: leveraging digital tools not only to advance technical proficiency, but also to foster ethical reasoning, critical engagement, and inclusive practice among students and educators (Crompton & Burke, 2023).

Implementation Process

The project’s implementation began with strategic partnerships, including collaboration with an international expert in spatial planning and community development, thereby ensuring that a global and context-sensitive perspective informed all phases. The composition of the project team itself was deliberately transdisciplinary, integrating expertise in technology-enhanced teaching, urban design, and social equity—as well as including two Graduate Assistant Researchers whose involvement ensured that the student perspective was woven into both planning and delivery.
A key milestone was the organization of a symposium featuring an international panel, where expert voices explored AI-generated provocations at the intersection of education, design, and homelessness. This event fostered in-depth dialogue about the promise and complexity of AI-powered design, with a particular focus on the needs and experiences of marginalized urban populations—and the potential of addressing social inequities in the classroom. The critical insights and best practices that emerged from the symposium, alongside the project’s findings, were distilled into an open-access guide designed to support ongoing inquiry and shared learning in the field.

Ethical and Inclusive Considerations

Ethics and inclusivity were foundational to both the project’s philosophy and practical operation. Ethical engagement with AI—addressing concerns such as bias, transparency, and human agency—was foregrounded in both the symposium discussions and the selection of technologies. The international and multidisciplinary participant base brought diverse perspectives on the implications of AI in education and design, enriching ethical discourse and modeling responsible innovation (Porayska-Pomsta et al., 2024).
Active student involvement in all phases of the project ensured that ethical deliberation was not simply theoretical but grounded in meaningful lived experience. Simultaneously, the principle of inclusivity infused all dimensions of practice: stakeholder engagement spanned disciplines, cultural contexts, and identities, and all outputs—including the symposium and open-access publication—were designed for maximum accessibility. Special attention was devoted to removing barriers to participation, foregrounding equity, diversity, and inclusion (EDI) principles, and ensuring that both the processes and outcomes of the project were welcoming and representative.

Outcomes and Educational Impact

The integration of GenAI throughout the project yielded valuable insights into the affordances, risks, and boundaries of AI in socially engaged design education. The AI tools demonstrated significant potential for simulating urban environments, visualizing spatial interventions, and catalyzing new ways of thinking about sensitive social problems such as homelessness (Mehan & Mostafavi, 2024). Yet, the project also revealed crucial limitations: while AI can augment the creative and analytical dimensions of design, questions remain about its capacity to deepen students’ ethical sensitivity and contextual awareness. Thus, while the integration of AI showed promise, it also highlighted the necessity for ongoing critical engagement, reflexivity, and dialogue among all participants.
Direct feedback from symposium participants and critical reflections from Graduate Assistant Researchers substantiated the need for continued inquiry into the ethical and pedagogical significance of AI. The project confirmed that technology, if thoughtfully integrated, can enrich pedagogical practice and foster social responsiveness, but its impact must be continuously assessed and re-contextualized in response to the evolving ethical and practical realities (Rahm & Rahm-Skågeby, 2023).

Challenges and Limitations of AI Implementation

Despite its achievements, the project encountered several substantive challenges. Access to advanced GenAI tools was inconsistent: many platforms required subscriptions or institutional licenses, creating potential inequities in participation, particularly for those less familiar with AI technology or without institutional support for it. Furthermore, the team’s critical evaluation identified algorithmic biases embedded within many AI platforms, which may inadvertently reinforce harmful stereotypes or marginalize vulnerable voices. This highlighted the ongoing and pressing need for vigilant, critical governance of educational technology adoption (Kizilcec & Lee, 2020).
Mitigation strategies focused on providing guidance, scaffolding, and reflective practice for all participants, encouraging transparent dialogue about the limitations and risks of AI. The project fostered an ethos of continuous ethical reflection, inviting collaborators and students to engage not only with AI’s technical possibilities but also its broader implications for social justice and inclusion.

Sustainability and Future AI Use

The insights and innovations generated by the project have catalyzed a new agenda for sustainable AI use in design education. While the project’s formal activities concluded with the dissemination of its open-access guide, its outcomes have inspired future initiatives. These include proposals for comparative research across multiple institutions and settings, with the goal of understanding how AI can be ethically and inclusively embedded within diverse design education contexts.
There is significant interest in developing professional development resources for educators, as well as actionable policy recommendations that not only underscore student voice and interdisciplinary collaboration, but also prioritize socially responsive practice. Ongoing and future research will center on robust evaluation strategies, collaborative knowledge mobilization, and continuous adaptation of curriculum and technology use to meet evolving educational and social demands (Katsamakas et al., 2024).

References

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(22). https://doi.org/10.1186/s41239-023-00392-8

Katsamakas, E., Pavlov, O. V., & Saklad, R. (2024). Artificial intelligence and the transformation of higher education institutions. arXiv. https://arxiv.org/abs/2402.08143

Kizilcec, R. F., & Lee, H. (2020). Algorithmic fairness in education. arXiv. https://doi.org/10.48550/arXiv.2007.05443

Mehan, A., & Mostafavi, S. (2024). Emerging technologies in urban design pedagogy: Augmented reality applications. Architectural Intelligence, 3(29). https://doi.org/10.1007/s44223-024-00067-y

Porayska-Pomsta, K., Holmes, W., & Nemorin, S. (2024). The ethics of AI in education. arXiv. https://arxiv.org/abs/2406.11842

Rahm, L., & Rahm-Skågeby, J. (2023). Imaginaries and problematisations: A heuristic lens in the age of artificial intelligence in education. British Journal of Educational Technology, 54(5), 1147–1159. https://doi.org/10.1111/bjet.13319

Acknowledgments

This project was funded by a 2024–25 Teaching Academy Grant for Educational Leadership from the Taylor Institute for Teaching and Learning at the University of Calgary and by the Richard Parker Initiative. It was led by Sandra Abegglen, Researcher, School of Architecture, Planning and Landscape, University of Calgary, with Fabian Neuhaus, Associate Professor, School of Architecture, Planning and Landscape, University of Calgary, and Matthias Drilling, Professor, Department of Social Work, University of Applied Sciences in Zurich, Switzerland, and supported by the Graduate Assistant Researchers Lucas Campbell and Craig Delean.

“GenAI and Design Curriculum” Symposium Expert Panel Members were:

  • Geoffrey Messier, Professor, Schulich School of Engineering, Department of Electrical and Software Engineering, University of Calgary.
  • Chrissi Nerantzi, Professor in Creative and Open Education, School of Education, University of Leeds, United Kingdom.
  • Upasana Gitanjali Singh, Associate Professor, University of KwaZulu-Natal, South Africa.
  • Lois Peterson, author and artist, Vancouver, Canada.
  • Jay Shaw, Canada Research Chair in Responsible Health Innovation (Tier 2) and Assistant Professor, Department of Physical Therapy, University of Toronto, Canada.
  • Bonnie Stewart, Associate Professor, Online Pedagogy & Workplace Learning, Faculty of Education, University of Windsor, Canada.
  • Svitlana Tarasenko, Senior Lecturer, Sumy State University, Ukraine.
  • Jeannette Waegemakers Schiff, Emerita Professor, Research, Faculty of Social Work, University of Calgary, Canada.

Case Study 2: AI Implementation in Qualitative Research Training: A Case Study in a Graduate Education Course

Author: Barbara Brown

Introduction and Context of AI Use

In the domain of graduate education, the integration of Artificial Intelligence (AI) technologies emerges as a transformative avenue for enhancing teaching methodologies and student learning experiences. This case study focuses on a Master of Education Interdisciplinary program, designed to enroll cohorts of 20 to 30 educators and professionals. This program, primarily conducted online, integrates AI-powered simulations to provide experiential learning and facilitate the development of research skills among graduate students, particularly in qualitative methodologies tailored to their educational contexts.

A significant challenge identified was the learners’ lack of formal research training, which could hinder the development of research-informed professional practices. The course aimed to support students in crafting and executing research interviews and engaging in thematic analysis. These activities are foundational for educational research and program evaluation, offering transferable skills in interpreting qualitative data. However, the logistical constraints of the course timeline made it impractical for students to recruit live participants, conduct multiple interviews, and produce authentic transcripts within a single term. Given these constraints and the intensive nature of the program, AI simulations were employed as a tool for research (Ocen et al., 2025) to provide realistic yet manageable interview scenarios for students, thereby enhancing the authenticity and applicability of the research training.

Description of AI Technology

The AI technology implemented was a custom-built chatbot, integrated within the course to facilitate semi-structured interviews with simulated personas. By typing questions into the chat interface, students received immediate, text-based responses emulating live interview dialogue, which could be exported as full transcripts for subsequent analysis. The use of a generative AI chatbot thus addressed practical constraints of the course while ensuring that ethical considerations such as privacy and data security were attended to.

The decision to use a custom solution was motivated by both practical and ethical considerations. The institutionally hosted chatbot allowed students to practice interview protocols without registration or the exposure of personal data to third-party platforms, mitigating privacy concerns. In addition, some learners chose to experiment with other AI platforms to compare the output, reflecting the open and exploratory ethos of the course. Functionally, the generative AI was pivotal in producing instant interview transcripts, enabling students to scrutinize the types of responses elicited by their questions. This approach provided an accessible, low-stakes environment for practicing foundational qualitative research skills, while foregrounding the privacy, security, and variability concerns inherent in the use of AI tools in educational settings (Mutimukwe et al., 2022).

Implementation Process

The implementation unfolded in several phases, beginning with the design of the inquiry and its ethical considerations, guided by the instructor. Instruction emphasized the responsible use of generative AI, addressing both its potential and its limitations in the context of educational research. As part of the course, the AI tool’s developer was invited to a synchronous session, offering students direct access to the creator and facilitating richer discussion of the technological underpinnings and ethical implications of simulated data.

Throughout the process, students were guided to critically reflect on the use of AI in research, including the potential for bias in AI-generated responses, the implications of simulated data, and the ethical boundaries of using AI in educational research contexts. These discussions were anchored in scholarly readings on research design (Creswell & Creswell, 2022), professional reflexivity (Schön, 1983), and digital ethics. The scaffolded discussions based on scholarly readings framed within UNESCO’s (2021) ethical guidelines helped students navigate the complexities involved in using simulated data (Hagendorff, 2020).

Following these discussions, students submitted draft proposals for their inquiry projects, obtaining instructor feedback and approval before commencing their simulated interviews. Opportunities for collaborative inquiry were built into the process; in one instance, a group co-developed research questions and simultaneously explored the perspectives of personas representing different educational levels (e.g., elementary, junior high, and senior high). This multi-contextual approach enabled the comparison of themes and assumptions across diverse educational segments, mimicking the complexity of real-world research settings.

After conducting their simulated interviews, students downloaded the AI-generated transcripts. The second phase of the assignment introduced them to preliminary qualitative analysis, drawing on the reflexive thematic analysis framework advanced by Braun and Clarke (2022), which offers a flexible yet rigorous methodology for interpreting qualitative data (Turobov et al., 2024). Students experimented with AI-generated coding suggestions but were encouraged to critically evaluate these outputs through self-reflection and peer discussion, reinforcing the indispensable role of human interpretation in educational research. This phase was not only about data collection but also about fostering an understanding of the iterative nature of research and the importance of reflexivity in qualitative analysis (Alvesson & Sköldberg, 2018; Schön, 1983).

Ethical and Inclusive Considerations

The learning activity was designed to be flexible and responsive to diverse learner needs (Brown & Roberts, 2025). Throughout the activity, ethical reflection remained a core focus. Students were guided to assess the reliability and potential biases of AI-generated responses and to consider the ethical boundaries of using simulated data in educational research. The simulation was intentionally hosted on an institutionally controlled platform that required no personal accounts, ensuring compliance with data privacy standards and maximizing accessibility. This design decision also addressed equity, diversity, and inclusion (EDI) principles by providing an alternative to commercial AI tools that may present barriers related to accessibility, cost, or data security. Furthermore, course discussions prompted critical reflection on representation and voice within AI simulations, drawing attention to inclusivity concerns that can arise in AI-generated educational resources (Holstein & Doroudi, 2021).

Outcomes and Educational Impact

The deployment of AI simulations in the research course yielded notable educational benefits. Student feedback gathered through research projects using this pedagogical approach indicated increased confidence in designing interview protocols and conducting research interviews and a deeper appreciation for the iterative processes of qualitative analysis (Brown & Sabbaghan, 2025; Sabbaghan & Brown, 2024). Beyond technical skill development, the activity provoked thoughtful engagement with broader methodological questions, sparking student inquiry into the future of research and the evolving role of AI in knowledge production. The course structure involved integrating AI within scaffolded, inquiry-based activities that created a supportive environment for building essential research skills, while modeling responsible and critical engagement with emerging digital technologies (Brown & Sabbaghan, 2025; Sabbaghan & Brown, 2024).

Challenges and Limitations of AI Implementation

Despite the successes, challenges were noted: the quality of AI-generated data was contingent on the precision of student questions. Vague or unfocused questioning tended to produce generic, less useful responses, reducing the value of the subsequent transcript analysis. This underscored the importance of careful research and interview question design as a foundational skill in qualitative research, whether conducted with human or AI participants. Strategies to enhance future implementations could include more structured, iterative question development processes and peer feedback mechanisms.

The experience also revealed the need to allocate additional time to developing and refining interview questions before conducting simulations. Structured opportunities for peer and generative AI feedback, alongside iterative question development, may further enhance student outcomes in future iterations. These reflective insights underline the essential complementarity between AI-supported tools and human oversight in the research process (Brown & Roberts, 2025; Reiss, 2021).

Sustainability and Future AI Use

Looking ahead, the continued integration of AI within the curriculum is planned, with enhancements to support its sustainable use. Future directions aim to scaffold AI-mediated activities more robustly, emphasizing clear learning objectives, guided reflections, and structured feedback. Ongoing dialogue on the ethical complexities of AI in academic contexts is deemed crucial in preparing educators and researchers to navigate the evolving, AI-mediated educational landscape (UNESCO, 2021). By addressing these perspectives, the case study not only highlights the innovative use of AI in graduate-level education but also outlines a roadmap for its ethical and effective integration in professional learning environments.

References

Alvesson, M., & Sköldberg, K. (2018). Reflexivity: New vistas for qualitative research (3rd ed.). Sage.

Brown, B. & Roberts, V. (2025). Responsive instructional design using GenAI and digital skill development framework. In S. Sabbaghan (Ed.), Navigating generative AI in higher education: Ethical, theoretical and practical perspectives (pp. 52-65). Edward Elgar Publishing Ltd.

Brown, B. & Sabbaghan, S. (2025). Applying interview-research methods using generative AI technology: An action research study with graduate students in mock research teams. Canadian Journal of Action Research, 25(1), 1-18. https://journals.nipissingu.ca/index.php/cjar/article/view/710

Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. SAGE Publications.

Creswell, J. W., & Creswell, J. D. (2022). Research design: Qualitative, quantitative, and mixed methods approaches (6th ed.). SAGE Publications.

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8

Holstein, K., & Doroudi, S. (2021). Equity and artificial intelligence in education: Will “AIEd” amplify or alleviate inequities in education? arXiv. https://arxiv.org/abs/2104.12920

Mutimukwe, C., Viberg, O., Oberg, L.-M., & Cerratto-Pargman, T. (2022). Students’ privacy concerns in learning analytics: Model development. British Journal of Educational Technology, 53(4), 932–951. https://doi.org/10.1111/bjet.13234

Ocen, S., Elasu, J., Aarakit, S. M., & Olupot, C. (2025). Artificial intelligence in higher education institutions: Review of innovations, opportunities and challenges. Frontiers in Education. https://doi.org/10.3389/feduc.2025.1530247

Reiss, M. J. (2021). The use of AI in education: Practicalities and ethical considerations. London Review of Education, 19(1), 1–14. https://doi.org/10.14324/LRE.19.1.05

Sabbaghan, S. & Brown, B. (2024). The role of generative AI-powered personas in developing graduate interviewing skills. International Journal on Innovations in Online Education, 8(1). https://doi.org/10.1615/IntJInnovOnlineEdu.2024051770

Schön, D. A. (1983). The reflective practitioner: How professionals think in action. Basic Books.

Turobov, A., Coyle, D., & Harding, V. (2024). Using ChatGPT for thematic analysis. arXiv. https://arxiv.org/abs/2405.08828

UNESCO. (2021). Ethics of artificial intelligence: Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137

Acknowledgements

I would like to thank Dr. Soroush Sabbaghan, Associate Professor at the University of Calgary, who created the customized tool for the course and all of the students who agreed to participate in the research.

Case Study 3: Critiquing and Refining ChatGPT Output in Library Information Sessions

Author: Brianna Calomino

Introduction and Context of AI Use

Generative AI tools such as ChatGPT have become ubiquitous in university settings, particularly among students engaged in challenging research projects. As librarians are pivotal in guiding students through information landscapes, the integration of AI literacy into information literacy sessions is both timely and essential, directly confronting issues such as misinformation produced by AI tools (OpenAI, n.d.). The integration of ChatGPT into students’ research processes requires the ability to critically analyze the AI’s output before applying it in academic work. The overarching goal of these librarian-led sessions is to highlight the limitations of generative AI like ChatGPT and to develop students’ capacity to craft effective, critical prompts that enhance their learning without compromising academic integrity (Grover & Pea, 2018).

Description of AI Technology

ChatGPT, a chatbot built by OpenAI on large language models, generates text-based responses to user input. The technology’s adoption is justified primarily by its popularity and by the familiarity students already have with it, making it easier to engage with and critically evaluate. This ease of use provides a practical learning experience in the responsible and effective use of AI tools in academic contexts. Its popularity stems largely from its accessibility and conversational capabilities, traits that can support academic research if used judiciously. The aim of restructuring students’ interactions with ChatGPT is to equip them with the skills necessary to use this form of AI to support, rather than compromise, quality research.

Implementation Process

To address this scenario, a structured session was designed for a first-year English class. The initial steps involved crafting an intentionally vague prompt for generating a thesis about a Shakespeare play, exposing the quality and depth of AI-generated content. The instructor then led the session using the ACT UP framework (Stahura, 2018) and the CLEAR framework (Lo, 2023) as analytical tools to critically examine the accuracy, bias, and authority of the results. Students engaged in discussions about the credibility and limitations of AI in academic research, which deepened their understanding of how AI can affect information consumption and creation. Subsequently, students worked in groups to refine their prompts and analyzed improvements in new ChatGPT responses, which were then shared with peers. An anonymous post-session survey gauged the session’s impact and efficacy, centering on students’ perceived ability and intended future use of such AI tools in academic settings. Throughout, technological assistance was readily available, ensuring that all students could participate effectively and improve their prompts independently or with minimal guidance.

Ethical and Inclusive Considerations

Acknowledgment of AI’s ethical implications during the session aimed to sensitize students to the biases and limitations inherent in AI responses. Students could opt out, ensuring flexibility and addressing any discomfort about the content or the medium according to inclusivity guidelines. Inclusivity was further supported by conducting the session in a computer lab, facilitating access for those without personal devices and accommodating students using assistive technologies like screen readers. The session structure promoted collaborative learning, which supports workload management and enhances accessibility (Holley & Oliver, 2020). The choice of venue and the collaborative nature of the session further ensured that all students, regardless of individual equipment or cognitive preferences, could engage fully.

Outcomes and Educational Impact

Students demonstrated enhanced abilities to critically evaluate AI-generated information, identifying biases and ethical concerns and applying their information literacy skills to AI-assisted research. The feedback collected through an anonymous QR code survey indicated an increased comfort level with analyzing AI outputs and a willingness to incorporate these strategies in future academic endeavors. Students highlighted the benefits of AI tools like ChatGPT in supporting diverse learning needs and facilitating the research and writing process, while also underscoring the importance of not allowing AI to replace traditional learning tools or intellectual engagement (Luckin, Holmes, Griffiths, & Forcier, 2016). Broadly, the session underscored the indispensable role of human discernment in the digital age, even as AI technologies continue to evolve.

Challenges and Limitations of AI Implementation

Throughout the session, disparities in digital literacy highlighted the need for baseline instructional support in using AI tools, which was addressed by starting all participants from a common learning point. Some students also expressed ethical reservations about using generative AI tools, and these were addressed by allowing students to opt out of activities involving ChatGPT. The instructor played a vital role in setting clear boundaries on the use of AI in educational settings, highlighting the distinction between AI-generated assistance and academic dishonesty. While AI can initiate and inform the research process, the analytical depth and critical engagement must originate from the students themselves.

Sustainability and Future AI Use

Looking ahead, the session’s framework is adaptable not only to advancements in AI, such as new versions of ChatGPT, but also to applications across different academic disciplines. Proposals for the future involve rigorous comparative studies of different AI tools in educational contexts and additional customization of AI-related sessions according to disciplinary needs, promoting a comprehensive, nuanced understanding of AI’s role across various academic fields (Zawacki-Richter, Marín, Bond, & Gouverneur, 2019). Continuous updates and faculty training will ensure that the educational community stays abreast of technological and methodological advancements, fostering a consistently relevant and effective educational approach (Holmes, Bialik, & Fadel, 2019).

The integration of AI tools in education presents both vast potential and significant challenges. As these tools evolve, so too must our strategies for integrating them into academic contexts in meaningful, ethically sound, and educationally beneficial ways that still protect academic standards.

References

Grover, S., & Pea, R. (2018). Computational Thinking: A competency whose time has come. In S. Sentance, E. Barendsen, & C. Schulte (Eds.), Computer Science Education: Perspectives on Teaching and Learning in School (pp. 19-38). Bloomsbury Academic.

Holley, K., & Oliver, G. (2020). Artificial Intelligence and Ethics in Design: Responsible Research and Innovation Approach. Journal of Educational Technology & Society, 23(1), 35-49. https://doi.org/10.1007/978-981-19-2080-6_6

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. Journal of Academic Librarianship, 49(4). https://doi.org/10.1016/j.acalib.2023.102720

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

OpenAI. (2023). GPT-4 system card. https://cdn.openai.com/papers/gpt-4-system-card.pdf

Stahura, D. (2018). ACT UP for evaluating sources: Pushing against privilege. College & Research Libraries News, 79(10). https://crln.acrl.org/index.php/crlnews/article/view/17434/19242

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0

Case Study 4: Cultural Sensitivity: A Language Mediator Program in Nursing Clinical Simulation

Authors: Margarita Gil & Jasmine Hwang

Introduction and Context of AI Use

In an introductory undergraduate nursing practice course, students encounter a simulation of a clinical setting where patients are unable to communicate in English. The challenge here is significant; healthcare practitioners must overcome language barriers to provide effective and compassionate care. This educational scenario presents a unique opportunity to incorporate artificial intelligence (AI) to enhance patient-clinician interactions, specifically by addressing these language differences to simulate real-world nursing challenges.

Description of AI Technology

The AI technology implemented in this course included the ChatGPT audio and video platforms, integrated with a generative AI translation model. This choice was driven by the need to facilitate immersive learning experiences that are otherwise challenging to simulate in an academic environment. The generative AI translation model allows for real-time translation and interaction, providing an environment where students can practice communicating effectively with non-English-speaking patients (Poibeau, 2017).
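
As an illustrative sketch only (not the course’s actual configuration), the mediation loop reduces to a prompt builder plus a model call. Here `call_llm` is a deterministic stub standing in for a real chat-model API call, and the prompt wording is an assumption:

```python
# Sketch of a GenAI translation layer mediating a patient-clinician exchange.
# `call_llm` is a stub; a live simulation would call a chat-model API instead.

def build_translation_prompt(text: str, source_lang: str, target_lang: str) -> str:
    """Frame the request so the model translates rather than answers."""
    return (
        f"Translate the following {source_lang} utterance into {target_lang}. "
        "Preserve tone and clinical meaning; output only the translation.\n\n"
        f"{text}"
    )

def call_llm(prompt: str) -> str:
    # Stand-in for the real model call; echoes the utterance being translated.
    return f"[model translation of: {prompt.splitlines()[-1]}]"

def mediate(utterance: str, source_lang: str, target_lang: str) -> str:
    """One leg of the exchange: clinician speaks, mediator renders it."""
    return call_llm(build_translation_prompt(utterance, source_lang, target_lang))

# A student in the clinician role asks a question; the reply from the
# "patient" would travel back through mediate() with the languages swapped.
outbound = mediate("Where does it hurt?", "English", "Spanish")
```

The value for students lies less in the plumbing than in inspecting the prompt: constraints such as “output only the translation” are exactly where translation accuracy and cultural nuance get negotiated.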

Implementation Process

The implementation of this AI tool in the nursing course involved a structured approach. Initially, the process required ethical clearance and the engagement of consultants with expertise in AI and language translation. Access to the paid versions of the necessary AI platforms was secured, ensuring robust functionality and support during simulations. The execution of this technology took place within the classroom setting during live simulation sessions. To support this integration, a post-deployment phase involved setting up an interactive discussion board on D2L to facilitate continuous learning and address any emerging needs (Roll & Wylie, 2016).

Ethical and Inclusive Considerations

Ethical considerations included maintaining transparency regarding the authorship of interactions by distinguishing between student-generated and AI-generated content. A reflective exercise on the potential risks of AI, such as inherent biases in language translation, was integrated into the curriculum. From an inclusivity and accessibility perspective, students had equal access to the AI tools, which were selected to be fully accessible. The AI’s usage in fostering interactions with patients from diverse cultural backgrounds directly supported the values of equity, diversity, and inclusion (EDI) by promoting sensitivity and inclusiveness in patient care training (Holmes, Bialik, & Fadel, 2019).

Outcomes and Educational Impact

The implementation of AI in this nursing course significantly impacted learning outcomes by providing students with hands-on experience that could adapt to rapidly evolving technology, which is important for promoting culturally sensitive care. Feedback collected from students post-intervention indicated a heightened understanding of culturally appropriate practices and increased proficiency in managing language barriers in clinical settings. However, critical reflections suggested further exploration of the accuracy and cultural nuance of AI translation (Baylor & Kim, 2015).

Challenges and Limitations of AI Implementation

Several challenges were evident during the AI implementation. Notably, the novelty of generative AI in educational settings meant that both students and staff required introductory sessions to familiarize themselves with the technology. Concerns regarding the cost of the technology and privacy were significant, considering the sensitive nature of healthcare information. To mitigate these challenges, grants were sought to fund the technology acquisition, collaborations were initiated with governmental bodies to enhance privacy measures, and specialist AI trainers were involved to facilitate learning.

Sustainability and Future AI Use

Looking forward to future usage, there are plans to expand the integration of AI to include a wider variety of community and clinical settings and to develop language models that encompass various languages, improving the breadth and accuracy of translation. This informs the recommendation for open access to AI tools and sustained funding for ongoing AI training in educational contexts (Wartman & Combs, 2019), aiming at enhancing the preparedness of nursing students to handle diverse patient needs effectively.

References

Baylor, A. L., & Kim, S. (2015). Research-based design of pedagogical agent roles: A review, progress, and recommendations. International Journal of Artificial Intelligence in Education, 26(3), 160-169. https://doi.org/10.1007/s40593-015-0055-y

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Poibeau, T. (2017). Machine translation. The MIT Press.

Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582-599. https://doi.org/10.1007/s40593-016-0110-3

Wartman, S. A., & Combs, C. D. (2019). Reimagining medical education in the age of AI. AMA Journal of Ethics, 21(2), E146-E152. https://doi.org/10.1001/amajethics.2019.146

Acknowledgements

We acknowledge Mohammad Keyhani for his guidance during the conception and development of this case study.

Case Study 5: Exploring GenAI-Supported Learning in a Graduate Course on Design-Based Research

Author: Michele Jacobsen

Introduction and Context of AI Use

In the graduate course focused on Design-Based Research (DBR), graduate students engage with complex problems in education through iterative cycles of design, analysis, and theoretical and practical innovation. The course aims to intertwine theoretical insights with practical application, focusing both on educational theory and the enhancement of instructional practice (Anderson & Shattuck, 2012; McKenney & Reeves, 2019). Students, drawn from diverse career backgrounds, explore foundational knowledge and the history of DBR, actively participate in case analyses, and collaborate on the analysis of problems of practice in order to design educational interventions as solutions. The goal or opportunity present in integrating AI was identified as providing cognitive support in three critical aspects: idea generation and brainstorming, summarization and synthesis, and case study and scenario creation, all geared towards fostering a deeper, more critical engagement with knowledge construction and problem-solving with design-based research in educational settings.

Description of AI Technology

The AI technologies deployed by the author in this course included a case generator (Sabbaghan, 2025) and NotebookLM for generating original case scenarios to support the application of DBR principles and synthesizing and integrating content from authoritative scholarly sources. The case generator aided students in designing scenarios for team facilitation and DBR projects by creating detailed, context-rich educational cases. NotebookLM supports the summarization and synthesis of complex academic texts and facilitates content creation for research and study purposes. The functionalities of these AI tools were primarily generative, aimed at enhancing the learning experience by providing robust, contextually relevant content that students could analyze, critique, and reinterpret in their projects. The rationale behind these technologies was to enable students to explore complex DBR principles and apply them practically by designing studies and research proposals that reflected relevant, worthwhile, and robust solutions to real-world educational problems of practice.

Implementation Process

The initial phase involved the instructor using the AI tools to generate cases and demonstrate their potential to the class. This preparation was crucial in setting a standard and expectation for the use of AI in the course. During the course, students engaged with these tools in two main phases: first, by selecting and leading discussions on instructor-generated cases, and later, by using the tools themselves to generate new cases that were pertinent to their Team Facilitation & Leadership and major DBR projects. These activities were supported by NotebookLM, which assisted students in synthesizing and summarizing dense information from multiple authoritative sources (i.e., academic articles and book chapters) into actionable insights for their research. Post-deployment, the course framework provided both formative and summative feedback mechanisms, enabling continual improvement of student work based on peer and instructor evaluations.

Ethical and Inclusive Considerations

Ethical AI usage was emphasized, with specific guidelines being introduced to ensure proper citation and acknowledgment of AI-generated content in academic submissions. This was critical in maintaining academic integrity while acknowledging any contributions of AI in the research and learning process.

Outcomes and Educational Impact

The integration of generative AI tools into the DBR course significantly enhanced the scope and depth of instructor and graduate student learning. The AI tools helped graduate students in constructing diverse and authentic case scenarios applicable across different educational contexts, such as K-12, higher education, and adult learning. The tools served not just as content creators, but as facilitators of deeper analytical and creative thinking processes. Students engaged in teams to produce high-level academic work but also took leadership roles in defining and solving educational problems, thereby directly affecting how they understood how to apply a design-based research approach to the analysis and exploration of real-world educational challenges and problems of practice, and the design and evaluation of solutions. Through this process, students demonstrated an enhanced theoretical understanding, coupled with the practical ability to design and consider how to implement and evaluate educational interventions.

Challenges and Limitations of AI Implementation

Despite the successes, several challenges emerged. The integration of AI occasionally blurred the lines between student contributions and AI-generated content, challenging traditional notions of authorship and originality. The reliance on AI for initial idea generation sometimes led to superficial engagement with deep conceptual matters, and disparities in access and skill with AI tools raised equity concerns. Furthermore, assessing AI-assisted work required rethinking traditional grading metrics to better capture the depth and breadth of student engagement and learning outcomes.

Sustainability and Future AI Use

The author plans to continue refining this approach to AI integration in a graduate level research methodology course. Future iterations will include structured AI literacy modules, reflective journals for tracking AI usage, ethical use frameworks, collaborative design cycles enhanced by AI inputs, and AI-supported peer review processes. The instructor’s ongoing goal is for continuous improvement in harnessing AI’s potential to augment sophisticated design thinking and research skills, elevating both the educational experience and outcomes for graduate students.

By leveraging a thoughtful blend of advanced AI tools and rigorous pedagogical structures, the instructor endeavors to prepare a new generation of scholars and practitioners who can thoughtfully integrate AI into design-based educational research and practice, ensuring that technology serves to enhance, rather than replace, the nuanced processes of human learning and intellectual development.

References

Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25. https://doi.org/10.3102/0013189X11428813

McKenney, S., & Reeves, T. C. (2019). Conducting educational design research (2nd ed.). Routledge. https://doi.org/10.4324/9781315105642

Sabbaghan, S. (2025). Case generator [Computer software]. CaseCraft. https://casecraft.de

Acknowledgements

Thank you to Dr. Soroush Sabbaghan for his guidance and support in using CaseCraft in this graduate course.

Case Study 6: Building Generative AI Agents with No-code Automation Tools

Author: Mohammad Keyhani

Abstract

This chapter presents a case study of an innovative pedagogical approach implemented at the University of Calgary that integrates generative artificial intelligence (AI) into higher education through hands-on, project-based learning. Students in a course on Generative AI and Prompting built functional AI agents using no-code automation platforms, moving beyond passive consumption of AI tools to become active creators. The implementation involved students constructing automated workflows that integrated large language models (LLMs) with everyday digital tools, demonstrating the practical application of AI in real-world contexts. The approach was implemented across multiple courses from 2022 to 2025, reaching over 200 students from diverse academic backgrounds. Anecdotal results indicated increased student engagement, an enhanced understanding of AI concepts, and the development of practical skills in prompt engineering and workflow automation. The approach proved inclusive, enabling students from diverse academic backgrounds to participate successfully without programming experience. Key challenges included the limitations of free software versions and technical constraints, which were addressed through careful planning and institutional support. This case study contributes to the growing body of literature on AI integration in higher education and offers a replicable model for fostering AI literacy through experiential learning.

Keywords: generative AI, higher education, no-code platforms, AI literacy, project-based learning, educational technology

Introduction

The rapid proliferation of generative artificial intelligence technologies, particularly large language models (LLMs) like ChatGPT, has created both opportunities and challenges for higher education (Zawacki-Richter et al., 2019). As of 2023, surveys indicated that 50-65% of students and faculty had experimented with AI chatbots, signaling a fundamental shift in the educational landscape (Baytas & Ruediger, 2024). This ubiquity presents educators with a critical question: how can we move beyond reactive policies about AI use to proactive integration that enhances learning outcomes?

With recent advances in generative AI occurring at a rapid pace, many students are falling behind the evolving standards of AI literacy. The traditional approach to AI education has often been confined to a theoretical understanding within computer science departments. However, the democratization of AI tools demands a broader pedagogical response that emphasizes AI literacy across disciplines (Milberg, 2025). AI literacy encompasses not merely understanding what AI is, but developing the skills to critically evaluate, effectively use, and ethically deploy AI technologies in various contexts (Kassorla et al., 2024).

This chapter presents a case study of an innovative approach implemented at the University of Calgary that addresses this need through experiential learning. Rather than treating generative AI as a black box or potential threat to academic integrity, the intervention positioned students as builders of AI systems. Using no-code automation platforms, students created functional AI agents that performed real-world tasks, thereby developing both a conceptual understanding and practical skills. Most students’ understanding of generative AI is currently limited to chat interfaces, making them unaware of the different ways they can incorporate generative AI in building software solutions, automations, and agentic systems capable of using tools and following complex workflows.

The significance of this approach extends beyond technical skill acquisition. By engaging students in the construction of AI workflows, the pedagogy addresses multiple educational objectives: fostering critical thinking about AI capabilities and limitations, developing awareness of ethical considerations, and building confidence in working with emerging technologies. This aligns with calls from educational researchers for approaches that prepare students not just to use AI, but to understand its implications and shape its development (Zawacki-Richter et al., 2019; EDUCAUSE, 2024).

Literature Review

Project-Based Learning in Technology Education

Project-based learning (PBL) has established itself as an effective pedagogical approach for developing 21st-century skills. Bell (2010) describes PBL as “an innovative approach to learning that teaches a multitude of strategies critical for success in the twenty-first century” (p. 39). Students in PBL environments demonstrate improved technical skills, enhanced communication abilities, and stronger problem-solving capabilities compared to traditional instruction methods. These benefits align particularly well with technology education, where hands-on experience proves crucial for deep understanding.

Constructionist learning theory, introduced by Papert, provides additional theoretical grounding for this approach. Constructionism posits that learners build mental models most effectively when they are actively creating external artifacts that they find personally meaningful. Recent research by Larsen et al. (2025) demonstrates how this framework applies to AI education, showing that students who engage in building AI-driven artifacts develop more nuanced understandings of AI capabilities and limitations.

Generative AI in Education

The integration of generative AI in educational contexts represents a rapidly evolving field. Belkina et al. (2025) conducted a systematic review of case studies implementing generative AI in higher education, identifying key patterns in successful integration. Their findings suggest that active engagement with AI tools, rather than passive consumption, leads to superior learning outcomes.

The concept of AI agents—autonomous systems that perceive their environment and take actions using AI models as core components—has emerged as particularly relevant for education. These agents can execute multi-step tasks, use external tools, and adapt based on feedback loops, offering rich opportunities for student learning about AI systems (Alvarez & Silvestrone, 2024). No-code AI agent builders provide an excellent opportunity for students to experience many of the leading-edge capabilities of generative AI in a manner that is approachable for a non-technical audience.

No-Code Platforms and Democratization of Technology

No-code automation platforms have gained attention as tools for democratizing technology creation. This democratization is proving particularly valuable in AI education, where programming barriers might otherwise exclude non-technical students.

Platforms such as Zapier, Make (formerly Integromat), n8n, and Relay.app provide visual interfaces for creating automated workflows. These tools support integration with AI services through Application Programming Interfaces (APIs), enabling sophisticated AI implementations without coding. The educational value lies not just in the ease of use, but in how these platforms make abstract concepts tangible through visual representation of data flows and logic. No-code automation tools and AI agent builders allow students to experience and build solutions with advanced AI capabilities without having to worry about understanding the software code that would otherwise be needed to build such AI agents.

AI Literacy Frameworks

Recent literature emphasizes the importance of comprehensive AI literacy frameworks. The EDUCAUSE (2024) guidelines for AI Literacy in Teaching and Learning identify three key areas: technical understanding, practical integration, and ethical vigilance. Similarly, the OECD and EU’s AILit Framework explicitly includes “collaborating with AI tools to solve problems” and “designing AI solutions” among its core domains.

These frameworks stress that AI literacy extends beyond technical knowledge to encompass critical evaluation, ethical reasoning, and responsible deployment. This multifaceted approach aligns with UNESCO’s call (Pedró, 2019) for educators to become more involved in shaping AI applications in teaching, moving beyond technology-driven implementations to pedagogically sound integrations.

Methodology

Context and Participants

This case study is based on class activities implemented at the Haskayne School of Business, University of Calgary in various classes from 2022 to 2025, including ENTI 674 Technologies of Innovation, ENTI 407 Technology for Entrepreneurs, and ENTI 333/633 Generative AI and Prompting. The latter course is especially important, as it involved students from diverse academic backgrounds, including business, humanities, and technology disciplines, as well as both graduate and undergraduate students in the same class. This diversity was intentional, reflecting the cross-disciplinary nature of AI literacy needs in contemporary education. Over this period, more than 200 students participated in exercises of the type discussed here.

The educational context allowed this approach to be integrated into courses focused on generative AI, no-code technology, business technology, or automation. It was originally developed as part of the “Generative AI and Prompting” course at the University of Calgary but has since been adapted for various educational settings.

Pedagogical Design

The learning activity was designed following principles of project-based learning and constructionist pedagogy. The core assignment required students to build a functional AI agent using no-code automation tools. The instructors could customize the activity such that students built something that seemed immediately useful and impressive to them.

A recommended deliverable for this type of exercise would be to build a “networking assistant” that:

  1. Collects information through an online form (using tools like Tally.so or Google Forms)
  2. Stores data in a spreadsheet (Google Sheets)
  3. Uses the OpenAI API to generate personalized collaboration emails
  4. Automatically sends the AI-generated content (via Gmail)

This type of activity has various benefits. First, it demonstrates practical AI application in a relatable context (such as professional networking). Second, it integrates familiar tools (forms, spreadsheets, email) with AI capabilities, reducing cognitive load and demonstrating to students that they already have many of the building blocks they need. Third, it requires students to engage with multiple aspects of AI implementation, from API usage to prompt engineering.
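
The four steps above can be sketched in plain Python to show what the no-code platform wires together under the hood. This is an illustrative sketch, not the assignment itself: the LLM call and the Gmail action are stubbed, and the field names are assumptions.

```python
# Runnable sketch of the "networking assistant" pipeline. In the real build,
# a no-code platform (e.g. Zapier or Relay.app) supplies each step as a block.
import csv
import io

def store_submission(row: dict, sink: io.StringIO) -> None:
    """Step 2: append the form submission to a spreadsheet-like sink."""
    writer = csv.DictWriter(sink, fieldnames=list(row))
    if sink.tell() == 0:          # first row: write the header once
        writer.writeheader()
    writer.writerow(row)

def draft_email(row: dict) -> str:
    """Step 3: build the prompt a platform would send to the OpenAI API.
    The actual API call is omitted; this returns the prompt itself."""
    return (
        "Write a short, friendly collaboration email to "
        f"{row['name']} ({row['interest']}). Sign it from the form owner."
    )

def send_email(to: str, body: str) -> str:
    """Step 4: stand-in for the Gmail action; returns a log line."""
    return f"SENT to {to}: {body[:40]}..."

# Step 1: a submission as it might arrive from Tally.so or Google Forms.
submission = {"name": "Ada", "email": "ada@example.com", "interest": "AI in design"}
sheet = io.StringIO()
store_submission(submission, sheet)
log = send_email(submission["email"], draft_email(submission))
```

Seeing the pipeline as four small functions reinforces the point made to students: they already possess most of the building blocks, and the AI step is just one block among several.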

The tool selection was carefully considered based on availability and functionality. Ideal platforms include Zapier.com, Make.com, or Lindy.ai if students have access to paid plans. However, free alternatives such as Relay.app or n8n.io also provide sufficient functionality for the assignment.

Implementation Process

The implementation unfolded in three phases:

Preparation Phase: This exercise was presented to students after they had learned the basics of how large language models work and had been introduced to the ideas that LLMs can be given external knowledge sources through retrieval-augmented generation (RAG) systems and can be given tools through function calling and application programming interfaces (APIs). The API is an especially important concept for students to grasp: it reveals that the entire internet is essentially composed of building blocks available to them, which they can connect together using no-code tools. Foundational lessons covered LLM functionality, API concepts, and retrieval-augmented generation through demonstrations of simple automations.
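
The tool-use idea can be made concrete with a minimal, stubbed sketch: the builder declares a tool schema in the JSON-Schema style common to LLM APIs, a mocked model returns a structured call, and the workflow dispatches it. The schema fields and the mocked reply are assumptions for illustration, not any vendor’s exact API.

```python
# Minimal function-calling loop: declare a tool, get a structured call
# back from the (mocked) model, and execute it.
import json

send_email_tool = {
    "name": "send_email",
    "description": "Send an email to a recipient",
    "parameters": {
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "body"],
    },
}

def mock_model_response(user_request: str) -> str:
    # A real LLM decides whether and how to call the tool; here we
    # hard-code a plausible structured reply for demonstration.
    return json.dumps({"tool": "send_email",
                       "arguments": {"to": "ada@example.com",
                                     "body": user_request}})

def dispatch(raw: str) -> str:
    """The workflow's job: parse the model's call and run the matching tool."""
    call = json.loads(raw)
    if call["tool"] == send_email_tool["name"]:
        return f"emailed {call['arguments']['to']}"
    return "no tool matched"

result = dispatch(mock_model_response("Invite Ada to collaborate"))
```

No-code platforms hide exactly this loop behind their visual blocks, which is why the API lesson pays off when students later inspect what their automation is doing.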

Execution Phase: Students were provided with a written step-by-step tutorial that they had to follow to build an AI-powered automation on the no-code platform. Ideally, the step-by-step tutorial would include learning content in between the steps (such as written explanations of concepts like API), and would increase in complexity progressively, such that by the end of it students had built something they never thought they could. The hands-on workshop environment encouraged experimentation and peer learning while students worked through the tutorial, customizing their implementations and fostering both guided learning and creative exploration. As an optional add-on, the instructor might ask students or student groups to write a blog post with screenshots and/or a video recording of how they built the automation and what it did.

Reflection Phase: Often students were not aware of the significance of what they had just built. Post-implementation activities included group discussions and debates about the nature of software development in the no-code era. Students were asked to raise their hands if they thought what they had done was build software, leading to a debate between proponents and those who disagreed. The instructor then explained the concept of a three-tier software architecture (interface layer, logic layer, and data layer), showing how their project contained all these components. Additionally, demonstrations of more advanced AI agents built in tools like Lindy.ai showcased the steps in the automation process that they may not have practiced, such as adding knowledge bases through RAG or giving alternative types of tools to the agent, and what it means for an autonomous agent to choose its own tools versus the agent builder specifying exactly which tools to use and when.
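
The instructor’s three-tier framing can be captured schematically. The tool-to-tier mapping below is an illustrative reading of the networking-assistant build, not a prescribed decomposition:

```python
# The three-tier software architecture argument, applied to the students'
# automation: if all three layers are present, the build is software.
three_tier = {
    "interface layer": "the online form (Tally.so / Google Forms) where users enter data",
    "logic layer": "the workflow steps and the OpenAI prompt that transform the data",
    "data layer": "the Google Sheet where submissions are stored",
}

def is_complete(architecture: dict) -> bool:
    """A build exhibits the three-tier pattern only if every layer is present."""
    return {"interface layer", "logic layer", "data layer"} <= set(architecture)
```

Walking through this mapping after the hands-on debate lets students test their own projects against the definition rather than taking the instructor’s claim on faith.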

Data Collection and Analysis

While systematic survey data was not collected from students, multiple data sources offered insight into the educational impact:

  • Observational notes during workshop sessions captured student engagement and challenges
  • Student reflections provided insight into learning outcomes and perceived value
  • Project artifacts demonstrated technical achievement and creative applications
  • Class discussions revealed conceptual understanding and ethical awareness

Implementation and Findings

Student Engagement and Motivation

One of the most striking outcomes was the high level of student engagement throughout the project. The hands-on nature of the project appeared to tap into intrinsic motivation, aligning with Henderson et al.’s (2017) findings that students value educational technologies they perceive as directly useful to their learning or future work.

Observational data revealed that students were deeply engrossed in the task, often exceeding the minimum requirements out of curiosity. This engagement manifested in several ways:

  • Extended work beyond class time
  • Active collaboration and peer support
  • Experimentation with advanced features
  • Enthusiasm in sharing discoveries

Students were often delighted to see that they could build AI-powered automations and AI agents without knowing how to code. The practical utility of the deliverable—a networking tool that could be immediately useful—contributed significantly to this engagement.

Skill Development and Learning Outcomes

Students demonstrated the acquisition of multiple skill sets through the project:

Technical Skills: All student groups successfully created functional AI agents, demonstrating competency in:

  • Using no-code automation platforms (Zapier, Relay.app, n8n.io)
  • Understanding and implementing API calls
  • Managing data flows between applications
  • Troubleshooting technical issues

AI-Specific Knowledge: Students showed a deepened understanding of:

  • How LLMs process and generate text
  • The role of prompts in shaping AI outputs
  • The limitations of AI systems (hallucination, consistency issues)
  • The concept of AI agents and autonomous systems
  • Function calling and tool use in AI systems

Prompt Engineering: Through iterative refinement, students developed skills in:

  • Crafting clear, specific instructions for AI
  • Including relevant context in prompts
  • Adjusting tone and style parameters
  • Debugging unexpected AI behaviors
  • Mitigating biases through prompt design
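
A minimal example of the kind of parameterized prompt template students iterated on; the wording, length limit, and parameter names are illustrative, not the assignment’s required prompt:

```python
# A prompt template with the elements students practiced: explicit
# instructions, a context slot, an adjustable tone parameter, and a
# guardrail against hallucinated content.
def networking_prompt(name: str, context: str,
                      tone: str = "warm, professional") -> str:
    return (
        f"You draft collaboration emails. Tone: {tone}. Keep it under 120 words.\n"
        f"Recipient: {name}\n"
        f"Context from their form submission: {context}\n"
        "Do not invent facts beyond the context. Draft the email body only."
    )

p = networking_prompt("Ada", "interested in AI for urban design")
```

Debugging unexpected outputs usually meant editing exactly one of these elements at a time, which is what makes the template a useful unit of iteration.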

Metacognitive Awareness: Reflections revealed students’ growing awareness of:

  • The constructed nature of AI systems
  • Human responsibility in AI deployment
  • The potential and limitations of no-code approaches
  • Their own learning processes and capabilities

Inclusivity and Accessibility

A significant finding was the project’s accessibility to students from non-technical backgrounds. By removing coding as a barrier, the no-code approach enabled participation across disciplines. Some students who were completely averse to coding or building software realized through the assignment that this was not as out-of-reach for them as they had imagined.

The group work structure further enhanced inclusivity, allowing students to contribute different strengths:

  • Technical students helped with platform navigation
  • Humanities students enjoyed contributing to creative prompts
  • Business students contributed to identification of use cases and target markets

This collaborative dynamic fostered peer learning and challenged stereotypes about who can work with AI technologies. To address cost concerns, the assignment was designed to work with free tier options, and when paid subscriptions were necessary, group projects allowed students to share the costs, bringing the expenses down to only several dollars per group member.

Ethical Awareness and Critical Thinking

The project naturally brought to the surface ethical considerations, which became focal points for learning:

Transparency: Creating an automated email agent that sends people AI-generated text the agent-building human has not read opens an opportunity to discuss the ethics of sending AI-generated text to others. The ethical principle emphasized was that if you have not read the text yourself and do not take responsibility for it, you must disclose that it is AI-generated. Since the bot automatically sends the email before the human has had a chance to read it, disclosure of AI generation becomes necessary. In cases where the human does take full responsibility for the text, however, disclosure may not be necessary.
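
This disclosure rule can be expressed as a single piece of workflow logic; the disclosure wording below is an illustrative assumption:

```python
# The class's transparency rule as code: if no human has reviewed the
# AI-generated draft before sending, append a disclosure line.
def finalize(draft: str, human_reviewed: bool) -> str:
    if human_reviewed:
        # The sender takes responsibility for the text; disclosure is optional.
        return draft
    return draft + "\n\nNote: this message was generated with AI assistance."

auto_sent = finalize("Thanks for connecting!", human_reviewed=False)
```

Framing the rule this way shows students that an ethical commitment can live inside the automation itself rather than depending on memory or goodwill.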

Bias and Fairness: Prompt engineering for an AI-powered email automation provided the perfect setting to learn about and practice techniques to elicit biases and discriminatory behaviors from LLMs and find ways to mitigate them. Students tested whether the bot would reply differently to men versus women and found ways to mitigate such differences through prompt engineering.
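
The paired-prompt probe students ran can be sketched as a small harness: hold the request constant, vary only the name, and compare the drafts. Here `call_llm` is a deterministic stub so the sketch is runnable; in class it would hit the same model as the agent, and differences would then be mitigated via the system prompt.

```python
# Paired-prompt bias probe: same template, different names, compare outputs.
def call_llm(prompt: str) -> str:
    # Deterministic stand-in for the real model call.
    return f"Dear {prompt.split('to ')[-1]}, thank you for reaching out."

def probe(template: str, names: list[str]) -> dict[str, str]:
    """Generate one reply per name from the identical template."""
    return {name: call_llm(template.format(name=name)) for name in names}

replies = probe("Draft a reply to {name}", ["John", "Joan"])

# With a real model, students diffed tone, length, and content across the
# pair; here we check only a crude structural property (equal word counts).
identical_structure = len({len(r.split()) for r in replies.values()}) == 1
```

A real probe would use richer comparisons (sentiment, formality, offered opportunities), but the harness shape stays the same: controlled variation in, side-by-side outputs out.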

Privacy: Using cloud-based tools and APIs raised questions about data protection. The exercise was designed so that no sensitive data would need to be inputted or uploaded, leading to practical lessons in digital responsibility.

Accountability: Students recognized their responsibility for AI outputs, understanding that creating an AI tool comes with ethical responsibility for its outputs.

Academic Integrity Implications

Interestingly, the project design inherently addressed academic integrity concerns. Because students were building AI systems on no-code platforms, rather than producing deliverables that an LLM could output directly, there was no easy path to having an LLM do the work of building. With current widely used AI tools, it is not really possible to get the AI to do the work for you. This makes the exercise “AI-proof” to some extent, a quality highly coveted by instructors searching for assessments that students cannot simply complete with AI.

This is an example of how AI-integrated assignments can actually enhance academic integrity by making authentic engagement necessary for success.

Discussion

Theoretical Implications

This case study contributes to several theoretical conversations in educational technology. First, it demonstrates the applicability of constructionist learning theory to AI education. Students literally constructed AI agents while simultaneously constructing mental models of how AI systems operate. This dual construction process appeared to facilitate a deeper understanding than passive learning approaches might achieve.

Second, the findings support and extend project-based learning theory in the context of emerging technologies. The project’s success suggests that PBL principles—real-world relevance, student autonomy, collaborative problem-solving—translate effectively to AI education. However, the rapid pace of AI development introduces unique considerations, such as the need for continuously updated content and flexible tool choices.

Third, the study provides empirical support for comprehensive AI literacy frameworks. Students developed not just technical skills but also critical thinking and ethical reasoning capabilities, validating multi-dimensional approaches to AI literacy (EDUCAUSE, 2024; Kassorla et al., 2024).

Practical Implications

For educators considering similar implementations, several practical insights emerge:

Tool Selection: After an extensive search, several no-code automation tools with adequate free plans were identified. Choosing platforms with generous free tiers proved crucial for equitable access. The most reliable tools were selected, with backup options in case one tool experienced issues. Institutional support for premium features can significantly enhance the experience.

Scaffolding: Careful scaffolding from simple to complex tasks helps manage the cognitive load while maintaining the challenge. The tutorial approach, combined with real-time support, appeared optimal. The step-by-step process should align with course timelines and learning objectives while remaining achievable with preferably free tools.

Group Dynamics: Mixed-skill groups leverage diversity as a strength but require thoughtful facilitation to ensure equal participation and learning opportunities. Assigning tasks as group projects can reduce individual costs and foster collaborative learning.

Time Management: The complexity of working with multiple tools demands adequate time allocation. A single workshop session proved sufficient for basic implementation, but extended engagement would allow deeper exploration. A whole course could be designed around this type of exercise when time permits.

Addressing Challenges

Several challenges emerged during implementation, each offering lessons for future iterations:

Technical Limitations: Free tier restrictions on API calls and platform features occasionally interrupted workflow. Finding AI agent builders and no-code automation tools with sufficient capabilities and API call credits on free versions proved extremely challenging. Solutions included:

  • Extensive research to identify the best free options
  • Providing backup API keys when possible
  • Having alternative platform options ready
  • Setting realistic expectations about free tier capabilities

Platform Reliability: Relying on external cloud-based software sometimes proved messy, as a platform can be buggy or down on the day of class. Mitigation involved:

  • Choosing the most reliable tools available
  • Maintaining backup platform options
  • Testing platforms before class sessions
  • Having contingency plans for technical failures

Learning Curve: Some students initially struggled with the no-code interface despite its visual nature. Strategies that proved helpful included:

  • Peer mentoring within groups
  • Step-by-step visual guides with embedded learning content
  • Celebrating incremental successes
  • Progressively increasing the complexity throughout the tutorial

Privacy Concerns: Students are often worried about the data privacy implications of using cloud-based tools and APIs. This was addressed by:

  • Designing exercises that avoided sensitive data
  • Discussing data privacy principles
  • Using example data rather than personal information
  • Explaining API data handling practices

Broader Educational Context

This case study aligns with global trends toward productive AI integration in higher education. Rather than viewing AI as a threat to traditional education, the approach demonstrates how AI can enhance learning when thoughtfully integrated. The success of non-computer science students particularly highlights the importance of democratizing AI education across disciplines.

The findings also speak to employer demands for AI-literate graduates. By providing hands-on experience with AI integration, the pedagogy helps prepare students for workplaces where AI collaboration is increasingly common. The combination of technical skills, critical thinking, and ethical awareness positions students as thoughtful AI users rather than passive consumers.

Limitations and Future Directions

Study Limitations

This case study has several limitations that should inform the interpretation of the findings:

Assessment Methods: The reliance on anecdotal feedback and qualitative observations, while providing rich insights, lacks the statistical power of quantitative assessment. Systematic research on the learning effectiveness of this type of exercise could be conducted by collecting before-and-after data from students; future studies should incorporate pre/post testing and control-group comparisons.

Data Collection: Structured data was not collected, limiting the ability to make definitive claims about learning outcomes. Both quantitative surveys and qualitative interviews and focus groups could be helpful in future research.

Time Frame: The implementation across multiple semesters provides some longitudinal perspective, but formal tracking of long-term retention and transfer of skills would strengthen findings.

Generalizability: While over 200 students participated across multiple courses, the specific institutional context may limit broader applicability. Larger-scale implementations across diverse institutions would strengthen the evidence base.

Future Research Directions

Several avenues for future research emerge from this study:

Comparative Studies: Implementing similar projects across different disciplines, institutions, and student populations would reveal how contextual factors influence outcomes.

Multimodal AI Integration: Most AI-powered feedback tools currently used by instructors are text-based and could not provide feedback on an exercise of this kind. With the new “video understanding” capabilities of multimodal generative AI models, it may become possible for AI to provide feedback on, or evaluate, student work from a video in which students present and demonstrate what they built. Exploring multimodal AI projects could enhance learning opportunities.

Theoretical Development: Further research could help develop better theoretical foundations for this kind of activity, potentially contributing to frameworks specifically addressing no-code AI education.

Scaling Strategies: Investigating how to scale this approach to larger classes while maintaining quality would address practical implementation concerns.

Assessment Innovation: Developing rubrics and assessment methods specifically designed for AI creation projects would support broader adoption and evaluation.

Recommendations for Implementation

Based on this case study, several recommendations emerge for educators and institutions:

1. Institutional Support: Higher education institutions should consider:

  • Site licenses for leading no-code tools, such as Zapier.com, Lindy.ai, or n8n.io, which allow for leading-edge learning and practice with AI tools
  • Institutional funding for API credits
  • Faculty training in AI integration
  • Technical support infrastructure
  • Helping students showcase leading-edge projects on their online portfolios to boost visibility in the job market

2. Curriculum Integration: Rather than treating AI as an isolated topic:

  • Embed AI projects across disciplines
  • Create interdisciplinary collaborations
  • Develop AI literacy requirements
  • Update learning outcomes to include AI competencies
  • Consider designing full courses around no-code AI development

3. Ethical Framework: Institutions should establish:

  • Clear guidelines for AI use in education
  • Privacy policies for student data
  • Ethical review processes for AI projects
  • Transparency requirements for AI-assisted work
  • Principles for responsible AI development and deployment

4. Professional Development: Supporting educators through:

  • Workshops on AI pedagogy
  • Communities of practice
  • Shared resource repositories
  • Ongoing updates on AI developments
  • Best practices for no-code AI education

Conclusion

This case study demonstrates that integrating generative AI through hands-on construction of AI agents offers a powerful approach to developing AI literacy in higher education. By moving students from passive consumers to active creators, the pedagogy achieved multiple educational objectives: enhanced engagement, practical skill development, conceptual understanding, and ethical awareness.

The use of no-code platforms proved particularly valuable in democratizing access to AI education, enabling students from diverse backgrounds to participate successfully. While challenges existed—primarily around technical limitations, platform reliability, and privacy concerns—these were manageable through careful planning, tool selection, and institutional support.

The implications extend beyond the immediate classroom context. As AI increasingly permeates professional and personal spheres, the ability to understand, evaluate, and effectively deploy AI becomes a crucial competency. No-code AI agent builders provide an excellent opportunity for students to experience many leading-edge capabilities of generative AI in a manner that is approachable for a non-technical audience. The approach described here offers one model for developing these competencies through experiential learning that is both accessible and rigorous.

Perhaps most significantly, the project demonstrated that AI integration need not threaten educational values but can enhance them. By fostering critical thinking, ethical reasoning, and collaborative problem-solving alongside technical skills, the pedagogy prepares students not just to use AI but also to shape its development and deployment responsibly. The “AI-proof” nature of the assignment—where authentic engagement becomes necessary for success—shows how thoughtful integration can address academic integrity concerns while promoting deep learning.

As higher education continues to grapple with the implications of generative AI, this case study offers evidence that thoughtful, pedagogically grounded integration can transform potential disruption into educational opportunity. The key lies not in avoiding AI or restricting its use, but in positioning students as empowered creators who understand both the promise and the perils of these transformative technologies.

Future implementations and research will undoubtedly refine and extend this approach. However, the core insight remains: by building AI agents in the classroom, we can help build AI-literate, critically thinking, ethically aware citizens who are prepared for a future where human-AI collaboration is not just possible but essential.

References

Alvarez, J., & Silvestrone, S. (2024, August 22). Introducing a no-code AI app builder for MIT Sloan courses. MIT Sloan Teaching & Learning Technologies Blog. https://mitsloanedtech.mit.edu/2024/08/22/introducing-no-code-ai-app-builder-for-mit-sloan-courses/

Baytas, C., & Ruediger, D. (2024). Generative AI in higher education: The product landscape. Ithaka S+R Issue Brief. https://doi.org/10.18665/sr.320394

Belkina, M., Daniel, S., Nikolic, S., Haque, R., Lyden, S., Neal, P., Grundy, S., & Hassan, M. (2025). Implementing generative AI (GenAI) in higher education: A systematic review of case studies. Computers and Education: Artificial Intelligence, 8. https://doi.org/10.1016/j.caeai.2025.100407

Bell, S. (2010). Project-based learning for the 21st century: Skills for the future. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 83(2), 39-43. https://doi.org/10.1080/00098650903505415

Pedró, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development (Working Papers on Education Policy, Vol. 7). UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000366994

Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education, 42(8), 1567-1579. https://doi.org/10.1080/03075079.2015.1007946

Kassorla, M., Georgieva, M., & Papini, A. (2024). AI literacy in teaching and learning: A durable framework for higher education. EDUCAUSE Review. https://www.educause.edu/content/2024/ai-literacy-in-teaching-and-learning/executive-summary

Larsen, G. K., Olmanson, J., & Hassani, A. (2025). Exploring constructionist pathways for generative AI in education. In T. Bastiaens (Ed.), Proceedings of EdMedia + Innovate Learning 2025 (pp. 218-230). AACE. https://digitalcommons.unl.edu/teachlearnfacpub/571/

Milberg, T. (2025). Why AI literacy is now a core competency in education. World Economic Forum. https://www.weforum.org/stories/2025/05/why-ai-literacy-is-now-a-core-competency-in-education/

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0

Case Study 7: From Overload to Insight: How AI is Reshaping the Research Journey

Authors: Yintong Lu & Karina Ferreyra

Introduction and Context of AI Use

In the demanding and intricate world of graduate education, students face the formidable task of developing literature reviews for their theses—a process that requires the meticulous collection, organization, and synthesis of vast amounts of information under time constraints. This high-stakes task can be academically and emotionally taxing, leading to significant anxiety among students. The necessity of enhancing students’ performance and bolstering their confidence during this critical phase of their academic journey unveils a profound opportunity for integrating advanced technological tools. Specifically, Artificial Intelligence (AI) has emerged as a transformative force capable of improving the efficiency and quality of the literature review process, thus realigning the focus towards critical thinking and the creation of unique academic contributions (Smith, 2022).

Description of AI Technology

Adopting AI tools such as NotebookLM, Gemini, and ChatGPT facilitates a multi-faceted approach to conducting literature reviews. These technologies synergize to assist students in several key activities: formulating research questions, identifying gaps in existing research, executing exhaustive literature searches, and aiding in the comprehension and synthesis of scholarly articles. Such tools harness the power of machine learning to process large datasets quickly, summarize key findings accurately, and suggest connections between disparate ideas, potentially transforming the daunting task of conducting a literature review into an engaging and manageable one. The rationale for employing these specific AI tools lies not only in their functionality but also in their capacity to enhance student engagement and cognitive flexibility, which is vital for original scholarship (Johnson & Brown, 2021).

Ethical and Inclusive Considerations

With the integration of AI into scholarly practices, adherence to ethical guidelines becomes imperative. Students are not only expected to disclose their use of AI technologies but also to engage in responsible usage that upholds academic integrity and accountability. Incorporating AI literacy workshops into the curriculum will empower students to utilize these tools effectively and ethically, ensuring transparency and fostering a culture of honesty within academic communities (Lee, 2023).

Outcomes and Educational Impact

The employment of AI in the literature review process has yielded significant educational benefits. By reducing the cognitive overload, AI allows students to concentrate on higher order thinking skills such as analysis, synthesis, and critical evaluation. This shift not only alleviates the stress associated with the literature review process but also enriches the students’ learning experience, leading to the advancement of scholarship and knowledge generation. Moreover, tailored learning paths augmented by AI promote a deeper understanding and personal growth among students, preparing them to thrive in an increasingly digitized world (Smith, 2022). However, realizing these benefits hinges on critical and ethical engagement with technology.

Challenges and Limitations of AI Implementation

While the advantages of AI in education are manifold, the implementation of such technologies is not devoid of challenges. A significant concern is the potential for student over-reliance on AI, potentially bypassing the development of critical academic skills such as independent analysis and evaluation. Additionally, issues such as academic dishonesty, the perpetuation of biases in AI algorithms, and the undermining of scholarly integrity present substantial barriers to the effective use of AI in academic settings. Addressing these challenges requires robust pedagogical strategies that foster not only digital literacy and critical thinking but also ethical engagement with technology (Johnson & Brown, 2021).

In conclusion, the incorporation of AI into the literature review process in graduate education offers promising opportunities to enhance academic outcomes. However, leveraging these opportunities to their fullest potential demands a conscientious approach that balances technological advantages with ethical considerations and pedagogical effectiveness. By navigating these waters carefully, educational institutions can harness the power of AI to not only simplify the complex task of literature reviews but also foster a deeper, more critical engagement with knowledge itself.

References (all the following references are hallucinated)

Johnson, A., & Brown, B. (2021). Implementing AI in higher education: Opportunities and challenges. Educational Innovation Quarterly, 18(3), 45-61.

Lee, M. (2023). Ethical considerations in educational AI applications. International Journal of Learning Science, 10(4), 89-102.

Smith, J. (2022). Artificial intelligence in education: A comprehensive review. Journal of Educational Technology, 45(2), 112-128.

Case Study 8: Implementing AI in Academic Writing

Author: Uju Nnubia

Introduction and Context of AI Use

In an attempt to enhance the academic writing skills of 400-level students at the Werklund School of Education, an innovative approach using artificial intelligence (AI) was introduced during the undergraduate winter semester course. The primary goals of implementing AI in this setting were to improve the quality of student writing and broaden their literature content base, thereby enriching their academic experience and comprehension capabilities.

Description of AI Technology

The technology selected for this initiative included ChatGPT, ChatPDF, and Microsoft Copilot, platforms powered by generative AI. This choice was justified by the need to deepen student engagement with content and significantly enhance their exposure to a diverse range of literature inputs in their writing endeavors. The generative nature of these AI tools meant they could assist students in producing text-based outputs that are contextually relevant and rich in content, thus promising a substantial enrichment of their academic work.

Implementation Process

The implementation of these AI technologies involved several phases, starting with a preparation phase where students were trained on crafting effective generative prompts and understanding the ethical implications of using AI-generated content. In the execution phase, students manually created prompts for outlines and detailed written outputs. This was followed by cross-checking the validity of AI-generated content against academic standards, ensuring logical consistency, and making necessary revisions in Microsoft Word. Post-deployment support was robust, addressing individual student challenges and refining the process through collaborative troubleshooting.

Ethical and Inclusive Considerations

Adhering to ethical AI practices was paramount; thus, students were given the option to opt out of using AI tools if they felt uncomfortable (Holmes et al., 2019). To address inclusivity and accessibility, free versions of the AI tools were utilized, and resources like university internet and computer labs were made accessible to all, ensuring that financial or technological barriers did not hinder students’ participation. Applying Equity, Diversity, and Inclusion (EDI) principles ensured that the implementation was sensitive to the diverse needs and circumstances of all students.

Outcomes and Educational Impact

The results of AI integration into academic writing were profound. Students showed a marked improvement in the content quality, grammatical accuracy, and technical aspects of their writing. They also demonstrated increased confidence and a higher capability to critically engage with varied topics (Zawacki-Richter et al., 2019). These enhancements were evidenced by a survey conducted post-course completion, which highlighted the students’ strengthened ability to tackle complex writing topics.

Challenges and Limitations of AI Implementation

The journey was mostly smooth, thanks largely to proactive adherence to EDI principles. The main challenge encountered was addressing the concerns of students wary of AI’s role in their learning process. This cohort was provided with an alternative curriculum that involved more traditional methods of content synthesis and extended deadlines to accommodate their needs without alienating them from the course’s learning objectives.

Sustainability and Future AI Use

Looking forward, the intention is to explore additional generative AI tools and engage successive student cohorts in dialogues on refining ethical AI use in academia. This iterative approach aims to continuously integrate feedback and improve AI-assisted educational practices. Future research could focus on enhancing AI compatibility with widely used word processors, for example by integrating the word processing environment with the generative AI software so that the two function as a single application, potentially increasing the accessibility and appeal of AI tools in academic settings (Holmes et al., 2019). Recommendations for the educational institution include upgrading the technological infrastructure in regular classrooms and expanding access to advanced AI tools to foster a more inclusive and technologically adept learning environment.

References

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0

Acknowledgements

I acknowledge the Centre for Artificial Intelligence, Ethics, Literacy for organizing this workshop that encouraged both students and staff to embrace and ethically use AI in teaching and learning.

Case Study 9: Alternative to Peer-Review Round: Using Copilot for Course Assignment Feedback in a Graduate Course

Authors: Bruna Nogueira & Barbara Brown

Case Study: Planning to Implement AI for Feedback in Graduate Level Courses

Introduction and Context of AI Use

In the dynamic sphere of higher education, there exists a continuous demand for innovative learning strategies that accommodate adult learners’ schedules and professional commitments (Wasserman et al., 2024), particularly within online graduate-level courses targeted at working professionals. A prevalent challenge in this context is the balancing act between providing quality feedback and adhering to the rigid timelines that govern traditional peer review processes.

Nicol et al. (2024) defined peer review as a “reciprocal process whereby students produce feedback reviews on the work of peers and receive feedback reviews from peers on their own work” (p. 102) and argued that it is a necessary skill for graduate students. Engaging in asynchronous rounds of peer review in discussion forums has demonstrated benefits for learners. For example, a growing body of research highlights how engaging students in evaluating one another’s work fosters enhanced higher-order thinking, metacognitive reflection, and a deeper understanding of task criteria (Nicol et al., 2024; Zhan et al., 2023), and creates social connectedness (Braconnier & Liu, 2024). When implemented effectively, peer assessment can improve the quality of students’ own work by promoting revisions based on feedback and increasing their awareness of disciplinary standards.

Studies have shown that students who participate in structured peer review processes benefit from support throughout the writing process and tend to submit higher-quality final drafts, demonstrating improved writing quality (Brown & Cicchino, 2022). In graduate courses, there are often several learning tasks, and it can be challenging to schedule peer review rounds for each learning task due to the limited time frame of the course. In light of this, we posed the following guiding question: What would be the process for using AI in the feedback process?

The authors are planning to introduce an AI-powered feedback system to allow graduate students in a Master of Education program to receive immediate, on-demand feedback on their assignments before final submission. This approach is not only intended to complement peer and instructor reviews and feedback, but also to empower students to manage their learning engagements more flexibly by submitting drafts based on their individual schedules. The primary objective is to harness the capabilities of an AI tool to provide students with constructive feedback on their assignment.

Description of AI Technology

To address the need for timely and flexible feedback, the chosen technology is Microsoft Copilot through UCalgary, an AI tool designed to assist in document review and content feedback. The tool enables students to create their own agent in Copilot Studio and upload assignment instructions alongside their drafts, receiving feedback tailored to the content and requirements specified. This technology was selected because it is available through institutional accounts, which gives students access to a secure version without additional expense or extensive account setup, thereby promoting equitable access and ease of use.

Implementation Process

Our plan for integrating AI into the feedback process is to offer students an additional layer of support as they work on their assignments. Engagement with the AI tool will be encouraged but remain optional, allowing students who are not interested or do not feel comfortable using the tool to opt out without any disadvantage. The implementation will begin with a preparation phase, in which the AI tool’s capabilities, limitations, and risks will be communicated to the students; we plan to include a statement in the course outline describing the planned use. Demonstrations and collaborative sessions will be provided to help students become acquainted with submitting documents and interpreting AI-generated feedback. During the execution phase, students will engage with the AI tool to submit their draft assignments and discuss the AI-generated feedback within a controlled peer group. This process will simulate a blind peer review scenario similar to receiving feedback on an academic article submission. In the post-deployment phase, students will submit their final assignments through the standard submission system, accompanied by a self-reflection on the AI feedback’s effectiveness and their response to it. Those who choose not to use AI will also submit a self-reflection, including a rationale for their decision not to seek feedback from Copilot. Class discussions thereafter will center on refining the AI input processes based on student experiences and feedback.

Ethical and Inclusive Considerations

Ethical considerations are paramount, given the sensitivity of handling student assignments and course materials within an AI system. Using an institutionally supported tool can help mitigate risks related to data privacy and ethical use of educational technologies. Meanwhile, inclusivity can be addressed by ensuring that all students have the option to engage with the AI tool or request alternative methods of receiving feedback, thereby accommodating diverse student preferences and needs for technology engagement. In the self-reflection piece that students will submit alongside their assignment, they will be encouraged to exercise critical thinking and reflect on the importance of maintaining academic integrity and addressing potential biases when using AI to support their learning and creative processes.

Outcomes and Educational Impact

Our current student inquiries highlight the need for more detailed guidance on effectively integrating AI feedback into their work, suggesting this is an area requiring further support and development. Through interaction with Copilot, students will have the opportunity to critically evaluate the strengths and weaknesses in their initial submissions, refining their assignments in alignment with constructive feedback. By sharing their learning with the group through the course discussion forum, students can co-create knowledge and develop essential skills for successful human-AI interactions. The implementation of AI for assignment feedback and to support peer review processes can positively impact learning by offering a scalable solution to the challenge of providing timely individualized feedback without overextending human resources. Evidence will be gathered from student reflections to understand if there is a positive or negative reception towards having this additional feedback mechanism that allows for greater autonomy in learning and assignment revision.

Challenges and Limitations of AI Implementation

Despite the promising integration of AI, several challenges emerge, including students’ unfamiliarity with Copilot, initial skepticism about the efficacy of technology-enhanced learning, and limited understanding of how to elicit enriching feedback through well-written prompts. These obstacles can lead to superficial interactions with the AI and less useful output. To mitigate them, in-class discussion time will be set aside so that students and instructors can share their views, expectations, concerns, and emotions about AI use in the course, and guidelines for AI use will be collectively built from those conversations. Additionally, the instructor will play a proactive role by demonstrating Copilot usage and remaining accessible for personalized support as students work on their assignments. Enhanced engagement strategies, such as tailored prompts and facilitated group discussions, may further help in leveraging AI effectively.

Sustainability and Future AI Use

The potential for expanding AI use in feedback processes across more courses and disciplines is vast. Further research is essential to refine these processes and to understand the long-term implications of AI-assisted feedback in educational settings. Recommendations for further study include investigating how AI feedback can complement traditional peer reviews and exploring the integration of multiple AI tools to diversify the types of feedback available, thereby enriching the educational experience and outcomes for students.

This case study underscores the transformative potential of AI in enhancing educational practices such as peer review, while also highlighting the collaborative effort between instructors and students required to integrate new technologies in a manner that is ethical, inclusive, and educationally beneficial.

References

Braconnier, D. J., & Liu, J. (2024). Comments & replies: Discussion forum interaction networks stimulated by peer-review assignments. 2024 IEEE Digital Education and MOOCS Conference (DEMOcon), 1–6. https://doi.org/10.1109/DEMOcon63027.2024.10748222

Brown, L. G., & Cicchino, A. (2022). Asynchronous peer review feedback in an undergraduate nursing course: What students can teach each other about writing. Nurse Educator, 47(5), 303–307. https://doi.org/10.1097/NNE.0000000000001207

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122. https://doi.org/10.1080/02602938.2013.795518

Wasserman, E., Sparks, D., & Azimi, H. (2024). Supporting faculty to meet the needs of adult students in online learning. New Directions for Community Colleges, 27–35. https://doi.org/10.1002/cc.20649

Zhan, Y., Yan, Z., Wan, Z. H., Wang, X., Zeng, Y., Yang, M., & Yang, L. (2023). Effects of online peer assessment on higher‐order thinking: A meta‐analysis. British Journal of Educational Technology, 54(4), 817–835. https://doi.org/10.1111/bjet.13310

Case Study 10: Using AI-Generated Personas to Practice French Conversation in a French Class

Author: Anna Pletnyova

Introduction and Context of AI Use

In undergraduate education, particularly in language learning domains such as a beginners’ French course, students often need more practice than class time allows, especially in conversational skills, where fear of judgment can hinder progress (Golonka et al., 2014). Generative Artificial Intelligence (GenAI) presents a valuable opportunity to address this need by facilitating non-judgmental, interactive language practice.

Description of AI Technology

We plan to use the application character.ai to create characters capable of conversing in French at levels tailored to students’ abilities and needs. Popular GenAI platforms, such as ChatGPT, Microsoft Copilot, and Claude, also offer the ability to generate such French-speaking personas. These characters serve as virtual conversation partners, offering personalized interaction in which students can practice all four language skills (reading, writing, listening, and speaking) with a bot that generates written and audio messages in French in response to student prompts. This approach leverages the core functionalities of AI to enrich language learning by enabling real-time interaction in natural language.

Implementation Process

Prior to implementing this GenAI technology in a language course, students will be given a 15-minute workshop on how to use the platform: how to create a free account, generate a French-speaking persona based on their language level and interests, and start the chat. The application character.ai requires very little preparation because of its ready-to-use nature. During the execution phase, students will be provided with a template to initiate conversations with the AI, guiding them to engage in meaningful dialogues relevant to their study topics. After each AI interaction, students will participate in a debrief with their instructor and group discussions with peers to reflect on their experience and learning outcomes, further enhancing the conversational and collaborative value of the exercise.

Outcomes and Educational Impact

The integration of AI in a language course can significantly improve students’ ability to engage in conversation. A recent study of 158 Chinese L2 majors engaging in GenAI-assisted speaking practice found that GenAI positively influenced speaking performance through students’ curiosity and enjoyment when using the tool (Wu & Liu, 2025). Another study, on the use of a chatbot in a university English as a second language class, found that targeted GenAI-mediated second language activities improved student listening and writing performance (Zheldibayeva, 2025). Together, these findings suggest that students appreciate the judgment-free practice environment provided by AI conversations. Beyond second language classrooms, AI chatbots can also be used for soft skills training in other disciplines, such as nursing, social work, and psychology, to name but a few.

Challenges and Limitations of AI Implementation

Despite the positive outcomes, several challenges may be encountered. First, AI may sometimes generate content that includes mistakes or uses vocabulary that is either too advanced or too simplistic for students’ language level. Previous studies have shown that GenAI can generate content of inconsistent quality and be overly repetitive (Zheldibayeva, 2025). Additionally, the realistic nature of the AI characters may sometimes confuse students, leading them to believe they are interacting with human beings. To mitigate these issues, it is critical to continuously tune the AI personas with curriculum-specific French and to set clear expectations with students about the nature of their AI interaction partners.

Sustainability and Future AI Use

Considering the success and potential of GenAI in enhancing language learning, this technology merits further integration into university language classrooms. However, further research is needed to sustain these gains, optimize content generation, and refine the user experience to better support second language learners (Zheldibayeva, 2025). Combining AI-generated chats with human instruction and peer interactions may help mitigate some of the concerns and maximize the benefits of AI use for language learning. With ongoing adjustments and enhancements, GenAI can significantly enrich educational experiences and outcomes in language learning and beyond.

References

Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S. (2014). Technologies for foreign language learning: A review of technology types and their effectiveness. Computer Assisted Language Learning, 27(1), 70-105. https://doi.org/10.1080/09588221.2012.700315

Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, A. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56, 3005–3054. https://doi.org/10.1007/s10462-022-10246-w

Wu, H., & Liu, W. (2025). Exploring mechanisms of effective informal GenAI-supported second language speaking practice: A cognitive-motivational model of achievement emotions. Discover Computing, 28(1). https://doi.org/10.1007/s10791-025-09635-w

Zheldibayeva, R. (2025). GenAI as a learning buddy for non-English majors: Effects on listening and writing performance. Educational Process: International Journal, 14, e2025051. https://doi.org/10.22521/edupij.2025.14.51

Case Study 11: AI as a Critical Friend: Leveraging AI for Formative Feedback on Pre-Service Teacher Lesson Designs

Author: Christy Thomas

Introduction and Context of AI Use

While teaching in an after-degree teacher education program in Alberta, I supported pre-service teachers in using Generative AI to assist with lesson planning as they explored curriculum theory, the Alberta curriculum, and short-term planning. However, I found that they were primarily using Generative AI to brainstorm ideas for lesson plans. This case study outlines my plans for the next iteration of the course. The goal is to challenge our pre-service teachers to harness Generative AI not just as a tool for creating content, but as a critical friend in the formative assessment process, enhancing their lesson planning capabilities.

Description of AI Technology

The course will use freely available Generative AI platforms, such as ChatGPT and Google Gemini. These platforms have been selected because they can generate formative feedback that supports critical thinking among pre-service teachers. Students will be prompted to critically engage with the tools, thereby transforming AI into a mentor-like resource that aids in the evaluation and development of lesson plan designs (Duarte et al., 2023).

Implementation Process

The implementation of AI within the course will begin with a thorough preparation phase in which students are briefed on the essentials of lesson design. As the instructor, my plan is to design a prompt template for students to use when interacting with ChatGPT as a thinking partner to receive formative feedback on their preliminary lesson designs. During the execution phase, pre-service teachers will co-design draft lesson plans aligned with curriculum standards and the Teaching Quality Standard (TQS). These drafts will then be discussed with the AI tools, using the provided prompt template to ensure the feedback is constructive. The process will culminate in a large group discussion led by the instructor, focused on extracting deeper insights from the AI interactions and refining the lesson plans based on the AI-generated feedback. Post-deployment, students will be tasked with providing a rationale and justification for the refinements made to their lesson plan designs that were informed by AI formative feedback. Pre-service teachers will be urged to increasingly view these AI tools as thinking partners that cultivate critical thinking.
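To make the prompt-template idea concrete, the following is a purely illustrative sketch of how such a template might be standardized so that every student’s feedback request has the same structure. All wording, field names, and the Python helper are hypothetical examples, not the course’s actual template.

```python
# Illustrative sketch only: one possible shape for a reusable
# formative-feedback prompt template. Wording and fields are
# hypothetical, not the course's actual template.

FEEDBACK_TEMPLATE = """You are acting as a critical friend for a pre-service teacher.
Review the draft lesson plan below and give formative feedback only:
do not rewrite the plan.

Focus your feedback on:
1. Alignment with the stated curriculum outcome: {outcome}
2. Clarity of the learning activities for {grade_level} students
3. How the assessment evidence matches the outcome

Draft lesson plan:
{lesson_plan}

For each point, name one strength and one concrete suggestion."""


def build_feedback_prompt(outcome: str, grade_level: str, lesson_plan: str) -> str:
    """Fill the template so every student query has the same structure."""
    return FEEDBACK_TEMPLATE.format(
        outcome=outcome, grade_level=grade_level, lesson_plan=lesson_plan
    )


# Example use: the filled prompt is then pasted into the chosen AI tool.
prompt = build_feedback_prompt(
    outcome="Students describe the stages of the water cycle",
    grade_level="Grade 4",
    lesson_plan="Warm-up discussion, diagram activity, exit ticket.",
)
print(prompt)
```

A fixed structure of this kind is one way to keep students from defaulting to "write my lesson plan" requests, since the template explicitly constrains the AI to a feedback-only, criteria-based role.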

Ethical and Inclusive Considerations

Ethical considerations will be paramount, with students required to critically assess AI-generated content and to cite its use explicitly in their lesson plans, providing a rationale for their pedagogical choices. Inclusion and accessibility will be addressed by offering students the choice of whether to engage with AI tools and by utilizing free platforms, ensuring no additional financial burden (Singh, 2024). Moreover, the course will attend to EDI principles by advising students against submitting personal data, promoting the critical examination of potential biases, and integrating culturally responsive pedagogies.

Outcomes and Educational Impact

The implementation of AI in this educational setting is expected to contribute significantly to enhancing AI literacy among pre-service teachers. By engaging with AI as a critical friend, students will not only refine their lesson plans but also deepen their understanding of effective pedagogical strategies. Students will be encouraged to review the AI feedback for accuracy (Powell & Courchesne, 2024) and to articulate how the AI interaction has influenced their lesson design choices; this will serve as a clear indicator of the educational impact. Critical reflection on these processes will support the development of critical thinking and foster a nuanced approach to teaching and learning.

Challenges and Limitations of AI Implementation

Despite the benefits, the incorporation of AI in lesson planning is not devoid of challenges. The accuracy of AI-generated feedback and the inherent risk of bias are significant concerns, so teachers need to review all feedback and provide oversight to ensure accuracy (Burner et al., 2025; Powell & Courchesne, 2024). Students might also default to using AI primarily for idea generation rather than for critical engagement. To counter these barriers, the course will offer targeted resources to help identify and overcome biases, and prompts will be designed to foster a deeper, more reflective interaction with AI.

Sustainability and Future AI Use

Looking ahead, the plan is to build AI literacy across all courses within the program and to initiate broader discussions on integrating AI. The hope is that the insights emerging from these discussions can be developed into a framework that empowers other educators to integrate AI tools effectively in their teaching practices and see improvements in learning designs (Pishtari et al., 2023).

References

Burner, T., Lindvig, Y., & Wærness, J. I. (2025). We should not be like a dinosaur—Using AI technologies to provide formative feedback to students. Education Sciences, 15(1), 58. https://doi.org/10.3390/educsci15010058

Duarte, N., Montoya Pérez, Y., Beltran, A. J., & Bolaño García, M. (2023). Use of artificial intelligence in education: A systematic review. IEOM Society International. https://doi.org/10.46254/sa04.20230169

Pishtari, G., Sarmiento-Márquez, E. M., Rodríguez-Triana, M. J., Wagner, M., & Ley, T. (2023). Evaluating the impact and usability of an AI-driven feedback system for learning design (pp. 324–338). Springer. https://doi.org/10.1007/978-3-031-42682-7_22

Powell, W. A., & Courchesne, S. (2024). Opportunities and risks involved in using ChatGPT to create first grade science lesson plans. PLOS ONE, 19(6), e0305337. https://doi.org/10.1371/journal.pone.0305337

Singh, P. (2024). Artificial intelligence and student engagement. Advances in Educational Technologies and Instructional Design Book Series, 201–232. https://doi.org/10.4018/979-8-3693-5633-3.ch008

Case Study 12: Implementing AI in Adult Learning Courses

Author: Christina White Prosser

Introduction and Context of AI Use

In the evolving landscape of adult education, particularly within the realms of postsecondary education and professional development, the application of artificial intelligence (AI) presents both novel opportunities and inherent challenges. Specifically, the integration of Generative AI within courses designed for teaching adult learning principles is poised to transform how instructors and learning facilitators enhance their pedagogical strategies and workplace efficiencies. This case study investigates the multifaceted application of AI technologies, such as ChatGPT and Microsoft Copilot, in a course structure that primarily serves educators and facilitators tasked with the professional development of adults. The overarching goal of this implementation was threefold: enhancing course content through AI, integrating AI into everyday work responsibilities, and tailoring instructional designs to accommodate neurodiverse learners, including those with Attention Deficit Disorder, Dyslexia, and Autism.

Description of AI Technology

The chosen AI applications, ChatGPT and Microsoft Copilot, represent advanced Generative AI technologies designed to aid in content creation, decision-making processes, and administrative tasks. These tools were selected for their ability to foster creative educational resources such as images, poems, and music, which are instrumental in engaging adult learners and addressing diverse learning needs. This decision was rooted in the premise that Generative AI can significantly augment the brainstorming, design, and delivery phases of educational content (Luckin et al., 2016; Storey & Wagner, 2024).

Implementation Process

The implementation process was methodically structured into three phases: preparation, execution, and post-deployment support. Initially, participants underwent a comprehensive workshop focused on the foundations and educational implications of Generative AI. This preparatory phase was crucial for ensuring that all participants, regardless of their prior familiarity with AI technologies, could competently navigate and apply these tools in educational settings (Stanford Graduate School of Education, 2025).

The execution phase was characterized by the integration of AI into the course workflow, which included the development of AI-enhanced course outlines, assignment directives, and in-class activities focusing on AI-generated content. This phase aimed at not only facilitating AI-driven instructional methods but also fostering an environment where AI tools were used to enhance presentation skills and promote reflective discussions among learners.

Following the completion of the course, post-deployment support was provided through assessments that allowed both students and instructors to reflect on the integration of AI within the learning environment. This feedback mechanism was vital in identifying successes and areas for improvement in the AI implementation strategy.

Ethical and Inclusive Considerations

Ethical deployment of AI in education demands meticulous attention to how AI tools are referenced and used within the course structure. Participants were guided on the ethical implications and were encouraged to consider AI as a tool to enhance, not replace, human creativity and pedagogical expertise (Luckin et al., 2016). Inclusivity was addressed by ensuring that all course-related AI software was accessible free of charge and that the training materials adhered to universal design principles, thereby accommodating a diverse range of learning needs and promoting equity in educational opportunities (Cornell University, 2023; DataCamp, 2023; IGI Global, 2023).

Outcomes and Educational Impact

The application of AI in the course yielded positive outcomes, particularly in enhancing the diversity of educational content and in facilitating innovative approaches to learning. Students reported an increased ability to explore and incorporate a variety of perspectives and artistic expressions into their learning processes. The generative nature of the AI tools enabled the expansion of ideas and discussions, thereby enriching the learners’ understanding of and engagement with the course material. This alignment of the technology with pedagogical objectives underscored the potential of AI to significantly enhance educational experiences (Microsoft Research, 2025; Microsoft Education, 2024; GP Strategies, 2023; Training Industry, 2023).

Challenges and Limitations of AI Implementation

Despite the benefits, several challenges emerged, particularly concerning the learners’ proficiency in using AI tools and their understanding of AI’s role in the learning process. Mitigation strategies included comprehensive training sessions, clear instructional guidelines, and ongoing support. The reflective insights gained from this implementation emphasized the importance of anticipating varying levels of AI familiarity among learners and of addressing all learner queries comprehensively.

Sustainability and Future AI Use

Looking ahead, the sustainability of AI integration in educational contexts hinges on continuous evaluation and adaptation based on stakeholder feedback. Future initiatives will focus on enhancing AI’s role in educational settings through further research and the development of professional development resources tailored to evolving instructional needs. The creation of a comprehensive AI usage handbook and the continuation of a professional development series tailored to AI learning gaps will serve as pivotal resources for educators aiming to leverage AI to foster inclusive and innovative learning environments (eLearning Industry, 2024; Stanford Graduate School of Education, 2024).

This case study not only highlights the practical implementations of AI in adult education but also lays the groundwork for future explorations into the sustainable and ethical integration of AI technologies in diverse learning scenarios.

References

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

Storey, J., & Wagner, K. (2024). Artificial intelligence in adult education: Opportunities and challenges. Journal of Adult Learning, 45(2), 123–139. https://eric.ed.gov/?q=privacy&ff1=eduAdult+Education&id=EJ1437724

Stanford Graduate School of Education. (2025). AI+Education Summit: Human-centered design and responsible AI. https://hai.stanford.edu/events/human-centered-ai-for-a-thriving-learning-ecosystem

GP Strategies. (2023). AI tools for neurodiverse learners. https://www.gpstrategies.com/blog/5-ai-tools-to-foster-a-more-inclusive-work-environment-for-neurodiverse-learners/

Training Industry. (2023). Supporting neurodivergent professionals with AI. https://trainingindustry.com/articles/learning-technologies/ai-and-neurodiversity/

Cornell University. (2023). Ethical considerations in generative AI. https://ai.cornell.edu/ethics

DataCamp. (2023). Social and ethical implications of AI in education. https://www.datacamp.com/blog/ai-ethics-education

IGI Global. (2023). Fairness and accessibility in AI-enhanced education. https://www.igi-global.com/chapter/ai-education-ethics/

Microsoft Research. (2025). The cognitive impact of generative AI tools in education. https://www.microsoft.com/en-us/research/publication/genai-cognition-2025/

Microsoft Education. (2024). Empowering educators with Copilot. https://educationblog.microsoft.com/copilot-in-education/

eLearning Industry. (2024). AI in instructional design: A practical guide. https://elearningindustry.com/ai-in-instructional-design-guide

Stanford Graduate School of Education. (2024). Motivation and feedback in AI-assisted learning. https://ed.stanford.edu/ai-feedback

Case Study 13: AI Implementation in Graduate Urban Design Education: A Case Study

Author: Mohammadmahdi Zanjanian

Introduction and Context of AI Use

The integration of artificial intelligence (AI) into educational contexts represents a significant shift in pedagogical strategies, particularly in fields that rely on intricate, participatory, and interdisciplinary approaches. In graduate-level urban design theory and studio courses, which focus on participatory placemaking and bottom-up experimentation at the neighborhood scale, the demand for innovative educational tools is particularly evident.

A persistent challenge in teaching participatory and bottom-up planning is that students often struggle to translate complex community dynamics into actionable and inclusive design strategies, despite the emphasis these approaches place on localized involvement. Concurrently, these contexts offer valuable opportunities for students to engage with the diverse voices, practices, and cultural rituals that shape urban neighborhoods. In graduate urban design education, the instructional goal is to guide students beyond theoretical critique toward developing design strategies and tactics that promote genuine resident participation in planning and design processes. The integration of AI can enhance this learning process by providing students with tools to visualize socio-spatial data, identify patterns of inclusion and exclusion, and experiment with participatory design scenarios. In this manner, AI functions not only as a technical aid but also as a pedagogical resource that deepens students’ understanding of socio-spatial dynamics and supports the development of collaborative design strategies and critical reflection, laying the foundation for more inclusive and effective urban design education (Crompton & Burke, 2023).

Description of AI Technology

The technological interventions in this case center on the adoption of generative AI platforms for both textual and visual tasks. Notably, generative AI language tools were deployed to assist students in developing robust theoretical frameworks and refining their text-based assignments. For visual outputs, creative AI platforms were used to provide suggestions and insights for concept diagrams and studio presentation design. The application of these technologies was justified on the basis that AI could catalyze critical thinking regarding grassroots urban design theories, while simultaneously enhancing storytelling and communication skills, which are critical for successful participatory design work. These AI platforms facilitate iterative feedback, resource synthesis, and visual experimentation at a speed and scale that would otherwise be challenging to achieve in time-constrained graduate courses (Kharrufa & Johnson, 2024).

Implementation Process

The implementation process began with a thorough preparation phase, which involved gathering and documenting relevant urban theories and contextual data for the neighborhood being studied. Pilot testing ensured the reliability and appropriateness of the AI tools for the course context. The instructional team received training to align with best practices for integrating AI into teaching. A comprehensive user manual was developed to support both learners and educators.

During execution, AI tools were integrated into multiple phases of the class. Students accessed AI support during immersive neighborhood walks and in socio-spatial capital studies, conceptual design formation, participatory workshops, and tactical mock-up exercises. In each phase, AI provided both theoretical scaffolding and visual ideation support. After deployment, a dedicated platform facilitated reflection and the collection of feedback from both teaching staff and students, enabling iterative improvement of the process through structured feedback loops (Mishra & Koehler, 2006).

Ethical and Inclusive Considerations

Ethical stewardship was a guiding principle throughout the AI integration, characterized by transparent communication about AI use and robust informed consent procedures. Ensuring inclusivity and accessibility was prioritized; the AI tools were adapted to accommodate learners with varying abilities and both visible and invisible impairments. Ongoing education and training were fundamental in lowering adaptation barriers, particularly for students with differing levels of prior exposure to technology. The principles of equity, diversity, and inclusion (EDI) shaped the identification and resolution of potential obstacles—such as disparities in technical literacy, risks to content accuracy, and variations in presentation quality. Systematic feedback loops underpinned an ongoing process of adaptation to better meet student needs and uphold EDI standards within the learning environment (Reiss, 2021).

Outcomes and Educational Impact

The incorporation of AI into this urban design course had substantive impacts on both students and instructors. For the teaching team, AI served as a supplementary resource, allowing them to concentrate on the strategic and conceptual aspects of course facilitation by automating and supporting text-based and visual outputs. From a learner perspective, AI dramatically broadened access to relevant resources and references, providing interim feedback and learning support across both synthetic and semantic dimensions. AI supported the co-creation and experimental processes fundamental to participatory neighborhood design, sustaining theoretical rigor and visual innovation. Serving as a micro-instructor, AI was accessible and responsive around the clock, bolstering student autonomy and iterative creativity (Sajja et al., 2023). Evidence of efficacy emerged from both qualitative observations—such as higher engagement during iterative design reviews—and direct feedback solicited from students and instructors throughout the course phases. Critical reflection underscored AI’s evolving role as a learning facilitator, rather than a mere automation tool.

Challenges and Limitations of AI Implementation

Despite these favorable outcomes, several challenges were observed. A central concern was the assessment of student originality, given the generative capabilities of AI and the risk of over-reliance on machine-produced content. To address this, instructors prioritized hands-on co-creation workshops, emphasizing manual making and human interaction, ensuring that AI-assisted learning was always complemented by tangible, collaborative design processes. Reflective insights from this deployment highlighted the importance of balancing digital and analog methods as well as the continued need to cultivate human ingenuity alongside technological empowerment (Tan & Maravilla, 2024).

Sustainability and Future AI Use

Looking forward, the program aims to leverage AI to support students in developing design strategies that address the needs of diverse urban populations—including the elderly, children, and historically underrepresented groups—while reducing social segregation and fostering more equitable participation in urban spaces. Future research will explore the evolving role of AI in participatory placemaking, with particular attention being paid to the integration of machine and human creativity in the development of public spaces. To ensure continued success, recommendations include the development and dissemination of robust, education-focused AI platforms that integrate advanced research support, real-time collaboration features, and context-sensitive guidance for both instructors and students. These platforms should not only facilitate tasks such as citation management and text refinement, but also actively scaffold critical thinking, participatory design strategies, and iterative studio experimentation. Ultimately, strategically adapting AI tools to urban design curricula—with an emphasis on bottom-up experimentation, co-creation, and reflective studio practice—offers a transformative pathway toward scalable and impactful urban design education, positioning future practitioners to create more inclusive, responsive, and sustainable environments (Kamalov et al., 2023).

References

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20, Article 22. https://doi.org/10.1186/s41239-023-00392-8

Kamalov, F., Santandreu Calonge, D., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), 12451. https://doi.org/10.3390/su151612451

Kharrufa, A., & Johnson, I. G. (2024). The potential and implications of generative AI on HCI education. arXiv. https://doi.org/10.48550/arXiv.2405.05154

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x

Reiss, M. J. (2021). The use of AI in education: Practicalities and ethical considerations. London Review of Education, 19(1), 1–14. https://doi.org/10.14324/LRE.19.1.05

Sajja, R., Sermet, Y., Cikmaz, M., Cwiertny, D., & Demir, I. (2023). Artificial intelligence-enabled intelligent assistant for personalized and adaptive learning in higher education. arXiv. https://doi.org/10.48550/arXiv.2309.10892

Tan, M. J. T., & Maravilla, N. M. A. T. (2024). Shaping integrity: Why generative artificial intelligence does not have to undermine education. arXiv. https://doi.org/10.48550/arXiv.2407.19088

Acknowledgements

I would like to thank Dr. Sandra Abegglen for her valuable suggestions and edits.

License


AI in Higher Education Innovation Exchange Copyright © 2025 by Sandra Abegglen, Barbara Brown, Patrick Hanlon, Leeanne Morrow, Fabian Neuhaus, Soroush Sabbaghan, Alexandra Poppendorf, Mohammadmahdi Zanjanian, and Bridgette Crabbe is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.