Understanding the Role of Large Language Models (LLMs) in Higher Education

Duration - 10 weeks

Role - UX Researcher

Methods - Netnography, Competitive Analysis, Interviews, Observations, Diary Study.

This project, conducted at the University of Washington’s College of Engineering, examines the role of Large Language Models (LLMs) like ChatGPT in higher education. As LLMs become more accessible to students and educators, they present unique opportunities and challenges within academic environments. Our study focuses on understanding how educators in the Human-Centered Design and Engineering (HCDE) department perceive and use these AI tools in teaching, grading, and student engagement.

THE PROBLEM

As Large Language Models (LLMs) like ChatGPT and Claude become part of the educational landscape, universities face a significant challenge. These tools bring meaningful improvements for both students and educators by streamlining tasks, supporting non-native English speakers, and introducing innovative ways to engage with content.

However, they also raise serious questions:

Are students developing a dependency that could undermine critical thinking?

Are educators equipped with the guidance they need to navigate this new technology responsibly?

RESEARCH GOALS

Through a mix of interviews, observations, and comparative analyses, we aimed to uncover this tension and provide insights for educational institutions on how best to support faculty in the evolving landscape of AI in education.

THE PROCESS

BUILDING A COMPREHENSIVE UNDERSTANDING

Our research process evolved to capture the complexity of LLM use in academia, and each method played a crucial role in refining our understanding of this topic.

01 Initial Exploration through Netnography

We began by investigating general perceptions of LLMs on platforms like Reddit, where public discourse offered insight into both enthusiastic and skeptical views of AI tools. This initial exploration allowed us to gauge broader public sentiment and informed our initial research questions about trust and ethical considerations in LLM use.


📖 Key Insight

Discussions revealed a spectrum of user trust in LLMs, with common concerns about accuracy, ethical use, and dependency. This led us to explore and compare different LLMs through Competitive Analysis.

❔ How?

Word cloud based on the netnography findings
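For readers curious about how a view like this is produced: a word cloud can be built from the exported Reddit threads with a few lines of Python. The sketch below is illustrative rather than our exact pipeline; the file name and the extra stop words are assumptions.

```python
# Minimal sketch: build a word cloud from exported Reddit discussion text.
# Assumes the relevant threads were saved to "reddit_llm_threads.txt";
# the file name and added stop words are illustrative, not our exact pipeline.
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

with open("reddit_llm_threads.txt", encoding="utf-8") as f:
    text = f.read()

# Drop dominant filler terms so substantive words (trust, accuracy, plagiarism, ...)
# stand out in the final image.
stopwords = STOPWORDS.union({"ChatGPT", "LLM", "use", "really", "one"})

cloud = WordCloud(width=1200, height=600,
                  background_color="white",
                  stopwords=stopwords).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("netnography_wordcloud.png", dpi=200, bbox_inches="tight")
```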

📈 Impact on the process

The netnography findings inspired us to focus more specifically on educator views, particularly around the dual themes of trustworthiness and utility.

02 Conducting Competitive Analysis

Next, we conducted a competitive analysis of four prominent LLMs - ChatGPT, Claude, Google Gemini, and Perplexity - to better understand the tools available to educators. Each tool was evaluated for its unique features, strengths, and limitations, which provided a more grounded context for understanding educator interactions with these tools.

❓ How did we keep the comparison consistent and unbiased?

We used the same prompt across all four LLMs to maintain uniformity.

Prompt - "Provide a short summary explaining qualitative research."

Similar language and themes were observed across all four models - Link to the FigJam file.
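To make that setup concrete, the sketch below shows the shape of the consistency check: one fixed prompt issued to each of the four tools, with every reply logged for side-by-side comparison. The `ask_*` functions are hypothetical placeholders for each product's interface; in the actual study the prompt was entered manually in each tool.

```python
# Sketch of the consistency check: one fixed prompt, four tools, logged replies.
# The ask_* callables are hypothetical stand-ins for each tool's chat UI or API;
# in the study the prompt was entered manually in each product.
import csv

PROMPT = "Provide a short summary explaining qualitative research."

def ask_chatgpt(prompt: str) -> str: ...     # placeholder
def ask_claude(prompt: str) -> str: ...      # placeholder
def ask_gemini(prompt: str) -> str: ...      # placeholder
def ask_perplexity(prompt: str) -> str: ...  # placeholder

tools = {
    "ChatGPT": ask_chatgpt,
    "Claude": ask_claude,
    "Google Gemini": ask_gemini,
    "Perplexity": ask_perplexity,
}

# Record each tool's response next to the identical prompt for later comparison.
with open("llm_comparison.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["tool", "prompt", "response"])
    for name, ask in tools.items():
        writer.writerow([name, PROMPT, ask(PROMPT)])
```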

📖 Key Insight

The analysis revealed that each LLM had distinct affordances and “personalities”, with some better suited for content creation and others for interactive tasks.

📈 Impact on the process

This step allowed us to empathize with our participants. It informed the structure of our interview questions, helping us better probe the nuanced uses and challenges of each tool in an academic setting.

THE PIVOT

Our initial goal was to explore trust in LLMs across various applications. However, the netnography and competitive analysis revealed a vast landscape in which trust varies by context. We identified a specific gap in understanding educators’ views on LLMs in higher education. To address this, we narrowed our focus to the role of LLMs in education, specifically within the engineering and HCDE departments at the University of Washington, allowing us to gather in-depth, qualitative insights into the practical and ethical challenges of LLM use in university teaching.

03 In-Depth Interviews with Educators

With a foundational understanding of LLMs and public sentiment, we conducted semi-structured interviews with HCDE faculty and Ph.D. candidates. Each 30-minute session was designed to explore specific use cases, challenges, and ethical considerations directly from the educators’ perspectives.

🙋🏻‍♀️ Who? - Human Centered Design and Engineering faculty and Ph.D. candidates.

📍 Where? - Zoom (semi-structured interviews, 30 mins each).

Form used for the study

📖 Key Insight

Educators valued LLMs for repetitive tasks and accessibility support for non-native English speakers. However, they voiced strong concerns about potential student dependency, which could undermine critical thinking.

📈 Impact on the process

These insights clarified the practical applications and ethical concerns educators encounter, allowing us to refine our research focus on the balance between support and reliance in LLM use.

“LLMs are great for the grunt work” - P1

“Students are using it without understanding” - P3

04 Diary Study

Originally, we planned a week-long diary study to capture participants’ daily reflections on LLM use.

However, due to scheduling conflicts and limited participation, we scaled the study back to a single entry, which still offered real-time insight into how one educator experienced LLMs in their day-to-day work.

📈 Impact on the process

The challenges in recruiting for this diary study reinforced the need for adaptability in research design and led us to prioritize interviews and observational insights instead.

05 Observational Learning at a “Lunch & Learn” Session

Our final data collection involved observing a “Lunch & Learn” session where faculty gathered to discuss AI and LLMs. This informal setting gave us a unique opportunity to watch educators in real time as they shared their uses, challenges, and ideas for responsible LLM integration.

📖 Key Insight

Educators valued these sessions for peer-to-peer learning and mutual support, discussing issues like bias in AI, best practices, and the need for clear ethical guidelines.

📈 Impact on the process

Observing this social learning environment underscored the collaborative spirit in academia regarding new technologies and highlighted the importance of institutional support for effective LLM adoption.

DATA ANALYSIS

Our data analysis process combined open coding and affinity mapping, ensuring a structured and objective approach.
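As an illustration of how such coding can be kept traceable, the sketch below tallies open codes by affinity group. It assumes the coded excerpts were exported to a simple CSV (`codes.csv` with participant, code, and group columns) - an illustrative format, not our actual FigJam export.

```python
# Minimal sketch: tally open codes by affinity group from an exported CSV.
# Assumes a file "codes.csv" with columns participant,code,group
# (an illustrative export format, not the exact FigJam output).
import csv
from collections import Counter, defaultdict

by_group = defaultdict(Counter)      # affinity group -> code frequencies
participants = defaultdict(set)      # affinity group -> contributing participants

with open("codes.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        by_group[row["group"]][row["code"]] += 1
        participants[row["group"]].add(row["participant"])

# Print groups from most to least evidence, with their top three codes.
for group, codes in sorted(by_group.items(), key=lambda kv: -sum(kv[1].values())):
    total = sum(codes.values())
    print(f"{group}: {total} excerpts from {len(participants[group])} participants")
    for code, n in codes.most_common(3):
        print(f"  - {code} ({n})")
```

A tally like this does not replace the affinity mapping itself; it simply makes it easy to see which themes are supported by many participants versus a single vocal one.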

KEY INSIGHTS

LLMs Enhance Accessibility but Pose Risks of Dependency

Increased Accessibility: Educators used LLMs to assist non-native English speakers with translating complex texts and synthesizing information, making academic content more accessible.

Dependency Concerns: Educators worry that students might rely too heavily on LLMs, leading to superficial understanding and reduced critical thinking.

Educators Seek Practical and Ethical Guidance

Policy Gaps: Educators felt a need for more structured guidance around ethical LLM use, particularly regarding plagiarism and data privacy.

Time Constraints: With busy academic schedules, educators noted that learning to use LLMs effectively requires time and resources that many faculty members lack.

LESSONS LEARNED

✅ Adaptability is Essential: Shifting away from the diary study highlighted the need for flexibility in research methodology, allowing us to focus more on interviews and observations.

✅ Diverse Perspectives Enhance Understanding: The interdisciplinary makeup of participants enriched the analysis, providing a well-rounded view of LLM use in academic settings.

Comparative Analysis by Lakshmi Narayana
