
⚡️Assessment Unlocked: Mixed methods, unlocked

4 min read

Quantitative data may travel fast, but qualitative evidence carries the meaning. This week’s post focuses on a practical, responsible way to use GenAI to analyze, synthesize, and actually use qualitative data, closing a long‑standing gap in mixed‑methods assessment work.

🎯 Introduction

Open‑ended survey responses, reflections, focus groups, and artifacts are goldmines for understanding student learning; they are also where assessment timelines quietly go to die. GenAI offers new ways to work with qualitative evidence at scale, but only if we’re clear about roles, limits, and validity. This is about amplification, not automation.

Key takeaway: Mixed methods don’t fail because of theory; they fail because qualitative data are hard to use under real constraints.


📚 Background

Mixed‑methods assessment combines quantitative and qualitative evidence to produce richer, more actionable understanding of student learning (Creswell & Plano Clark, 2018). In higher education, this approach is especially valuable because learning outcomes often involve complex constructs—critical thinking, integration, reflection—that cannot be captured by numbers alone (Banta & Palomba, 2015).

Assessment scholarship consistently emphasizes that qualitative evidence strengthens validity by providing context and meaning for quantitative findings (Maki, 2010). AAC&U’s VALUE rubrics, for example, were designed to support qualitative judgment at scale, encouraging faculty interpretation rather than mechanistic scoring (AAC&U, 2015). NILOA has similarly argued that narrative evidence—when systematically analyzed—plays a central role in improvement‑focused assessment (NILOA, 2016).

The challenge is operational. Qualitative analysis requires time, training, and coordination. As a result, open‑ended responses are often summarized anecdotally or excluded altogether. This is where GenAI enters—not as an interpretive authority, but as a coding and synthesis assistant. Teaching and assessment centers increasingly note that large language models can support first‑cycle coding, pattern detection, and thematic clustering when humans retain responsibility for meaning‑making and decisions (UCLA Center for the Advancement of Teaching; Vanderbilt Center for Teaching).

Methodologically, this aligns with established qualitative practice. AI‑assisted coding can function like a very fast research assistant—suggesting codes, surfacing co‑occurrences, and generating analytic memos—while humans ensure credibility, dependability, and confirmability (Saldaña, 2016). Used carefully, GenAI lowers the barrier to doing mixed methods well rather than doing qualitative work superficially or not at all.
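The co‑occurrence surfacing mentioned above is simple enough to sketch. A minimal illustration in Python, assuming each response has already been tagged with a set of code labels during first‑cycle coding (the function name and data shape are illustrative, not drawn from any particular tool):

```python
from collections import Counter
from itertools import combinations

def code_cooccurrences(coded_responses):
    """Count how often pairs of codes appear on the same response.

    coded_responses: a list of sets, one set of code labels per response.
    Returns a Counter keyed by alphabetically ordered (code_a, code_b) pairs,
    so ("feedback", "growth") and ("growth", "feedback") count as one pair.
    """
    pairs = Counter()
    for codes in coded_responses:
        for a, b in combinations(sorted(codes), 2):
            pairs[(a, b)] += 1
    return pairs

# Example: three student reflections, already coded by a human reviewer.
tagged = [
    {"growth", "feedback"},
    {"growth", "feedback", "time_management"},
    {"time_management"},
]
top_pairs = code_cooccurrences(tagged).most_common(3)
```

A table like `top_pairs` is exactly the kind of pattern a fast research assistant surfaces; deciding whether a frequent pairing is meaningful remains the human analyst's job.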

Key takeaway: GenAI doesn’t replace qualitative rigor; it makes rigorous qualitative work more feasible.

References (Background)

  • Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research.
  • Banta, T. W., & Palomba, C. A. (2015). Assessment essentials.
  • Maki, P. L. (2010). Assessing for learning.
  • AAC&U. (2015). VALUE rubrics.
  • NILOA. (2016). Assessment in practice.
  • Saldaña, J. (2016). The coding manual for qualitative researchers.
  • Vanderbilt Center for Teaching. Generative AI guidance.
  • UCLA Center for the Advancement of Teaching. AI and assessment resources.

🧪 Best practices & tips

Here’s how assessment teams are using GenAI to support mixed‑methods workflows responsibly:

  • 🧩 Start with human‑defined questions
    Define what you’re looking for before involving AI. Prompts should reflect your learning outcomes, not generic sentiment analysis.
  • 🏷️ Use AI for first‑cycle coding only
    Let GenAI suggest initial codes or themes, then review, merge, revise, or discard them. This mirrors established qualitative practice.
  • 🔍 Pair every theme with evidence
    Require AI outputs to include representative excerpts. This supports transparency and faculty trust.
  • 🧠 Validate with humans, always
    Conduct spot checks or norming sessions where faculty review AI‑assisted themes against raw data.
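The first three practices above can be sketched as a small harness. Everything here is an assumption made for illustration: the prompt wording, the `Theme` structure, and the evidence check are one possible shape, and the actual LLM call is deliberately left out so the human‑review step stays explicit:

```python
from dataclasses import dataclass, field

@dataclass
class Theme:
    """A candidate code proposed during first-cycle coding (illustrative shape)."""
    label: str
    definition: str
    excerpts: list = field(default_factory=list)  # verbatim supporting quotes
    status: str = "proposed"  # proposed -> kept | merged | discarded (human call)

def build_coding_prompt(outcome: str, responses: list) -> str:
    """Frame the request around a specific learning outcome,
    not generic sentiment analysis. Wording is a sketch, not a recipe."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    return (
        "You are assisting with FIRST-CYCLE qualitative coding only.\n"
        f"Learning outcome under review: {outcome}\n"
        "Propose 3-6 candidate codes. For each code, give a one-sentence "
        "definition and quote 1-2 representative excerpts verbatim.\n\n"
        f"Responses:\n{numbered}"
    )

def themes_with_evidence(themes):
    """Transparency check: drop any AI-suggested theme that arrives
    without verbatim supporting excerpts."""
    return [t for t in themes if t.excerpts]
```

Whatever the model returns would be parsed into `Theme` objects, filtered through `themes_with_evidence`, and then reviewed, merged, or discarded in a norming session before anything reaches a report.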

Quick win: Add a short “AI‑assisted qualitative summary” alongside your quantitative tables in assessment reports.

Key takeaway: AI speeds up the mechanics; humans protect meaning and validity.


🏫 Example or case illustration

Setting: An online master’s program in Education conducting an annual PLO assessment.

The program had strong rubric scores showing acceptable performance on reflective practice—but open‑ended reflections told a more complicated story. Faculty suspected surface‑level reflection but lacked time to analyze 300+ student submissions.

The assessment lead piloted an AI‑assisted mixed‑methods approach. After anonymizing data, they asked GenAI to:

  • propose initial codes aligned to the PLO,
  • cluster reflections by depth of reflection, and
  • pull illustrative excerpts for each theme.
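The anonymization step deserves its own care. A minimal first‑pass sketch, assuming student names come from the enrollment roster and that emails and ID‑like digit runs are the other obvious identifiers; real de‑identification needs a fuller review (places, course numbers, distinctive events):

```python
import re

def anonymize(text: str, known_names) -> str:
    """Redact obvious identifiers before any text leaves the institution.

    known_names: names pulled from the roster (assumed available).
    This is a first pass, not full de-identification.
    """
    for name in known_names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{7,9}\b", "[ID]", text)  # student-ID-like digit runs
    return text
```

Running every reflection through a pass like this, and spot‑checking the output, keeps the pilot inside institutional data policies before any coding request is made.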

The friction point came during review: some faculty worried the AI would “flatten” nuance. To address this, the team held a short validation session. Faculty reviewed excerpts, refined code definitions, and adjusted one major theme.

The result was clarity. Quantitative scores stayed the same—but qualitative analysis revealed why performance plateaued. The program revised one assignment prompt and added a scaffolded reflection activity the following term.

Resolution: AI handled scale; faculty handled interpretation.

Key takeaway: Mixed‑methods insight emerged only when qualitative evidence was usable.


🔮 What’s next

Next week, we’ll explore GenAI and survey design—using AI to improve clarity and reduce bias before data collection.
Prep action: Pull one survey with open‑ended items you rarely analyze.


❓ Question of the day

Which qualitative data on your campus has the most insight—but the least follow‑through?


🚀 Call to action

This week, take one set of open‑ended responses and try AI‑assisted first‑cycle coding only. Review the results with a colleague before drawing conclusions.

Dr. Alaa Alsarhan

Dr. Alaa Alsarhan is a higher education leader and analytics expert specializing in assessment, learning outcomes, and data-informed decision-making. He is CEO & Co-Founder of Horizons Analytics, a consultancy advancing AI-powered assessment and strategic planning in education and business. Dr. Alsarhan has authored multiple publications, delivered national keynotes, and led innovative research on high-impact practices, student success, and AI in higher education. He is a founding member of the GenAI in Higher Education Assessment Community of Practice and a fellow with the NWCCU Mission Fulfillment and Sustainability program.
