This year, I was honored to attend the Learning Analytics and Knowledge conference in Kyoto as a co-chair of the Practitioners Track. As always, it was a great week of insightful and inspiring presentations and networking (not to mention early cherry blossom viewing!). I always come away from this conference energized about new developments and ideas in the world of Learning Analytics. This year, there were several significant themes worth reporting on.
First, as has been the case for every conference I’ve attended this year, there was a strong emphasis on AI, especially Large Language Models (LLMs) like ChatGPT. (Machine learning in general has been a featured topic every year since the start of LAK.) Every session involving LLMs was exceptionally well attended, and many had to be moved to the largest room to accommodate the interest. We saw both researchers and practitioners exploring the use of LLMs to analyze learner input, summarize instructor feedback, construct “data stories” that inform instructors or advisors about learner progress, and generate lesson content (including assessment items) on demand based on learner interests or needs. Every presentation emphasized the current limits of these tools and the need to keep “humans in the loop” to ensure the accuracy and appropriateness of their output. Concerns were also raised about the costs of these tools, whether as licensed commercial models or as open models trained and administered by researchers. These costs stem in part from the enormous amounts of computing power, and therefore energy, consumed in training and operating the models. While the proposed benefits of individualized instruction offered by these tools are exciting, they may never be affordable for most learners, at least with the current generation of these technologies.
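To make the “humans in the loop” pattern concrete, here is a minimal sketch of my own, not any presenter’s actual pipeline, assuming the OpenAI Python client: the LLM drafts a summary of instructor feedback, but nothing reaches the learner until an instructor reviews and approves it. The model name, prompt, and function names are illustrative placeholders.

```python
# Illustrative sketch only: an LLM-drafted summary gated by instructor approval.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def draft_feedback_summary(instructor_comments: list[str]) -> str:
    """Ask the model to draft a short summary of instructor feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Summarize this instructor feedback for the learner "
                        "in three short, constructive bullet points."},
            {"role": "user", "content": "\n".join(instructor_comments)},
        ],
    )
    return response.choices[0].message.content

def release_to_learner(draft: str) -> str | None:
    """Human in the loop: an instructor reviews the draft before it is sent."""
    print(draft)
    approved = input("Send this summary to the learner? [y/N] ").strip().lower()
    return draft if approved == "y" else None
```

The key design point is the second function: the tool drafts, but a person decides what is accurate and appropriate enough to send.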
This leads to the second emerging theme. Across multiple presentations and topics, presenters noted that the accuracy of many models is uneven across demographic groups. Specifically, models that generate predictions of learner risk or suggestions for guidance are often least accurate for the learners who need the most support. There were also indications that learners who need the most support may lean more heavily on AI tools in ways that ultimately undermine their learning. New methods of evaluating bias in models are giving us a clearer picture of the work we still have to do in this area.
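As one concrete illustration of that kind of evaluation (a generic sketch, not a method from any particular paper), the snippet below reports a risk model’s accuracy and false-negative rate per demographic group instead of as a single overall number. The column names and data are made up.

```python
# Generic sketch of disaggregated model evaluation with made-up data.
import pandas as pd

predictions = pd.DataFrame({
    "group":             ["A", "A", "A", "A", "B", "B", "B", "B"],
    "at_risk":           [1,   0,   0,   1,   1,   1,   0,   1],
    "predicted_at_risk": [1,   0,   0,   1,   0,   0,   0,   1],
})

def disaggregated_report(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for group, g in df.groupby("group"):
        accuracy = (g["at_risk"] == g["predicted_at_risk"]).mean()
        missed = ((g["at_risk"] == 1) & (g["predicted_at_risk"] == 0)).sum()
        actually_at_risk = (g["at_risk"] == 1).sum()
        rows.append({
            "group": group,
            "n": len(g),
            "accuracy": accuracy,
            # Share of truly at-risk learners the model failed to flag.
            "false_negative_rate": (missed / actually_at_risk
                                    if actually_at_risk else float("nan")),
        })
    return pd.DataFrame(rows)

print(disaggregated_report(predictions))
```

Even when an overall accuracy figure looks acceptable, a per-group table like this can reveal that most of the at-risk learners in one group are never flagged, which is exactly the pattern several presenters warned about.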
The third theme I noted was a recognition of the limited value of “efficiency” compared with deeper engagement. In the opening keynote on Wednesday, Mutlu Cukurova suggested that some learning and teaching may simply have to be slow. There is a tendency to use AI tools to quickly surface “insights” that make teaching or learning more efficient, but this can undermine deeper personal engagement between instructors and learners. The predictions and inferences of AI are based only on visible data, generally learner or instructor behavior, and ignore social context and reasoning grounded in human experience. This theme surfaced in several other presentations. I was reminded of the different priorities and definitions of “learning” and “education” that we discuss in our workshop on defining goals for predictive learning analytics. Most learning analytics projects are still based on a pragmatic, “social efficiency” model of education, which assumes that the goal is to help every learner achieve a certain level of mastery of key topics as quickly and easily as possible. This does not always reflect parallel values of academic scholarship, lifelong learning, or transformational learning that empowers learners to solve real-world problems. At a minimum, learning analytics need to be informed by pedagogical intent.
Finally, I noted a growing awareness that the nature of what we consider “education” is changing with the increasing use of AI tools in the workplace. We need to consider not only how AI tools may affect our definitions of “academic integrity” but also which knowledge and skills are appropriate to assess and what kinds of assessments are most valid. Does it make sense to use essays as a form of summative assessment when, in practice, much writing may end up being generated with AI tools? What are we really trying to assess when we ask learners to write an essay? Are there better ways of assessing that learning? Even when we do want to assess writing ability, a focus on process rather than product may be more appropriate.
On a final note, we organized two ad-hoc meetups during the conference: one on implementing learning analytics in production at scale and one on generating synthetic data. Both were well attended and generated valuable insights that we will consider in planning future developments at IntelliBoard.
Your Custom Learning Analytics Solution
Ready to use these insights to create your organization’s custom learning analytics system? Talk to our experts to learn how we can build a learning analytics solution tailored to your needs.
Ann McGuire is an experienced marketer with more than 20 years creating content, marketing communications programs, and strategies for tech firms. She reads, writes, and lives in New Haven, CT with her husband and two needy cats.
Resources
Explore Learning Analytics Insights
How Learners View Learning Analytics Impacting their Individual Success
Find out how learners view the impact of learning analytics on their performance when they can follow their course progress through live dashboards.
IntelliBoard across Campus: Bridging Your Student Success Ecosystem
Explore why deploying Learning Analytics is a strategic decision that can benefit your learners, faculty, advisors, and academic leadership.
Creating SMART Goals to Identify Corporate Training Performance Gaps
See how SMART goals (specific, measurable, achievable, relevant, and time-bound) can help you achieve your training goals.