
What Can Artificial Intelligence Do for Us in 2024?

January 7, 2024

By: Elizabeth Dalton

Reading Time: 15 minutes

AI and Learning Analytics

I’ve been in educational technology for over 30 years. As a Learning and Development Analyst at IntelliBoard, I work with our team to gather data from the many kinds of technology that learners connect with, primarily Learning Management Systems, Student Information Systems, and web conferencing systems, bring it to one place, and then focus, manipulate, and interpret that data to make predictions about learner success. Lately, I’ve been spending a lot of time experimenting, reading, and thinking about how emerging AI tools can help us understand what learners are doing and how learners are affected by their learning environments and by the people they are learning with. Educational technology tools generate massive quantities of data. I think AI can provide critical value in interpreting that mass of data and using it for human purposes: helping us reach our learning goals and making quality learning more available to everyone. So, what can artificial intelligence do for us in 2024?

I was honored to participate in a panel on AI and educational technology at the Global MoodleMoot 2023 in Barcelona (1). This post summarizes my responses to the thoughtful questions that were asked there.

Are our educational institutions equipped or prepared to address potential AI bias?

Our educational institutions struggle to address bias even without AI. Adding AI without evaluating the data we train our models on will only further entrench those biases (2). However, there are steps we can take to mitigate this, and one of them is to use AI tools themselves to examine and evaluate bias.

Our models are trained on massive amounts of data, but one problem with many AI tools right now is that they are trained on what researchers call “WEIRD” data (Western, Educated, Industrialized, Rich, and Democratic (3)), which isn’t representative of all, or even most, learners. We need to include more data sources and more voices; then we will be in a better position to examine bias.

More data is needed!

There has already been some very interesting research about how we can do this. One insightful study I’ve seen was presented at LAK 19, “Evaluating the Fairness of Predictive Student Models Through Slicing Analysis” (4). Machine learning models are typically evaluated in part using a graph called the ROC, or “receiver operating characteristic,” curve. This graph compares predicted outcomes and actual outcomes across classification thresholds. A diagonal line represents a model that predicts at the rate of random chance, and a good model’s curve will pull far away from that diagonal. The “area under the curve” (AUC) is one of the key values used to evaluate how accurate a model is. In this study, the researchers suggested slicing the data by demographics and asking: “Does the model predict equally accurately for all subpopulations?” They propose a new metric called the Absolute Between-ROC Area (ABROCA), describe how both algorithms and feature sets can affect this metric, and, significantly, show that it is possible to improve both fairness and accuracy by adjusting the features and algorithms in a model.
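For readers who want to see the shape of this idea in code, here is a minimal sketch of a slicing analysis in Python. The function, toy data, and variable names are my own illustration under simplifying assumptions (binary outcomes, scores in [0, 1], exactly two demographic groups); this is not code from the study.

```python
# A minimal ABROCA-style slicing analysis (illustrative sketch).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def abroca(y_true, y_score, group):
    """Absolute Between-ROC Area between two demographic slices."""
    values = np.unique(group)
    assert len(values) == 2, "this sketch handles exactly two groups"
    fpr_grid = np.linspace(0.0, 1.0, 1001)  # common x-axis for both curves
    tprs = []
    for v in values:
        mask = group == v
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        tprs.append(np.interp(fpr_grid, fpr, tpr))  # align the two ROC curves
    # Integrate the absolute gap between the curves over false positive rate.
    return np.trapz(np.abs(tprs[0] - tprs[1]), fpr_grid)

# Toy data: the score carries a weaker signal for group "b",
# so the model is less accurate for that slice.
rng = np.random.default_rng(0)
n = 2000
group = rng.choice(np.array(["a", "b"]), size=n)
y = rng.integers(0, 2, size=n)
noise_scale = np.where(group == "a", 0.5, 1.5)
score = 1.0 / (1.0 + np.exp(-(2.0 * (y - 0.5) + noise_scale * rng.normal(size=n))))

print("overall AUC:", round(roc_auc_score(y, score), 3))
for v in ("a", "b"):
    mask = group == v
    print(f"AUC for group {v}:", round(roc_auc_score(y[mask], score[mask]), 3))
print("ABROCA:", round(abroca(y, score, group), 3))
```

The point the sketch makes is the one the study makes: a respectable overall AUC can hide a large per-group gap, and ABROCA quantifies that gap directly so it can be tracked alongside accuracy.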

Is it ethical to include demographic data?

Our clients often express concerns in our discussions about whether it is ethical to include demographic data, such as race, gender, or socioeconomic status, in predictive learning analytics. Learners, instructors, and institutions can’t change that demographic data, so is it fair to use it to make predictions about learner success? I think we must include that data in our analyses, for two reasons. First, we need to acknowledge that these factors can have predictive value even if learners can’t change them; without accurate models that quantify them, we can’t separate out the strongest factors that learners can affect, and so we can’t make useful recommendations. Second, we must be able to analyze our AI systems using demographic data, or we will not detect bias. We can have a goal of being colorblind and wealth-blind in our day-to-day lives or our written policies, but if we’re going to get bias out of our systems, or at least acknowledge where our systems are biased, we must start by taking a good hard look and testing our systems against those biases.

What do you think the impact of new regulations in the EU and elsewhere might be on the education industry?

I don’t think we, meaning educational technologists, can wait for various legislative bodies to pass regulations. We need to be the ones driving those regulations. By “we”, I mean members of the educational technology community — instructors, administrators, and researchers who attend conferences and read blog posts about educational technology. We know how humans learn. We are passionate about technology. If you are reading this post, however little you personally may think you know about AI right now, you likely know more than most of the people that you will walk past on the street today (and more than many of those involved in creating legislation). We need to be the community that starts talking about this with each other and with our students who, whether we wait or not, are going to be out there doing things with AI. The educational community needs to be leading the legislative effort.

How do you think AI will have the greatest positive impact on education?

There are so many possible impacts, and this field is changing very quickly, sometimes in very surprising ways. Sometimes it feels like we’re in a rocket ship speeding through space, and we don’t even know what planet we’re headed towards! But let’s consider a couple of positive impacts that are, I think, related.

We’ve heard about the idea that ChatGPT, or another large language model, can take notes written by a student (or even extracted from a transcript or summarized from a digital reading), write a reflective paper, even include critical reflection in that paper, and then the student might hand that work in as their own to fulfill a course requirement. Obviously, this would reduce the student’s learning; the process of review and reflection would be short-circuited. But what if the student uses that toolset, reads the results, and says, “Yeah, I agree with most of this, but I would change this bit”? In that case, the student could use AI as a learning scaffold. At the MoodleMoot, several people talked about Vygotsky and the “zone of proximal development,” a long, complicated phrase that essentially means that learning takes place best when the task is hard, but not too hard (5). There are a lot of things that new learners try to do that seem difficult or even intimidating to them, and it’s easy for new learners to get discouraged with a subject. One of the ways I think AI could provide a learning advantage is by giving learners the ability to start something they thought was too hard.

Reflect on what we expect students to learn

Of course, this relies on learners who are intrinsically motivated to learn. We may find that we need to justify our assignments to our learners and explain why whatever we’re asking them to learn is worthwhile, to help encourage that intrinsic motivation. Possibly, we should take this as an opportunity to reflect on what we expect students to learn, whether in primary or secondary school, higher education, or the workplace, and on the most appropriate ways to assess that learning. Memorizing facts may not be a useful skill in a world where online searches can find facts very quickly; the ability to distinguish high-quality sources of those facts might be more important. Learning to write summaries of long articles might not be a valuable skill in a world where an AI tool can do that very rapidly; we might focus instead on the ability to check such a summary and confirm that it has captured the essentials, or to review multiple summaries to choose which full-length text to read in depth. We might need to ask ourselves what other learning purposes assignments like summarization were intended to serve and whether there is a better way of assessing that learning. New York Times columnist Frank Bruni, in a defense of fluent writing, suggests, “Writing is thinking, but it’s thinking slowed down — stilled — to a point where dimensions and nuances otherwise invisible to you appear” (6). If we want to convince our students of the value of this writing, we may need to work harder to ensure that our assignments actually require thinking, rather than restating the known in a “disposable” construction with no value beyond a class grade.

Generative AI

On the other hand, I think generative AI tools may help give voice to learners who have struggled until now to be heard. Many learners, in my experience, are very frustrated with trying to communicate their own ideas, either because they haven’t had a chance to fully develop communication skills or because they face physical, developmental, or technical obstacles to creating the polished communications that draw attention to their ideas. Generative AI can help amplify the voices of these learners and members of our community. Seeing their own ideas expressed in modes that are effective in different social registers may help to increase their fluency in those registers.

Over the past year, there seems to have been an endless stream of articles and commentary voicing the fear that the advent of generative AI will be the end of human creativity. I don’t believe this is the case. A significant study that came out a couple of months ago (7) showed that if you train a large language model like ChatGPT on content that other large language models generated, the quality plummets. This is called “model collapse.” You need human beings to feed the AI original content, at least at this point, and I think that is going to be true for a long time. (Ideally, the institutions benefiting financially from those models should compensate those who generated that content; content creators have started to fight back against uncompensated use with related technologies (8).) What these models are doing is quite similar in many ways to what humans already do. We all learn from each other and borrow ideas all the time, and we rely on individual creativity to inspire us and spark new ideas and new communications. At this point, AI systems are not generating new ideas or even surprising connections between existing ideas. They combine patterns derived from existing works, sometimes in usable and interesting ways, but not in ways that are truly original or insightful.
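One toy way to see the mechanism behind model collapse, under a big simplifying assumption of my own (a Gaussian distribution standing in for a generative model), is to fit the model to data, sample from the fit, refit on those samples, and repeat. This is an analogy, not the study’s LLM experiment:

```python
# Toy model collapse: a Gaussian "model" is repeatedly refit on samples
# drawn from its own previous fit instead of from the real data.
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # the "real" human-generated distribution
n_samples = 50         # finite batch of "content" per generation

for generation in range(1, 501):
    data = rng.normal(mu, sigma, n_samples)  # content from the current model
    mu, sigma = data.mean(), data.std()      # retrain on generated content
    if generation % 100 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# Estimation error compounds across generations: the fitted spread tends
# to drift toward zero, so the tails (the rare, surprising content)
# disappear first, and the "model" eventually degenerates.
```

Fresh human-created data is what keeps re-injecting those tails, which is exactly the human role described above.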

AI and humans

Finally, in my experience, an AI may generate many documents or images before you get one that’s worth sharing. As humans, we are the curators and the editors, the ones who sift through that output and find something interesting that speaks to us and helps us speak to someone else. There’s still a very strong human role in using these tools, and that will continue to be the human role in education: helping our students write good prompts, sift through the output, recognize what’s worthwhile, curate and improve it, and share those ideas (those human ideas) back into the system so that other people can find them and react to them, the way readers are reacting to this post. This cycle is part of the educational process.

AI tools require more power than web searches have required in the past. What do you think of that?

This is something that keeps me up at night. Estimates suggest AI tools use something on the order of ten times the electricity of previous tools like web searches (9). Recently, I used Midjourney to generate a number of images for a project, and I was very conscious of the amount of energy I was using to satisfy my desire to get these images out of my head and into a form I could show to somebody else. At the same time, we are learning how to make these processes more efficient, e.g., by scheduling them to run during low-demand times (10), and algorithms for many computational tasks are getting more efficient over time (11). We are also working on more sustainable ways to generate electricity. Every time we take something that humans do and try to automate it, that automation uses energy that has to be generated from some source. This is an equation we ultimately have to balance: how much technology and automation can we afford, as a species and as a culture, while maintaining a livable, sustainable world? Which things do we want to spend our energy budget on? Heating and cooling homes? Shipping products around the world from less expensive sources? Corralling massive quantities of data to try to understand what’s going on in our world? Currently, we rely primarily on the “free market” to make these decisions, but the prices we pay as individuals don’t necessarily incorporate all the true costs of goods, especially energy costs. Using energy responsibly for AI is part of a larger conversation about how we use energy in general.

Is AI more of a threat to those just starting their careers than to those who are more established? How can we mitigate that?

The AI tools that are currently emerging do affect less experienced people early in their careers more than those who have developed expertise and advanced skills. A lot of routine tasks will probably be completed by AI in the future, including simple writing, illustration, audio and video generation, and coding. This has happened throughout history: there was a time when it was possible to make a good professional living weaving cloth by hand. The advent of mechanized looms put a lot of people out of work, and there were protests and raids to destroy weaving machinery in the early 1800s; that’s where the term “Luddite” originated (12). Today, there are still some professional weavers, but more as artists, and it takes unusual skill and luck to earn a living wage at it in developed countries; most people buy less expensive machine-woven goods for everyday use, and more people pursue weaving as a hobby than as a profession (I am one of them). The availability of less expensive textiles has contributed to a general increase in the standard of living in developed societies: more people can now afford to own enough garments and bed linens to clean one set while using the other.

Don’t leave anybody behind

Perhaps the overall effect of generative AI tools will be to raise the standard of living for the general population in some comparable way. Optimists assure us that as AI replaces some jobs, new jobs will open up. The long-term effects of “technological unemployment” (obsolescence) are contentious among modern economists (13), and I can’t predict whether generative AI tools will ultimately destroy more jobs than they create. Quite a few existing professions are likely to become hobbies rather than livelihoods for all but the most gifted, original creators. What I do believe is that it will be up to us as a community to make sure that any benefits of this shift are not tightly concentrated in a small group and that we don’t leave anybody behind: that nobody goes hungry or unhoused, and that people still have the opportunity to contribute constructively and to feel that they have meaningful work and a meaningful life. The jobs themselves will doubtless change. It’s up to us as humans to make sure that we’re still humans and we’re still taking care of each other.

Could ChatGPT Write This Blog Post?

As a final note, when I accepted the task of writing this post based on my panel participation, I tried taking the transcript of my comments and asking ChatGPT 3.5 to convert it to a blog post. I considered the results unusable, both in content and in tone. Some things are still better done manually by a human.

References

(1) Keynote Panel – Artificial Intelligence Discussion | Hosted by Brett Dalton; Barcelona, Catalonia, Spain, 2023. https://www.youtube.com/watch?v=c0updtaWlg8 (accessed 2023-12-21).

(2) O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown, 2017.

(3) Henrich, J.; Heine, S. J.; Norenzayan, A. The Weirdest People in the World? Behav Brain Sci 2010, 33 (2–3), 61–83; discussion 83–135. https://doi.org/10.1017/S0140525X0999152X.

(4) Gardner, J.; Brooks, C.; Baker, R. Evaluating the Fairness of Predictive Student Models Through Slicing Analysis. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge – LAK19; ACM Press: Tempe, AZ, USA, 2019; pp 225–234. https://doi.org/10.1145/3303772.3303791.

(5) Vygotsky, L. S. Mind in Society: The Development of Higher Psychological Processes; Cole, M., John-Steiner, V., Scribner, S., Souberman, E., Eds.; Harvard University Press: Cambridge, MA USA, 1978.

(6) Bruni, F. Our Semicolons, Ourselves. The New York Times. December 21, 2023. https://www.nytimes.com/2023/12/21/opinion/chatgpt-artificial-intelligence-writing.html (accessed 2023-12-22).

(7) Shumailov, I.; Shumaylov, Z.; Zhao, Y.; Gal, Y.; Papernot, N.; Anderson, R. The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv May 31, 2023. http://arxiv.org/abs/2305.17493 (accessed 2023-12-18).

(8) Shan, S.; Cryan, J.; Wenger, E.; Zheng, H.; Hanocka, R.; Zhao, B. Y. Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models. arXiv August 3, 2023. https://doi.org/10.48550/arXiv.2302.04222.

(9) Leffer, L. The AI Boom Could Use a Shocking Amount of Electricity. Scientific American. https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/ (accessed 2023-12-19).

(10) Xu, T. These simple changes can make AI research much more energy efficient. MIT Technology Review. https://www.technologyreview.com/2022/07/06/1055458/ai-research-emissions-energy-efficient/ (accessed 2023-12-19).

(11) Wendl, M.; Doan, M. H.; Sassen, R. The Environmental Impact of Cryptocurrencies Using Proof of Work and Proof of Stake Consensus Algorithms: A Systematic Review. Journal of Environmental Management 2023, 326, 116530. https://doi.org/10.1016/j.jenvman.2022.116530.

(12) Luddite. Wikipedia; 2023.

(13) Technological Unemployment. Wikipedia; 2023.



Elizabeth Dalton

Elizabeth Dalton is a Learning and Development Analyst for IntelliBoard, Inc., where she helps inform IntelliBoard development with research in teaching and learning. She holds an M.Ed. in Educational Media and Technology from Boston University and is a Doctoral Candidate in Education at the University of New Hampshire, specializing in predictive learning analytics. She has worked in online learning as an instructional designer, instructor, learner, and system administrator for over twenty years. Her work emphasizes interpreting data using human values.
