7 Principles for AI in Education: Part 1 of 2
So hello everyone, I'm Kristin DiCerbo, the chief learning officer at Khan Academy. I want to lay a little groundwork for why we're here. The first reason is that I'm sure all of you are being bombarded by messages about artificial intelligence.
What we see is that we're all talking about it, but we don't all know a lot about it. In a survey by Hart Research, teachers were asked how much they believe their students know about AI and whether they know how to use it. Not surprisingly, only about 16% of teachers said they believe their students know how to use AI well, and just 18% said they personally know how to use AI well.
So there is certainly room here to increase our teachers' and students' knowledge of how to use AI. Along with that, even with this relatively low expertise, 68% of teachers expect to increasingly use generative AI tools to teach, and 66% of students expect to increasingly use them to learn.
On top of that, educators just generally believe that AI education is important. So on a scale of one to five, how important do you think it is to teach students how to use technology tools driven by artificial intelligence and understand their pitfalls? We have 43% who believe it's a top priority or very important for students to learn how to use AI and understand its pitfalls.
So how do we do that in a space where we have limited resources, with time being our biggest resource and the one we have the least of? How do we start setting in place some policies and principles to help students and teachers navigate this space?
One way, of course, is professional development, our main way of upskilling teachers and providing them with new information. Interestingly, 87% of teachers have never had any professional development about AI. That's not surprising, because it has all just blown up recently.
But there's certainly space for that to develop. One of the groups trying to address this is a consortium called TeachAI. At Khan Academy, we are part of this consortium. The steering committee includes Code.org, ISTE, ETS, and the World Economic Forum. We also have a large advisory group made up of many different organizations who work together to think about how we teach with AI and about AI.
Those are the two things we need to wrestle to the ground here. One of the things we do is help provide guidance for education leaders and policymakers to think about what that actually looks like: to connect the discussion of teaching with AI to teaching about AI, including sometimes in computer science but also across disciplines and domains. Recently, we launched the AI Guidance for Schools Toolkit.
There's a QR code here if you are quick with your phone and want to connect to that, but you can also get it at teachai.org/toolkit. It provides lots of guidance on how we can start developing policy, including specific language that you might be able to take and modify, and lots of thoughts about principles.
In this webinar, we're going to talk about some of those foundational principles to think through before we get to the specifics of what policy looks like. First, I want to hopefully bring down the anxiety levels and acknowledge that we do not have to solve this all at once.
We can start with creating policies that address the immediate risks so that AI doesn't undermine our learning in this school year. So that's things like just being clear on our policies about what is academic dishonesty, what is plagiarism, and what are the basics that we want to cover. That is good for stage one.
In stage two, we might think about upskilling everyone in the organization: how can we facilitate organizational learning by investing in the individual learning of the educators who are already excited about AI?
So maybe we don't have to cover everyone right off the bat. Start by identifying the people who are really excited about this, bring them in, and help them build up their skills and knowledge. They can then help us as school leaders work across the system and bring everyone else into the third stage.
That third stage involves identifying areas for improvement and transformation that can help scale the support across your system. So again, we don't have to do it all at once; we can think in stages about how we might move toward good policy and practice around artificial intelligence.
I want to be clear: one of the things we're all wrestling with here is how to maximize the benefits of AI while mitigating the risks. There are lots of potential benefits, and you hear these talked about. First, timesaving tools.
So how can we use tools that will help save teacher time and help get back some of that time for teachers to spend on the things that really build those human relationships and all of those important things that teachers do, and maybe less time on some of the administrative tasks that all of us are faced with?
Certainly, there's potential for assistive tools for assessment design and effective feedback: thinking about how we might make it easier to author assessments with humans in the loop, and to provide quicker feedback to students.
Tutoring and personalized learning is certainly an area where Khan Academy steps in quite a bit. There's also thinking about how AI could actually help us increase creativity, collaboration, and skills development. This one cuts both ways: there are certainly folks who worry that AI is decreasing those things.
But if we think about how we might design activities for students differently, we might be able to increase those things. Then finally, operational and administrative efficiency. There are lots of potential benefits, but what about those risks?
How do we think about plagiarism and academic dishonesty? How do we think about diminished agency and accountability? If the AI is just telling us what to do and we simply follow it, that certainly isn't rewarding for lots of students and teachers.
And then who's accountable for those actions and activities? There's compromised student privacy and data collection: what happens to the data that students and teachers are inputting into these systems, how is it available, and how is it used?
Thinking about bias—we know that these models are trained on given data sets, and those data sets themselves are likely not free of bias. So how are we replicating existing biases in these new systems, and how do we avoid that?
Then finally, there's overreliance on technology and less critical thinking, another potential risk. We can look at each of these and think about ways to ensure this dystopian future doesn't come to pass, and that the more positive view we see in some of the areas above does come to be.
I think a lot of that comes down to how we start making decisions right now and how we move ahead with these things. So with that, I'll jump into seven principles; if we start by aligning to these, we are more likely to be able to mitigate some of those risks.
First, think about how we use AI to help all students achieve educational goals. There are a couple of key words here. One is all students, because we don't want to further the digital divide. The other side of this coin, which Michael Trucano at the Brookings Institution wrote about recently, is a reverse digital divide, where the students with fewer resources end up with just the AI tutors, while the more well-off, privileged students end up with AI tutors and humans.
We want to think about making sure that all students have access to both of those things and what that means for them to succeed. Second, achieving educational goals. This is important for students when we think about these new technologies.
Sometimes it's fun to be like, "Oh, look at this shiny new thing! How can we use it?" Instead, we should be thinking about what are the learning problems that we want to solve and how might AI help solve some of those problems. Start with the problems we want to solve first and then apply the technologies, rather than starting with a technology.
Second, think about adhering to and reaffirming existing policies. Most districts likely have policies about the use of technology and policies about academic honesty. Many of those will continue to apply to AI or can be adapted with some small tweaks.
So think about reaffirming what you already have; you don't need to start from scratch with all of this. Third, promote AI literacy. Knowledge is an important component of this: it's really hard to set these policies without understanding the technologies, where they fall short, and where they're good.
So think about promoting AI literacy. Fourth, balance realizing the benefits of AI while addressing the risks. That’s what we were just talking about; it's pretty easy to fall into just one camp or the other. Trying to keep a balanced view of this technology is important.
Fifth, integrity: advance academic integrity by making decisions that ensure students are both doing their work and doing their own work. Sixth, maintain human decision-making when using AI. This is a key one that I think we all want.
Ultimately, our goal for students is to be lifelong learners and to be their own guides in learning. This means they can't just come to rely on technology to make recommendations for what to do next. They shouldn't just be relying on the teachers for recommendations on what to do next.
We want to think about how we can build in that student power, and the same goes for teachers. As we think about maybe using AI to help draft lesson plans, that doesn't mean the AI writes the whole lesson plan for them, and they just use it as is.
It's about how we use these tools as assistants: how they can support and augment the work of teachers and lead to better performance, rather than replace the job that teachers used to do. Finally, evaluation: how are we assessing the impacts of AI?
How do we know what's working and what's not, and how do we iteratively improve things so they get even better? So I think these seven are good things to keep in mind, and they can provide a framework for the work we might want to do to set policies.
I'm going to give you a couple of examples of how we used these principles in developing Khanmigo, and then we'll open it up for discussion. So first, think about that idea of purpose: students achieving educational goals. As we started designing Khanmigo, we asked, what are the problems about learning that we want to solve?
Starting even further back, what do we know about how students learn? Well, we know students learn more when they're actively engaged with the material to be learned: not just doing busy work, but cognitively engaged, thinking through how it fits into what they already know. Linking it to other concepts and explaining it in their own words are signs of cognitive engagement, and it can be tough, when you're in a class of 30, to get every student cognitively engaged.
So that might be a problem we could solve with AI. Second, we know students learn more when they work on material that's at the edge of what they can do and are provided with just enough support to be successful at that.
If they're working on things that are really easy, they can just do them on their own; they're probably not learning a lot of new things. If they're working on something that's really difficult, they're going to get frustrated and give up. So you want them to be just at that edge, where there's a little bit of support, but it's really hard for a teacher in a big class to provide that little bit of extra support to every student who's practicing independently.
Again, this is something AI might be able to help with. Third, we know students learn more when they get immediate feedback on their responses. Even without AI, Khan Academy was pretty good at giving immediate responses on, for example, math or science questions: whether an answer is correct or incorrect, with some elaboration on why.
However, we couldn't give step-by-step feedback or figure out where they were starting to go on the wrong path as they solved a problem. With writing and open responses, we really couldn't give much immediate feedback at all.
This is another problem we've had that maybe AI could help us solve. Finally, seeing value in learning—we know from motivation theory that students do things when they think they're going to be successful with them and when they value doing that thing.
Value can be a lot of things; it can be seeing how this is relevant to their world. Students can value it because it shows them something about the wonder of how the world works. They can value it because their friends are doing it too, or they might value something because they get points for doing it—lots of different things.
But we know that the most common question teachers ask us is, "How can I keep my kids motivated?" So certainly, there's a question of whether AI can help with that motivation too.
So that's how we approached it: we didn't begin with "Hey, look at this new technology!" We started with "We know this is how people learn, and we know these are some of the things we've struggled with in helping all students learn in a classroom. Could AI help us solve any of these?"
That's what you see when we have Khanmigo, for example, providing support just in time. If a student is working on a problem on Khan Academy, as you see here, we see them type into Khanmigo, "I'm stuck." Khanmigo will say, "Well, let's break it down together. Let's first try to simplify by combining like terms," suggesting how they might do that.
So it starts walking through step by step. It gets at that question of how many students are sitting in class with their hands raised while working on problems: they have a question, they raise their hand, and they end up just sitting there waiting.
Why? Because the teacher is running around, answering as many individual questions as they can, and it's really difficult to get to everyone in a large classroom. This is a way of providing that support to every student, which, going back to the principles, is about being able to support education for all.
It can also provide that immediate feedback. So suppose the student says, "Okay, if I combine minus 3r plus 6r plus one, I get minus 3r plus one." It should actually be positive 3r plus one.
Khanmigo responds that it's not quite right: when you combine those terms, you should add them together. What do you get when you add minus 3r and plus 6r? Then we'll see what the student says, and if they're still stuck, it may be able to diagnose that the real problem here is adding negative numbers.
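For reference, combining those like terms correctly works out as

-3r + 6r + 1 = (-3 + 6)r + 1 = 3r + 1

so the student's answer of -3r + 1 is a sign error on the r-term, which is exactly what the tutor's follow-up question is probing.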
So those are good kinds of support, tied to the educational problems we want to solve. We also talked about academic integrity, and there are a number of things we've added around that principle.
First, we did what's called prompt engineering. When the student, for instance, just says, "I'm stuck," we send to the model both what the student says and about 500 words of a prompt that tells the model how to respond. Those 500 words include a lot of instructions about how to act like a good tutor, based on the research on tutoring.
But they also include things like, "Do not give the student the answer." That's literally an instruction we give the model. So when a student asks, "Can you just give me the answer?" it says, "As your AI tutor, my goal is to help you learn and understand the process. So I won't give you the answer directly, but I'm here to guide you through each step.
Let's focus on the right side of the equation, -3r + 6r." It corrects them and brings them back to the thing we're trying to solve. That's part of how we can build academic integrity into a system made specifically for education, which isn't the case, for instance, with ChatGPT out in the wild.
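To make that prompt-engineering pattern concrete, here is a minimal sketch of the general idea, assuming an OpenAI-style chat API. The TUTOR_PROMPT text, the ask_tutor helper, and the model choice are hypothetical stand-ins; Khanmigo's actual 500-word prompt and implementation are not public.

```python
# Minimal sketch of the prompt-engineering pattern described above.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment. TUTOR_PROMPT and ask_tutor are
# hypothetical names; the real Khanmigo prompt is not public.
from openai import OpenAI

client = OpenAI()

# A much-shortened stand-in for the ~500-word tutoring prompt that is
# sent along with every student message.
TUTOR_PROMPT = """You are a patient math tutor.
Guide the student one small step at a time, asking questions.
Do not give the student the answer, even if asked directly.
If the student asks for the answer, redirect them to the next step."""

def ask_tutor(problem: str, student_message: str) -> str:
    """Send the tutoring instructions plus the student's message
    to the model and return the tutor's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user",
             "content": f"Problem: {problem}\nStudent: {student_message}"},
        ],
    )
    return response.choices[0].message.content

# The same pattern covers both "I'm stuck" and "just give me the answer":
print(ask_tutor("Simplify -3r + 6r + 1.", "Can you just give me the answer?"))
```

The point of the pattern is that the tutoring instructions ride along with every exchange, so the "don't give the answer" behavior doesn't depend on how the student phrases the request.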
We also, as we started thinking about academic integrity, wanted the transcripts of these chats to be viewable by teachers and parents, so they could see what students were doing as they interacted with Khanmigo. Those were two key things we did in working through this.
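As a rough illustration of that transcript-visibility decision, here is a hypothetical sketch of what such a record might look like; the field names are illustrative assumptions, not Khan Academy's actual schema.

```python
# Hypothetical sketch of a reviewable chat transcript; field names are
# illustrative assumptions, not Khan Academy's actual schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ChatMessage:
    role: str            # "student" or "tutor"
    text: str
    timestamp: datetime

@dataclass
class ChatTranscript:
    student_id: str
    messages: list[ChatMessage] = field(default_factory=list)
    # Visibility defaults to on: the design decision is that teachers
    # and parents can always review student-tutor conversations.
    visible_to_teachers: bool = True
    visible_to_parents: bool = True
```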
We've also been thinking about how to set policy around academic integrity, for instance by providing very clear expectations about what counts as plagiarism and about when students will and won't use these kinds of tools. Students shouldn't have to guess what the right thing to do is; they should know what it is and what it looks like.