May 9, 2025

Episode 323: Jen Manly

AI Ethics, Overreliance & Honest Talk with Jen Manly

In this episode of My EdTech Life , Fonz sits down with returning guest Jen Manly , a computer science educator, TikTok powerhouse , and advocate for ethical tech use, to unpack the complex relationship between AI, teaching, and critical thinking. From data privacy concerns to AI detectors that fail our students, this conversation gets real about what’s hype, what’s helpful, and what needs more scrutiny. Whether you're cautiously curious or deep in the AI trenches, this episode offers clarity, nuance, and practical insight from a seasoned voice.

👏 Huge thanks to our sponsors:
🛠️ Book Creator – Amplify student voice
📚 Eduaide.AI – AI-powered teaching assistant
🌐 Yellowdig – Connect learning communities

📌 Timestamps:
00:00 – Welcome & Sponsor Shoutouts
02:00 – Meet Jen Manly: Her work, mission, and CS journey
04:30 – Teaching AI ethics before it was trendy
05:45 – The ChatGPT launch and why mass acceptance raised red flags
08:15 – AI “Fight Club” and the divide in EdTech reactions
11:30 – Is AI really saving time for teachers?
13:45 – Rubrics, productivity hacks, and the danger of using AI for everything
17:10 – Platform overpromises and pricing concerns
19:00 – Environmental cost of AI and hidden labor
22:00 – How AI systems encode and amplify bias
25:45 – AI grading? Why Jen draws the line
27:30 – Why AI detectors (like Turnitin) miss the mark
32:00 – Building assessments that don’t benefit from AI use
36:00 – The illusion of "just Google it" vs thoughtful search
37:00 – What real AI literacy looks like
39:30 – Age-appropriate and ethical integration of AI tools
42:00 – Tech ≠ Learning: The unplugged approach
44:00 – How to teach agency without tech overload
47:00 – Final thoughts, reminders, and calls for caution
49:00 – Jen’s “AI Kryptonite” and billboard message
51:00 – Wrap-up & CTA

🔗 Connect with Jen Manly:
🎥 TikTok: @strategicclassroom
🌐 Website: https://learningandteachingblog.com/

🎧 Catch more episodes and resources at:
🌍 www.myedtech.life

📢 Don’t forget to like, comment, subscribe, and share!
Your support helps amplify educator voices and build a more informed and thoughtful EdTech community.

Authentic engagement, inclusion, and learning across the curriculum for ALL your students. Teachers love Book Creator.

Yellowdig is transforming higher education by building online communities that drive engagement and collaboration. My EdTech Life is proud to partner with Yellowdig to amplify its mission.

See how Yellowdig can revolutionize your campus—visit Yellowdig.co today!

Support the show

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at [email protected]. ✨

 

00:30 - Welcome Back and Introduction

04:47 - Teaching AI Ethics Before ChatGPT

08:43 - Initial Reactions to ChatGPT's Release

18:22 - AI for Teacher Productivity: Help or Hindrance?

31:47 - Environmental and Hidden Labor Costs

40:14 - AI Plagiarism Detection Problems

47:13 - Redefining AI Literacy for Students

52:14 - Maintaining Critical Thinking in the AI Era

Fonz Mendoza: 

Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day and, wherever it is that you're joining us from around the world, thank you, as always, for all of your support. We appreciate all the likes, the shares, the follows. Thank you so much for interacting with our content. As you know, we do what we do for you, to bring you some amazing conversations so our education space can continue to grow and we can continue to amplify many voices and many perspectives. So I wanna give a big shout out to our sponsors. I wanna give a shout out to Book Creator, thank you so much for your support, and to Eduaide and Yellowdig as well, for believing in our mission of bringing you these conversations week in and week out. So thank you for all that you do.

Fonz Mendoza: 

And today I'm very excited to have a two-time guest. And you may be saying well, Fonz, you already had a four-time guest, you've got two-time guests, and so on. Well, it's because that's the way the show works. It's like sometimes, you know, I want to catch up with my previous guests, and especially now in the age of AI in education, and I want to get their perspectives so they can bring in their expertise and, just you know, amplify their voices and give them a platform to share their knowledge and their practices and their perspectives. So I would love to welcome to the show Jen Manly. Thank you, Jen, for joining me here today. How are you this evening? I'm great.

Jen Manly: 

I'm excited. I know you said two-time guest, but we're having a really different conversation, so I think it's going to be great.

Fonz Mendoza: 

Yes, absolutely, and a very different conversation, for sure. But, Jen, for our guests that are watching right now, or listeners that may not be familiar with the first show that we did, as far as that topic is concerned, can you give us a brief introduction of what your context is in the education space?

Jen Manly: 

Yeah. So my name is Jen Manly. I have been in education for, I don't know, 13 years now, something like that. It feels like it has not been that long. I started as a middle school computer science teacher, I taught high school computer science, and now I teach a course at the University of Maryland every semester. The current course I'm teaching is Gender, Race and Computing, so it's a really interesting class.

Jen Manly: 

My context in education: I create content to help teachers work less without sacrificing their effectiveness. I believe in keeping great teachers in the profession, and the way that I kind of attack that problem is by thinking about how we can apply productivity science to the work that we do, and then also, like, setting boundaries around our time and viewing teachers as professionals. But my context for this episode, and what we're going to be talking about, is that I have been teaching about the ethics of AI as a computer science teacher. I've taught it to middle school, high school, and now college students for the last seven or eight years, since 2018. So about four years before ChatGPT was released to the general public. And something I'm really passionate about is helping educators use AI and view AI from, I would say, an ethical lens, but really more being critical about when we're using it, understanding that it is a tool, but also that it's not a net neutral tool. So I'm excited to talk with you about it today.

Fonz Mendoza: 

Yeah, and I'm really excited about it too, because one of the things we were talking about just prior to recording the show is a particular post of yours that really stuck out, and I know that we'll get into it, because, just like you mentioned right now, you definitely share a lot of great content on how to be very critical of AI, when to use it and when it has its place, and maybe when you could just do a basic Google search, you know, to do something like that, as far as research or finding something out. And so we'll get into that. But I want to ask you: I know that you've been doing this since 2018, as far as teaching computer science and the ethics of AI, working with middle school, high school and then, of course, higher ed. Let's go back to November of 2022, when the news broke: hey, ChatGPT is available. What was your initial reaction and your initial thoughts as you heard the news and, of course, this was being released?

Jen Manly: 

Yeah, so I would say my initial thoughts... well, I guess I can't, like, pinpoint exactly when it was released, but I want to look at, like, the first two months right after. I was really concerned. And the reason that I was really concerned was because it came out and there was this mass accepting of it, especially in the education space, without any context or consideration of a lot of the critical components of AI that we in the computer science space have been talking about for years. And I think back to, you know, maybe like December of 2022, January of 2023, and I'm watching ed tech experts, people who have been in ed tech for a very long time, you know, starting to come out and publish books on using AI in education. And I just remember feeling like: I have been teaching this for four years at this point, and I would not consider myself an expert. You know, at the time... now I feel pretty confident in my own prompt engineering, right, but at that time I was not using it and programming it in the way that, you know, all of these experts at Google, Amazon, like, people in the tech space, had been using AI for much longer than that.

Jen Manly: 

Lots of people had been very critical, and so it was surprising, and also concerning, when ChatGPT was released publicly, that in the education space particularly it came off as a mass acceptance. And to me it was surprising because I don't feel like that is the energy that we have for most things in education. Like, most things in education take us time to fully accept, right? Like, I think about: I was on a national curriculum writing team. We wrote that curriculum for a year. We then piloted it for a year with teachers in a classroom before we released an updated version of that curriculum, you know, for the masses. And so I think it was a mix of concern and surprise, and then also understanding that this is something our students immediately had access to, and so: what are you doing to make sure that students are still understanding how to use it responsibly, and that it's not detracting from their overall education experience? I know that was a lot.

Fonz Mendoza: 

No, no, no, it was actually perfect, because those are actually very similar views, and I don't know, it's very weird. I had Rob Nelson on the show a couple of episodes back, and we were talking a little bit about the AI Fight Club, where you've got really two sides. You've got those that are all in and gung-ho, and then some that are a little bit more, you know, cautious. Well, maybe I consider myself more kind of trying to be in the middle, obviously, because I love to bring both sides of the conversation to the table. But then there's also the other side where it's like: no, no, no, we're going too fast, which is something that I truly believe in too, that this whole move fast and break things doesn't really work.

Fonz Mendoza: 

And you know, we've seen a lot of things that have kind of failed.

Fonz Mendoza: 

We've seen some things that, you know, show some promise, but at the same time, it's like: who's really to decide that, yes, this is going to be very effective, that this is going to be the solution to education's problems? Because I always go back and I always say: well, you know, there really isn't anything new under the sun. I remember people feeling the same way about the Internet, people feeling the same way about iPads in the classroom, and then Chromebooks, and these were going to be the things that were going to, you know, drive up test scores and revolutionize education. And it just seems like, yes, at the very beginning there was a huge acceptance, which was very scary, because everybody just started jumping in and diving in and not being very cautious about the other side. I always focus on the data privacy side. I always focus on: do parents know? I know that parents may be familiar with this, but do parents know that this is being used in schools?

Fonz Mendoza: 

And this needs to go past just the tech form that you sign at the beginning of the year. And at a time when we're dealing with burnout, dealing with poor teacher retention, everybody jumps on the boat saying: yep, this is going to be it. This is going to personalize the learning, this is going to figure out exactly what is wrong with each student and give them exactly what they need to be able to, you know, succeed. And that's really what we as teachers want, for our students to succeed, but at what cost? And is it really going to be effective? And so I know that in the last conversation we had, we were talking about how teachers can, you know, fight burnout, and you talked about agile. You know, what is it? Agile? What's the word? It's okay, because we can cut this. Yeah, what is it?

Jen Manly: 

It's called Scrum, but it's like project management. Yeah, okay, perfect, let me go back into that.

Fonz Mendoza: 

I know in our previous episode we talked about, you know, project management, kind of like Scrum masters, you know, using agile to do that. And so what I see here is people that are really saying: hey, we can just easily put a student on a computer, it's going to figure out exactly what they need, and we can all take it from there, and that's it. And then, of course, now we've got the other side, like: no, no, we need that human connection. We need all of this human connectivity, as far as making sure that the teacher is still present, they're active, they're engaging students. So we're seeing so many different perspectives.

Fonz Mendoza: 

But I want to ask you because I know the previous show we talked about you know project management and it just seems like, hey, this product it's going to help you just manage your classroom and it's going to manage your workload, and you really just come in and you need a worksheet. Hey, just prompt it and I can make you a thousand worksheets in 30 seconds. So I want to ask you about that. Now, in your perspective and in your classroom experience, and maybe what you have seen, you know both at the middle school, high school and even higher ed level what is it that you're seeing now as far as teachers' acceptance of this? Is it really making their lives a lot easier? Is it really making them productive?

Jen Manly: 

Yeah, I think it's a really good question, and I actually spoke about this at ITEC Iowa in October of last year, and one of the things that I think is a good first question for teachers is: is this actually going to make you faster? Right? Like, there are certain tasks that teachers are outsourcing to AI where it is faster to outsource it to AI, and the product that AI is giving you is something that is going to be, you know, usable in your classroom, that's going to make sense for your students. So a good example of this is, like, creating a rubric. Right? You have created an assignment. You want it to create a rubric for you that you're going to use to assess students. That is a very easy task for a Claude or ChatGPT, right, and we're not dealing with data privacy. So that might be a situation where a teacher... you know, I think about.

Jen Manly: 

When I was teaching middle school, I had four preps, and none of them had anybody else teaching them, right? So I had 230 students, four unique preps, two of them didn't have curriculum, and I was a first- and second-year teacher. That was all on my own, because there were only three other teachers in the entire district that were teaching any of the classes that I was teaching, right? And I think back to where I was then: being able to outsource creating the rubric would have actually been extremely helpful. Right? Being able to outsource potentially a worksheet, right? Like, I want it to make me a note catcher. That is something that is very helpful and is much quicker than how I could do it for myself.

Jen Manly: 

The challenge is that a lot of teachers, I guess I shouldn't say a lot, some teachers, are looking at AI and they're saying: well, if it can help me with this thing, then I want to use it for everything. And this is, I think, the post that you saw from me, right, where I was saying: don't use AI for things that you can Google. And there's lots of reasons for that, but to me, the number one reason is it's actually less efficient. Like, if you're trying to save time, but maybe you're not great at prompt engineering, you're going to go back and forth with ChatGPT or Claude or whatever, you know, system you're going to use. Let's actually be critical of how much time using this tool is taking us. And maybe there are things, and certainly for teachers.

Jen Manly: 

There are a lot of things that you can do faster yourself, right? You can do it faster because you already have a template that you've used for other units and you're just going to reuse it, right? And so I think, like, that's number one: there are absolutely things that AI can help teachers be more productive with, help teachers be more efficient, do more quickly. And then there are other things where it's less efficient to use AI, where it's not giving you the output that you want, it's not creating output that's friendly for kids, or, like you talked about, and this is actually a big concern of mine, it creates, you know, that over-reliance on it.

Fonz Mendoza: 

You mentioned something like: you know, if I can use it for this, well then that means I can use it for this and this and this. And, you know, essentially the way that I saw AI, playing around with it even before the release in 2022, with, you know, Writer, Jarvis, and other programs and so on, to me they seemed more like, as you mentioned, productivity tools, tools that can help you do some copy, you know, make it a little bit faster, and so on, especially if you're doing marketing. But it just seems like there's always something in education where we kind of try to put those square pegs in the round holes of education and just make them fit, like we've just got to do it, because if it helps out on the outside, it's definitely going to help me out with everything here. And I do agree with you that there are some things that it will definitely make a lot easier, like some things that you may need to translate. Especially with the translation portion, I believe that it is definitely very helpful. Or, I know I had Paul Matthews here on the show, stating just different reading levels or Lexile levels with the same passages and things of that sort. So those things. But to want to do everything with it can be a lot more of a waste of time, especially, like you mentioned, if you are not putting in the right input to give you the output that you want. You can spend too many minutes or too many hours just trying to get certain things. But then I know that there are platforms that already pre-prompt for you, per se, and they have a plethora of tools that are there, where you just click and say: hey, I need a rubric, and I just tell it: this is what it's going to do, and then it's going to go ahead and produce it for me. And of course it's not going to be for free; most of these things have the freemium model, and so those are some of the things too.

Fonz Mendoza: 

Talking about the adoption within many districts is something that also kind of worries me and concerns me, because with mass adoption, and not knowing really where the industry may be going, there's that uncertainty: as ChatGPT continues to change and to grow, it definitely wants to be more profitable.

Fonz Mendoza: 

So then, for those platforms that connect to those APIs, those prices start going up too. And then, of course, accessibility for teachers, for districts, for schools: you know, now you've got upper-echelon districts that have access to this, and then, of course, smaller districts may be out of luck. But also the changes in that information, you know, as far as the knowledge cutoff date, like we were talking about. So going to Google and maybe finding something that is, you know, right on par with 2025, the news, or something that's most recent, makes more sense than to just say: hey, type it into Claude and type it into ChatGPT and tell me what it gives you. And even some of these platforms, in their privacy policy or terms of service, will say: knowledge cutoff date 2023. And I believe it was July 2023 for a lot of them. So we're not actually getting, you know... or when they say they can search the web, is it really

Jen Manly: 

searching the web.

Fonz Mendoza: 

Yeah. So I want to ask you now, on that, because I know you talk a little bit about the ethics and the data privacy with your students. As far as some of these pitfalls, how is it that you address this with your students as you're teaching them about AI?

Jen Manly: 

Yeah, totally. So, okay, I guess I'll go through how I actually teach this with students. So I'm really fortunate that I now teach at a university that allows us to have these types of conversations, right, these critical tech conversations. So the number one way, the place I start from, is that AI is not a net neutral tool, right? Like, I think sometimes we look at it especially through the lens of: well, it's a computer that's making decisions, and so it's neutral, because computers can't be biased. And it's like: well, the people who program them have implicit biases that they may not even be aware of, right? And so no tool that is programmed, like AI, is net neutral or not biased. That's the starting point. And then the next piece of this is understanding all of the different ways that AI is non-neutral, right? So, a couple of places that we can talk about this.

Jen Manly: 

The first is understanding the environmental impact of AI usage. Right? Huge demand on water resources. So, ultimately, when we think about AI and the environmental impacts of AI, we're talking about the data centers that are needed in order to run, you know, all of these different searches that we want to have. And when you think about it as one search, right, or one prompt, it's not that much. But the problem with generative AI is that it's not just you using it, right? We've done this mass adoption of generative AI, and so you are part of this bigger usage that is contributing to increased water usage, which is a problem because lots of people lack access to clean water. That's contributing to lots of energy use, burning of fossil fuels. These are not neutral things. And if you don't care about that, we can also talk about the hidden labor of AI, right? So there was this article that came out from Forbes right after ChatGPT was released to the general public, about the way that ChatGPT trained out racism, misogyny, sexism, right, all of these things that exist, because ChatGPT's knowledge base is the entire internet, and the internet is problematic. They paid African workers $2 an hour to be exposed to these extremely traumatic and problematic things, and it's hidden labor, right?

Jen Manly: 

We think: well, AI is a computer, it's a robot. But it takes people to be able to do that work. And so, you know, we can think about the environmental impacts, we can think about the hidden labor, and we can also think about, you know, what groups are further marginalized by AI usage, right? So this is something that I talk about with students: using AI for grading. I think using AI for grading is incredibly problematic because, number one, your K-12 students can't consent to their data being used. But also, when we think about biases and how they manifest, people are like: well, AI can't tell that this is coming from, you know, a Black student or a Brown student or a female student.

Jen Manly: 

But there are ways that AI is biased against certain groups that are not explicit. So, for example, one of the stories I talk about is: Amazon used to have a hiring algorithm that was secret. They didn't tell anybody they were using it until they decided not to use it anymore. And the reason they decided not to use it anymore is because they found that the hiring algorithm was discriminating against female candidates, because the knowledge base for that hiring algorithm was successful Amazon engineers, who were predominantly men, right? So certain characteristics that were coming up in female resumes were not being accepted or seen as qualified, simply because the knowledge base consisted mostly of men. And so there are all of these ways that AI is non-neutral, but we don't talk about it, right? And I think that's the first piece: understanding that, so you can make an informed decision about when you're going to use AI personally. And if you really think about it, it's probably not as often as you're currently using it.

Fonz Mendoza: 

And I want to highlight a couple of things, like you're talking about, especially that energy usage, because over the weekend I saw somebody post, and it was part of a thread, you know, we're in 2025 now, this was actually April 2025, and they posted like: oh my gosh, I just found out about how much energy is being used, and so on. And of course, everybody with the Ghibli trends and the action figure trends and all of that, they don't think about those things. You know, it's just like: hey, I want to fit in, I want to do what everybody's doing, and we just follow suit and follow along. So I followed this post on this thread, and this is somebody that is very well known, but this was what they posted. It says: here's one thing that might help, though: it uses massive amounts of energy, but compared to the energy we use on meat production, it is very small. But it actually has the ability to solve this problem, which gives me hope. Eating one less hamburger a day would have a far bigger impact than using AI less.

Fonz Mendoza: 

And I was thinking to myself: okay, very interesting. That sparked a huge conversation on LinkedIn, and everybody was just posting and talking about this. And, you know, obviously it's coming more to the forefront, as far as that is concerned, but there's still so much hype around it, where you just had, you know, companies doing their big annual conferences and showing: hey, here's our new AI library, and now we've got this and we've got that. And really all that hype covers up what we talked about, because I remember seeing that 60 Minutes interview, too, about the data workers getting paid that very, very small wage of $2 an hour, and the horrific things that they were seeing. It just really blew my mind. The other thing that I wanted to talk about, too, as far as the use of AI: you talked about grading. Obviously, we'll talk about some plagiarism detectors, as we know that they don't work.

Fonz Mendoza: 

But I saw a recent post, too, on TikTok, by somebody that I follow, stating that, for example, Turnitin, which we all know is a plagiarism detection platform, is now kind of relabeling or rebranding, in a way, because I guess there's so much talk that these detectors don't work. Now they're saying: oh, we are an integrity checker. So now it's kind of like: well, let's flip it around. And to me, I'm thinking: you're still doing the exact same thing. Now you're just relabeling for profit, and that's really what it is, and that's the way that I see it. So what are your thoughts on plagiarism detectors? And maybe, now that you hear a little bit about it, because I've not just heard it from Turnitin, there are some other articles that have come up from higher ed stating: oh, it's about the integrity of education, so we can still use AI, but we want to show them how to use it with integrity.

Jen Manly: 

So I know it's a two-part question. There might be a lot there, but go for it. Yeah, let's do it. So, you know, let's talk about the original Turnitin, right? The original Turnitin, that's not doing AI detection, that's just doing plagiarism detection.

Jen Manly: 

I think it would be an irresponsible use of Turnitin to base your interpretation of plagiarism just by looking at the percent, right? If Turnitin flags something, that means you going in and looking at what's flagged, right? It's really an indicator to you as an educator: something about this is a little bit off. And maybe it's that the student didn't cite their sources correctly, maybe it's that they pulled entire quotes, and, like, that's not really great, but they did cite it, right? It's really a flag for you to actually then go in and look deeply at a paper. It's a helper tool, right? Like, as I said, when I taught middle school I had 230 students. Now, teaching college, I often have 120, 130 students a class. I don't have the ability to read every single paper that a student turns in through the lens of: is this plagiarism or not? So Turnitin for plagiarism, and we'll talk about AI separately, is a helpful tool insofar as it's a flag for me to then be able to say: this paper looks questionable, let me look at this a little bit more deeply, right? I still think it would be irresponsible to accuse a student of plagiarism without doing that deep work yourself, right? And so the problem with AI detectors is that since their advent there's been that disclaimer, like: we're not 100% accurate at detecting AI. And what they have found in researching a lot of these AI detectors is that they tend to be biased. They tend to incorrectly flag neurodivergent students, and students whose first language is not English, more regularly, right? Like, there's a big thing that came out recently that was talking about how the em dash is an indicator of AI. I've always used the em dash. I love the em dash, it's one of my favorite pieces of punctuation. And so for me, again, as somebody who's understood AI, I would never use an AI detector as an indicator of anything, because they say on the front end: this actually is not accurate. So we know that, and we know again that it's biased towards certain student groups.

Jen Manly: 

What I think is interesting for educators is that, when you start receiving student work that is written with AI, you can see it. Like, there are certain qualities that I pick up on, where this doesn't necessarily sound like something a student already turned in that they wrote in class. There's an excessive use of bullet points, and all of the bullet points are formatted exactly the same way, right, with a few different words switched out. We were talking before we started recording about sources, right? Like, a good place to check, if you're like, I kind of think that this might be AI, is checking to see if the sources exist, because a lot of times they don't, right? They're made-up sources, made-up resources. And so that's like a good first step.

Jen Manly: 

But I think, ultimately, especially as these tools get better, they get more human, and students learn how to prompt. There's no guarantee an assignment isn't written by AI if students can do it at home, or on a computer where they have access to those tools. So for me, even at the college level, I look at conversations around AI usage with students as just that: starts of conversations. If I suspect a student used AI, I'll say, hey, this doesn't sound like you; parts of this sound like they were written by AI. And I give them an opportunity to be upfront about it and to be honest. That's how I approach a lot of conversations about plagiarism.

Jen Manly: 

Anyway, I come at it from this place of: what caused the student to feel they needed to cheat?

Jen Manly: 

Right, because a lot of times students don't want to cheat, but they're short on time, or they don't understand what they're doing, and they make a decision to plagiarize or to use AI in ways that are not acceptable. As we're navigating this, we have to stop treating it like a gotcha moment where we're trying to catch students using it. It's really about opening a conversation, especially if you are an educator who is using AI. If you're using AI in certain components of your classroom but you expect your students not to be using it at all, that's a bit of a disconnect. And then the last thing I'll say, and I think this is a bigger conversation, is that it forces us to be creative about how we design assignments and assessments so that it's either not possible to use AI or it's disadvantageous for students to use it, so that it's actually better for them not to use it at all.

Fonz Mendoza: 

Yeah, and you hit on a lot of great things there. The one I really want to highlight is, like you mentioned, the original intent of Turnitin: it's a flag. It's for me, as a teacher, to say, okay, I'm using this as a tool to help me give proper feedback. But it goes back to that over-reliance of, well, this is what it gave me, so you cheated, that's it, I'm giving you the zero because this is all AI. It's amazing how quickly it turned into that.

Fonz Mendoza: 

And now one of the things that I see, too, is a lot of platforms saying, hey, let's work on your students' writing: as students input their own writing, in their own words, the platform gives them the feedback they need immediately. And I'm thinking, okay, I know that as a teacher it can be difficult to give feedback to maybe 30 kids, or more depending on the class size, but there still has to be that human component of at least checking it and saying, okay, this is the feedback it's giving, now let me do a once-over. Because that over-reliance is what scares me, and I've said this from the very beginning. As soon as this came in, teachers started using it, and all of a sudden the very first output is truth, it's gospel: here you go, guys, here's your handout. And especially with history or science, I'm thinking, well, whose history are you sharing? Whose science are you sharing? Because it's that confidence that this platform has my best interest in mind, that all I have to do is plug in the standard and it has all the information from the Internet and will give me an accurate output I can share. And it won't. It's really scary that a lot of teachers trust platforms that have these pre-prompts already built in; we were talking about this in the pre-chat. They think it's all completely true when the knowledge cutoff date stops at 2023, or maybe it's moved up a couple of months depending on OpenAI. But people think, hey, this is truth. And so my question to those platforms is always: well, since you're up to date, my state got new standards. Does your platform have the new standards?
And they're kind of like, well, yeah, you can put those in and everything should be fine. And I'm like, no, it's not. I need to know that you are giving proper information, because otherwise you're going to be teaching or creating something the students don't need to learn, or it's not presented the way they'll need it when it comes time to learn it and, of course, for state testing.

Fonz Mendoza: 

Then another thing I saw: when ChatGPT started doing images, people showed a picture of a water cycle and said, oh wow, this is the most amazing thing, look at what ChatGPT can do. And that water cycle image was wrong. Somebody said, well, it's just a little incorrect. And I was like, no, there's no "a little incorrect." It's either correct or it's not, and this one was incorrect.

Fonz Mendoza: 

And I'm thinking to myself, you're posting this with just, here you go, guys, look what I created. That's an example of what I mean about not checking the output. And, like you said, many times it's going to take you a lot longer than just saying, okay, my book already has this, or my content coordinator already created this handout and all I need to do is tweak it, make it a little more engaging, add a hook, and I'm done. But instead it's, oh, OpenAI or ChatGPT or this platform did it for me, I'm good to go and I'm ready. And that's just so scary to me, you know.

Fonz Mendoza: 

So let's talk about AI literacy for all, all right. Everybody talks about AI literacy; the whole Internet talks about it, so much. And many people are still considered to be behind, as in, if you're not using AI in your classroom, you're doing a disservice to your kids. So I want to ask you, as a practitioner, as an educator in the classroom: do you feel that your maybe small use of AI is going to hurt your kids in their future jobs? What are your thoughts on that?

Jen Manly: 

Yeah. So I actually think it's a really interesting conversation, because the way most people are interpreting AI literacy is that every student everywhere should be putting things into a computer, should be going in and prompting ChatGPT or Claude or the edu-approved version of it, and I think that's one of the biggest mistakes. I think about how we teach AI in computer science, and for anybody who's watching and wondering, how do I bring AI into my classroom: computer science teachers and computer science curriculum providers have been teaching about AI since before the advent of mass-use generative AI, and there are so many good unplugged lessons that help students understand the ethics of it, for sure, but also the tech of it. Understanding how AI actually works: What are you doing when you put a prompt in? Where is that information coming from? What is the basic version of an LLM, a large language model? How does it actually generate information? I think that's a really good piece of background for everybody.
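The "basic version of an LLM" Jen describes can be illustrated with a deliberately tiny, hypothetical sketch. This is not an LLM; it's a word-count model showing the core idea behind generative text: predict the likeliest next token given what came before. Real LLMs learn these probabilities with neural networks over billions of tokens, but the task is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "generate"
# by always picking the most frequent continuation.
corpus = "the cat sat on the mat the cat ran on the grass".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Even this toy makes the key point for AI literacy: the model has no knowledge of truth, only statistics about what text tends to follow other text.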

Jen Manly: 

When we talk about AI literacy, I think a lot of people take that to mean students prompting AI, and I just think, number one, for certain age groups that's super irresponsible. And number two, it's not actually AI literacy. AI literacy is understanding the how, understanding the drawbacks, when you should and shouldn't use it, and then getting to the point where we can write our prompts and use it for research, or creating outlines, or helping you study, however you're using AI with your students. So I think that's a major missing component that, again, computer science teachers have been covering for a very long time, and there is a lot of great free curriculum that covers AI unplugged, the why, the how, and the ethical considerations we should have. Okay, so that's number one.

Jen Manly: 

Number two, again, I think we need to be very thoughtful about when we're actually having students use AI tools. So, for example, if I'm teaching college students, they're adults; they are of age, they can consent to their data being used, they can consent to their work being input into a GPT or whatever else, and so I feel really good about having structured ways for them to use AI and understand what AI can and cannot do. I might feel okay about that with high school students, but even then, they're children; they can't consent, they don't fully understand what information they're sharing or how that information is being used. A big question for me with a lot of these ed tech platforms is: okay, yeah, you can grade my students' work, but you're using that work to further build your product, to develop your product, to make more money. My kids can't consent to that. And so, say we want students to start learning and practicing how to prompt. That might be something where students write those prompts and the teacher models it; it's on the board, whatever board you're using, and you, as the teacher, are actually walking through how to create prompts, how to use them to get information, ways you might tweak them. Another positive of that is that we're now limiting the number of prompts we're putting in, because we're running one through a full class, versus 30 kids all inputting into a machine at the same time. So I think it's about widening our understanding of what AI literacy is.

Jen Manly: 

And also, here's the thing: this is not new to education. In math, we teach kids how to do things on paper before we let them use a calculator. We teach kids to write a draft by hand before they start learning how to put it into a computer; they do an outline before they write the full draft. Way back when I took my first coding class, Mr. Rose (shout out to Fearless Day; apparently he's still a teacher) would not let me get on a computer to write Java unless I could handwrite it without errors. So we see this a lot in education: you have to understand the basics before you can use the tool. And I think a lot of education is looking at it backwards. We're saying put the kids on the tool and that is AI literacy. That's not AI literacy. That's using a tool without fully understanding how or why we're using it.

Fonz Mendoza: 

No, I agree with you a hundred percent, because it's something I've seen from friends. They don't know what they don't know yet, and sometimes maybe they do, but it's like, you know what, it doesn't matter, this makes my job a lot easier, and they're willing to let it take over: hey, I'm being more productive, I'm finishing this, I'm doing this. But they don't see the cost or the dangers of that until it's too late. For me, it's about being very cautious with how and when you're using it, and, like you said, there's a time and a place to use that tool effectively. So, as we start wrapping up, I want to ask you something that's a nice segue, because we talked about how AI can become an over-reliance for students as well, and you've seen that even in higher ed: questioning things, giving students the opportunity to rewrite, giving them a second chance.

Fonz Mendoza: 

But I want to ask you, in your experience, with so many tools out there: how can educators still encourage students to maintain agency in their learning process and really use those critical thinking skills, when they can so easily just jump on a tool? Like I said, I was looking for a citation, just some reference, and somebody said, well, put it in and it'll tell you how to do it. And I was like, but it's not accurate. So what are some suggestions from you and your experience on how students can maintain their critical thinking skills and not be over-reliant on the tech?

Jen Manly: 

Yeah.

Jen Manly: 

So I think, and again, when I think about a lot of things related to AI, it goes back to what we already do with good teaching. Even before AI came onto the scene, we were really critical about how many different pieces of tech we were introducing to students, and we were thoughtful about it. Take the citations example: if I were going to introduce a citation machine, which existed before mass use of generative AI, where you put in your information and it outputs a nicely formatted bibliography for you, I'm going to introduce my students to one tool they can use for that, and before they use that tool, I'm going to teach them how to do it manually. They need to understand the why before they're using the tech that's going to help them. So for teachers thinking about how to give students agency while still building their critical thinking: one of my favorite people to follow, who has been kind of critical about AI, is Sinead Bond. I don't know if you know her. Hey, Mrs. Bond. She's very active on Twitter, and one of the things she talks about is, I'm just going to have more handwritten assignments. She's an English teacher, and instead of putting students in a position where they're potentially going to use an AI tool in a way she doesn't want them to, she's very critical about how she structures assignments so that it's actually not advantageous for them to use AI; it's more advantageous for them to do it the way she wants, which is either handwritten or just in a Google Doc, like normal.
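The citation machine Jen describes (fill in your information, get a formatted bibliography entry back) can be sketched in a few lines. This is a hypothetical, simplified APA-style formatter, not any real tool's API; real citation machines handle multiple authors, editions, DOIs, and many other edge cases.

```python
def format_citation(author: str, year: int, title: str, source: str) -> str:
    """Return a simplified APA-style reference entry: Author (Year). Title. Source."""
    return f"{author} ({year}). {title}. {source}."

# Illustrative values only.
entry = format_citation("Manly, J.", 2025, "Teaching AI ethics", "My EdTech Life")
print(entry)  # Manly, J. (2025). Teaching AI ethics. My EdTech Life.
```

Jen's point holds either way: students should be able to produce this format by hand, and understand why each field is there, before a tool does it for them.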

Jen Manly: 

So I think, for teachers, number one is understanding that student agency existed before the mass advent of generative AI. I was teaching student agency, and teaching teachers about student agency and student voice and choice, back in 2017, with no tech. You can build in student agency totally on paper. So student agency and AI are not synonymous. Certainly AI seems to make student agency easier, and does in some ways, but you can create a classroom where students have voice and choice and control over their learning without any tech at all. So that's number one: those things are not synonymous. And then number two is modeling what it looks like to be thoughtful.

Jen Manly: 

So when we're thinking about what tools we're going to introduce in our classroom, that when we introduce that tool, students already know how to do that task without the tool.

Jen Manly: 

The tool is going to add value; it's going to make things easier, make their output more robust, whatever it is. But you've been thoughtful about why you're introducing this particular tool, and it's very clear to the students. They're not being handed ten different tools to choose from. You, as the adult in the room, have exercised caution and thoughtfulness to make it easier for your students, because student agency doesn't mean they can choose between ten tools. They don't have the capacity to do that at this point; that's a very advanced skill, and it's still your job as a teacher. So if you're choosing to bring any tool into your classroom, know that part of your job is to be the curator of those tools, and be very intentional about communicating why you're introducing a tool, whether it's AI or not.

Fonz Mendoza: 

I love it. Well, Jen, it's been an amazing chat. Thank you so much for the insight and experience you brought into this conversation: being a middle school, high school, and then higher ed teacher, plus your background in computer science. I really appreciate it. A lot of gems, a lot of great things to digest, and I know our listeners will definitely find a little something there that they can sprinkle onto what they're already doing great.

Fonz Mendoza: 

So please make sure, all of you listening, that you follow Jen, especially on TikTok. She is huge on TikTok; she's at strategic classroom, so follow Jen right there. She shares great videos, time hacks, great things to help you as educators be more productive and more efficient. It's amazing: 139,000-plus followers, five million likes. I feel honored to have a big TikTok star here on My EdTech Life. She's fantastic, very genuine, very authentic, and that's the content you'll see.

Fonz Mendoza: 

Yes, please make sure you follow her. I promise you're going to love everything she puts out. So, Jen, thank you so much. But before we wrap up, I always love to end the show with the last three questions, so hopefully you're ready to go. As we know, every superhero has a weakness or a pain point; for Superman, it was kryptonite. Since we're talking about AI, I'm going to flip it and ask: in the current state of education, what would you say is your current AI kryptonite?

Jen Manly: 

I think it's grading. I have a lot of big feelings when it comes to using AI for grading and I sometimes get a little bit too invested. So I will say using AI for grading is the thing that kind of sets me off. I get a little bit passionate.

Fonz Mendoza: 

There you go, great answer, all right. Question number two: if you could have a billboard with anything on it, what would it be and why?

Jen Manly: 

The best quote that I think every teacher needs to hear is that your worth is not defined by your productivity. A lot of times we're just trying to get it all done, and it can make us feel like we're not good teachers if we don't get through everything. But who you are as a teacher is so much more than what you do or don't get done. For a very long time I had that quote as my phone background, because I needed to be reminded of it all the time.

Fonz Mendoza: 

Nice, excellent message, love it. All right, the last question: if you could trade places with one person for a single day, who would that be and why?

Jen Manly: 

I'm going to pick my almost-four-year-old, Jack. He's three and a half, and I watch him play and just get so excited about everything. He's very into magnet tiles and baseball right now, and I wish I could go back to that constant state of discovery. It's very cool to watch, and I bet it's very fun to be in.

Fonz Mendoza: 

I love that; that is a great answer. Well, Jen, thank you so much again for sharing your voice here. As you know, what we do for all our amazing guests and listeners is amplify creator, educator, and professional voices, and your voice is definitely a great one within our space. Please make sure you visit our website at myedtechlife, where you can check out this amazing episode and the other 322 episodes. Five years in the making, a lot of great episodes, and I promise you'll find some little nuggets there that you can sprinkle onto what you're already doing great. So thank you so much. And if you're not following us on socials, what are you waiting for? Follow us on all major social platforms at My EdTech Life, and if you haven't done so yet, subscribe to our YouTube channel, give us a thumbs up, and share the content, because we would love for all those wonderful AI algorithms to get our content out to many more people. But as always, my friends, Stay Techie!