Online Learning in the Second Half
Education
In this episode, John and Jason talk about going back to school, chat about how conversations have shifted to AI this year, ideate around making assignments un-AIable, and briefly rant about how AI detectors don't work.
Join Our LinkedIn Group - Online Learning Podcast
Resources: We use a combination of computer-generated transcription and human editing. Please check against the recorded file before quoting anything, and check with us if you have any questions!
[00:00:00] John Nash: You ask me, like, what am I doing differently now? This year I'm abandoning decades-long teaching strategies that I've used in online courses.
[00:00:08] Jason Johnston: That's impressive, John, that you're, that you're changing,
[00:00:14] John Nash: I'm changing
Intro
[00:00:15] John Nash: I'm John Nash here with Jason Johnston.
[00:00:18] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast. Yes.
[00:00:24] John Nash: We are doing this podcast to let you in on a conversation that we've been having for the last, now two and a half years about online education. Look online learning has had its chance to be great and a lot of it is, and there's still a lot that isn't.
And so I'm wondering how can we get to the next stage? What do you think?
[00:00:44] Jason Johnston: Let's do a podcast and talk about it. What do you think? I think
[00:00:49] John Nash: that's great. What do you want to talk about today?
Episode
[00:00:52] Jason Johnston: I would love to talk about back to school. Here we are, coming back to school. The students are returning, either virtually or in person. Parking is an issue again, and I'm assuming it's the same at your institution as it is at mine. Are you still biking to work? I am.
[00:01:10] John Nash: Oh, good for you. Yes. Yeah, on an e-bike. So it's a lovely little ride and so I, I do pedal, but I probably don't pedal as hard as I could. Because it's too alluring and fun to push the little electric accelerator lever and just go.
[00:01:26] Jason Johnston: Technology enhanced commuting. Yeah. Yes. Yeah. That's good.
You're not dealing with the same kind of parking issues. I'm still back in 2023, where we use fossil-fuel-driven cars and look for parking spaces among everybody else.
[00:01:46] John Nash: Yeah. My professor at the University of Wisconsin noted that the parking permit is really just a hunting license.
[00:01:54] Jason Johnston: That's exactly it. Yeah. Yeah. I had to go way out into the woods earlier this week in order to get myself my parking space. It's good. Here we are, back at school. But online does make it a little easier to park, if everybody was online. As we're approaching back to school, John, maybe there are some people joining us kind of partway through this podcast who haven't been with us from the beginning.
But as a little bit of an introduction, I'm an administrator at a large SEC school in Tennessee. I'm the executive director of online learning and course production. And so my big thing is not teaching in the classroom, but helping instructors develop courses for the online classroom, as well as supporting instructors on how to teach online.
So that's kind of what my day-to-day is about. How about you, John? What do you do day to day these days?
[00:02:55] John Nash: Day-to-day, I'm an associate professor of educational leadership studies at a large SEC institution in Kentucky. Smart listeners can just figure out where we work. And then I am the director of graduate studies in the same department.
And we are an all-online instruction department. So I'm teaching online. I'm helping advise students who are in online programs: master's, education specialist, and doctoral programs. And I also direct a laboratory on design thinking at my institution. And so I have to think about ways to humanize online learning and how that might happen in this second half of life for us and these coming years for online, both P-12 and higher ed.
[00:03:43] Jason Johnston: That's right. Yep. So we've been having this conversation for a while.
And really, as you said, that two and a half years, that's really just post-my-dissertation conversations. Really? Yeah. For the last two and a half years. John, I was thinking that we really, maybe we need to play up this whole blue-versus-orange kind of thing a little bit more in the podcast. Are you a sporting event person?
[00:04:07] John Nash: I'm always a supporter of my institution's teams, yes. And do I go to all the football games? No. Do I like to watch football? I like to watch our team play football, yeah. I'm a baseball fan, so San Francisco Giants, in case anybody cares, who, by the way, wear orange.
Oh, nice. But yeah. You?
[00:04:31] Jason Johnston: Not too much. My son particularly is really into it; that's what pulls me into it. And then I'm always supportive. I like rooting for the home team.
Absolutely.
There's something really wonderful about being part of a community to have shared interests and just some things to do together and to talk about and to rah.
Yeah. Yeah. Yeah.
[00:04:53] John Nash: I'm smart enough to know that, and when I'm in a room I root for and support all my alma maters and every institution I've worked for.
[00:05:01] Jason Johnston: yeah. Yeah, that's right. So go Vols as we head into the football season this weekend.
[00:05:06] John Nash: Go Cats.
[00:05:07] Jason Johnston: And I think we might have to relive this again when we meet up, maybe somewhere in November.
I think it's gonna be in Kentucky this year we'll have to play that up somehow. We could do a live podcasting episode.
[00:05:19] John Nash: Yeah. Maybe. Yeah,
[00:05:21] Jason Johnston: where we can I'll have very little to say about the actual football, but we could talk about other things.
It could be a big analogy for us about
[00:05:28] John Nash: we could march around the respective stadium. If it's up here, yeah, march the stadium and interview people about how they feel about humanizing online learning.
[00:05:35] Jason Johnston: Yeah, exactly. That would be, that'd be a perfect venue for doing that. That's good.
John, as you're going into this fall, we're kind of coming at it from slightly different directions: you as a teacher and as a program administrator, and myself more from the development end of things, supporting teachers. Are there any ways that you're approaching this fall semester differently, particularly in your online classes, than you have in previous fall semesters?
[00:06:11] John Nash: I am. I've been engaged in conversations about AI in particular with you over the last six months, and you and I have both been using ChatGPT since November of 2022, last year, when it first came out, watching it evolve, but really more so in earnest thinking about how to use it differently as we kick off the fall.
And so I have been changing the way I approach my lesson plans in my graduate doctoral program courses. I've been thinking about revamping the curriculum in a way that has more active and more human-centered approaches for the learners, so that they are more of a community now than ever before in the projects that they're doing. They do a single project on their own as a part of their dissertation development, but I have not been as thoughtful as I am trying to be this term in getting them to really become a community of learners as they create their own pathway through their dissertation work. And generative AI models have been helpful to me in developing that.
I think also, just from an institutional standpoint, we've changed here at my university, because now we have mandated policies for entering text into our syllabi about how generative AI can, should, or will be used in the courses. Yeah, that's on the surface of everything.
And then the other thing I'm doing as we kick off the fall is just watching how my P-12 sisters and brothers navigate these waters, and thinking about how they're going to either ban, use, or integrate AI as a tool to support teachers, to think about changing assessments, or what they're gonna do about equity and access.
And so those are all on the forefront of my thoughts this fall.
[00:08:07] Jason Johnston: Yeah. Yeah. I think you've covered some pretty big areas: working with your students, particularly in regards to their big assignments; policies as a school, as an institution; and then how you personally work with the people that you support, help guide, consult with,
and direct in terms of their own work, particularly in the P-12 space.
[00:08:33] John Nash: and my colleagues here at my institution. And I'm fortunate to sit on a university-level advisory board to think about our broad policy work over time here at this university. So I'm worried about, and want to help, teachers across this sort of spectrum, because professors have a long way to go, I think, in thinking about how to be thoughtful about integrating or not integrating generative AI, as do, you know, P-12 teachers and principals. I think one of the big questions I'm helping people think through, and even thinking through myself, is what kinds of things are un-AIable, and how might those be activities that students and learners can do, so that we feel like the generative models are used in a way that supports the path up to these un-AIable events: public demonstrations of learning, demonstrations of context, critical thinking that can happen live, as it were, but still in the context of having generative AI help teachers and learners get there. I'm not at all advocating that we just go back to blue books. That's not what I'm talking about. Some people are, I know, and I know that's one end of the spectrum of the milieu of un-AIable stuff.
Writing in a blue book, that's un-AIable, but the question becomes: what is the assessment that you want to do that really demonstrates the learning? And for me, that's not blue books. It's the stuff on the other side, which is, yeah, these public demonstrations of learning, opportunities for students to show what they know on their feet.
Can they think on their feet? Can they defend their decisions? Can they use metacognition and reflection to make good decisions and talk about what those are? Can they work in teams? AI doesn't work in a team. Those sorts of things are really interesting to me. They're old-school techniques that I just think have now really risen to the top.
I was on a webinar a couple of days ago from the group Getting Smart, they're in the Pacific Northwest, and they were talking about the ways in which AI might penetrate the P-12 space and the way in which teachers might be rethinking what they do. And a comment that came across was that, because of all the AI-able assessments that now just exist, most everything that happens now in teaching and learning can be answered with an AI prompt.
So the thought was: is this gonna drive more teachers to think about ways in which they can get to these more interesting, deeper learning assessment approaches that they wouldn't have otherwise thought to do? Because AI has now made obsolete, or uninteresting, or cliche all of the assessment techniques they had been using -- five-paragraph essays, other kinds of written work -- they may abandon those, because there's no guarantee that students are really learning anything when they do those now.
[00:11:26] Jason Johnston: Yeah, I like that phrase, AI-able or un-AIable. That's good. Can we go into this fall assuming that most students... we say most students; let's break it up a little bit. Can we go into the fall assuming, first, that most undergrad students are familiar enough with AI that they could use it if they wanted to use it?
[00:11:50] John Nash: I don't know what the percentage of use is, but I go in assuming, just assuming, yes. 'Cause a couple of things: I think good things accrue to instructors who just assume the penetration is high. And it goes back to my comment a minute ago, which is that if you assume it is, then that puts you in a position where you need to really rethink how you're going to assess the learning that you want students to do.
Because if you assume that everybody's using AI to do the written work, then you have to rethink all that. Yeah. Yeah. And it's not a dire situation. Actually, for me anyway, and I hope there are a lot of people who think like I do, it's a celebration, because it gives you the opportunity to really dive into the mind of the learner and what you want them to achieve, and to share the joy of them going through that journey and achieving that outcome.
[00:12:41] Jason Johnston: Yeah. Yeah, I agree. The temptation and tendency to take the easiest road is always there for all of us. We're busy. It's easy, as we've talked about before, just to copy over our Canvas shell from last fall and do the same things again with our students. And much of that is fine, but this is a great opportunity for us to really rethink and retool and revamp for the sake of our students, to think about what is maybe forcing us now to take a new, fresh approach to some of these ways of learning and exploring our subject matters.
[00:13:33] John Nash: Yeah. In fact, one thing I've done is a reversal on a longstanding online activity that I've used, which is the discussion board. I think I'm gonna dump them. And, for our listeners, go back an episode or so to our conversation with Enilda Romero-Hall about the role discussions really play.
Also to our episode with Michelle Miller, where we also talked about discussion boards. But about 11 years ago, I wrote a paper for the Journal of Research on Leadership Education on ways to reframe asynchronous discussion boards, because they were virtually un-assessable -- it was post once, reply twice, and what do you get from that? And that really frustrated me, because it was a Field of Dreams mentality: if I build this discussion board, the students will come. And nothing's further from the truth. And so I thought about ways in which the students could have a question that they must answer in the process of having a discussion leading up to that answer.
And then you could assess the answer. And I also capped it at no more than 300 words, so you couldn't write a mile. You had to be concise, be thoughtful. So I wrote up this scheme and published it in the journal. And it got good feedback, because other professors were feeling the same way.
Learning designers too; there was this question of how do we assess these things? And so this was great. And then here I am using this for the last decade, and now I've decided I'm not gonna use discussions anymore, because they're totally AI-able. Now, the students in my courses, which are pretty scoped, are really interested in their topics, hopefully; they've told me they like having these discussions. But ultimately the thing that I grade is really that answer at the end of that week-long discussion, and that 300-word answer is AI-able. I'm not certain people will run to it, but I don't see the pedagogical value in this so much anymore.
Not when I can rethink, thanks to generative AI's help, new ways to have high-impact activities that don't have to be lengthy, that can still move formatively toward the goal that I have for them to learn. And they're pretty much un-AIable, because it's team-based work, or it's stuff from discussions in class that they have to use. So I think that's a big change.
You ask me, like, what am I doing differently now? This year I'm abandoning decades-long teaching strategies that I've used in online courses.
[00:15:56] Jason Johnston: That's impressive, John, that you're, that you're changing,
[00:16:02] John Nash: I'm changing
I've also changed how I feel about the role of writing as a marker for how smart someone is. And I'm thinking about my shift. I remember it was when I was listening to Jason Gulya's podcast; he was interviewing Wilson Tsu from PowerNotes, and Wilson quoted Kirsten Benson at University of Tennessee, Knoxville, saying something to the effect of "we shouldn't let words get in the way of good ideas."
And because generative AI is opening the doors for a lot of great ideas to hit the marketplace -- a marketplace that is predicated on people writing good English. And so that marketplace is now open to English language learners, neurodivergent learners, citizens of cultures who express themselves in oral traditions or pass on knowledge in ways other than writing.
And now that can be put together in ways that are packaged for those who have the resources. If you're gonna write a grant, the philanthropists, the foundations, their stock-in-trade is good written ideas, expression of ideas in written form. And that doesn't mean that all the people I've just noted don't have good ideas.
They do. They've been boxed out, one could say. So that's another shift I've made, because we want to be able to get to the ideas, not necessarily whether you can write them down.
[00:17:24] Jason Johnston: I caught your sporting reference too. Boxing out. Yeah, that's good. Maybe I'll start to say, go Vols every time I hear a sporting reference from you.
[00:17:35] John Nash: Sure you can try. When it comes to the basketball part, you might have a problem, but yeah. You can wish, you can have wishful thinking.
[00:17:41] Jason Johnston: Ah, so those are fighting words now, aren't they? Here we go.
[00:17:46] John Nash: Here we go. But what do you think of that idea? What do you think of this notion that generative AI is shifting the way online courses are designed, particularly when assessments have been relying on written word?
[00:18:01] Jason Johnston: Absolutely. It's all well and good if you are part of a program where it's really more about the ideas than the final product. But it hits hardest in those programs of writing, where it's not just about the ideas; it is actually about the communication of those ideas.
As you kind of alluded to. The other area is actually programming. So it's not just about the idea of what you're gonna do in your program; it's actually about the programming, the computer programming language specifically. Like the coding, you mean? The coding, yeah, actual coding.
And I've said this before in talking to people: let's focus more on the process than the product, right? How can we assess the process that the students are going through versus the product? I get a little bit of pushback from some people on that. And one of the other examples, and I'm sure there are lots of other examples of this, is in language colleges.
So they're teaching new languages to people. It's not just about the process; it is actually about the product, about how you can articulate yourself in this new language at the end of the day. And there are a lot of shortcuts to that product, as we know. So in some programs, that is not as impactful as in other programs.
So I just... Academic programs, you mean? Academic programs, yes. I wanted to make sure, since we were just talking about coding. Oh yeah, academic programs. Exactly. Yeah.
[00:19:23] John Nash: So do you think, just as orange is the new black, is process the new product?
[00:19:30] Jason Johnston: I think so, for a lot of us, and maybe even some of these kind of product-focused programs, and I don't mean to diminish them by saying product-focused.
It's just a different emphasis on where the work is really happening. And probably some of those programs could do with a little bit of shifting as well, towards the process. But yeah, I like that: process is
[00:19:53] John Nash: the new product, then. What do you think the implications are for instructors, regardless of where they sit in the grade spectrum from K through 20?
What's the implication for those who have relied for a long time on the product and given less thought to how process should now be product?
[00:20:18] Jason Johnston: Yeah, I think there are a lot of implications as you said, of kind of a reworking and a rethinking and you're kind of talking about you're back to school, shifting some ways that you are changing.
I've had a lot of conversations going into the fall about OpenAI, about ChatGPT's impact. And often the top concern is around academic dishonesty. And that is a focus on the product. It's a concern that someone's gonna take a shortcut and produce a product that is not their own.
And so it's been really helpful, this kind of idea of looking more at the process, of breaking down those steps so you're evaluating along the way. So rather than giving 30% on that final big essay, you break it up into smaller chunks. And so you're looking at outlines, you're looking at ideas, you're looking at ways in which you can see under the hood on that process, whether it's through a Google Doc history or through this tool that we've talked about, PowerNotes, which you've already referenced, where you can watch the process kind of unfolding as you go along. I had a great conversation last week where we do a community of practice, and it just shows that there are a lot of people really interested in talking about this. And I was really impressed that I had about 20 faculty in a Zoom room, plus a number of staff, instructional designers, and administrators.
Another handful of those people on top of the faculty. And it wasn't people that were, like, gung ho. It was a mix, probably a 50-50 mix of people who had used ChatGPT or had not used it. So it wasn't people who were there because they're so excited about using AI in the fall.
They're here because they had some concerns and that was really kind of one of the top concerns. But we really had some great conversation about that process end of things, and particularly around breaking it down. And also, I just thought we had a really great conversation about kind of getting to know your students and this is where it comes back to this humanizing idea.
The instructors were really interested in this idea of having early examples of student work that were un-AIable. That wasn't their word; that's your word that I'm placing on it. But this is exactly it: early examples of student work that were low stakes in terms of grading, that were un-AIable, for instance, personal experiences, what you're hoping to get out of this class this fall, how your experiences relate to this class, those kinds of things.
It's a personal work of their own writing, low stakes, so they don't have any incentive really to cheat on it. And then you use that as a way to get to know your students and their writing, so that when you have more products later on, you've got a bit of a history already with the students and you can maybe identify whether or not they have used AI in their writing.
And if they have, or if you think that maybe they have, it's not a gotcha moment, like, "TurnItIn tells me that this is 86% possibly AI." Rather, it's a, "Hey, this feels different than your earlier writing. Let's talk about this."
[00:23:35] John Nash: Yes. Yes. Exactly. And I had one of those conversations with a student of mine a few days ago. People would be shocked if I said I don't encourage my students to use these tools; I do. And as they do that with me, or they do it on their own and share what they've done, I learn about what I need to do, the scaffolding work, to have it be a helpful tool to them. Because one of my students used it to get some stuff, and it was immediately obvious to me.
This is the other advantage, I think, Jason, of you and I using these tools in depth, like crazy people, since November: you can spot the ChatGPT
Oh yeah.
3.5 stuff, just like that. And it was there. So I had the conversation, said, look, I like what you're trying to say here, and it's the direction we want, but it looks very different from what you're writing, so you were using some large language models to support you here. "Yeah, I was." And we talked about that. "Yes, we have." So let's talk about now what role this can really play for you and how it can be helpful, and then go forward.
Wonderful.
Yeah, I also wanna give credit where credit's due.
And actually this is a nice tie-in. When I talk about things that are un-AIable, I have this idea only because of Brandeis Marshall's post on Medium, which is entitled exactly that, "What's Un-AIable," and it's lovely. There are three things from her perspective: contextual awareness, conflict resolution, and critical thinking. These are the things that are not AIable and that are launch points for instructors to think about ways in which they have students demonstrate their learning.
And so I think that's great. The other thing that's neat, Jason, is that Brandeis Marshall is going to be a keynote speaker at the Washington meeting of the Online Learning Consortium.
[00:25:17] Jason Johnston: Nice. Oh, that's great.
[00:25:18] John Nash: So I'm excited about this because, as learning designers think about their online work, and any other work per se, these sorts of things can be brought to bear as assessment points.
[00:25:31] Jason Johnston: Yeah, that's great. And it's exciting, the OLC fall, because John Nash and Jason Johnston are gonna be there, and we're gonna be talking about online learning in the second half.
That is actually the name. They let us slip that name in there, even though it's a blatant plug, yeah, for our podcast.
[00:25:50] John Nash: That's okay. We just took a page from the Car Talk playbook of Shameless Commerce.
[00:25:56] Jason Johnston: Yes. Shameless Commerce. Yeah. So we're gonna be there in the fall in DC, fall of '23, if you're listening to this in 2023.
Tell me those three things that she said again.
Absolutely.
[00:26:08] John Nash: Dr. Brandeis Marshall, in her piece called "What's Un-AIable," published on Medium, and we'll put the link over there to that, says they are as follows: contextual awareness, conflict resolution, and critical thinking.
Yeah, yeah.
Because AI just can't. It can't provide contextual awareness; it doesn't have any. It can't resolve conflicts per se, not like humans can; it can advise how I might approach a problem, but that's it. And it can't think critically, not like the kinds of questions that humans need to ask in the context of, say, a meeting, or in a dialogue.
These are really useful points of departure for instructors as they wonder what they should do now, what they should measure now that AI can do everything else, maybe, in the worst-case scenario.
[00:26:57] Jason Johnston: Yeah. And this may be a good point to jump in, and it's not a popular opinion necessarily, but I've had to make this point this fall with people: we cannot detect it using tools.
Yes.
Reliably. Not that it can never be detected, but we cannot reliably detect the use of AI using tools. I heard, again, from somebody at a kickoff talking about the tools that they used for detecting AI with their students. And I go into this with an open mind, and I'm like, okay, maybe they've gotten better.
And so I had ChatGPT-4 complete an English 101 assignment for me based on a prompt, an actual prompt from an English 101 class, and then ran it through the major AI detectors to see if they would detect it. And none of them did. None of them; they all thought it was human. So it doesn't prove everything.
It's just one test, but it does prove one thing, which is that it's just not reliable. We just can't depend on reliability at this point, and I'm not sure that we ever will be able to, because of the ongoing development of AI and because of the nature of these large language models, which is that they make novel content.
Yes.
Or aggregated; it may not be really novel, but it's aggregated in such a way that it is not detectable. It's not plagiarism. No. It's, as you've pointed out in the past,
[00:28:24] John Nash: it's generative.
It's generative.
It's been generated; it's never been written before.
[00:28:30] Jason Johnston: That's right. So it may be dishonest to use it when you're pretending that it's your own, but it's not plagiarism in the classic sense.
And so, in a TurnItIn sense, it cannot go back and point to where it came from conclusively. So when I stuck this English 101 ChatGPT essay into TurnItIn, which has AI detectors, it said it was 0% AI-created. And I didn't change a thing. I didn't change one single thing in the essay, and it said it was 0%.
I was surprised. You know how, like, weather.com or whatever, if there's ever any question about some rain in the air, it's always like 49%, right? Yeah. So you look back and you say, ah, yeah, I guess it was pretty close, kind of thing. It just kind of rides that middle point so often, because, especially in Kentucky and Tennessee, it never really knows when it's gonna rain.
And so I was surprised TurnItIn was so sure of itself. It said 0%. It wasn't even like a 0.1% chance that this is AI. It said 0%.
[00:29:39] John Nash: That these models are so unreliable at detecting AI-written work is extremely problematic, because the implications of a false accusation are so huge for students.
To have a plagiarism or an academic dishonesty charge leveled? That triggers all kinds of things. And in a situation where it isn't the case, that's very bad. So a false positive is one of the other problems. It's going the other direction now; the models have gotten so good. And you didn't even try to prompt it very complexly, did you?
[00:30:19] Jason Johnston: I'll be honest, I did it just slightly complexly. But this is not beyond the work of our students. So I gave it the assignment. I did ask it to include a couple of errors, and to write at a college level.
[00:30:36] John Nash: I think it's hubris on the part of these companies to suggest that they would even have a detector, because it gives this false promise to a lot of instructors, who then think that they can rely on it. And the early empirical research out there says these don't work. And actually, they tend to throw false positives for English language learners and people from non-English-speaking cultures.
Even OpenAI, in its own research and its public statements, says, "we cannot reliably detect AI-written work." Thank you for bringing this up again, because we should be talking about this almost every episode. AI detectors don't work. Don't use them.
[00:31:11] Jason Johnston: Yeah. Prove us wrong, listeners. Prove us wrong. But we're talking reliably. I'm not saying it never works; it just cannot be used as a reliable means.
[00:31:21] John Nash: The risk is too high; you could ruin a student's life. You're right. So if I ask Alexa if it's gonna rain today, it may say there's a 12% chance at four o'clock, but if it gets that wrong, there's no penalty. Yeah. But getting this wrong here is really bad.
[00:31:37] Jason Johnston: Yeah. And it creates a division between you and the students. Yeah. It puts you on the wrong side of this whole thing. And I think about the kind of effort that would be put into the policing of this, versus... that's never really our job as administrators and as teachers and so on.
We need to be helping students develop integrity in their academic writing, without question; I'm not against that at all. But it just puts us on the wrong side of the handcuffs, in terms of just trying to police students in that way.
[00:32:17] John Nash: A lot of law enforcement metaphors right there.
[00:32:19] Jason Johnston: Whew. Better than sports. Or I'll throw one in: it makes us more of the referee than the coach.
[00:32:29] John Nash: Now that's a good sports metaphor. My sister-in-law years ago put a stake in the ground about using sports metaphors.
She said, I'm gonna stop saying things like "the ball's in their court." I'm gonna start saying things that families do, like, "okay, it's their turn to drive the carpool."
[00:32:46] Jason Johnston: Yeah, that's good. I identify with that a lot more than the sports, although I get the sports analogies.
You and your van as well. Yeah, exactly. The other thing I was thinking about as we approach this fall: I've had some great conversations. I was part of a kickoff for one of the colleges here where their whole kickoff was to talk about AI, and I didn't know of any other colleges doing this. They had me come in to give some context, a little bit of a keynote kind of thing.
And then they broke it down into different parts, where they had teachers talking about what they're using and what they aren't using, and approaches to research. They had some people coming in from other colleges. Any guesses as to what college it was?
[00:33:32] John Nash: Was it the College of Education?
[00:33:35] Jason Johnston: Nope. Do you have a second guess?
[00:33:38] John Nash: Would it be a college of writing and rhetoric? Did it involve writing and rhetoric people?
[00:33:44] Jason Johnston: No, not really. It was the College of Agriculture. And I was fascinated, too, because as I did a quick survey, out of the 60 faculty members that were there, I would say about 58 of them had used ChatGPT or some sort of OpenAI tool for something.
Anyways, they had at least played with it, so it was quite impressive. Maybe we, or maybe it's just me, have a little bit of a misconception sometimes about agriculture folks. But they seem to be quite on top of it, in terms of really wanting to proactively think about using ChatGPT and AI this fall.
But one of my approaches was talking about education under the influence of AI. I use that metaphor because there are a couple of different ways of thinking about it: we have no choice at this point. We are under the influence of AI this fall. And from an assignment standpoint, I wanted to talk about the writing end of things, about academic dishonesty and those kinds of things.
[00:35:24] John Nash: Thank you for raising that, because we've spent a lot of time in this episode talking about what's un-AIable and getting to very human-centered assessments, the public demonstrations of learning, the sort of deeper learning avenues. But I don't think we've talked enough about, and maybe this is a topic for a future conversation, where we are best suited to leverage this craziness.
Because there are a lot of good places to do that in the learning process.
[00:35:57] Jason Johnston: Yeah. I think we definitely need to circle back to that, because there are some amazing resources out there where people are giving different options. And there are also some different ways to think about framing it, in terms of different kinds of assignments and what you're trying to get out of the students in terms of their learning outcomes.
[00:36:15] Jason Johnston: So why don't we put a pin in that one and come back and perhaps do a whole episode after we've talked to a few people. Speaking of which, I'm not even gonna say who yet, but we've got some exciting guests coming up. So if you're listening to this now, please keep listening, because we've got some great guests on the way.
[00:36:33] John Nash: Oh, we got a get.
[00:36:35] Jason Johnston: We did. We got a get, yeah. So excited about that. And partly we won't say anything yet because we haven't actually recorded it, so we don't know that absolutely everything will work out. But we're very excited about our fall and what we've got laid out.
OnlineLearningPodcast.com is our website. You can also find our Online Learning Podcast LinkedIn group, where you can say whatever you're thinking about this episode, whether you agree or disagree. We'd love to have more connection and communication.
Closing
[00:37:09] Jason Johnston: And if I can say one thing to you, John, before we close here, this is maybe the most important thing, coming from one SEC school to another: Go Vols.
[00:37:19] John Nash: Absolutely. I agree with you. Go Cats.
[00:37:23] Jason Johnston: Thanks John.
[00:37:24] John Nash: Bye.