Online Learning in the Second Half
Education
EP 21 - Dangers and Opportunities in the Second Half of Online Learning (with interviews from the OLC 2023 floor)
In this episode, John and Jason talk about dangers and opportunities in the second half of online life, from their Online Learning Consortium (OLC) 2023 presentation and “live off the OLC floor” interviews. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast* (Also feel free to connect with John and Jason on LinkedIn too)
Links and Resources:
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check against the recorded file before quoting anything. Please check with us if you have any questions!
False Start
[00:00:00] John Nash: I took a class from a professional in San Francisco for voice acting. I thought I wanted to be a voice actor. So yeah, that
[00:00:07] Jason Johnston: And here you are doing a podcast. You basically are a voice actor, except you happen to be acting like John...
[00:00:13] John Nash: Like John Nash, not like Barney the dinosaur, or doing my Louis Armstrong imitation or something like that.
Start of Episode
[00:00:20] John Nash: I'm John Nash here with Jason Johnston.
[00:00:23] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:28] John Nash: Yeah. And we are doing this podcast to let you in on a conversation we've been having for the last two and a half years about online education. Look, online learning's had its chance to be great. And some of it is, but a lot still isn't. And so how are we going to get to the next stage?
[00:00:43] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:47] John Nash: That's perfect. What do you want to talk about today?
[00:00:50] Jason Johnston: So John, would you call yourself a techno-optimist or a techno-pessimist? Do you think all of this is winding up into a better world? Or is technology taking us down this path of doomsday and destruction?
[00:01:06] John Nash: If the left side is doomsday and destruction and the right side is optimism and happiness, I'm a cautious optimist. I think I'm a little bit to the right of a cautious optimist. I'm no Marc Andreessen, who's recently come out with a tech manifesto suggesting that anybody who doesn't believe the bros in Silicon Valley can fix everything is crazy. I'm not like that at all.
I do worry about my own critical thinking around technology and how it may be exacerbating environmental problems and social problems. Because I love playing with these tools so much, I think I'm clouded a little at times, but yeah, I'm right of center; if being right is optimistic, I'm over there.
[00:01:55] Jason Johnston: Yeah, I think I find myself in the same space, not because I necessarily have a lot of optimism around technology. I do think it's pretty consumer driven and profit driven, and so that doesn't build in me a lot of optimism for its final outcome. However, I have an optimistic view of humanity, one that we typically work together towards our own survival when it comes down to it, and that there are a lot more good people in this world than bad people. And maybe I'm an idealist in that I think the good will win out, not because I believe technology is going to save us by any means, but because there are usually enough good people helping to drive technology that I think we'll get to a better place.
[00:02:46] John Nash: Yes. Yes, I think that's well put. I think I'm in the same space you are because we're both educators and we surround ourselves with other educators who are interested in applying the use of technology to help learners achieve their goals. I'm not on the side of thinking "the technology we need to have in place to save the world is that which puts billionaires in space."
I'm not thinking that's the way to go, but you're right. I think when we surround ourselves with people who are interested in applying technology, particularly the technology that allows us to have online learning, and create more equitable, lower cost, high impact activities, then I think we're in a good place.
[00:03:29] Jason Johnston: Yeah, I agree. So you don't think you're going to climb into the next Mars shuttle to help expand us into a multi-planet species?
[00:03:37] John Nash: No, I'm not in line for that. I'll watch the rockets leave earth.
[00:03:40] Jason Johnston: Oh yeah. I will too. I would love to watch the rockets leave, but I don't have any interest in doing it nor do I think it's the best place. I think we have enough issues and good things to put our money towards here on this planet with these people that we have in front of us that I'm not really in line with that.
[00:03:57] John Nash: Yeah, I agree. So where does that put us? We're both on the optimistic side of center here. But that doesn't mean we're without some dangers.
[00:04:08] Jason Johnston: That's right. And so today I would love to talk about our last OLC presentation, around the theme of turning dangers into opportunities in online learning. Online learning in the second half: looking at the dangers, turning them into opportunities.
How does that sound?
[00:04:26] John Nash: Yeah, that sounds really good. And let's remind our listeners what OLC is. That's the Online Learning Consortium, and they hold two major conferences every year, and this fall conference was in Washington, DC.
[00:04:42] Jason Johnston: Fall of 2023. If you're listening to us in the future, it's fall 2023.
And also we're sorry. That's the other part. If you're listening to us in the future we really are trying our best, but I know we could have done more. That's all.
[00:04:55] John Nash: That's right. So we had a presentation where we were able to talk with participants at the conference about the potential challenges that we have in front of us with online learning, and really disambiguating those from the dangers that we might also have in front of us. Jason, I think the word danger might sound a little alarmist to some of our listeners.
Maybe we ought to put that into context.
[00:05:22] Jason Johnston: Yeah, and we found that as we were talking to people. So we roamed the snack area, basically, and accosted people with our microphone, asking them this big question, and I think a lot of times "dangers" took them aback just a little bit, and they'd say, danger? Could I talk about a concern or a problem?
And we said, yes, but we're really looking for dangers. We're thinking about the big threats here, the bigger, more existential threats to online learning. What are the big things that come to mind? But we did talk a little bit about what "challenges" were versus "dangers." Challenges are more like the obstacles or difficulties, things that you could overcome with some effort and creativity and so on. Dangers are really these bigger challenges that potentially could pose significant risks or threats and have some harmful consequences if they're not addressed.
[00:06:13] John Nash: Let's also put some more context on the danger and the things that we're concerned about. Among the people that go to the Online Learning Consortium meetings, there are certainly some vendors who supply tools and packages and other technology for institutions of higher ed and P-12 to do online learning, but it's also significantly populated with instructional designers and people who are really interested in bringing about higher-quality experiences for learners in online environments.
And so when we talk about dangers, we're really talking about what may be in front of us that could really threaten the quality of the learning experience. Is that fair?
[00:06:56] Jason Johnston: I think so. I think most of the people that we talked to are well versed in building online classes, not just from a theoretical stance, but from a practical stance of getting in there and making them happen from a quality standpoint. And so that certainly puts a particular context on this. Nobody was talking to us about the enrollment cliff or things like that.
They tended to be around more of the issues that are apparent within the courses and programs that are being delivered online.
[00:07:29] John Nash: Yeah.
[00:07:31] Jason Johnston: Shall we listen to a few quotes from the OLC floor?
[00:07:34] John Nash: Yeah, absolutely. Let's get on the floor, interrupt some snack time that people are having, and hear what they were thinking was a potential danger to online learning in the future.
OLC FLOOR INTERVIEWS
Yeah. My name's John Ruzicka. I'm with Learning Sandbox. I feel like the greatest danger to online learning is overreliance on what I would call the shiny new object. So a couple of years ago at this conference, you might've heard a lot of talk about the metaverse.
Today, it's all about generative AI, OpenAI. And so what will it be in the next two or three years? It depends. I mean, of course, these are topical things we need to all think about and know about and experiment with, but I think the overreliance and overindexing on that new technology could be a distraction.
My name is Carrie Kennedy. I'm here with the University of North Carolina at Charlotte, and I would say the biggest danger or risk that I'd like to make sure my university avoids is being too slow to consider workforce impact and mobile pathways between non-credit and academic credit.
I think we're already a little bit behind in doing that, and I think, to, you know, keep up with the demands of employers and skill gaps, we need to have those pathways in place.
I'm Ellen Rogers with Penn State University. A big concern might be: if the faculty get too good at all this online learning and instruction, what happens to the need for instructional designers?
Bill Egan, instructional designer at Penn State World Campus. One of the biggest threats, playing off of that: what if we get rid of faculty, because we're using things like AI and other content experts to generate the content? One of the biggest obstacles from an instructional design perspective is working with faculty, getting content on time, etc. So I answered a question with a question.
I'm Cody House, I'm the Director of Academic Programs at George Washington University with the College of Professional Studies. I see the thing that most influences, or should be most challenging to, higher education and online learning in the next few years is how slow institutions are to accept change and to embrace innovations.
You know, obviously I feel like, to this question, most people have probably said generative AI. But I think even that conversation shows you how slow institutions are to figure out what their stance on change is. They're coming up with committees about committees to figure out nomenclature for new terms and new credential terminology.
And so I think that institutions need to just figure out how to streamline processes and make decisions quicker to accept change and move on to the next thing.
Jamie Holcomb, and I'm with Unitech Learning, and I think one of the greatest challenges for online learning coming up will be the disparity between the consumer experience and the online learning experience,
and the expectations that consumers have for the quality of interactions on other platforms that they engage with frequently. I think online learning is lagging there. So, to me, that's one of the greatest challenges we have coming up.
I'm Carrie Brown Parker from North Carolina State University, and I guess I think the danger in terms of student work or student productivity maybe is AI tools, although there's also a great inspiration there for instructors to get creative and do new work with students.
Caleb Hutchins, instructional designer for the Community Colleges of Spokane. I think the greatest danger is probably commercialization, to be honest.
I perceive that a lot of different colleges are moving towards standardized publisher content as much as possible. And I think that, more and more, it's taking away instructor agency and instructor interaction with students. I think publisher content has its place, but when it starts to become a replacement for the teacher, then we have a problem.
My name is Dr. Sonja Dennis, I'm with Morehouse College. So I think the biggest threat would be the lack of in-depth knowledge, or lack of in-depth understanding. Where students have so much information at their fingertips, is there really any deep learning occurring?
My name is John Moraine LaSalle. I am with Montclair State University. Specifically, I'm an instructional designer, part of the team. And I think that what could potentially be one of the biggest dangers, while I want to say it's potentially artificial intelligence, it's not specifically that, is the growing danger of feeling isolated in the online environment, and I feel that artificial intelligence poses the risk of making it even easier for students to disconnect from each other.
They're already struggling in the online environment sometimes with that. So that is what I think the bigger threat is from AI. Not so much, oh, they could use it to try to get a solution or an answer, but how it could basically, almost Pavlovian-like, make them immediately go, I'm going to go to ChatGPT to figure out the best way to discuss, the best way to find an answer or a solution, rather than to their actual peers in this virtual environment with them.
My name is Yingjie Liu. I'm the lead instructional designer from San Jose State University. I would say we might be too slow to catch up with what's going on in the world, especially with XR and AI. We are slow to integrate those into our teaching and learning.
But I'm wondering, when the students are already using technology like AI in their learning, how we update our teaching, especially our pedagogy and best practices, to catch up with what's going on and what students need, right? The students might have different ways to learn, they might have different practices and preferences, and we are exploring that direction. I just hope we catch up with the speed at which things are evolving.
My name is Vincent DelCassino and I'm at San Jose State University.
That's a really interesting question. I think the potential for it to become so diffuse that it loses its center point, in the sense that anyone thinks they could get into the game, and it has the potential to lose the kind of engaged pedagogical value that you sometimes see. And I think one of the areas is corporate in particular,
going out there and building courses and programs and thinking they've nailed what we haven't been able to deliver on. But some of the criticality, some of the other things like that, can have a real impact on how people think and imagine what value higher ed brings. And we tend to move a little slower sometimes,
but I would argue with a little more thoughtfulness. But I think that could be a risk for us in the future.
END FLOOR INTERVIEWS
[00:14:52] John Nash: Yeah, wow, so what'd you think of those, Jason?
[00:14:56] Jason Johnston: Yeah, there were some parts where I was not surprised at some of the themes that were coming out, especially those around AI, institutional change, quality, and so on. But yeah, it was really interesting to talk to people just to get their initial reactions on the floor.
[00:15:13] John Nash: Yeah, you never know where people are heading with what their concerns are going to be. We hear them talking about overreliance on new technologies, maybe slow adaptation to workforce needs, the redundancy of instructional designers. It's a conference of instructional designers, of course. AI is on everybody's mind.
Will they be put out of a job? Will faculty be put out of a job? So I think that's, yeah, it's interesting. And then, of course the ever present institutional resistance to change.
[00:15:44] Jason Johnston: Yes. Yeah, which, as we talked about a little bit, and we'll go through and talk about these individually, is probably both to our benefit and to our demise in some ways, our resistance to change, right? How quickly we move into things like this.
[00:15:59] John Nash: I think it's a risk to our demise. I think that the glacial pace of change in a lot of places is going to be a threat going forward. At the same time, I'm not advocating a move-fast-and-break-things approach, but I think we need to find a more middle ground. Institutions' responsiveness to change through their leadership, understanding what expertise needs to be brought to bear to fix the problems in front of us, is just not responsive enough.
[00:16:31] Jason Johnston: But aren't you from San Francisco? Aren't you one of the bros?
[00:16:33] John Nash: I am not one of the bros because I don't know how to code. I can write rudimentary HTML, but that's about it.
[00:16:40] Jason Johnston: Okay. I thought everybody from San Francisco just believed in, in moving fast and breaking things and seeing what happens.
[00:16:47] John Nash: Yeah I like to prototype things. And and I grew up in Menlo Park where all the VCs are. But but I am not a VC myself, nor do I really know any.
[00:16:56] Jason Johnston: Huh. Interesting. As we were looking at this, we were looking at a pivot, listening to what people were saying from the floor, listening to what people were saying in our conference room, and thinking about how we could create this pivot of transforming dangers into opportunities. What are the top dangers?
And then how could we pivot to opportunities? And we came up with this three-part response and approach, which is, one, assess the threat level. Is it a real danger? How likely is this danger to destroy a fundamental part of academic life? And then two, how could we simply survive the danger?
What are the basic skills necessary? And then three, how could we then thrive within this danger, or in response to this danger, using it as an opportunity to create a better, in our case, in our theme, more humanized online education?
[00:17:52] John Nash: That level of discussion, where we were talking about the threat level and whether it's a real danger, was really important for the folks we were talking to, because it helps us start to disentangle hyperbole from real concerns. I mean, you get into a room with enough people and there's always going to be some kind of complaint about something that's going on, but is it going to be a real threat?
Is it what we're hearing in the rumor mill, in the world around AI? And right now, we are recording this at the time in which OpenAI's board has fired Sam Altman, their CEO. Is this a real threat to what we're going to do? No. So I think it's vetting those discussions in such a way that we think about what the real danger is, reframing it to something that's actually a danger, then taking it to a discussion of how we are going to survive this, and then how we can actually thrive with it. How can we flip it on its head?
[00:18:46] Jason Johnston: And so that we could be specific, we tried to frame the dangers that we're going to present here as "the danger of blank to the existence of blank," so that we could actually be really specific. So it doesn't become just this kind of nebulous danger that's out there, but, if it is a danger, what exactly is it a danger to?
So our first one that was coming up over and over again, obviously a big topic of conversation, was around AI, but specifically, danger number one, AI threatening our ability to assess student learning in online courses. What do you think the threat level is for this? AI threatening our ability to assess student learning in online courses.
What do you think the threat level is and why?
[00:19:36] John Nash: I think the threat level is in the middle there. If we're going from, like, a one to a five, we're about at a three. I think that AI's threat to our collective ability as instructors to assess student learning in online courses is as large as the instructors' capacity to understand their ability to pivot and change what they assign. I'm going to go back to an adage, though it's not an old adage, because AI has only been around 51 weeks, by the way, at this point in time as we record.
But it's an...
[00:20:10] Jason Johnston: Happy birthday...
[00:20:11] John Nash: ...to you...
[00:20:11] Jason Johnston: ...AI.
[00:20:12] John Nash: Happy birthday, GPT-3.5. The adage goes something like this: if you're assigning work that can be done by AI, you need to rethink what you're assigning. And I think that's where the threat sits. So the question then is how we survive that. But what do you think the threat level is there?
Do you think that AI threatens our ability to assess student learning in online courses?
[00:20:35] Jason Johnston: I think that, yeah, my answer is it's hard to give it a number because it depends, right? And so, in short, I would say high for those that are inflexible to change and rethinking their assignments, but also high for people or programs where the typical assignment being assessed is easily replicated by AI, meaning that it's not just about rethinking the process towards whatever it is that you're learning, but the final product is something that could be easily replicated by AI.
So I think it has a high threat to those kinds of programs and people, and a more challenging threat, I would say. So how do we survive this threat, then, if we've assessed it and we're looking to survive it?
[00:21:24] John Nash: I think that one thing instructors can do to just merely survive is to start to communicate with their students about the presence of AI and how they feel about it, "they" meaning, how does the instructor feel about it, and how do students feel about it themselves? And so there's this communication component that I think is going to be the lowest-threshold and highest-impact thing at the surviving level.
If you're not prepared to think about your assignments in terms of redesigning them, or to think about the way you assess the assignments that you give, at least you could be talking about what it is you believe about this and why you believe the assignments you give are the ones that you want to give.
[00:22:07] Jason Johnston: Yeah. So taking an active communication stance, being transparent. We heard a lot of people talking about creating policies and principles, which I think are ways to survive, but not necessarily thrive; but they are ways to approach things, and that maybe comes in with some of your communication.
Being in a place where you can really test out and figure out what AI is doing and how it affects your assignments, so it's not just this unknown boogeyman threat in the closet where you don't know what it looks like, but you have a clear sense of it. I've heard of instructors basically putting their assignments into AI to see what it would spit out.
And that gives you a clear sense of really where this threat is at, rather than this unknown, nebulous kind of threat.
[00:22:49] John Nash: Yeah. So what about thriving? How do we flip this on its head?
[00:22:54] Jason Johnston: Yeah, one of the first things that we had talked about was our conversation with Dr. Brandeis Marshall on episode 18 about making assignments un-AIable. And I think that's one way to thrive: as we've talked about, beyond just the communication and transparency, actually reforming, reimagining our assignments under the influence of AI, in the age of AI, so that we could be thinking about how these assignments could not only help us really assess where the students are at, but actually prepare them for a future of work and life and scholarship with AI.
[00:23:30] John Nash: Yeah, that's right. When we talked to Dr. Marshall on episode 18, that was really inspirational, and it made me think about ways in which assignments could be turned into more public demonstrations of learning, more about oral defense of ideas. And a polite pushback to that might be that that takes more time. If I'm going to do an oral defense of ideas with every student and I have 200 students, that may not be scalable. So I think we also have to be thinking as a community about how we can support instructors at scale.
[00:24:05] Jason Johnston: Yeah. All these ideas, not assuming that there's a one-size-fits-all or a silver bullet that's going to solve this for every single kind of program and class size and so on. We've got to be thoughtful about this. That's right. Yeah.
[00:24:19] John Nash: One way we might think about scale that could work in larger classes and inside a learning management system is, for instance, letting students cheat on purpose with ChatGPT or Claude or Bard, and then asking them to rate the quality of that response to a prompt that you might ordinarily just give to students on their own.
And so you start to get this sort of metacognitive, critical-thinking lens going, and you get ideas as an instructor on what the AI can really do, and also help students see the limitations of what AI can do.
[00:24:58] Jason Johnston: That's good. And we talked a little bit about scaling online classes and humanizing those classes with Dr. Enilda Romero-Hall in episode 13. And within that, thinking too about how we might focus on skills and maybe focus on grading in those situations as well. And those are things that could be scaled, because it's just a shift in what it is we're assessing and also just a different process in terms of grading, which could actually turn into, let's say on the surface level, less work, not more work, for the instructor when it comes to assessing where their students are at.
[00:25:34] John Nash: Yes, and Dr. Romero-Hall's presence in the classroom is really predicated on a community presence, with a feminist pedagogy lens bringing in student voice along the way. And so that could also be scaled to some extent through the LMS and through polling and questions and even discussion posts, to say, how might we together consider how we want to address this learning goal in the presence of AI and with these kinds of activities that we must get done? That could happen as a community.
[00:26:08] Jason Johnston: Yeah, it reminds me of a quote that I ran into this last week by Paulo Freire, and the quote is this: "the answer does not lie in the rejection of the machine, but rather in the humanization of man (or people)." This is from "Education for Critical Consciousness."
And that reminds me of this idea that we just can't have large classes and actually humanize them. That may not be the case, right? We can think about our approaches even in the face of AI. We can think about our approaches in large classes, and maybe because of AI, it's forcing us to think about the humanization of students within the context of these large classes in ways that we didn't have to think about before, because we were just following what Paulo would also call "the massification of education."
We were just following this incremental enlargement of class size without really critically reflecting upon what it means to continue to humanize the students in these contexts.
[00:27:10] John Nash: And that's related to the webinar we did recently with the group from Inscribe, looking at the impact of AI on student connection and belonging. With AI, we are able to explore opportunities in large classes to help differentiate instruction and to think about ways to advance belonging among large swaths of students. So I think there are ways to get at this if you're thoughtful about it.
[00:27:35] Jason Johnston: Yeah, absolutely. Oh, there's so much there. I just read a great article about belonging from research with 26,000 students across 22 institutions. Anyways, that's a whole other episode. We should go there. We should find somebody who can talk to us about that, and let's do a whole episode on belonging online.
Let's move on to danger two, though. So this is what we saw from our group and from the floor of OLC: danger number two was institutional resistance to change in pedagogical approaches. So what do you think the threat level of this is?
[00:28:07] John Nash: I don't know, maybe I'm too close to the mothership, but I feel like it's a little high. I'll give it a four out of five.
[00:28:14] Jason Johnston: Yeah. Yeah. And I think, for me, again, I'm sorry for this big cop-out, but it depends, right? I think there are certain units and certain programs that are embracing change. There are others that are quite resistant. And I think there are certainly ways in which a lot of people across units are wanting to hold on to the way we've always done things versus adapting.
So, for instance, they want to just have TurnItIn 2.0 so that it can detect AI, versus rethinking the way that we're interacting with students around plagiarism detection and our relationship with students.
[00:28:53] John Nash: Yeah, I mean, what I would hope for is that as institutions think they're responding to the need for change, it's not that they're bringing in new tools like the "TurnItIn 3.0" that's going to let us catch more cheaters, but rather that they're thinking about ways to do capacity building akin to what we learned from Olysha Magruder's episode and what they do at Johns Hopkins in the School of Engineering, which is that everybody who's teaching in an online program has to go through the online instructional design process. My institution doesn't necessarily require that. I think that would raise the tide for quality across our institution if we did, and right now I think it's more akin to, here are tools you can use and we hope you use them.
[00:29:41] Jason Johnston: Yeah, which is probably some of the survival part of it: we have provided you with some tools to use and given you some guidance around that, maybe surviving this threat level right now in terms of this change that AI is bringing about, this disruption really. AI is a disruptor; it's not a calculator, I don't think. I've decided that it's an interesting analogy, AI being like the calculator,
but it's not really like a calculator, because it crosses so many boundaries of everything. It's a disruptor across every single discipline, and so part of this survival is maybe giving people some ways to adapt and giving them guidance and so on. How would we thrive, though, in response to the danger of institutions resisting change?
So how do we turn this into something that could really take us into the second half of online life that we're imagining?
[00:30:41] John Nash: I think it's when institutions can become learning organizations and start to see the richness of the opportunity: when they are able to build capacity amongst faculty, create environments where faculty want to learn, and also, for Research One institutions like the ones you and I are at, create incentive structures for faculty to be really interested in taking on that capacity building.
[00:31:08] Jason Johnston: Yeah, that's good. And I think, along with that too, that learning organization and capacity building means creating systems and positions that help us adopt and adapt to innovations. So creating pockets to test and educate and try out new things, and to help us with this transition to new technologies, I think is really important.
I've heard of more and more people getting assigned these kinds of roles, like a dean of AI and these kinds of things, to help move us in that direction. And I think that is really important, because faculty are really busy, just trying to make it through the semester, and I think they would welcome people who are taking the time to really dig into this, come at it from an institutional standpoint with guardrails up, and think about how this change is affecting and should be affecting things.
[00:32:01] John Nash: Yeah, there's a quote that came across my desk. Have you heard of the app Readwise?
[00:32:07] Jason Johnston: Oh, yeah. Yep. Yep.
[00:32:08] John Nash: One of my doctoral students recommended Readwise and its connection to the app Notion, and I get a little push of everything I've highlighted on my Kindle. There's a quote from Daniel Priestley in his book, "24 Assets." And he says, "Systems aren't there to replace people, they are there to make your life easier."
So I think thriving also means that institutions will put into place the resources to create systems that really work across the spectrum of services that we provide as faculty and that the staff do to make things go.
[00:32:41] Jason Johnston: Yeah, that's good. Could you say that quote one more time?
[00:32:44] John Nash: Yeah, "systems aren't there to replace people. They are there to make your life easier. Your teams, your customers, yours. Everybody's life."
[00:32:53] Jason Johnston: That's good. So danger number three: increasingly low quality of online education. So the threat, the danger, is increasingly low quality of online education. I'll say just from the top that this seemed to really resonate with the people that we were talking to. Of course, we were with a bunch of instructional designers who were concerned about online education getting watered down, about it becoming much more mechanized, much more shovelware.
And I think I share that concern as well. That's probably a higher concern for me than a concern around AI taking over things. How about for you?
[00:33:29] John Nash: Yeah, I am concerned about that. And I think I want to frame our danger a little bit, because the way we've stated it here, "increasingly low quality of online education," suggests that we are on a downward trend. I think the threat really is that there's a chance we will see an increase in low-quality online education. Doesn't the way we've stated it sound like we're saying there already is an increasingly low quality of online education?
[00:33:54] Jason Johnston: I think that is the danger: that online education will not get better. It's going to get worse. It's going to go to lower and lower quality as we develop this out into the next decade.
[00:34:06] John Nash: And that threat manifests itself because of a potentially crowded vendor marketplace, a potential run to make money in the space, what have you.
[00:34:16] Jason Johnston: Yeah. And tied in with the AI as well. I'm not a...
[00:34:19] John Nash: Yes.
[00:34:20] Jason Johnston: I'm not a fortune teller, but I guarantee you a year from now we're going to have thousands more online learning opportunities that have been created with AI as the subject matter expert. So we're not working with subject matter experts anymore on this.
There are people just cranking these things out because the access to the information is there and there's an opportunity to get it in front of people and maybe make a couple of bucks or not.
[00:34:47] John Nash: Yes, I think that's a part of that threat. And how do I feel about it? I was concerned that the level of quality of online learning that presented itself during the pandemic, which was horrid, would bring people to a certain belief that this is what good online learning looks like, or, I guess, this is as good as it's going to get.
And a lot of new people came into the space without a lot of experience: good teachers who had never taught online before, but who then did not-so-great online teaching and learning. And so I wondered if that was going to stick, and I posed that question to Dr. Olysha Magruder on episode 20, and she actually turned me around a little bit on that and said, yeah, but you know what?
Look at all the people who got exposed to teaching online, and that's more people than would have otherwise. And so at least we have those people knowing that you can teach online, and that there's a light at the end of that tunnel where you can get better and better at it.
[00:35:42] Jason Johnston: Yeah, and there are interventions that can happen to help us get there. And I guess, talking about surviving, would that be surviving, or was that really moving into thriving? I think surviving would be things like applying quality rubrics and continuing to bang the drum that we've got to continue to have quality online, for all the reasons that we should be building quality online.
I think thriving, though, is really continuing to work heavily with our instructors, recognizing the importance of working with them, not just to develop good online learning courses, but also to have the right tools and approaches to make those courses good, to make them excellent, right?
[00:36:27] John Nash: Yeah. I think that one thing that going online lays bare for new instructors in the space is that good instructional design trumps everything. There are a lot of things that you can get around and avoid when you're teaching face to face; you have that sort of context. But when you start to move online, you've really got to have quality rubrics, good instructional design, and some professional development under your belt on how to really have a presence online. So yeah, I think you're right.
[00:36:57] Jason Johnston: And of course, in our last episode we talked with Dr. Olysha Magruder about a Coursera course that she created about excellence in teaching online. And the other cool thing is that on December 12th, we were invited to do a podcast wrap-up session for their conference called the Excellence in Online Teaching Symposium.
And this is just a plug, of course, but also just an excellent moment to stop and say, yeah, we can always learn more, and this is how we thrive in the face of the potential of low quality: by continuing to connect with peers, continuing to work on professional development, and looking for these opportunities to continue to grow and to learn and get better.
[00:37:47] John Nash: Yeah, I learn best from concrete examples that I can copy and steal, and I'm happy to do the same for others. I think that this opportunity with the Johns Hopkins School of Engineering showing excellent examples of online learning is a model for what we need to see more of.
[00:38:05] Jason Johnston: So as we think about wrapping up this OLC discussion around the future dangers of online learning, what are your overall thoughts about our approach, or about your optimism for what we have in the years to come?
[00:38:22] John Nash: I think we don't want to come off as alarmist talking about danger, but I think we can take some time here like we did today to understand challenges versus dangers. Challenges are the hurdles that can be overcome with some effort and the dangers are significant threats that could have potentially harmful outcomes for the way we want to see online learning go forward.
And so, I think identifying some of these and then pivoting to opportunities is a great way for us to keep optimism in the mix of our conversations, because I think there are practical strategies that we can take for a lot of these things. And it's just a matter of us working as a community to find out where they are, share those out, and then be kind and empathetic to those that are coming along.
I think that there's a good opportunity here for online learning to be great. We just have to be vigilant.
[00:39:14] Jason Johnston: Yes. I agree. And I think OLC was an excellent example, again, of connecting with a larger community around these questions as well, so that we're sharing these dangers, we're coming up with solutions together, and we're maybe validating some of these or invalidating some of these, as the case may be, in terms of talking us down from the ones that we feel might be a really big danger, where by specifying, we realize that maybe it's not as big as other things.
And I also think, with the ongoing community, I want to encourage people and welcome them to join us on LinkedIn as well, so they can connect with us there as part of our community.
[00:39:55] John Nash: Yeah.
[00:39:55] Jason Johnston: And we'd love to hear from you about what you think about these top dangers, these top three that we talked about, but also if there are other ones that you want to talk about, or you have other solutions, just reach out and we'd love to hear more. And I think this will not be the last time we talk about this.
Do you think, John?
[00:40:11] John Nash: No, we're never going to talk about AI again. We're never going to talk about online learning again. Actually we have to, that's the podcast, right? Okay.
I'm actually looking forward to the balance of 2023 and talking to you more. I think we've got some good stuff lined up. A potential year in review. And maybe even a second Super Friends episode.
[00:40:35] Jason Johnston: I think we might even try to combine those two maybe a year in review with our super friends. We'll see how that goes.
[00:40:41] John Nash: Yeah. I think that would be a great idea. Let's do that.
[00:40:44] Jason Johnston: Okay. That sounds good. This has been great, John. And again, as we said, connect with us on LinkedIn. Also, you can find all these podcasts at our website, onlinelearningpodcast.com, as well as show notes. We'll put as many links as we can about the things we've talked about today in there. And anything else, John?
[00:41:02] John Nash: You can catch the transcript of the podcast episode as well on our website. And yeah, do join our LinkedIn group.
[00:41:10] Jason Johnston: Yeah. And we want to hear from you about the kinds of things that you want to talk about. And if you like what you hear, please review us on Apple Podcasts. I understand that the AI likes that and will push us up to even more stardom and success if real humans go in there and review our show.
[00:41:31] John Nash: So we have concerns about AI, but we still treat it with a cheerful tone, because one day when it does become sentient, it's going to remember that we were nice.
[00:41:40] Jason Johnston: That's right. I always say thank you.
[00:41:42] John Nash: Always say thank you. Thank you, Jason.